|
|
|
# Frequently Asked Questions Supplement
|
|
|
|
|
|
|
|
This is where new FAQ items can be quickly added. Note that you should
|
|
|
|
also make sure to read the published FAQ at
|
|
|
|
<http://www.bacula.org/rel-manual/Bacula_Freque_Asked_Questi.html>.
There is an interesting wiki documentation project called Baculapedia, at:
|
|
|
|
<http://bacula.neocodesoftware.com>
|
|
|
|
|
|
|
|
## Does Bat work with Bacula 1.38 or an earlier Director?
|
|
|
|
|
|
|
|
No, Bat will definitely not work with 1.38; it requires a new API that wasn't released until (I believe) the 2.2 branch.
|
|
|
|
|
|
|
|
## Does Bacula back up to disk?
|
|
|
|
|
|
|
|
Yes, it does. It can also back up to tape (just about any kind of tape drive with a SCSI connection). Some people are also backing up to CD and DVD.
|
|
|
|
|
|
|
|
## Why does Bacula keep crashing on my 64 bit system?
|
|
|
|
|
|
|
|
There is a suspected compiler bug in GCC on 64 bit systems. The
|
|
|
|
workaround is to remove all optimizations by removing any -O compiler
|
|
|
|
options such as -O2.
|
|
|
|
|
|
|
|
Versions 1.38.6 and newer include a workaround that should permit
|
|
|
|
compilation with -O2 as normal.
|
|
|
|
|
|
|
|
## MySQL Server Has Gone Away
|
|
|
|
|
|
|
|
If you suddenly started seeing this problem after switching to
|
|
|
|
[MySQL](https://www.everipedia.com/MySQL/) 5.0 from a previous version,
|
|
|
|
you have run into a change in MySQL's default behavior. The default was
|
|
|
|
previously to reconnect on a server timeout. As of 5.0.13, the default
|
|
|
|
is now to close down the connection completely. This can easily be
|
|
|
|
worked around by increasing the timeout in your MySQL configuration. You
|
|
|
|
must set it to longer than it takes for your longest job to complete.
|
|
|
|
|
|
|
|
You can set it by adding a wait_timeout line to the mysqld section of
|
|
|
|
your my.cnf configuration file. The value is specified in seconds. For
|
|
|
|
example, to set the timeout to 24 hours:
|
|
|
|
|
|
|
|
[mysqld]
|
|
|
|
|
|
|
|
wait_timeout = 86400
|
|
|
|
|
|
|
|
Other than this known issue, you should check the MySQL docs for other
|
|
|
|
troubleshooting steps.
|
|
|
|
|
|
|
|
- MySQL v3/v4: <http://dev.mysql.com/doc/refman/4.1/en/gone-away.html>
|
|
|
|
- MySQL v5.0: <http://dev.mysql.com/doc/refman/5.0/en/gone-away.html>
|
|
|
|
- MySQL v5.1: <http://dev.mysql.com/doc/refman/5.1/en/gone-away.html>
|
|
|
|
|
|
|
|
## Why does dbcheck take forever to run?
|
|
|
|
|
|
|
|
On some larger databases, the dbcheck program can take an inordinate
|
|
|
|
amount of time to run. If you're running into this problem, you can try
|
|
|
|
adding a few additional indexes. Make sure that there is an index on
|
|
|
|
these columns:
|
|
|
|
|
|
|
|
- File.PathId
|
|
|
|
- File.FilenameId
|
|
|
|
- Job.FileSetId
|
|
|
|
- Job.ClientId
|
|
|
|
|
|
|
|
Be patient when adding these indexes, as the database server will have
|
|
|
|
to scan the entire table to create them. At least one user has reported
|
|
|
|
that after adding them, running dbcheck went from over two days to under
|
|
|
|
two minutes. Check the documentation for the database you are running
|
|
|
|
for the best way to do this.
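
For example, on MySQL the missing indexes could be created like this (a sketch only; the index names are arbitrary and the statements assume the standard Bacula table names):

    CREATE INDEX file_pathid_idx ON File (PathId);
    CREATE INDEX file_filenameid_idx ON File (FilenameId);
    CREATE INDEX job_filesetid_idx ON Job (FileSetId);
    CREATE INDEX job_clientid_idx ON Job (ClientId);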
|
|
|
|
|
|
|
|
## Fix broken Client table after change from SQLite to MySQL
|
|
|
|
|
|
|
|
Adding a new client failed with the following message:
|
|
|
|
|
|
|
|
`31-Jan 11:29 compaq-dir: *Console*.2006-01-30_16.24.16 Warning: Error updating Client record. ERR=sql_create.c:503 Create DB Client record INSERT INTO Client +(Name,Uname,AutoPrune,FileRetention,JobRetention) VALUES ('newclient-fd','Windows 2000,MVS,NT 5.0.2195',1,2592000,15552000) failed. ERR=Duplicate entry '0' for key 1`
|
|
|
|
|
|
|
|
The problem is a missing auto_increment attribute on the ClientId column of bacula.Client, which assigned ClientId '0' to the client added **before** this one. The following procedure fixed the problem:
|
|
|
|
|
|
|
|
Stop bacula-dir and enter the mysql prompt:
|
|
|
|
|
|
|
|
`mysql> delete from bacula.Client where ClientId='0';`
|
|
|
|
|
|
|
|
Start bacula-dir, enter bconsole, and status the client:
|
|
|
|
|
|
|
|
`*status client=name-of-deleted-client`
|
|
|
|
|
|
|
|
This inserts a new record for the client into the Client table with a ClientId other than '0'.
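
If the ClientId column really did lose its auto_increment attribute, it may also be worth restoring it directly; a minimal sketch, assuming the stock Bacula MySQL schema where ClientId is an unsigned integer primary key:

`mysql> ALTER TABLE bacula.Client MODIFY ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT;`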
|
|
|
|
|
|
|
|
If there are Jobs assigned to ClientId '0', they can easily be reassigned to the new ClientId (it was number 7 in my case) with the following mysql statement:
|
|
|
|
|
|
|
|
`mysql> update bacula.Job set ClientId = 7 where ClientId = 0;`
|
|
|
|
|
|
|
|
Always make sure you have a backup of your catalog when doing things
|
|
|
|
like this ;)
|
|
|
|
|
|
|
|
## How large/small can Bacula scale?
|
|
|
|
|
|
|
|
The overhead of Bacula is actually fairly modest, allowing it to run
|
|
|
|
well on older hardware. As for scaling up, the primary limiting factors are how much storage space you have available and how well your
|
|
|
|
database can handle the catalog size. For a few data points on what
|
|
|
|
other people are backing up with Bacula, check out the
|
|
|
|
[database_statistics](database_statistics) wiki page.
|
|
|
|
|
|
|
|
## Why does MySQL say my File table is full?
|
|
|
|
|
|
|
|
A fairly common problem among MySQL users with large databases is that inserts into the File table start to fail with an error that the
|
|
|
|
table is full. There are several possible causes. For more complete
|
|
|
|
documentation, you can check the MySQL manual at
|
|
|
|
<http://dev.mysql.com/doc/refman/5.0/en/full-table.html>
|
|
|
|
|
|
|
|
The most common cause of this is that the default maximum file table
|
|
|
|
size for MyISAM tables for all versions of MySQL prior to 5.0.6 is 4GB.
|
|
|
|
|
|
|
|
You can verify this is the problem from the mysql shell:
|
|
|
|
|
|
|
|
mysql> show table status from bacula like 'File';
|
|
|
|
|
|
|
|
If the pointer size is too small, you can fix the problem by using ALTER
|
|
|
|
TABLE:
|
|
|
|
|
|
|
|
ALTER TABLE tbl_name MAX_ROWS=1000000000 AVG_ROW_LENGTH=nnn;
|
|
|
|
|
|
|
|
## What do all those job status codes mean?
|
|
|
|
|
|
|
|
Be sure to reference the full manual for a more detailed meaning of
|
|
|
|
these status codes.
|
|
|
|
|
|
|
|
Taken from jcr.h in Bacula 2.2.3.
|
|
|
|
|
|
|
|
Backup Level Code Meaning
|
|
|
|
------------------- --------------------------
|
|
|
|
F Full backup
|
|
|
|
I Incremental backup
|
|
|
|
D Differential backup
|
|
|
|
C Verify from catalog
|
|
|
|
V Verify init db
|
|
|
|
O Verify volume to catalog
|
|
|
|
d Verify disk to catalog
|
|
|
|
A Verify data on volume
|
|
|
|
B Base job
|
|
|
|
Restore or admin job
|
|
|
|
|
|
|
|
Job Type Code Meaning
|
|
|
|
--------------- -------------------------------------
|
|
|
|
B Backup
|
|
|
|
M Previous job that has been migrated
|
|
|
|
V Verify
|
|
|
|
R Restore
|
|
|
|
c Console
|
|
|
|
C Copy
|
|
|
|
I Internal system job
|
|
|
|
D Admin job
|
|
|
|
A Archive
|
|
|
|
|
|
|
|
g Migration
|
|
|
|
S Scan
|
|
|
|
|
|
|
|
NOTE: for a complete list of values see [the source
|
|
|
|
code](http://www.bacula.org/git/cgit.cgi/bacula/tree/bacula/src/jcr.h).
|
|
|
|
|
|
|
|
Job Status Code Meaning
|
|
|
|
----------------- --------------------------------------------
|
|
|
|
A Canceled by user
|
|
|
|
B Blocked
|
|
|
|
C Created, but not running
|
|
|
|
c Waiting for client resource
|
|
|
|
D Verify differences
|
|
|
|
d Waiting for maximum jobs
|
|
|
|
E Terminated in error
|
|
|
|
e Non-fatal error
|
|
|
|
f fatal error
|
|
|
|
F Waiting on File Daemon
|
|
|
|
j Waiting for job resource
|
|
|
|
M Waiting for mount
|
|
|
|
m Waiting for new media
|
|
|
|
p Waiting for higher priority jobs to finish
|
|
|
|
R Running
|
|
|
|
S Scan
|
|
|
|
s Waiting for storage resource
|
|
|
|
T Terminated normally
|
|
|
|
t Waiting for start time
|
|
|
|
|
|
|
|
## Can bacula-dir.conf include other files?
|
|
|
|
|
|
|
|
Yes, the Director configuration doesn't have to be in just one file.
|
|
|
|
You can do this:
|
|
|
|
|
|
|
|
@/path/to/file1
|
|
|
|
@/path/to/file1
|
|
|
|
|
|
|
|
In fact, the @filename can appear anywhere within the conf file where a token would be read, and the contents of the named file will be logically inserted in the place of the @filename. What must be in the file depends on the location where the @filename is specified in the conf file.
|
|
|
|
|
|
|
|
Actually, the best documentation is the section in the manual on
|
|
|
|
[Including other Configuration
|
|
|
|
Files](http://www.bacula.org/5.0.x-manuals/en/main/main/Customizing_Configuration_F.html#SECTION001723000000000000000).
|
|
|
|
|
|
|
|
## Why does Bacula crash on a "reload" command?
|
|
|
|
|
|
|
|
Typically this happens because you have configured the director and/or
|
|
|
|
storage daemon to run as the bacula user, but the configuration files
|
|
|
|
are only readable by root. They are able to read the files on initial
|
|
|
|
startup, but on subsequent reloads they have already switched to the
|
|
|
|
bacula account, and can no longer access the files. Check the
|
|
|
|
permissions on the configuration files, including any secondary file
|
|
|
|
such as included config fragments or certificate files, and make sure
|
|
|
|
they are readable by the bacula user.
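
For example, the check and fix might look like this (a sketch only; the paths, file names and the bacula group are assumptions that depend on your packaging):

    # ls -l /etc/bacula/*.conf     # check current ownership and permissions
    # chgrp bacula /etc/bacula/bacula-dir.conf /etc/bacula/bacula-sd.conf
    # chmod 640 /etc/bacula/bacula-dir.conf /etc/bacula/bacula-sd.conf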
|
|
|
|
|
|
|
|
## Can Bacula tell me how much space is left on my tapes?
|
|
|
|
|
|
|
|
The short answer is no.
|
|
|
|
|
|
|
|
The reason is that although it's possible to know the raw capacity
|
|
|
|
of each tape and how much data has been stored on each tape, hardware
|
|
|
|
tape drive compression makes it impossible to reliably know beforehand
|
|
|
|
how much raw tape capacity a given amount of data will take up.
|
|
|
|
|
|
|
|
So if you have 1G of data that has to be stored on tape, it might take
|
|
|
|
up only a few hundred megs on tape if it is highly compressible text, or
|
|
|
|
it might take up the full 1G if it is non-compressible binary data. Due
|
|
|
|
to this ambiguity and wide variation, it's not possible to tell
|
|
|
|
beforehand how much more data Bacula will be able to fit on a given
|
|
|
|
tape, even with the catalog data.
|
|
|
|
|
|
|
|
## Why is my backup larger than my disk space usage?
|
|
|
|
|
|
|
|
The most common culprit of this is having one or more sparse files.
|
|
|
|
|
|
|
|
A sparse file is one with large blocks of nothing but zeroes that the
|
|
|
|
operating system has optimized. Instead of actually storing disk blocks
|
|
|
|
of nothing but zeroes, the filesystem simply contains a note that from
|
|
|
|
point A to point B, the file is nothing but zeroes. Only blocks that
|
|
|
|
contain non-zero data are allocated physical disk blocks.
|
|
|
|
|
|
|
|
The single biggest culprit seems to be the contents of /var/log/lastlog on 64 bit systems. Since the lastlog file is extended to preallocate space for all UIDs, the switch from a 32 bit UID space to a 64 bit one increases the file's full size to over 1TB.
|
|
|
|
|
|
|
|
Luckily the fix is simple: turn on sparse file support in the FileSet, and Bacula will detect sparse files and not store the zero-fill blocks.
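
For example, a minimal FileSet with sparse file handling enabled might look like the sketch below (the name and paths are placeholders; the sparse option is the point here):

    FileSet {
      Name = "Full Set"
      Include {
        Options {
          sparse = yes
        }
        File = /
      }
    }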
|
|
|
|
|
|
|
|
Another possible cause is that your fileset accidentally includes some
|
|
|
|
folders twice. Taken [from the
|
|
|
|
manual](http://www.bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION001870000000000000000):
|
|
|
|
Take special care not to include a directory twice or
|
|
|
|
Bacula will backup the same files two times wasting a lot of space on
|
|
|
|
your archive device. Including a directory twice is very easy to do. For
|
|
|
|
example:
|
|
|
|
|
|
|
|
Include {
|
|
|
|
File = /
|
|
|
|
File = /usr
|
|
|
|
Options { compression=GZIP }
|
|
|
|
}
|
|
|
|
|
|
|
|
on a Unix system where /usr is a subdirectory (rather than a mounted
|
|
|
|
filesystem) will cause /usr to be backed up twice.
|
|
|
|
|
|
|
|
## Why do I still see old jobs in status messages after I dropped the catalog?
|
|
|
|
|
|
|
|
Because this information is kept in ordinary files on the machine each
|
|
|
|
daemon is running on, not in the catalog.
|
|
|
|
|
|
|
|
Look for files ending in .state in bacula's working directory and
|
|
|
|
delete them. All information about old jobs will be gone.
|
|
|
|
|
|
|
|
## Why do my client side scripts fail on 64 bit Windows?
|
|
|
|
|
|
|
|
Short answer: the Windows 32 bit compatibility layer of 64 bit Windows.
|
|
|
|
|
|
|
|
The Windows Bacula FD is compiled as a 32 bit application. When a 32 bit
|
|
|
|
application is run on 64 bit Windows, any access to
|
|
|
|
c:\\windows\\system32 is remapped to c:\\windows\\sysWOW64 instead.
|
|
|
|
sysWOW64 contains the 32 bit contents of the system32 directory, but may
|
|
|
|
not necessarily have all of the same files. The most likely problem
|
|
|
|
candidate is the ntbackup.exe application. Although
|
|
|
|
c:\\windows\\system32\\ntbackup.exe exists, the 32 bit Bacula FD gets
|
|
|
|
remapped to accessing c:\\windows\\sysWOW64\\ntbackup.exe, which does
|
|
|
|
not exist.
|
|
|
|
|
|
|
|
The simplest solution is to place a copy of ntbackup.exe in a different,
|
|
|
|
un-remapped directory where Bacula can access it.
|
|
|
|
|
|
|
|
## Why doesn't Bacula store configuration in the catalog database?
|
|
|
|
|
|
|
|
In short, simplicity.
|
|
|
|
|
|
|
|
The first half is keeping it simple to configure Bacula, both in terms
|
|
|
|
of writing the config files out and parsing them. The nice, simple flat
|
|
|
|
text files are easy to edit with any decent text editor out there, and
|
|
|
|
Bacula has some good utility routines that make it relatively easy to
|
|
|
|
add support for new options. If the config were to be moved into a
|
|
|
|
database, editing the configuration without a custom app written
|
|
|
|
explicitly for editing the Bacula configuration database would be much
|
|
|
|
more difficult. Likewise, representing and parsing the kind of
|
|
|
|
hierarchical data with lots of different key/value pairs is actually a
|
|
|
|
pain in the rear by comparison.
|
|
|
|
|
|
|
|
The second half is when stuff blows up. In particular, think of the
|
|
|
|
scenario where for some reason, the catalog and all backups of it have
|
|
|
|
been completely blown away, and all you have is a brand new server and a
|
|
|
|
box of tapes. In such a scenario, it's entirely possible to rebuild the
|
|
|
|
config files from scratch, and then fill up the catalog data with bscan.
|
|
|
|
This would again be much more complex if you had to generate the
|
|
|
|
configuration completely in the database.
|
|
|
|
|
|
|
|
## How To Clear Your Console History
|
|
|
|
|
|
|
|
It may be desirable, particularly for newcomers who are first learning
|
|
|
|
about Bacula and running through the various tutorials, to clear the
|
|
|
|
Bacula Console's history. This can be done by first shutting down the Bacula daemon(s) running on your machine and then removing the "state" files from the Bacula Working directory. For example:
|
|
|
|
|
|
|
|
    # cd /opt/local/var/bacula/working
    # rm bacula-dir.9101.state bacula-fd.9102.state bacula-sd.9103.state
|
|
|
|
|
|
|
|
## Fixing "Table 'bacula.batch' doesn't exist"
|
|
|
|
|
|
|
|
Newer versions of Bacula create a temporary working table in order to do
|
|
|
|
batch inserts, which can greatly speed up inserting attributes into the
|
|
|
|
catalog. Unfortunately, if the MySQL connection gets dropped for any
|
|
|
|
reason (such as a timeout) the temporary table goes away. If you're
|
|
|
|
seeing these problems, there are two workarounds.
|
|
|
|
|
|
|
|
The simplest one is to turn off batch inserts. This will revert to the
|
|
|
|
older, somewhat slower behavior, but it should avoid this particular
|
|
|
|
glitch.
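
Batch inserts are selected at build time, so turning them off means rebuilding Bacula. A sketch, assuming you build from source (check ./configure --help to confirm the exact option name for your version, and add whatever other options you normally use):

    ./configure --disable-batch-insert
    make && make install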
|
|
|
|
|
|
|
|
The other option is to alter the MySQL timeouts to a sufficiently long
|
|
|
|
value that the connection never gets yanked out from under Bacula. For
|
|
|
|
example, adding these two lines to your my.cnf file in the [mysqld]
|
|
|
|
block will set the relevant timeouts to 8 days.
|
|
|
|
|
|
|
|
wait_timeout=691200
|
|
|
|
interactive_timeout=691200
|
|
|
|
|
|
|
|
Future versions of Bacula will automatically set the timeout values, which should prevent the problem without requiring changes to the global MySQL timeout values.
|
|
|
|
|
|
|
|
## Why can't Bacula see mapped drives on Windows?
|
|
|
|
|
|
|
|
When a drive is mapped, not all users on that machine are able to see
|
|
|
|
it; this even applies to users like Administrator. See
|
|
|
|
<http://support.microsoft.com/kb/149984> for more details.
|
|
|
|
|
|
|
|
A good workaround is to have the client map the drive before the job.
|
|
|
|
Thus, in `bacula-dir.conf` you'd use the `ClientRunBeforeJob`
|
|
|
|
directive:
|
|
|
|
|
|
|
|
Job {
|
|
|
|
Name = "client-1-x_drive"
|
|
|
|
JobDefs = "DefaultJob"
|
|
|
|
Client = "client-1-fd"
|
|
|
|
FileSet = "windows_x_drive"
|
|
|
|
# See http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg34801.html
|
|
|
|
ClientRunBeforeJob = "c:/bacula-netuse.bat"
|
|
|
|
}
|
|
|
|
|
|
|
|
On the client itself, `bacula-netuse.bat` would simply look like this:
|
|
|
|
|
|
|
|
net use x: /delete
|
|
|
|
net use x: \\server\path\to\share
|
|
|
|
REM Can also use "net use x: \\10.0.0.1\path\to\share"
|
|
|
|
|
|
|
|
## Fixing Corrupted Batch Table
|
|
|
|
|
|
|
|
Bacula uses a temporary table called "batch" to speed up inserting
|
|
|
|
attributes into the catalog. In some cases, MySQL will complain that the
|
|
|
|
table became corrupted, which in turn causes the backup job to fail. A
|
|
|
|
typical error message would look something like this:
|
|
|
|
|
|
|
|
17-Apr 16:26 gaff-dir JobId 1: Fatal error: sql_create.c:732 sql_create.c:732 insert INSERT INTO batch VALUES
|
|
|
|
(2094743,1,'/home/gary/devel/LMS7.0.0/src/lib/.svn/prop-base/','WebImportCycleRatings.pm.svn-base','P4A EMXG IEk B Pt Pt A i BAA I BIBr+D BHITLl BHITLm A A E','JebC91WLdIQADU0JDepbkg') failed:
|
|
|
|
Incorrect key file for table '/tmp/#sql136f_26_0.MYI'; try to repair it
|
|
|
|
|
|
|
|
17-Apr 16:26 gaff-dir JobId 1: Fatal error: catreq.c:482 Attribute create error.
|
|
|
|
sql_get.c:1005 Media record for Volume "full-0001" not found.
|
|
|
|
|
|
|
|
The most common cause is simply that the partition holding the temporary
|
|
|
|
table ran out of disk space part way through the backup job.
|
|
|
|
|
|
|
|
Note the path shown for the table. Temporary tables are not stored in
|
|
|
|
the same location as persistent ones. Instead, they're stored in
|
|
|
|
whatever directory MySQL is configured to use as a temporary directory,
|
|
|
|
usually /tmp. This means that it is quite possible to have a dedicated
|
|
|
|
MySQL partition with plenty of space, but still run out of space for
|
|
|
|
temporary tables.
|
|
|
|
|
|
|
|
## Why do I get error inc_conf.c:332 when I use compression=gzip?
|
|
|
|
|
|
|
|
1\. My bacula-dir.conf FileSet setting:
|
|
|
|
|
|
|
|
FileSet {
|
|
|
|
|
|
|
|
Name = "192.168.101.239-dir"
|
|
|
|
Include =compression=gzip {
|
|
|
|
File = /usr/local
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
2\. System and zlib information:
|
|
|
|
|
|
|
|
    [root@bacula3 bacula]# rpm -qa | grep zlib
    zlib-devel-1.2.3-3
    zlib-1.2.3-3
    [root@bacula3 bacula]# ls -al /usr/lib/libz.a
    -rwxr-xr-x 1 root root 92598 Jan 10 2007 /usr/lib/libz.a
    [root@bacula3 bacula]# uname -a
    Linux bacula3 2.6.18-53.el5 #1 SMP Mon Nov 12 02:22:48 EST 2007 i686 i686 i386 GNU/Linux
|
|
|
|
|
|
|
|
3\. The library is found by Bacula during ./configure; it is mentioned in the config.out output as follows:
|
|
|
|
|
|
|
|
Configuration on Sun Oct 19 17:17:43 CST 2008:
|
|
|
|
|
|
|
|
Host: i686-pc-linux-gnu -- redhat
|
|
|
|
Bacula version: 2.4.3 (10 October 2008)
|
|
|
|
Source code location: .
|
|
|
|
Install binaries: /usr/sbin
|
|
|
|
Install config files: /etc/bacula
|
|
|
|
Scripts directory: /etc/bacula
|
|
|
|
Archive directory:
|
|
|
|
Working directory: /var/bacula
|
|
|
|
PID directory: /var/run
|
|
|
|
Subsys directory: /var/lock/subsys
|
|
|
|
Man directory: ${datarootdir}/man
|
|
|
|
Data directory: ${prefix}/share
|
|
|
|
C Compiler: gcc 4.1.2
|
|
|
|
C++ Compiler: /usr/bin/g++ 4.1.2
|
|
|
|
Compiler flags: -g -Wall -fno-strict-aliasing -fno-exceptions -fno-rtti
|
|
|
|
Linker flags:
|
|
|
|
Libraries: -lpthread
|
|
|
|
Statically Linked Tools: no
|
|
|
|
Statically Linked FD: no
|
|
|
|
Statically Linked SD: no
|
|
|
|
Statically Linked DIR: no
|
|
|
|
Statically Linked CONS: no
|
|
|
|
Database type: MySQL
|
|
|
|
Database lib: -L/usr/lib/mysql -lmysqlclient_r -lz
|
|
|
|
Database name: bacula
|
|
|
|
Database user: bacula
|
|
|
|
|
|
|
|
Job Output Email: root@localhost
|
|
|
|
Traceback Email: root@localhost
|
|
|
|
SMTP Host Address: localhost
|
|
|
|
|
|
|
|
Director Port: 9101
|
|
|
|
File daemon Port: 9102
|
|
|
|
Storage daemon Port: 9103
|
|
|
|
|
|
|
|
Director User:
|
|
|
|
Director Group:
|
|
|
|
Storage Daemon User:
|
|
|
|
Storage DaemonGroup:
|
|
|
|
File Daemon User:
|
|
|
|
File Daemon Group:
|
|
|
|
|
|
|
|
SQL binaries Directory /usr/bin
|
|
|
|
|
|
|
|
Large file support: yes
|
|
|
|
Bacula conio support: yes -ltermcap
|
|
|
|
readline support: no
|
|
|
|
TCP Wrappers support: no
|
|
|
|
TLS support: no
|
|
|
|
Encryption support: no
|
|
|
|
ZLIB support: yes
|
|
|
|
enable-smartalloc: yes
|
|
|
|
bat support: yes
|
|
|
|
enable-gnome: yes Version 2.x
|
|
|
|
enable-bwx-console: no
|
|
|
|
enable-tray-monitor:
|
|
|
|
client-only: no
|
|
|
|
build-dird: yes
|
|
|
|
build-stored: yes
|
|
|
|
ACL support: yes
|
|
|
|
Python support: no
|
|
|
|
Batch insert enabled: yes
|
|
|
|
|
|
|
|
4\. Error detail:
|
|
|
|
|
|
|
|
    [root@bacula3 bacula]# ./bacula start
    Starting the Bacula Storage daemon
    Starting the Bacula File daemon
    Starting the Bacula Director daemon
    19-Oct 18:05 bacula-dir: ERROR TERMINATION at inc_conf.c:332
    Config error: Old style Include/Exclude not supported
                : line 186, col 16 of file /etc/bacula/bacula-dir.conf
      Include =compression=gzip {
|
|
|
|
|
|
|
|
5\. Problem solved; use the new-style Include syntax:
|
|
|
|
|
|
|
|
Include {
|
|
|
|
|
|
|
|
File = /
|
|
|
|
File = /usr
|
|
|
|
Options { compression=gzip }
|
|
|
|
}
|
|
|
|
|
|
|
|
## How can I view the Jobs associated with a Tape?
|
|
|
|
|
|
|
|
To resolve this I found it easiest to go into the bacula database and run a SQL query to find the appropriate information. To get to our database (PostgreSQL) we needed to enter the database as the postgres user.
|
|
|
|
|
|
|
|
[root@bacula]# su - postgres
|
|
|
|
-bash-3.1$ psql bacula
|
|
|
|
|
|
|
|
Here is my SQL statement.
|
|
|
|
|
|
|
|
select distinct (job.jobid),jobmedia.mediaid,job.name,job.joberrors,job.level,job.realendtime
|
|
|
|
from jobmedia
|
|
|
|
left join job on jobmedia.jobid=job.jobid
|
|
|
|
where mediaid = 76
|
|
|
|
order by job.jobid desc;
|
|
|
|
|
|
|
|
Just a note: if you want to filter or sort on additional columns, text values must be enclosed in single quotes.
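
For example, a variant of the query above that filters by volume name instead of MediaId needs the quotes (a sketch; 'full-0001' is just a placeholder volume name):

    select distinct job.jobid, job.name, job.level, job.realendtime
    from jobmedia
      left join job on jobmedia.jobid = job.jobid
      left join media on jobmedia.mediaid = media.mediaid
    where media.volumename = 'full-0001'
    order by job.jobid desc;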
|
|
|
|
|
|
|
|
## Slow backup or restore? How to get better speed
|
|
|
|
|
|
|
|
Good backup speed comes from fast hard drives, a fast interface between the disks and the motherboard, a fast network, and enough memory and processor power. Using Accurate backups causes lengthy SQL statements that take time. Total backup time is built up from the time bacula-director needs to work out what to back up, plus the time bacula-fd (the file daemon) needs to compress the data and send it to bacula-sd (the storage daemon). bacula-director uses SQL software such as PostgreSQL, MySQL or MariaDB (MariaDB is a drop-in replacement for MySQL), and its parameters, as well as the choice of database engine (MyISAM/Aria, InnoDB or TokuDB), are as important as the hardware. The SQL operations can take much longer than compressing, transferring and writing the backup.
|
|
|
|
|
|
|
|
## bacula-sd.conf speed-related parameters
|
|
|
|
|
|
|
|
You can get better backup speed by adjusting:
|
|
|
|
|
|
|
|
Maximum File Size
|
|
|
|
Maximum Block Size
|
|
|
|
Maximum Network Buffer Size
|
|
|
|
|
|
|
|
WARNING: Maximum Block Size cannot be changed on the fly. You can only restore a backup using the same block size with which it was written.
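
As a sketch only (the device name and values below are illustrative placeholders, not recommendations; tune them for your own drive and network), these directives can be set in the Device resource of bacula-sd.conf:

    Device {
      Name = LTO4-drive
      Media Type = LTO-4
      Archive Device = /dev/nst0
      Maximum File Size = 5G
      Maximum Block Size = 262144
      Maximum Network Buffer Size = 65536
    }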
|
|
|
|
|
|
|
|
## Jobs with Accurate filesets take forever (DEPRECATED)
|
|
|
|
|
|
|
|
Several things can cause a speed drop: indexing, the size of the backup, and the database engine, among others. A quick solution is to turn OFF accurate mode if you can do FULL backups, but still go through all of the suggestions here; one of them may solve your overall problem.
|
|
|
|
|
|
|
|
### Indexes: missing or too many
|
|
|
|
|
|
|
|
With newer versions (5.0.1 and later), the speed should be correct and no new indexes are needed. Please never add this index on a production server:
|
|
|
|
|
|
|
|
CREATE INDEX FilenameId_2 ON File (FilenameId, PathId); -- NEVER ADD THIS INDEX!!!!
|
|
|
|
|
|
|
|
If you had previously added that index, you may remove it with the command:
|
|
|
|
|
|
|
|
ALTER TABLE File DROP INDEX FilenameId_2;
|
|
|
|
|
|
|
|

|
|
|
|
|
|
|
|
See more benchmarks on [this page](http://sourceforge.net/apps/wordpress/bacula/2009/09/28/performance-issue-with-a-useless-index-on-postgresql/) and [this one](http://sourceforge.net/apps/wordpress/bacula/2009/09/29/sqlite-index-update/).
|
|
|
|
|
|
|
|
### Size of backup
|
|
|
|
|
|
|
|
Backing up a basic operating system distribution (with accurate backup) needs around 16GB of free RAM. If there is less, the OS starts swapping and performance drops dramatically.
|
|
|
|
|
|
|
|
### Optimize SQL
|
|
|
|
|
|
|
|
With MySQL you can start with mysqltuner.pl.
|
|
|
|
|
|
|
|
### Database engine
|
|
|
|
|
|
|
|
MyISAM and InnoDB are old database engines. The main reason SSD drives are not used with SQL is that the SQL server keeps rewriting the same locations, which wears out current SSD drives quickly. There has also been little development of these engines in decades. The TokuDB engine works with SSDs and can be up to 25x faster than the old-time engines. You can use TokuDB from www.tokutek.com with MySQL. I am currently testing MariaDB (www.mariadb.com) with the TokuDB engine on SSD drives, and with Aria (MariaDB's crash-proof variation of MyISAM) on normal hard disks. The TokuDB engine is quite smart: faster searches, better compression, and, for example, the database is not totally locked while Bacula does lengthy INSERT or SELECT operations. The MariaDB beta also has parallel search functions; I am starting to test real-time replication between twin machines with parallel searching this week, which could be a smart way to have a "realtime backup" and more search power!
|
|
|
|
|
|
|
|
### Restore takes a long time to retrieve SQL results from MySQL catalog
|
|
|
|
|
|
|
|
Also known as: Restore hanging on "Building directory tree".
|
|
|
|
|
|
|
|
At least if you use MySQL and bacula 5.0.x, especially if your File table is big (dozens of millions of records and up), there are big performance issues: SQL queries can take many minutes or even dozens of hours to complete (on both MyISAM and InnoDB)!
|
|
|
|
|
|
|
|
The main issue seems to be additional indexes. *Removing* them allows query times to drop back to 3.0.3 speeds (so they take 5 minutes instead of 10 hours). Additional indexes are supposed to only (slightly) slow down the creation of new records, but because the MySQL engine sometimes chooses totally inappropriate indexes, they can enormously slow down the complex SELECTs that Bacula uses.
|
|
|
|
|
|
|
|
Most notably, you need to drop all the indexes from the File table except the primary key and the indexes on (`JobId`,`PathId`,`FilenameId`) and (`JobId`).
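
To see what is currently defined and drop an extra index on MySQL (the index name below is a placeholder; substitute whatever SHOW INDEX reports):

    SHOW INDEX FROM File;
    ALTER TABLE File DROP INDEX name_of_extra_index;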
|
|
|
|
|
|
|
|
Note that dbcheck(8), as provided by bacula, will create such indexes if allowed (it should clean them up when it finishes, but aborting it might leave them in the database). Also, the mysql table-creation script has comments about adding some indexes to speed up Verify jobs -- **DO NOT DO THAT**, as it will slow everything down to the extreme instead!
|
|
|
|
|
|
|
|
You should periodically do "analyze table" (if InnoDB) or "optimize table" (if MyISAM).
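
For the Bacula File table that would be, for example:

    ANALYZE TABLE File;    -- InnoDB
    OPTIMIZE TABLE File;   -- MyISAM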
|
|
|
|
|
|
|
|
Also, there is a [memory leak bug in mysql](http://bugs.mysql.com/bug.php?id=27732) (in versions less than 5.0.60 / 5.1.24 / 6.0.5, such as the MySQL in Debian Lenny), which will make your server go into swap, become extremely slow, and maybe even trigger the out-of-memory killer. Also, you will probably need to tune your MySQL: not just the regular key_buffer and/or innodb_buffer_pool_size, but also others like join_buffer_size, max_heap_table_size, tmp_table_size, sort_buffer_size, read_buffer_size and read_rnd_buffer_size.
|
|
|
|
|
|
|
|
See for example [bacula bug 1472](http://bugs.bacula.org/view.php?id=1472), which didn't get solved by moving to InnoDB (although a mysql upgrade did solve the excessive memory usage and allowed the queries to complete -- but they still take too long in 5.0.1). Note that 3.0.3 didn't have those problems (or they were much faster anyway); the longest queries on 3.0.3 took about 5-10 minutes for the same dataset size on the same mysql/hardware.
|
|
|
|
|
|
|
|
Moving from MySQL to PostgreSQL should make it work much better, due to different (more optimized) queries and a different SQL engine.
|
|
|
|
|
|
|
|

|
|
|
|
|
|
|
|
In one benchmark, PostgreSQL was 6 times faster in the file selection process.
|
|
|
|
|
|
|
|

|
|
|
|
|
|
|
|
In another benchmark, selecting 7 million files took about 5 minutes on MySQL; it is much faster with PostgreSQL, but it is not a big deal for this kind of job.
|
|
|
|
|
|
|
|
Greatly reducing the retention periods for files helps a lot (but is a problem if you need to restore from older backups...).
|
|
|
|
|
|
|
|
## My backup starts, but dies after a while with a "Connection reset by peer" error
|
|
|
|
|
|
|
|
This is usually due to some router/firewall having a connection timeout, which kills connections that are "idle" for too long. To fix this, you should do **both** of the following:
|
|
|
|
|
|
|
|
(1) Activate keepalive by adding
|
|
|
|
|
|
|
|
Heartbeat Interval = 60
|
|
|
|
|
|
|
|
in all of SD, FD and DIR configurations. Note that by itself this still
|
|
|
|
won't be enough if you have some slow operations like SQL queries,
|
|
|
|
accurate backups enabled, etc. You need the second point too.
|
|
|
|
|
|
|
|
(2) Lower the system idle time before keepalives are sent
|
|
|
|
|
|
|
|
Setting the system SO_KEEPALIVE timeouts is needed because the defaults might be quite long (like 2 hours of inactivity, or even longer, before the system starts to send keepalives). See
|
|
|
|
<http://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/> for
|
|
|
|
instructions for GNU/Linux systems, or some other systems at
|
|
|
|
<http://www.gnugk.org/keepalive.html> or
|
|
|
|
<http://www.starquest.com/Supportdocs/techStarLicense/SL002_TCPKeepAlive.shtml>
|
|
|
|
|
|
|
|
You can check whether the SO_KEEPALIVE timeout is set up correctly by restarting bacula and starting a new job, and then checking the current state of TCP connections with "netstat -to". Here is how it looks when it is **WRONG** and needs fixing (this example shows it will not start sending keepalives for about 7200 seconds, that is, 2 hours!):
|
|
|
|
|
|
|
|
# netstat -to
|
|
|
|
tcp 0 0 client:9102 server:54043 ESTABLISHED keepalive (7196.36/0/0)
|
|
|
|
tcp 0 0 client:43628 server:9103 ESTABLISHED keepalive (7197.26/0/0)
|
|
|
|
|
|
|
|
In that case you can try setting the system defaults to a lower value, for example on GNU/Linux systems (or check the URLs above for other systems) with:
|
|
|
|
|
|
|
|
sysctl -w net.ipv4.tcp_keepalive_time=60
|
|
|
|
|
|
|
|
Put that in /etc/sysctl.conf or /etc/sysctl.d/* to keep it across reboots.
|
|
|
|
|
|
|
|
Alternatively, one could try to increase the router/firewall timeouts and/or the number of simultaneous connections, and/or reduce the time needed for the backup (turning off accurate backups, reducing filesets, etc.), or reduce the time the network connection is idle during the backup (for example, running a Full backup instead of an Incremental will take longer, but the network connection will be idle for much less time, as bacula won't have to check whether files have changed, which can take some time).
|
|
|
|
|
|
|
|
Some routers/firewalls (those having connection tracking / stateful firewall / NAT capabilities) will also reset all running connections if they reboot. Not much helps here, apart from avoiding the conditions which may make them reboot, or turning off their connection tracking / firewall / NAT (which might make them useless, of course).