View Issue Details
ID: 1137
Category: [bareos-core] General
Severity: text
Reproducibility: always
Date Submitted: 2019-11-12 00:22
Last Update: 2019-11-20 23:49
Reporter: embareossed
Platform: Linux
OS: Debian
OS Version: 9
Priority: normal
Status: new
Product Version: 17.2.4
Resolution: open
Summary: Maximum Network Buffer Size must be set for network operations
Description: In attempting to copy a job from the source bareos system 1 (with its own director, storage, and file daemons) to the target bareos system 2 (also with its own director, storage, and file daemons), the job initially failed with an error which appeared in the syslog as:

bareos-sd: bsock_tcp.c:591 Packet size too big from client:192.168.xx.xx:9103

The documentation at https://docs.bareos.org/Configuration/FileDaemon.html#config-Fd_Client_MaximumNetworkBufferSize says the default is 65536. However, running bareos-sd -xc reveals:

MaximumNetworkBufferSize = 0
(repeated for each storage and device resource, of course)

Setting this value to a larger number, such as the supposed default value of 65536, permits the copy job to proceed.
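As a workaround until the defaults are fixed, the directive can be set explicitly in the storage daemon configuration. A minimal sketch (device name, path, and media type are illustrative; the directive would need to be set on both the source and target storage daemons):

Device {
  Name = FileStorage
  Archive Device = /var/lib/bareos/storage
  Media Type = File
  Maximum Network Buffer Size = 65536   # the documented default, set explicitly
}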

Tags:
Steps To Reproduce: Set MaximumNetworkBufferSize to zero, or omit the setting entirely in the Storage and Device resources of both the source and target systems involved in the network operation (copy, migrate, or maybe a distributed configuration?).

Attempt the network operation. A simple test could be running bconsole and requesting status on the remote storage daemon.
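For example (storage resource name is illustrative):

echo 'status storage=File' | bconsole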
Additional Information: The source system is running bareos 16.2, whereas the target system is running 17.2.

The documentation needs to be updated to instruct the administrator to set these values appropriately for network operations, or else the defaults in the code itself need to be set to the default values indicated in the current documentation.
System Description
Attached Files:
Notes
(0003626)
embareossed   
2019-11-12 00:26   
Incidentally, the Storage Resource and the Device Resource of the storage daemon both support the MaximumNetworkBufferSize option. However, only the Device Resource has detailed information on the option. It might be a good idea either to reference the Device Resource documentation for more on this option, or to add the same information to the Storage Resource.
(0003627)
embareossed   
2019-11-12 00:31   
The link I included in the description is actually the wrong one; it should be https://docs.bareos.org/Configuration/StorageDaemon.html#config-Sd_Device_MaximumNetworkBufferSize. However, the information there is the same as for Storage and Device resources.

View Issue Details
ID: 1135
Category: [bareos-core] director
Severity: feature
Reproducibility: N/A
Date Submitted: 2019-11-11 02:38
Last Update: 2019-11-20 23:49
Reporter: embareossed
Platform: Linux
OS: Debian
OS Version: 9
Priority: normal
Status: new
Product Version: 17.2.4
Resolution: open
Summary: Means to limit the total size of a backup
Description: If a backup suddenly grows due to an explosion of file growth in one or more of the directories in a fileset, it can use up a lot of resources, including time and storage. It may be that the additional data is due to a bug or some other circumstance that needs to be corrected at some point, or that for some other reason some data is not needed or wanted for long-term storage in a backup.

An example of this could be when a developer is testing their work in the same directory with their source files, perhaps creating a lot of, or large, log files or other output during normal work. There are other scenarios. Developers and administrators should always take care to avoid accumulation of files, but sometimes they are required for some time or may be overlooked. Whatever the case, there is always a chance that a backup could grow by large amounts.

Currently, I have a before-run script that calls estimate and decides whether to run the backup based on the result; a sketch of this approach is shown below. It would be more efficient if bareos had a built-in feature to limit the size of a backup, either by its total projected size or perhaps by the total number of media volumes it is expected to use. My script has worked well for years, but it is a clumsy approach better served by built-in facilities in bareos. I hope you will consider adding this as a feature in the future.
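For reference, a minimal sketch of that before-run check as a "Run Before Job" shell script; the job name, the byte limit, and the parsing of bconsole's "2000 OK estimate files=... bytes=..." output line are all illustrative assumptions:

#!/bin/sh
# Abort the backup when the estimated size exceeds a limit.
LIMIT=100000000000   # bytes; arbitrary threshold

BYTES=$(echo "estimate job=mybackup level=Incremental" | bconsole \
        | sed -n 's/.*bytes=\([0-9,]*\).*/\1/p' | tr -d ',')

if [ -n "$BYTES" ] && [ "$BYTES" -gt "$LIMIT" ]; then
    echo "Estimated $BYTES bytes exceeds limit $LIMIT; aborting job" >&2
    exit 1   # a non-zero exit from "Run Before Job" fails the job before it writes data
fi
exit 0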
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1133
Category: [bareos-core] General
Severity: minor
Reproducibility: always
Date Submitted: 2019-11-06 11:16
Last Update: 2019-11-20 23:48
Reporter: embareossed
Platform: Linux
OS: Debian
OS Version: 9
Priority: normal
Status: new
Product Version: 17.2.4
Resolution: open
Summary: Restore Job always requires Pool to be specified
Description: One may not know which pool a certain file or set of files one wants to restore might reside in. This is particularly true if one is using a Full-Differential-Incremental backup scheme.

As it stands, however, the restores do work. But having to specify a parameter which in many cases is unknown, or even irrelevant, is an annoyance.
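For context, a minimal sketch of the kind of restore job this affects (all resource names are illustrative); it is the Pool line here that cannot simply be omitted:

Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = client1-fd
  FileSet = "LinuxAll"
  Storage = File
  Messages = Standard
  Pool = Incremental   # required by the director, even when the files to
                       # restore actually reside in a different pool
  Where = /tmp/bareos-restores
}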
Tags:
Steps To Reproduce: Remove the "Pool = " directive from a job (and its jobdefs if jobdefs are used in that job).
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1132
Category: [bareos-core] documentation
Severity: minor
Reproducibility: always
Date Submitted: 2019-11-06 11:12
Last Update: 2019-11-20 23:48
Reporter: embareossed
Platform: Linux
OS: Debian
OS Version: 9
Priority: normal
Status: new
Product Version: 17.2.4
Resolution: open
Summary: How to make Counter Resources increment
Description: BareOS Counter Resources do not automatically increment themselves; this must be done by nudging the corresponding counter variable appropriately. In addition, one must know the exact syntax to use.

Merely creating a tape label with, e.g.:

Label Format = "DIFF-$DiffSeq"

does NOT increment the value of "DiffSeq," which will cause backups to fail and send annoying intervention emails. The error indicates that the director could not find an appendable volume.

There does not appear to be documentation on how to actually use counter variables corresponding to Counter Resources. I searched high and low for hours and only discovered the solution by examining a google hit on Bacula, not BareOS. Apparently, Bacula has some sophisticated facilities for generating tape labels from counter variables; I have not tried them all. However, the Bacula documentation on this topic was enough (along with my own experience and knowledge of *nix-type systems) to help me figure out how to do something similar in BareOS. I changed the directive to:

Label Format = "DIFF-${DiffSeq+}"

Note that the plus sign (the '+') MUST be INSIDE the curly braces. I have not tried padding and other features the Bacula documentation discusses as I feel I have spent enough time on this already. I use "faux" padding by starting my sequences in the Counter Resource at 1000; this is satisfactory for my own use. Others might try experimenting with some other features, if they are implemented in Bareos also.

BTW, Counter Variables really do work in BareOS. The secret is knowing exactly how to use them.

The Bacula doc on this topic is at: https://www.bacula.org/9.0.x-manuals/en/misc/Variable_Expansion.html

I think an update to the BareOS documentation on the use, very specifically, of counter variables would be appropriate.
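For reference, a minimal sketch of a working pair of resources based on the findings above (names and number ranges are illustrative, and the Catalog directive, which stores the counter in the database, is an assumption):

Counter {
  Name = DiffSeq
  Minimum = 1000        # start at 1000 for the "faux" padding described above
  Maximum = 9999
  Catalog = MyCatalog
}

Pool {
  Name = Differential
  Pool Type = Backup
  Label Format = "DIFF-${DiffSeq+}"   # the '+' INSIDE the braces increments the counter
}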

Tags:
Steps To Reproduce: Create a Counter Resource and reference it with a simple shell-like substitution like $CounterVariable.
Additional Information:
System Description
Attached Files:
Notes
(0003614)
embareossed   
2019-11-06 11:20   
Incidentally, if this is fixed in post-17.2 releases, do not immediately close this issue. The webui in 18.2 has some serious defects. Short of using a mix of 17.2 serverware and 18.2 webui clientware (which may be a bit of a configuration hassle on some platforms depending on packaging system), using BareOS 18.2 at this time is not possible. Production shops, and many users preferring stability over cutting edges, could well decide to avoid nightlies.

Please leave this open so others can see they are not losing their minds. Thanks.

View Issue Details
ID: 937
Category: [bareos-core] file daemon
Severity: minor
Reproducibility: always
Date Submitted: 2018-04-08 02:50
Last Update: 2019-11-20 23:47
Reporter: bozonius
Platform: Linux
OS: Ubuntu
OS Version: 14.04
Priority: low
Status: new
Product Version: 16.2.4
Resolution: open
Summary: estimate causes error if time on filedaemon and director are out of sync
Description: If the bareos-dir system and the bareos-fd system are not in time sync (by more than hardcoded 3 second threshold), the file daemon will generate an error message like "JobId 0: Error: getmsg.c:212 Malformed message: DIR and FD clocks differ by -35 seconds, FD automatically compensating" where the call is

Jmsg(jcr, type, 0, _("DIR and FD clocks differ by %lld seconds, FD automatically compensating.\n"), adj);

in the file daemon from about line 1565 in dir_cmd.cc. The problem is that the director code handles this message as an exception case but cannot parse it at about line 212 in bget_dirmsg() in the director code. In my configuration, it generates an email and a log entry.

In some ways, it is helpful to get these alerts in emails, but the message could be handled more consistently like others the director expects, and it would be nice if the client were identified. Otherwise, it is guesswork figuring out which client is out of time sync. For me, there are only a few clients, but a data center might have a greater challenge. Of course, I'd expect a data center to have this sort of business much better under control...

This is low priority and mainly a nuisance. It should not be too difficult to either 1) change the calling code (level_cmd in dir_cmd.cc) to issue an error message in a format that is parsable by the catchall case in the director, or 2) add a specific case handler for time sync problem in the director code (bget_dirmsg in getmsg.cc). It would be very helpful to include the client name or backup name in the message to aid in resolving such an issue.
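For illustration only, a hypothetical sketch of the suggested change, extending the quoted Jmsg call to name the client (the jcr->client_name field is an assumption and has not been verified against the source):

Jmsg(jcr, type, 0, _("DIR and FD clocks differ by %lld seconds on client %s, FD automatically compensating.\n"), adj, jcr->client_name);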
Tags:
Steps To Reproduce: In bconsole, run

echo 'estimate accurate=yes level=incremental job=backup_name' | bconsole

where the client of job "backup_name" is a system whose time is out of sync with the director by more than 3 seconds. See the resulting bareos.log for the message (assuming Daemon Messages are configured to log errors).
Additional Information:
System Description
Attached Files:
Notes
(0002975)
aron_s   
2018-04-27 13:26   
I could not reproduce this problem. In my test environment, the Director and storage daemon both run virtually and the filedaemon runs on an OS X system.
Time was set 30 seconds off, yet the estimation did not return any errors.
(0002979)
joergs   
2018-04-27 14:56   
A quick look at the code indicates that this warning is only produced if level=since, not level=incremental. Are you sure it happens on your system with level=incremental?
(0002981)
bozonius   
2018-04-27 18:28   
@joergs: level can be incremental or full. I've not tried other variations.

@aron_s: maybe the code has been corrected since the release I am using (you did not mention yours)? I did not do version comparisons of source code.
(0002982)
joergs   
2018-04-27 18:47   
I checked against 16.2.7 and now also verified that the code has not changed between 16.2.4 and 16.2.7.

Setting the filed to debug level 10 (very much output!) will produce a trace containing the line "level_cmd: ...". It must be since_utime in your case, maybe because of accurate.

Debug level can be changed during runtime using
setdebug client=<NAME> level=10 trace=1

Result will be written to a trace file.
(0002983)
bozonius   
2018-04-28 02:20   
@joergs: I just set one of the clients to a time about 10 mins earlier than the director's (and storage's, since they are on the same system). I re-ran my experiment, just as before, and I get the same result as I reported originally.

I tried the setdebug as you described. The audit log indicates that the command was received; no error there. So I am assuming that debugging is set, yet the main bareos log does not show any new messages other than the usual ones.

The client I tested against is running bareos-fd 14.2.1, which is a bit older, and may explain why you don't see the same problem. And also maybe why the debugging doesn't kick in.

I have upgraded the client as much as I can using the repos, short of building the client myself. I'll see if anti-X might have newer package updates for the client.

The other clients are also at various levels of the client (filedaemon).
(0002984)
bozonius   
2018-04-28 02:26   
Would running bareos in compatibility mode help to eliminate these messages, or would that just introduce more problems?
(0002985)
joergs   
2018-04-28 12:40   
Debug is not written to the main bareos.log file, but to a local trace file. Issuing the setdebug command tells you where to find this file.

Compatibility mode should have no impact on this.

So the error did only occur with old bareos filedaemons? Sorry, but the version should be at least 16.2, better 7.2 or master (experimental, nightly) to be relevant for us. What platform are the clients running on?
(0002986)
bozonius   
2018-04-28 13:41   
@joergs: "Local?" to what? Local to the system where the setdebug was run, or local to where the filedaemon is running? Or local to where the director is running? I just ran setdebug again, and it did not indicate where the output was going.

Could not parse: at least 16.2, better 7.2 -- what does this mean?

Anti-X repo version is 14.2.1. There is no update available. However, keep in mind that other clients have seen this error, and they are running other versions of the file daemon software (depending on the distro). There's a couple of debian variants, and an arch-based system.
(0002987)
joergs   
2018-04-28 15:24   
Local: local on the system that runs the daemon. In case of "setdebug client=<NAME>" it is on the client.

at least 16.2, better 7.2 => at least 16.2, better 17.2
(0002988)
bozonius   
2018-04-28 18:56   
(Last edited: 2018-04-28 19:07)
OK, so I have setdebug as requested, found the trace file (it was not obvious to me where "current directory" might be), and yes, found since_utime= entries in the output.

But I assure you I am running these backups ONLY as full or incremental. I do not use any others (even differential). Here is an extract from the trace file:

$ grep level_cmd /shared/antix-fd.trace
antix-fd: dir_cmd.c:1228-0 level_cmd: level = accurate_incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830415 mtime_only=0 prev_job=antix-sys.2018-04-27_05.00.00_10
antix-fd: dir_cmd.c:1228-0 level_cmd: level = accurate_incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830415 mtime_only=0 prev_job=antix-sys.2018-04-27_05.00.00_10
antix-fd: dir_cmd.c:1228-0 level_cmd: level = incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830415 mtime_only=0 prev_job=antix-sys.2018-04-27_05.00.00_10
antix-fd: dir_cmd.c:1228-0 level_cmd: level = accurate_incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830438 mtime_only=0 prev_job=antix-user.2018-04-27_05.00.01_11
antix-fd: dir_cmd.c:1228-0 level_cmd: level = incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830438 mtime_only=0 prev_job=antix-user.2018-04-27_05.00.01_11

(0003009)
bozonius   
2018-05-22 00:39   
This error has surfaced again, today. I have searched the logs in /var/log/bareos, /var/log/syslog, and my own logging and cannot force an explanation out of any of them.

The strangest part of this is that the first of two jobs failed on just two Vbox VM filedaemons, while the other of the two ran fine on each of the two systems. These run smoothly every day, nearly forever (I'm not complaining, believe me). It's just that every so often, I see this crop up. I might not see it again for a long time.

I note that some bareos tools do more audit logging than others. For instance, the audit log for today is relatively "quiet" for the jobs I run overnight, but the log gets noisy, logging every SQL query, when I run BAT. I will see if I can get more "noise" from bconsole.

The director (dir 16.2.4) and storage (sd 16.2.4) daemons are still running on Ubuntu 14.04. The clients affected today were Devuan Jessie (fd 14.2.1) and Anti-X (fd 14.2.1). As I mentioned, these backups run without serious* issues over 99% of the time, which makes this an intermittent problem, and thus one that is hard to nail down the source of. Increasing the verbosity of the tools to the logs will be helpful, and I will look into this.

(* sometimes clocks are off, but this presents no serious problem; the jobs do not fail in those instances)
(0003021)
bozonius   
2018-05-30 09:07   
I have debug logging running on the director (I may add logging for other daemons later if needed). Log level is set to 200.

It would be helpful to have a listing of debug levels. I want to be sure to gather enough debugging info to pin down problems when they come up, but not so much that it will eat disk space unnecessarily.

Now I wait for the next time there is a problem and see if there is adequate log output to analyze it.
(0003023)
bozonius   
2018-06-03 04:25   
I have now enabled debugging on calls to bconsole for one backup which fails as described. This might be helpful, too, since the failure occurs while bconsole is running.
(0003024)
bozonius   
2018-06-04 01:31   
I have now enabled debugging on all jobs.
(0003025)
bozonius   
2018-06-04 01:34   
I just realized this wasn't the bug I was trying to address with the debug statements (though they will help perhaps). The other problem I have is with backups that fail in bconsole (not just issue a warning) when requesting an estimate.

At any rate, maybe this additional logging will help going forward for any sort of issue that arises. The logging does not add a significant amount of disk usage at level 200.
(0003026)
aron_s   
2018-06-04 13:41   
So I reproduced this again and got the same warning, but with the name of the filedaemon as the source of the warning. Identifying the out-of-sync client is doable. The job itself also runs perfectly fine.
I get the following output when running a job on a client with time difference > 60s:

04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Start Backup JobId 10, Job=macbook-log-backup.2018-06-04_13.35.08_12
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Created new Volume "Incremental-0002" in catalog.
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Using Device "FileStorage" to write.
04-Jun 13:32 macbook-fd JobId 10: DIR and FD clocks differ by -180 seconds, FD automatically compensating.
04-Jun 13:32 macbook-fd JobId 10: Warning: LZ4 compression support requested in fileset but not available on this platform. Disabling ...
04-Jun 13:35 ubuntu1604-VirtualBox-sd JobId 10: Labeled new Volume "Incremental-0002" on device "FileStorage" (/var/lib/bareos/storage).
04-Jun 13:35 ubuntu1604-VirtualBox-sd JobId 10: Wrote label to prelabeled Volume "Incremental-0002" on device "FileStorage" (/var/lib/bareos/storage)
04-Jun 13:35 ubuntu1604-VirtualBox-sd JobId 10: Elapsed time=00:00:01, Transfer rate=19.44 K Bytes/second
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: sql_create.c:872 Insert of attributes batch table done
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Bareos ubuntu1604-VirtualBox-dir 17.2.4 (21Sep17):
  Build OS: i686-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 10
  Job: macbook-log-backup.2018-06-04_13.35.08_12
  Backup Level: Incremental, since=2018-05-17 13:11:57
  Client: "macbook-fd" 17.2.4 (21Sep17) x86_64-apple-darwin17.4.0,osx,17.4.0
  FileSet: "MacbookLogFileset" 2018-05-17 13:00:50
  Pool: "Incremental" (From Job IncPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "ubuntu1604-VirtualBox-sd" (From Job resource)
  Scheduled time: 04-Jun-2018 13:35:06
  Start time: 04-Jun-2018 13:35:14
  End time: 04-Jun-2018 13:35:14
  Elapsed time: 0 secs
  Priority: 10
  FD Files Written: 7
  SD Files Written: 7
  FD Bytes Written: 18,515 (18.51 KB)
  SD Bytes Written: 19,448 (19.44 KB)
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s): Incremental-0002
  Volume Session Id: 1
  Volume Session Time: 1528110773
  Last Volume Bytes: 20,363 (20.36 KB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: Backup OK
(0003033)
bozonius   
2018-06-06 23:04   
(Last edited: 2018-06-06 23:13)
@aron_s: Thanks for trying this again. BTW, there are actually two problems here (maybe related, maybe not). The original issue was that I got an email message indicating times were out of sync between client and director systems, but that message did not name the client. The actual run of the backup worked fine, just as you confirm. This problem is, ultimately, minor though annoying.

The more vexing problem is the other one. Many times, a call to bconsole for an estimate of backup size fails. I am now able to compare debug traces of normal (non-failing) job and failing job (debug level is 200). So far, my investigation reveals "bconsole (100): console.c:346-0 Got poll BNET_EOD" appears in the trace only once for a normal job, but twice for the failing job. This "once versus twice" behavior seems to be consistent in comparing failed to normal jobs.

I note it is possible the 2nd message, if it is indeed actually sent by the director, might be getting swallowed by my logging as the result, perhaps, of my script closing the pipe before some final flush of messages to my script, or something like that. I'll double-check the perl code I wrote, but I have been running this script for years now and it has always had these intermittent results from the 'estimate' command. bconsole estimate is called from perl as a "Command Before Job".

In both scenarios, bconsole connects successfully with the director. I can see the 'estimate' request in the director's trace in both instances.

Is there a much better way to obtain the estimate? If I end up with some run-away job (meaning huge amounts of data have appeared since the last run), I don't want to end up eating gobs of backup disk space (and time!). One example of this is backing up the home directory where, say, a program has experienced extremely increased amounts of data in its files. I have excluded a long list of files and directories in all backups, but every so often another one would come along that I had overlooked that just happened to bollix the works. This call to bconsole for an estimate prior to the run is just an additional prevention.


View Issue Details
ID: 1115
Category: [bareos-core] director
Severity: major
Reproducibility: always
Date Submitted: 2019-09-21 12:58
Last Update: 2019-11-20 23:47
Reporter: bozonius
Assigned To: joergs
Platform: Linux
OS: Ubuntu
OS Version: 14.04
Priority: normal
Status: confirmed
Product Version: 16.2.4
Resolution: open
Summary: changing onefs value in fileset should force a full backup
Description: The documentation in the FileSet Resource section of the Bareos documentation (as of 21 September 2019) states, in the second paragraph:

"Any change to the list of the included files will cause Bareos to automatically create a new FileSet (defined by the name and an MD5 checksum of the Include/Exclude contents). Each time a new FileSet is created, Bareos will ensure that the next backup is always a Full save."

This says very specifically that a change to the list of included files will force a full backup; merely changing an option value such as onefs does not do so.

I mounted a filesystem in the area to be backed up, changing (actually, adding) "onefs = no" to the fileset. I tried several times to get the files in the mounted file system to be backed up, with no luck. After CAREFULLY reading your documentation, I realized that (probably) a change to the disposition of the onefs option does not apply to the quoted paragraph.

In my situation at least, mounting the filesystem within the fileset definition's area introduced new files to be backed up. I am not sure how this might affect other users, nor do I know if any other options should force a full backup. I am cautious not to conjecture too much on an ideal solution, as I am not aware of how every last installation is configured or what results users expect. (I certainly expected the next backup to pull in the files on the mounted file system.)
Tags:
Steps To Reproduce: Mount a file system containing some files in an area defined to be backed up in a fileset.

Change or add the option "onefs = no" to the fileset (where the options section did not specify it previously).

Run the same backup(s) which use that fileset, ensuring they are set to be run next as incremental.

Check to see if the files on the mounted file system are included in the backup.
Additional Information:
System Description
Attached Files:
Notes
(0003620)
joergs   
2019-11-11 19:16   
Well, this should not happen.

Can you please verify that your fileset changes are properly detected?
Use the following bconsole command to get all known versions of the fileset:

.sql query="select * from fileset where fileset='<YOURFILESETNAME>'"
(0003624)
bozonius   
2019-11-11 20:43   
(Last edited: 2019-11-11 20:57)
I found the fileset in question. It was updated on 9/21/2019 to include another path, with OneFS set to no. The added path was in the excluded section, which might explain why there was no full backup?

One other note: Despite the database showing the additional path in the exclude section in the most recent (9/21/2019) row of FileSet for that fileset, the configuration file for the fileset now has no reference to the excluded path. I'm not sure that this is harmful, but I thought I should mention it.

(0003630)
joergs   
2019-11-12 15:36   
I'm not sure if I understood this correctly.

The fileset entry is written to the database when a job that uses the fileset has run. Has there been a job using the new fileset?
Please also compare it to the output of the bconsole command "show fileset=<YOURFILESETNAME>", just in case the new configuration is not loaded for any reason.
(0003631)
joergs   
2019-11-12 15:46   
Forget about my last question.
I verified that the behavior you describe still exists in the current Bareos master branch.

A change to the file list does upgrade the backup level to Full; a change in the options does not upgrade to a Full backup.
(0003632)
embareossed   
2019-11-12 20:45   
[I just saw your last post, but I'll include this anyway, for reference. You may delete this note if desired.]

The last row for the fileset in question looks like:

| 30 | SystemConfig | B9/TG7tfvx+q7++Yw/th+C | 2019-09-21 03:00:01 | FileSet {
  Name = "SystemConfig"
  Include {
    Options {
      OneFS = No
      Compression = GZIP9
    }
    File = "/etc"
    File = "/root"
    File = "/var"
  }
  Exclude {
    File = "/var/tmp"
    File = "/var/run"
    File = "/var/lock"
    File = "/var/cache"
    File = "/root/tmp"
    File = "/var/lib/nothing"
  }
}

This resource record is part of 5 backups every night and has been since before this update was made to the config file:

FileSet {
  Name = "SystemConfig"
  Include {
    Options {
      One FS = no
      Compression = GZIP9
    }
    File = "/etc"
    File = "/root"
    File = "/var"
  }
  Exclude {
    File = "/var/tmp"
    File = "/var/run"
    File = "/var/lock"
    File = "/var/cache"
    File = "/root/tmp"
  }
}

The timestamp on this file is:

-rw-rw-r-- 1 bareos bareos 302 Sep 21 03:16 SystemConfig.conf

Also, I DO backup the /etc/bareos tree every night. I only keep about a month's worth of backups. Dates and times are sync'd to an ntp server.
(0003633)
embareossed   
2019-11-12 20:47   
So is this the intended behavior by design and not a bug? Or is it a defect after all?
(0003634)
embareossed   
2019-11-12 21:19   
I'm now realizing that this issue/bug might be mis-labeled. I think the real issue here is that the Exclude { } was modified. As you can see, the database thinks /var/lib/nothing is also to be excluded. I removed that one exclusion, but the database continues to think /var/lib/nothing should be excluded. (In this case, no problem arises, because obviously this was only a test case.) Once the newer definition of the fileset was used (and it has been, many times since that change), I would expect the database to have an updated definition as well, but it doesn't.

Excluded files probably need to be considered in backup calculations for the same reason that the included ones are. Here, I gave an example where new files might be added to subsequent backups, but the database might ignore the change and continue to exclude files and directories under /var/lib/nothing. I haven't tested this, but it does seem inconsistent at the very least.

As to the change to OneFS, that may have been the change I made, probably earlier, that precipitated filing the current issue. Being this was months ago, I have forgotten. Sorry for any confusion.

View Issue Details
ID: 909
Category: [bareos-core] director
Severity: major
Reproducibility: always
Date Submitted: 2018-02-12 09:50
Last Update: 2019-11-20 23:47
Reporter: rightmirem
Assigned To: arogge
Platform: Intel
OS: Debian Jessie
OS Version: 8
Priority: normal
Status: feedback
Resolution: reopened
Summary: "Reschedule on error" recognized, but not actually rescheduling the job
Description: I have been testing my backup's ability to recover from an error.

I have a job that has the following settings...

  Reschedule Interval = 1 minute
  Reschedule On Error = yes
  Reschedule Times = 5

... and I start it as a full. I then restart the Bareos director (to error out the job intentionally).

In the log, it shows that the job has been rescheduled - but the job never starts. The job should have started at 10:20, but by 10:26 "list jobs" in bconsole reported nothing running.

=== LOG ===
    01-Feb 10:19 server-dir JobId 569: Fatal error: Network error with FD during Backup: ERR=No data available
    01-Feb 10:19 server-sd JobId 569: Fatal error: append.c:245 Network error reading from FD. ERR=No data available
    01-Feb 10:19 server-sd JobId 569: Elapsed time=00:01:24, Transfer rate=112.6 M Bytes/second
    01-Feb 10:19 server-dir JobId 569: Error: Director's comm line to SD dropped.
    01-Feb 10:19 server-dir JobId 569: Fatal error: No Job status returned from FD.
    01-Feb 10:19 server-dir JobId 569: Error: Bareos server-dir 17.2.4 (21Sep17):
      Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
      JobId: 569
      Job: backupJobName.2018-02-01_10.18.12_04
      Backup Level: Full
      Client: "server-fd" 17.2.4 (21Sep17) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
      FileSet: "backupJobName" 2018-01-29 15:00:00
      Pool: "6mo-Full" (From Job FullPool override)
      Catalog: "MyCatalog" (From Client resource)
      Storage: "Tape" (From Job resource)
      Scheduled time: 01-Feb-2018 10:18:10
      Start time: 01-Feb-2018 10:18:14
      End time: 01-Feb-2018 10:19:45
      Elapsed time: 1 min 31 secs
      Priority: 10
      FD Files Written: 0
      SD Files Written: 0
      FD Bytes Written: 0 (0 B)
      SD Bytes Written: 1,042 (1.042 KB)
      Rate: 0.0 KB/s
      Software Compression: None
      VSS: no
      Encryption: no
      Accurate: no
      Volume name(s): DL011BL7
      Volume Session Id: 1
      Volume Session Time: 1517476667
      Last Volume Bytes: 5,035,703,887,872 (5.035 TB)
      Non-fatal FD errors: 2
      SD Errors: 0
      FD termination status: Error
      SD termination status: Error
      Termination: *** Backup Error ***

    01-Feb 10:19 server-dir JobId 569: Rescheduled Job backupJobName.2018-02-01_10.18.12_04 at 01-Feb-2018 10:19 to re-run in 60 seconds (01-Feb-2018 10:20).
    01-Feb 10:19 server-dir JobId 569: Job backupJobName.2018-02-01_10.18.12_04 waiting 60 seconds for scheduled start time.


Tags:
Steps To Reproduce: I have scheduled a job with "reschedule on error"

I have both started the job manually, and let the scheduler start the job.

I have tried killing the job BOTH by killing the core Bareos process with "kill -9" AND by simply restarting bareos with the restart commands.

Regardless of the method to kill the job, the log recognizes the job ended on an error, and states it is rescheduling the job (in 60 seconds).

However, the job never actually restarts.
Additional Information: See the main issue description for the log data
System Description
Attached Files:
Notes
(0002908)
joergs   
2018-02-12 18:30   
Reschedule on error is not intended to cover Bareos Director restart.
However, it should work if you restart the fd.
(0002932)
rightmirem   
2018-02-20 15:16   
Can we reopen this? I never got notification that it was in progress.

So, is it indicative of a problem when the log TRIES to reschedule the job - but simply doesn't?
(0002933)
rightmirem   
2018-02-20 15:23   
OK. It DID work when I killed the fd.

However, can you tell me what sorts of errors WILL trigger a restart (I don't see that in the manual). We're not just concerned with file errors, but also...

- Tape drive failure.
- Accidental system restart or server power failure.
- OS crash or hang.
- Daemon crashes or hangs.
(0002945)
rightmirem   
2018-03-13 12:16   
This can be marked as resolved
(0003597)
b.braunger@syseleven.de   
2019-10-14 14:33   
How was this resolved? Is there some kind of documentation by now?
(0003600)
arogge   
2019-10-16 10:08   
I don't see what kind of documentation you expect?
Reschedule on error does not work for a director restart (and was never intended to do this).
Its purpose is to rerun a job that failed.

So what else do you need?
(0003606)
b.braunger@syseleven.de   
2019-10-18 11:24   
Well, I did not reproduce this, but the log from rightmirem says that the job terminated with an error, and therefore it should be rescheduled, as far as I understand. https://docs.bareos.org/Configuration/Director.html?highlight=reschedule#config-Dir_Job_RescheduleOnError
Although I see that this feature is not intended to cover director crashes, the documentation should mention what kinds of failure a user can expect to trigger a reschedule and what does not trigger one (like the already mentioned 'Cancel').
However, the log should never report that a job is rescheduled if that job is not going to be executed.
(0003607)
arogge   
2019-10-18 11:32   
The problem is probably that the director does not have a persistent schedule.
So when the job is rescheduled (and the reschedule log message is written) and then the director is restarted, the scheduling information is lost during the restart.
With the current design of rescheduling this cannot be fixed.

However, we can document that limitation.
(0003608)
b.braunger@syseleven.de   
2019-10-18 11:37   
Thanks for the info! I would appreciate it if the docs explained that behaviour; as far as I'm concerned, this ticket can be closed then.

View Issue Details
ID: 1046
Category: [bareos-core] webui
Severity: minor
Reproducibility: always
Date Submitted: 2019-02-05 23:50
Last Update: 2019-11-20 23:47
Reporter: DDOS
Assigned To: arogge
Platform: Linux
OS: CentOS
OS Version: 7
Priority: normal
Status: resolved
Product Version: 18.2.5
Resolution: fixed
Summary: BareOS-WebUI interface elements not listed in Storages Tab after upgrade to 18.2.5
Description: Not all items are displayed on the "Storages" tab after upgrading to 18.2.5.
The Pools and Volumes tabs are not displayed.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos_18.2.5_error.JPG (35,672 bytes) 2019-02-05 23:50
https://bugs.bareos.org/file_download.php?file_id=347&type=bug
Notes
(0003255)
teka74   
2019-02-10 16:15   
Yes!! I had this on my small display too. Try zooming out in your browser. On my display (1280x1024) it works at 75%, and there are no problems on bigger (wide) screens of 1650 pixels and more.
(0003535)
artusxl   
2019-07-30 23:17   
It seems the "get official binaries and support on bareos.com" banner is overlaying the "Devices" / "Pools" / "Volumes" tabs.
(0003536)
arogge   
2019-07-31 09:31   
You're right, this is broken in 18.2.5 and 18.2.6. We have already fixed it in the 18.2 and master branches, so the next releases will not show this behaviour anymore, but will switch to the mobile mode (with a hamburger menu) before the break occurs.

View Issue Details
ID: 144
Category: [bareos-core] director
Severity: feature
Reproducibility: have not tried
Date Submitted: 2013-04-07 12:35
Last Update: 2019-11-20 23:46
Reporter: pstorz
Platform: Linux
OS: any
OS Version: 3
Priority: normal
Status: new
Product Version: 12.4.2
Resolution: reopened
Target Version: 13.2.0
Summary: Extend cancel command to be able to cancel all running jobs
Description: Especially when testing, it would be a very nice feature to be able to
cancel multiple jobs at once.

For the first step, it would be nice to be able to cancel all jobs.

Also useful would be the ability to cancel jobs in a certain state, e.g.

cancel jobs state=waiting
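For reference, in bconsole the requested usage would look like the lines below; the plain "cancel all" form is what the committed fix (see the notes) appears to provide, while the state filter is shown only as proposed and may not match the implemented syntax:

cancel all yes
cancel jobs state=waiting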
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0000597)
mvwieringen adm   
2013-08-13 03:12   
Fix committed to bareos master branch with changesetid 696.
(0001419)
mvwieringen   
2015-03-25 16:51   
Fix committed to bareos2015 bareos-13.2 branch with changesetid 4252.
(0001572)
joergs   
2015-03-25 19:19   
Due to the reimport of the Github repository to bugs.bareos.org, the status of some tickets has been changed. These tickets will be closed again.
Sorry for the noise.
(0003579)
bozonius   
2019-09-21 13:25   
Sorry, but I think this bug /might/ be back. I am running 16.2.4, and I am seeing jobs with a status of "waiting" and I cannot cancel them.

This is on Ubuntu Linux 14.04 (yes, I know that is old, and I am moving on soon...), and the bareos packages come from the repository (not hand built).
(0003637)
bozonius   
2019-11-20 23:34   
I am now running Bareos 18.2.5 on Devuan and the problem persists. I have a 'waiting' job that won't go away. I've tried using bconsole and the job is still in waiting state.

The only 'solution' I have found is to manually remove it in the database, but I prefer not to perform such invasive actions since they could damage the integrity of Bareos overall.

View Issue Details
ID: 886
Category: [bareos-core] director
Severity: minor
Reproducibility: always
Date Submitted: 2017-12-27 09:14
Last Update: 2019-11-20 07:49
Reporter: bozonius
Platform: Linux
OS: Ubuntu
OS Version: 14.04
Priority: normal
Status: acknowledged
Product Version: 16.2.4
Resolution: reopened
Summary: "Run Before Job" in JobDefs is run even though a backup job specifies a different "Run Before Job"
Description: My JobDefs file specifies a script that is to be run before a backup job. This is valid for nearly all of my jobs. However, one backup job should not run that script; instead, I want to run a different script before running that backup. So in that one specific backup job definition, I override the default from JobDefs with a different script. But the JobDefs script still gets run anyway.
Tags:
Steps To Reproduce: Create a JobDefs with a "Run Before Job" set to run "script1."

Create one or more jobs (e.g., maybe "job-general1," "job-general2," "job-general3") for the general case, not setting "Run Before Job" in those, allowing the setting to default to the one specified in JobDefs (i.e., "script1").

Create one job for a specific case ("job-special"), setting "Run Before Job" to "script2."

Run any of the general case jobs ("job-general1," etc.) and "script1" is correctly run, since no override is specified for any of those jobs.

Run "job-special" and BOTH script1 AND script2 are run before the job.
Additional Information: From the documentation, section 9.3, "Only the changes from the defaults need to be mentioned in each Job." (http://doc.bareos.org/master/html/bareos-manual-main-reference.html#QQ2-1-216) I infer that the description of Bareos's defaulting mechanism fits the industry-standard definition of the term "default."

Please note that this was also an issue in Bacula, and one of the (many) reasons I chose to move on to Bareos. I was told by Bacula support that I did not understand the meaning of "default," which actually, I really am sure I do. I've worked with software for over 30 years, and the documented semantics (as per section 9.3) seem to comport with general agreement about the meaning of default mechanisms. To this day, I do not think I've ever seen a single exception to the generally agreed-upon meaning of "default." Even overloading mechanisms in OO languages all appear to comport to this intention and implementation.

I hope this will not result in the same stonewalling of this issue and that this bug, while minor for the most part but potential for disaster for those unawares in other more complicated contexts, can and will be fixed in some upcoming release. Thanks.
System Description
Attached Files:
Notes
(0002845)
joergs   
2017-12-27 12:59   
The "Run Before Job" directive can be specified multiple times in a resource to run multiple run scripts. So the behavior you describe is to be expected. You can extend a DefaultJob with additional run scripts.

To achieve the behavior requested by you, you might use the fact that JobDefs can be used recursively.

JobDefs {
  Name = "defaultjob-without-script"
  ...
}

JobDefs {
  Name = "defaultjob-with-script"
  JobDefs = "defaultjob-without-script"
  Run Before Job = "echo 'test'"
}

Jobs can then either refer to "defaultjob-with-script" or "defaultjob-without-script".
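A minimal sketch of the special-case job under this scheme (resource names from the example above; all other job settings are assumed to come from the JobDefs chain):

Job {
  Name = "job-special"
  JobDefs = "defaultjob-without-script"   # inherits the defaults, minus the script
  Run Before Job = "script2"              # now the only run-before script that runs
}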
(0002848)
bozonius   
2017-12-27 21:30   
How is this reflected in the documentation? I was kind of wondering if one could specify multiple JobDefs (default configurations), which would be handy. However, the documentation, as I read it, does not seem to encourage the reader to consider multiple JobDefs, as you have illustrated.

Thank you for this information, and I will DEFINITELY make use of this facility -- actually, it addresses precisely what I wanted! However, it might be a good idea to add something to the JobDefs documentation to illustrate exactly what you have shown here. Thanks.

I wonder, even had a bareos admin read every last word of the documentation, whether this approach to handling job defaults would be obvious to them. This is not a criticism of the current docs; as docs go, this is one of the more complete I've seen. It's just that sometimes it takes more explanation of various options than one might glean by reading the material quite literally, as I have. I also don't believe that one will necessarily read every last bit of the documentation in the first place (though you might be within your rights to claim every user really should, in order to leverage the full advantage of this extensive system). Users may be tasked with correcting or adding something urgent and not have time to read everything, instead heading straight to the portion of the docs most relevant to the task at hand.

The opening paragraph of JobDefs might be a good place to add this information; it is the place where it would be most likely NOT to be missed. Again, thanks for the info.
(0002853)
joergs   
2018-01-02 16:39   
Already documented at http://doc.bareos.org/master/html/bareos-manual-main-reference.html#directiveDirJobJob%20Defs :

To structure the configuration even more, Job Defs themselves can also refer to other Job Defs.

If you know a better way to describe this, patches to the documentation are always welcome.
(0002856)
bozonius   
2018-01-03 03:35   
From the doc:

Any value that you explicitly define in the current Job resource, will override any defaults specified in the Job Defs resource.

That isn't quite exactly correct. What really happens is something a bit more akin to subclassing I think, albeit these "classes" are linear; there is no "multiple inheritance," so to speak (though THAT might even be useful?).

BareOS JobDefs values are not always replaced; they may be replaced, or appended instead. Appending in this manner is not typical of the way we normally speak of defaults in most of the software kingdom (at least in my own experience).

Not sure how to improve the docs, but just thought I'd put this out there: This notion of single parent subclassing. Perhaps this could lead us toward a better description.
(0003636)
bozonius   
2019-11-20 07:49   
I tried the link in your note https://bugs.bareos.org/view.php?id=886#c2853, but it takes me to the start of the doc, not the specific section, which is what I think your link is supposed to do. Seems like a lot of links (and anchors) in the doc are broken this way. References to the bareos doc from websites (like linuxquestions) are now broken; I realize there might not be much that can be done about external references outside of bareos.org.

View Issue Details
ID: 1141
Category: [bareos-core] General
Severity: block
Reproducibility: always
Date Submitted: 2019-11-19 22:23
Last Update: 2019-11-19 22:23
Reporter: harryruhr
Platform: amd64
OS: OpenBSD
OS Version: 6.6
Priority: high
Status: new
Product Version: 18.2.6
Resolution: open
Summary: Can't build on OpenBSD / against libressl
Description: The build aborts with the following error:

/usr/ports/pobj/bareos-18.2.6/bareos-Release-18.2.6/core/src/lib/crypto_openssl.cc:425:24: error: use of undeclared identifier 'M_ASN1_OCTET_STRING_dup'; did you mean 'ASN1_OCTET_STRING_dup'?
      newpair->keyid = M_ASN1_OCTET_STRING_dup(keypair->keyid);
                       ^~~~~~~~~~~~~~~~~~~~~~~
                       ASN1_OCTET_STRING_dup
/usr/include/openssl/asn1.h:692:20: note: 'ASN1_OCTET_STRING_dup' declared here
ASN1_OCTET_STRING *ASN1_OCTET_STRING_dup(const ASN1_OCTET_STRING *a);
                   ^
Tags:
Steps To Reproduce: - Get the OpenBSD port from https://github.com/jasperla/openbsd-wip/tree/master/sysutils/bareos and try to build it.
Additional Information: This error seems to occur in general when building against libressl (instead of openssl); see also https://bugs.gentoo.org/692370
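The fix the compiler itself suggests would be a one-line sketch like the following in core/src/lib/crypto_openssl.cc (untested against a full LibreSSL build; newer OpenSSL and LibreSSL headers dropped the M_-prefixed macro in favor of the plain function):

/* core/src/lib/crypto_openssl.cc, around line 425 */
newpair->keyid = ASN1_OCTET_STRING_dup(keypair->keyid);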
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1140
Category: [bareos-core] webui
Severity: minor
Reproducibility: always
Date Submitted: 2019-11-19 15:40
Last Update: 2019-11-19 15:40
Reporter: koef
Platform: Linux
OS: CentOS
OS Version: 7
Priority: normal
Status: new
Product Version: 18.2.5
Resolution: open
Summary: Restore feature always fails from webui (cats/bvfs.cc:927-0 Can't execute q)
Description: Hello.

The restore feature doesn't create a restore job from the webui, but it works fine from bconsole.
Please ask for additional info if it's needed.

You can see the level-200 debug output and the mysql query log below.

Thanks.
Tags: director, webui
Steps To Reproduce: In the webui restore form, the selected options were: Merge all client filesets - No
Merge all related jobs to last full backup of selected backup job - No
Additional Information: bareos-dir debug trace:
19-Nov-2019 15:29:15.191147 bareos-dir (100): lib/bsock.cc:81-0 Construct BareosSocket
19-Nov-2019 15:29:15.191410 bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
19-Nov-2019 15:29:15.191460 bareos-dir (200): lib/bsock.cc:631-0 Identified from Bareos handshake: admin-R_CONSOLE recognized version: 18.2
19-Nov-2019 15:29:15.191491 bareos-dir (110): dird/socket_server.cc:109-0 Conn: Hello admin calling version 18.2.5
19-Nov-2019 15:29:15.191506 bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
19-Nov-2019 15:29:15.191528 bareos-dir (100): dird/storage.cc:157-0 write_storage_list=File
19-Nov-2019 15:29:15.191547 bareos-dir (100): dird/storage.cc:166-0 write_storage=File where=Job resource
19-Nov-2019 15:29:15.191559 bareos-dir (100): dird/job.cc:1519-0 JobId=0 created Job=-Console-.2019-11-19_15.29.15_07
19-Nov-2019 15:29:15.191776 bareos-dir (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <1114491002.1574173755@bareos-dir> ssl=0
19-Nov-2019 15:29:15.192019 bareos-dir (50): lib/cram_md5.cc:88-0 Authenticate OK Gd1+i91cs2Tf7pZiQJs+ew
19-Nov-2019 15:29:15.192200 bareos-dir (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <9503288492.1574173755@php-bsock> ssl=0
19-Nov-2019 15:29:15.192239 bareos-dir (99): lib/cram_md5.cc:135-0 sending resp to challenge: 1y/il6/RE9/FU8dciG/X6A
19-Nov-2019 15:29:15.273737 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline llist backups client="someclient.domain.com" fileset="any" order=desc
19-Nov-2019 15:29:15.273867 bareos-dir (100): dird/ua_db.cc:155-0 UA Open database
19-Nov-2019 15:29:15.273903 bareos-dir (100): cats/sql_pooling.cc:61-0 DbSqlGetNonPooledConnection allocating 1 new non pooled database connection to database bareos, backend type mysql
19-Nov-2019 15:29:15.273929 bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename dbi, partly_compare = true
19-Nov-2019 15:29:15.273943 bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename mysql, partly_compare = false
19-Nov-2019 15:29:15.273959 bareos-dir (100): cats/mysql.cc:869-0 db_init_database first time
19-Nov-2019 15:29:15.273990 bareos-dir (50): cats/mysql.cc:181-0 mysql_init done
19-Nov-2019 15:29:15.274839 bareos-dir (50): cats/mysql.cc:205-0 mysql_real_connect done
19-Nov-2019 15:29:15.274873 bareos-dir (50): cats/mysql.cc:207-0 db_user=someuser db_name=bareos db_password=somepass
19-Nov-2019 15:29:15.275378 bareos-dir (100): cats/mysql.cc:230-0 opendb ref=1 connected=1 db=7effb000ab20
19-Nov-2019 15:29:15.275854 bareos-dir (150): dird/ua_db.cc:188-0 DB bareos opened
19-Nov-2019 15:29:15.275887 bareos-dir (20): dird/ua_output.cc:579-0 list: llist backups client="someclient.domain.com" fileset="any" order=desc
19-Nov-2019 15:29:15.275937 bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name list_jobs_long (6)
19-Nov-2019 15:29:15.276015 bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC;
19-Nov-2019 15:29:15.276067 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC;
19-Nov-2019 15:29:15.354800 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline llist clients current
19-Nov-2019 15:29:15.354928 bareos-dir (20): dird/ua_output.cc:579-0 list: llist clients current
19-Nov-2019 15:29:15.354968 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client ORDER BY ClientId
19-Nov-2019 15:29:15.355739 bareos-dir (200): dird/ua_output.cc:1626-0 filterit: Filter on resource_type 1002 value bareos-fd, suppress output
19-Nov-2019 15:29:15.355779 bareos-dir (200): dird/ua_output.cc:1626-0 filterit: Filter on resource_type 1002 value bareos-dir-node, suppress output
19-Nov-2019 15:29:15.610801 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_update jobid=142
19-Nov-2019 15:29:15.616201 bareos-dir (100): lib/htable.cc:77-0 malloc buf=7effb006a718 size=9830400 rem=9830376
19-Nov-2019 15:29:15.616266 bareos-dir (100): lib/htable.cc:220-0 Allocated big buffer of 9830400 bytes
19-Nov-2019 15:29:15.616634 bareos-dir (10): cats/bvfs.cc:359-0 Updating cache for 142
19-Nov-2019 15:29:15.616656 bareos-dir (10): cats/bvfs.cc:190-0 UpdatePathHierarchyCache()
19-Nov-2019 15:29:15.616694 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT 1 FROM Job WHERE JobId = 142 AND HasCache=1
19-Nov-2019 15:29:15.617365 bareos-dir (10): cats/bvfs.cc:202-0 Already computed 142
19-Nov-2019 15:29:15.617405 bareos-dir (100): lib/htable.cc:90-0 free malloc buf=7effb006a718
Pool          Maxsize  Maxused  Inuse
NoPool            256       86      0
NAME             1318       16      4
FNAME            2304       75     65
MSG              2634       31     17
EMSG             2299       10      4
BareosSocket    31080        4      2
RECORD            128        0      0

19-Nov-2019 15:29:15.619407 bareos-dir (100): lib/htable.cc:601-0 Done destroy.
19-Nov-2019 15:29:15.620312 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_restore jobid=142 fileid=6914 dirid= path=b2000928016
19-Nov-2019 15:29:15.620348 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId,FileSetId,SchedTime,RealEndTime,ReadBytes,HasBase,PurgedFiles FROM Job WHERE JobId=142
19-Nov-2019 15:29:15.620781 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.ClientId=8
19-Nov-2019 15:29:15.621038 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE btempb2000928016
19-Nov-2019 15:29:15.621252 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE b2000928016
19-Nov-2019 15:29:15.621419 bareos-dir (15): cats/bvfs.cc:924-0 q=CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
19-Nov-2019 15:29:15.621434 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
19-Nov-2019 15:29:15.621634 bareos-dir (10): cats/bvfs.cc:927-0 Can't execute q
19-Nov-2019 15:29:15.621662 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE btempb2000928016
19-Nov-2019 15:29:15.661510 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline restore file=?b2000928016 client=someclient.domain.com restoreclient=someclient.domain.com restorejob="RestoreFiles" where=/tmp/bareos-restores/ replace=never yes
19-Nov-2019 15:29:15.661580 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='someclient.domain.com'
19-Nov-2019 15:29:15.661982 bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name uar_jobid_fileindex_from_table (32)
19-Nov-2019 15:29:15.662011 bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
19-Nov-2019 15:29:15.662022 bareos-dir (100): cats/sql_query.cc:140-0 called: bool BareosDb::SqlQuery(const char*, int (*)(void*, int, char**), void*) with query SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
Pool          Maxsize  Maxused  Inuse
NoPool            256       86      0
NAME             1318       16      5
FNAME            2304       75     65
MSG              2634       31     17
EMSG             2299       10      4
BareosSocket    31080        4      2
RECORD            128        0      0

19-Nov-2019 15:29:15.702682 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_cleanup path=b2000928016
19-Nov-2019 15:29:15.702753 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE b2000928016
19-Nov-2019 15:29:15.714596 bareos-dir (100): cats/mysql.cc:252-0 closedb ref=0 connected=1 db=7effb000ab20
19-Nov-2019 15:29:15.714646 bareos-dir (100): cats/mysql.cc:259-0 close db=7effb000ab20
19-Nov-2019 15:29:15.714817 bareos-dir (200): dird/job.cc:1560-0 Start dird FreeJcr
19-Nov-2019 15:29:15.714871 bareos-dir (200): dird/job.cc:1624-0 End dird FreeJcr
19-Nov-2019 15:29:15.714888 bareos-dir (100): lib/jcr.cc:446-0 FreeCommonJcr: 7effb0007898
19-Nov-2019 15:29:15.714909 bareos-dir (100): lib/bsock.cc:129-0 Destruct BareosSocket
19-Nov-2019 15:29:15.714924 bareos-dir (100): include/jcr.h:324-0 Destruct JobControlRecord



MySQL query log:
191119 15:29:15 37 Connect bareos@localhost as anonymous on bareos
                   37 Query SELECT VersionId FROM Version
                   37 Query SET wait_timeout=691200
                   37 Query SET interactive_timeout=691200
                   37 Query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client ORDER BY ClientId
                   37 Query SELECT 1 FROM Job WHERE JobId = 142 AND HasCache=1
                   37 Query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId,FileSetId,SchedTime,RealEndTime,ReadBytes,HasBase,PurgedFiles FROM Job WHERE JobId=142
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.ClientId=8
                   37 Query DROP TABLE btempb2000928016
                   37 Query DROP TABLE b2000928016
                   37 Query CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
                   37 Query DROP TABLE btempb2000928016
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='someclient.domain.com'
                   37 Query SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
                   37 Query DROP TABLE b2000928016
                   37 Quit
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1139 [bareos-core] director crash always 2019-11-18 17:35 2019-11-18 17:35
Reporter: jason.agilitypr Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: high OS Version: 16.04  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Upgrading mysql from 5.7.27 to 5.7.28 causes director to segfault
Description: I patched my server this morning; there was an upgrade of two MySQL packages, after which the Bareos director would not start and segfaulted.

patch details
2019-11-18 08:37:04 upgrade mysql-common:all 5.7.27-0ubuntu0.16.04.1 -> 5.7.28-0ubuntu0.16.04.2
2019-11-18 08:37:04 upgrade libmysqlclient20:amd64 5.7.27-0ubuntu0.16.04.1 -> 5.7.28-0ubuntu0.16.04.2

bareos-dir -u bareos -g bareos -t -v -d 200
--------
bareos-dir (200): lib/runscript.cc:339-0 --> RunOnSuccess=1
bareos-dir (200): lib/runscript.cc:340-0 --> RunOnFailure=0
bareos-dir (200): lib/runscript.cc:341-0 --> FailJobOnError=0
bareos-dir (200): lib/runscript.cc:342-0 --> RunWhen=1
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (100): dird/dird.cc:392-0 backend path: /usr/lib/bareos/backends
bareos-dir (150): dird/dir_plugins.cc:302-0 Load dir plugins
bareos-dir (150): dird/dir_plugins.cc:304-0 No dir plugin dir!
bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename dbi, partly_compare = true
bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename mysql, partly_compare = false
bareos-dir (100): cats/cats_backends.cc:180-0 db_init_database: testing backend /usr/lib/bareos/backends/libbareoscats-mysql.so
bareos-dir (100): cats/cats_backends.cc:254-0 db_init_database: loaded backend /usr/lib/bareos/backends/libbareoscats-mysql.so
bareos-dir (100): cats/mysql.cc:869-0 db_init_database first time
bareos-dir (50): cats/mysql.cc:181-0 mysql_init done
bareos-dir (50): cats/mysql.cc:205-0 mysql_real_connect done
bareos-dir (50): cats/mysql.cc:207-0 db_user=bareos db_name=bareos db_password=*****************
bareos-dir (100): cats/mysql.cc:230-0 opendb ref=1 connected=1 db=216fce0
bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name sql_get_max_connections (45)
bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SHOW VARIABLES LIKE 'max_connections'
bareos-dir (100): cats/mysql.cc:252-0 closedb ref=0 connected=1 db=216fce0
bareos-dir (100): cats/mysql.cc:259-0 close db=216fce0
BAREOS interrupted by signal 11: Segmentation violation
Segmentation fault
Tags:
Steps To Reproduce: upgrade to the latest release version of the MySQL libraries.
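A possible stop-gap until this is resolved (standard apt usage, not taken from this report, and only workable while the previous package versions are still available or cached) is to downgrade the two packages and hold them:

# downgrade to the last known-good versions from the patch log above
apt-get install mysql-common=5.7.27-0ubuntu0.16.04.1 libmysqlclient20=5.7.27-0ubuntu0.16.04.1
# prevent apt from upgrading them again until the crash is fixed
apt-mark hold mysql-common libmysqlclient20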
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
951 [bareos-core] General major always 2018-05-22 18:35 2019-11-15 07:51
Reporter: ameijeiras Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: will care
bareos-17.2: impact: yes
bareos-17.2: action: will care
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Differential backup computes its date from the last incremental
Description: I configured a standard backup policy: Full (monthly), Differential (weekly) and Incremental (daily).

If I configure it using a daily incremental schedule and control differentials and fulls with the directives "Max Diff Interval = 6 days" (to perform a differential backup weekly) and "Max Full Interval = 31 days" (to perform a full backup monthly), the differential backup's reference date is wrongly computed not from the date of the last Full backup but from the date of the last incremental. This breaks the (Full, Diff, Inc) policy, because the differential backups will not really be differentials and will only contain data since the last incremental.
If I run a differential backup manually, or schedule it explicitly, for example:
Schedule {
 Name = Backup_test
 Run = Level=Full 1st sun at 02:00
 Run = Differential 2nd-5th sun at 02:00
 Run = Level=Incremental mon-sat at 02:00
}
" differentials backups date are compute right since last Full backup
Tags:
Steps To Reproduce: Configure a Full, Differential and Incremental backup policy using this method:

# ---------- Jobs config ---------------
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual
  Max Diff Interval = 6 days
  Max Full Interval = 31 days
}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Incremental at 02:00
}
Additional Information: example config that generates mistaken differential backups:
...
# ---------- Jobs config ---------------
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual
  Max Diff Interval = 6 days
  Max Full Interval = 31 days
}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Incremental at 02:00
}


example config that generates correct differential backups:
...
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual


}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Full 1st sun at 02:00
 Run = Differential 2nd-5th sun at 02:00
 Run = Level=Incremental mon-sat at 02:00
}
...
System Description
Attached Files:
Notes
(0003147)
comfortel   
2018-10-24 14:17   
Same problem here:
https://bugs.bareos.org/view.php?id=1001
(0003168)
arogge   
2019-01-09 09:04   
I just confirmed that this happens at least in 17.2.4 on Ubuntu 16.04. We will try to fix the issue and to find out which versions are affected.
(0003169)
arogge   
2019-01-10 15:03   
Can you please try my packages with an applied fix in your environment and make sure this actually fixes the issue for you?

http://download.bareos.org/bareos/people/arogge/951/xUbuntu_16.04/amd64/
(0003227)
arogge_adm   
2019-01-30 13:12   
Fix committed to bareos dev/arogge/bareos-18.2/fix-951 branch with changesetid 10978.
(0003269)
comfortel   
2019-02-19 11:42   
When can we expect a change in the official 18.2 repo for Ubuntu?
(0003270)
arogge   
2019-02-19 11:54   
AFAICT you haven't even tried the test-packages I provided.

The bug itself is not fixed yet, because my patch is flawed (picks up changes from the latest full or differential instead of the last full).
If you need this fixed soon, you can open a support case or take a look for yourself (the branch with my candidate fix is publicly available on github).
(0003312)
comfortel   
2019-04-04 13:33   
Hello.

We tested Incremental and all is OK.
jobfiles shows 7 and 6 changed files.

Automatically selected FileSet: FileSetBackup_ansible.organisation.pro
+--------+-------+----------+-------------+---------------------+------------------------------------+
| jobid  | level | jobfiles | jobbytes    | starttime           | volumename                         |
+--------+-------+----------+-------------+---------------------+------------------------------------+
| 20,280 | F     | 47,522   | 130,433,828 | 2019-04-04 13:38:20 | Full_ansible.organisation.pro_0781 |
| 20,281 | I     | 7        | 761,010     | 2019-04-04 13:40:50 | Incr_ansible.organisation.pro_0782 |
| 20,282 | I     | 6        | 2,144       | 2019-04-04 13:55:08 | Incr_ansible.organisation.pro_0783 |
+--------+-------+----------+-------------+---------------------+------------------------------------+
(0003335)
comfortel   
2019-04-15 09:17   
Automatically selected FileSet: FileSetBackup_ansible.organisation.pro
+--------+-------+----------+-------------+---------------------+------------------------------------+
| jobid  | level | jobfiles | jobbytes    | starttime           | volumename                         |
+--------+-------+----------+-------------+---------------------+------------------------------------+
| 20,280 | F     | 47,522   | 130,433,828 | 2019-04-04 13:38:20 | Full_ansible.organisation.pro_0781 |
| 20,291 | D     | 29       | 23,345,104  | 2019-04-12 01:05:02 | Diff_ansible.organisation.pro_0789 |
| 20,292 | I     | 6        | 54,832      | 2019-04-13 01:05:02 | Incr_ansible.organisation.pro_0783 |
| 20,293 | I     | 0        | 0           | 2019-04-14 01:05:03 | Incr_ansible.organisation.pro_0784 |
| 20,294 | I     | 0        | 0           | 2019-04-15 01:05:03 | Incr_ansible.organisation.pro_0786 |
+--------+-------+----------+-------------+---------------------+------------------------------------+
You have selected the following JobIds: 20280,20291,20292,20293,20294
(0003336)
arogge   
2019-04-15 09:35   
Thanks for the confirmation.
I'll try to get the patch merged for the next release.
(0003337)
comfortel   
2019-04-15 14:43   
Thanks
(0003392)
comfortel   
2019-06-13 16:58   
When will this fix be added?
(0003562)
comfortel   
2019-08-30 12:23   
Hello. When will this fix be added to upstream?
(0003563)
arogge   
2019-08-30 12:27   
The fix didn't pass the review. The implemented behaviour is wrong, so it'll need work.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
878 [bareos-core] director minor have not tried 2017-11-29 10:19 2019-11-13 15:39
Reporter: chaos_prevails Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: normal OS Version: 16.04 amd64  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: not fixable  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: none
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action: none
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact: yes
bareos-16.2: action: none
bareos-15.2: impact: no
bareos-15.2: action:
bareos-14.2: impact: no
bareos-14.2: action:
bareos-13.2: impact: no
bareos-13.2: action:
bareos-12.4: impact: no
bareos-12.4: action:
Summary: Consolidate takes Non-Always Incremental (full) Jobs into consideration when same Fileset is used
Description: Consolidate takes normal (Full) backup jobs (*without* Always Incremental = yes or any other Always incremental configuration) into consideration when the same fileset is used.

This leads to the situation that consolidate never does anything, because e.g. the weekly normal full backup is too young to consolidate, and the "always incremental" incremental jobs never exceed the "Always Incremental Keep Number".


Tags:
Steps To Reproduce: 1) Backup of client1 with Always Incremental job using fileset X (e.g. daily)
2) Backup of client1 with normal full job using fileset X (for offsite backup, cannot use VirtualFull because tape is connected to remote storage daemon) (e.g. weekly)
3) set always incremental keep number = 7
4) consolidate daily
5) consolidate never consolidates anything

Temporary Solution:
A) change the backup type of the tape backup to "archive" (as explained in chapter 23.6.2, Virtual Full Jobs). However, these backups cannot be restored via the web-ui.
or
B) clone fileset and give it another name

Permanent Solution:
Don't take Non-Always Incremental jobs into consideration when consolidating. These are separate backups and not "descendants" of the always-incremental-plus-consolidate process (as Virtual Full backups are).
Additional Information: My configuration:

JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = DIR-fd
  Messages = Standard
  Priority = 40
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #Allow Duplicate Jobs = no #doesn't work with virtualFull Job
  #Cancel Lower Level Duplicates = yes
  #Cancel Queued Duplicates = yes


  #always incremental config
  Pool = disk_ai
  #Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Storage = DIR-file
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 7
  Always Incremental Max Full Age = 14 days
}

JobDefs {
  Name = "default_tape"
  Type = Backup
  Level = Full
  Client = DIR-fd
  Messages = Standard
  Maximum Concurrent Jobs = 1
  Spool Data = yes
  Pool = tape_automated
  Priority = 10
  Full Backup Pool = tape_automated
  Incremental Backup Pool = tape_automated
  Storage = XXXX_HP_G2_Autochanger

  #prevent duplicate jobs
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}
~
~


#my always incremental job
Job {
  Name = "client1_sys_ai"
  JobDefs = "default_ai"
  Client = "client1-fd"
  FileSet = linux_common
  Schedule = client1_sys_ai
  RunAfterJob = "/bin/bash -c '/bin/echo \"run client1_sys_ai_v yes\" | bconsole >/dev/null'"
}

Job {
  Name = client1_sys_ai_v
  JobDefs = default_verify
  Verify Job = client1_sys_ai
  Level = VolumeToCatalog
  Client = client1-fd
  FileSet = linux_common
  Schedule = manual
  Pool = disk_ai
  Priority = 41
}

Job {
  Name = client1_sys_tape
  JobDefs = default_tape
  Level = Full
  Client = client1-fd
  FileSet = linux_common
# FileSet = linux_common_tape #<--temporary Solution B)
  Schedule = client1_sys_tape
  RunAfterJob = "/bin/bash -c '/bin/echo \"run client1_sys_tape_v yes\" | bconsole >/dev/null'"
# Run Script {
# console = "update jobid=%i jobtype=A" #<-- temporary solution A)
# Runs When = After
# Runs On Client = No
# Runs On Failure = No
# }
}
Job {
  Name = client1_sys_tape_v
  JobDefs = default_verify
  Verify Job = client1_sys_tape
  Level = VolumeToCatalog
  Client = client1-fd
  FileSet = linux_common
  Schedule = manual
  Pool = tape_automated
  Priority = 41
}
Attached Files:
Notes
(0002827)
joergs   
2017-11-30 09:58   
I think this works as designed. What is the reason for running both Always Incremental and normal Full backup jobs in parallel?
(0002829)
chaos_prevails   
2017-11-30 10:16   
Because I need an offsite (tape) backup beside the Always Incremental disk backup. The tape drive is not connected to the same storage daemon, but to a different machine, so I cannot do a VirtualFull backup, which only works within the same storage daemon.
Even if I had better access to the server room (so I could change tapes without problems), Bareos runs in a VM and my physical backup server cannot pass through PCI devices (so I would need to run Bareos on bare metal instead of in a VM).

I was surprised by this behaviour because the second (full/incremental) backup doesn't share any job configuration with the always incremental jobs, so I thought I could treat them as separate.
(0003635)
arogge   
2019-11-13 15:39   
I'm setting this to not fixable.
As joergs already mentioned: this works as designed.
In Bareos all backups using the same FileSet are merged when you restore (or consolidate or run a virtualfull). The common thing in your two jobs is the fileset and that produces the behaviour.

The workaround with the two filesets is the only thing you can do to achieve your desired behaviour.
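
For reference, a minimal sketch of that two-fileset workaround; the Include contents here are illustrative, the point is simply to keep them identical under two different names:

FileSet {
  Name = linux_common
  Include {
    Options {
      signature = MD5
    }
    File = /etc
  }
}

# Identical content, different name, referenced only by the tape job,
# so consolidation no longer considers the tape backups:
FileSet {
  Name = linux_common_tape
  Include {
    Options {
      signature = MD5
    }
    File = /etc
  }
}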

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1138 [bareos-core] General block always 2019-11-12 15:36 2019-11-12 15:36
Reporter: Sikaraha Platform: Linux  
Assigned To: OS: RHEL  
Priority: high OS Version: 6  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Copy Error
Description: Disk to tape backup job crashes.
Tags:
Steps To Reproduce: - Run Job
- Run copy job

Job {
  Name = "name"
  Enabled = no
  Type = Backup
  Messages = "Standard"
  Schedule = "disabled_sched"
  Pool = "name"
  Client = "name-fd"
  FileSet = "name"
  RerunFailedLevels = yes
  Description = ""
}

Storage {
  Name = "name"
  Address = "address"
  Password = "[md5]pass_hash"
  Device = "Device-name"
  MediaType = "File-name"
}

Storage {
  Name = "Storage-oracle-LTO-5"
  Address = "address"
  Port = 9105
  Password = "[md5]pass_hash"
  Device = "SL150-KC22-Drive-5"
  MediaType = "LTO-6"
  AutoChanger = yes
  HeartbeatInterval = 2 minutes
  MaximumConcurrentJobs = 3
}

Pool {
  Name = "name"
  PoolType = Backup
  LabelType = "bareos"
  MaximumVolumeJobs = 1
  MaximumVolumeBytes = 476 g 857 m 256 k
  NextPool = "pool-lto6-cpp-1"
  Storage = "name"
  AutoPrune = no
  Recycle = no
}

Pool {
  Name = "pool-lto6-cpp-1"
  PoolType = Backup
  LabelType = "bareos"
  VolumeRetention = 73 years
  Storage = "Storage-oracle-LTO-5"
  AutoPrune = no
  Recycle = no
  RecyclePool = "pool-lto6-cpp-1"
}

Job {
  Name = "name-copy"
  Type = Copy
  Messages = "Standard"
  Pool = "name"
  SelectionType = PoolUncopiedJobs
}
Additional Information: Bareos: Copy Error of name-fd Unknown Job Level

1173257: Copying using JobId=1173212 Job=name.2019-11-12_10.39.19_05
1173257: Bootstrap records written to /mnt/md1/working//dir-name.restore.13.bsr
1173257: Start Copying JobId 1173257, Job=name-copy.2019-11-12_14.02.52_07
1173257: Connected Storage daemon at hostname:9103, encryption: PSK-AES256-CBC-SHA
1173257: Using Device "Device-name" to read.
1173258: Connected Storage daemon at hostname.off:9105, encryption: PSK-AES256-CBC-SHA
1173258: Using Device "LTO6-Drive-5" to write.
1173257: Error: Bareos dir-name 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS release 6.10 (Final)
  Prev Backup JobId: 1173212
  Prev Backup Job: name.2019-11-12_10.39.19_05
  New Backup JobId: 1173258
  Current JobId: 1173257
  Current Job: name-copy.2019-11-12_14.02.52_07
  Backup Level:
  Client: name-fd
  FileSet: "name"
  Read Pool: "name" (From Job resource)
  Read Storage: "name" (From Pool resource)
  Write Pool: "pool-lto6-cpp-1" (From Job Pool's NextPool resource)
  Write Storage: "Storage-oracle-LTO-5" (From Storage from Pool's NextPool resource)
  Next Pool: "pool-lto6-cpp-1" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Default catalog)
  Start time: 12-Nov-2019 14:02:56
  End time: 12-Nov-2019 14:02:56
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 9
  Volume Session Time: 1573547009
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Copying Error ***
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1027 [bareos-core] director major always 2018-11-09 11:51 2019-11-12 15:30
Reporter: gnif Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: urgent OS Version: 9  
Status: feedback Product Version: 18.4.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: 32bit integer overflow in sql LIMIT clause
Description: Running backups on the director is producing the following errors; it is clearly a 32-bit integer overflow, as the OFFSET has become negative. I believe the issue is in SetQueryRange, as it doesn't support parsing or using 64-bit integers.

https://github.com/bareos/bareos/blob/1c5bf440cdc8fe949ba58357d16588474cd6ccb8/core/src/dird/ua_output.cc#L497
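
To illustrate the suspected mechanism, here is a small standalone sketch; it is not the actual Bareos source (the real parsing lives in dird/ua_output.cc, linked above), but it shows how truncating the parsed offset to 32 bits produces exactly the negative OFFSET seen in the error, while a 64-bit type does not:

#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main() {
  // the offset as it would arrive from "llist jobs limit=1000 offset=2147483648"
  const char* offset_arg = "2147483648";  // INT32_MAX + 1

  // 32-bit storage: wraps around on typical two's-complement platforms
  int32_t narrow = (int32_t)strtoll(offset_arg, nullptr, 10);
  printf("LIMIT 1000 OFFSET %d;\n", narrow);             // OFFSET -2147483648 -> SQL syntax error

  // 64-bit storage: the value survives intact
  int64_t wide = strtoll(offset_arg, nullptr, 10);
  printf("LIMIT 1000 OFFSET %lld;\n", (long long)wide);  // OFFSET 2147483648
  return 0;
}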
Tags:
Steps To Reproduce:
Additional Information: 09-Nov 21:00 bareos-dir JobId 0: Fatal error: cats/sql_list.cc:566 cats/sql_list.cc:566 query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.JobStatus = 'S' AND Job.SchedTime > '2018-11-08 21:00:21' ORDER BY StartTime LIMIT 1000 OFFSET -2018192296; failed:
You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '-2018192296' at line 1
System Description
Attached Files:
Notes
(0003623)
joergs   
2019-11-11 19:33   
When does this error occur? How have you triggered this SQL query?
(0003625)
gnif   
2019-11-11 22:57   
Simply by allowing BareOS to run backups across multiple servers for several months. There are clearly over 2,147,483,647 records in the result set.
(0003628)
joergs   
2019-11-12 15:26   
I meant: what single action triggers this error?

I found out that this SQL query is triggered by the bconsole "llist jobs" command, but only when used with the "offset" parameter.

I assume you don't execute a "llist jobs ... offset=..." manually?

Are you using the Bareos WebUI? The WebUI uses "llist jobs" to retrieve information and can also use limit and offset. Is the error triggered by the same actions there?

Or are you using something else, like a CopyJob that selects jobs via such a query?

Anyhow, this problem should only occur when your last jobid comes close to 2,147,483,647. Is this the case?
(0003629)
joergs   
2019-11-12 15:30   
Manually calling the bconsole command

llist jobs limit=1000 offset=2147483648

results in a query with wrong offset:

cats/sql_query.cc:131-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 ORDER BY StartTime LIMIT 1000 OFFSET -2147483648;

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1019 [bareos-core] director major always 2018-10-09 15:09 2019-11-11 19:31
Reporter: wizhippo Platform: x86  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 18.04  
Status: resolved Product Version: 18.2.4-rc1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Director hangs waiting for client if not available PSK
Description: With TLS Psk Require = yes, if a client is offline the director hangs, waiting with:

delllt.2018-10-07_22.00.00_24 is waiting for Client to connect (Client Initiated Connection)

The log in bareos gui shows:

2018-10-07 22:06:47 kamino-dir JobId 1337:
Try to establish a secure connection by immediate TLS handshake:
2018-10-07 22:06:47 kamino-dir JobId 1337: Fatal error: Failed to connect to client "delllt-fd".
2018-10-07 22:06:35 kamino-dir JobId 1337: Fatal error: lib/bsock_tcp.cc:139 Unable to connect to Client: delllt-fd on delllt:9102. ERR=No route to host
2018-10-07 22:03:39 kamino-dir JobId 1337: Warning: lib/bsock_tcp.cc:133 Could not connect to Client: delllt-fd on delllt:9102. ERR=No route to host
Retrying ...
2018-10-07 22:03:23 kamino-dir JobId 1337: Using Device "FileDevice-1" to write.
2018-10-07 22:03:22 kamino-dir JobId 1337: Start Backup JobId 1337, Job=delllt.2018-10-07_22.00.00_24
2018-10-07 22:03:22 kamino-dir JobId 1337: Secure connection to Storage daemon at kamino:9103 with cipher ECDHE-PSK-CHACHA20-POLY1305 established

Shouldn't there be a timeout on the wait, so that the job simply fails?
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003133)
wizhippo   
2018-10-09 15:12   
(Last edited: 2018-10-09 15:32)
Trying to cancel the hung job, even though the director shows it as running, I get:

*status dir

 JobId Level Name Status
======================================================================
  1337 Full delllt.2018-10-07_22.00.00_24 is waiting for Client to connect (Client Initiated Connection)


*can
Select Job:
     1: JobId=1337 Job=delllt.2018-10-07_22.00.00_24
Choose Job to cancel (1-21): 1
3904 Job delllt.2018-10-07_22.00.00_24 not found.


Had to restart director.

(0003134)
wizhippo   
2018-10-09 16:08   
Just to add: Connection From Client To Director is not set, so I'm not sure why there is a client-initiated connection.
(0003148)
wizhippo   
2018-10-30 15:28   
I can reproduce this by first running a job with higher priority against a host that is not online, and then running the catalog backup afterwards.

The catalog backup never runs: the first job fails because the host is unavailable, but it remains listed as a running job on the director indefinitely even though it failed. A restart of the director is required to remove the job, since trying to cancel it in the console reports that the job does not exist even though status shows it as running.
(0003158)
r0mulux   
2018-12-07 11:41   
(Last edited: 2018-12-11 11:20)
Hello, I have the same issue.
Jobs seem to freeze if the machine to back up is not reachable, and subsequent scheduled jobs are never executed. Frozen jobs cannot be deleted; I need to restart the director each time.

(0003160)
flo   
2018-12-17 10:55   
Hello,

I've got the same issue. The bareos director gets stuck when a job runs into a timeout. For example, see the job details below.

The job cannot be cancelled manually because the director can't find the job if it's stuck.


I'm on 18.4.1, since 18.2.4-rc1 has the same problem. For me it makes Bareos unusable, because there is a job that runs into problems nearly every night.





2018-12-17 03:13:16 bareos-dir JobId 340: Start Backup JobId 340, Job=backup-pihole-full.2018-12-17_03.00.01_02
2018-12-17 03:13:16 bareos-dir JobId 340: Connected Storage daemon at bareos:9103, encryption: ECDHE-PSK-CHACHA20-POLY1305
2018-12-17 03:13:16 bareos-dir JobId 340: Using Device "FileStorage" to write.
2018-12-17 03:13:16 bareos-dir JobId 340: Probing... (result will be saved until config reload)
2018-12-17 03:13:16 bareos-dir JobId 340: Connected Client: pihole-fd at pihole:9102, encryption: ECDHE-PSK-CHACHA20-POLY1305
2018-12-17 03:13:16 bareos-dir JobId 340: Handshake: Immediate TLS,
2018-12-17 03:13:16 bareos-dir JobId 340: Encryption: ECDHE-PSK-CHACHA20-POLY1305
2018-12-17 03:13:16 pihole-fd JobId 340: Error: lib/bsock_tcp.cc:192 BnetHost2IpAddrs() for host "bareos" failed: ERR=Temporary failure in name resolution
2018-12-17 03:13:16 pihole-fd JobId 340: Fatal error: Failed to connect to Storage daemon: bareos:9103
2018-12-17 03:13:16 bareos-dir JobId 340: Fatal error: Bad response to Storage command: wanted 2000 OK storage, got 2902 Bad storage
(0003162)
vincebattle   
2018-12-28 14:19   
(Last edited: 2018-12-28 14:22)
Here is a possible workaround to prevent a job from getting stuck when a host is unreachable.
Edit the job definition file (bareos-dir.d/jobdefs/jobName.conf) and add a command that checks the host before job execution (the "Run Before Job" property).

JobDefs {
 Name = "jobName"
 Type = [...]
 Pool = [...]
 Messages = [...]
 Run Before Job = "netcat -z -w 2 %h 9102"
 [...]
}

The netcat command exits non-zero if the client is not reachable, making Bareos cancel the current job before it tries to connect to the client and gets stuck (the job status will be "Error").
If the client is reachable, Bareos executes the job normally.

Command arguments:
 -z     scan only, without sending data
 -w 2   wait at most 2 seconds
 %h     the job's client address
 9102   the default port Bareos uses to connect to the client

(0003163)
wizhippo   
2018-12-28 17:53   
Since RC2 I have not had this issue. Not sure whether others have found the same.
(0003622)
joergs   
2019-11-11 19:31   
This problem has been fixed with 18.2.4-rc2.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1085 [bareos-core] file daemon minor always 2019-05-21 00:55 2019-11-11 19:21
Reporter: leo Platform: x86  
Assigned To: OS: Windows  
Priority: high OS Version: 10  
Status: acknowledged Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: the command to test the configuration files never exits
Description: When I run the command "bareos-fd -t" on a Windows machine, the command remains blocked after displaying any errors and never exits.

So I cannot see the return code of "bareos-fd -t".
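
For context, checking the exit code in a Windows cmd shell would normally look like this (install path illustrative); with the reported behaviour the second command is never reached:

C:\Program Files\Bareos> bareos-fd.exe -t
(blocks here instead of returning)
C:\Program Files\Bareos> echo %ERRORLEVEL%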
Tags:
Steps To Reproduce: run "bareos-fd -t" in the Bareos installation directory
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1117 [bareos-core] director minor always 2019-09-23 13:57 2019-11-11 19:00
Reporter: joergs Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: confirmed Product Version: 19.2.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: When using multiple ACLs (Console ACL + Profile ACL), all negative ACLs except the last one are ignored
Description: A Console can contain ACLs and Profiles. The Profiles can also contain ACLs.

The way the Bareos Director evaluates multiple ACLs is confusing (or just wrong).

All negative ACLs except the last one are ignored.


Tags:
Steps To Reproduce: Create the following resource:

Console {
  name = test
  password = secret
  Pool ACL=!Full
  Profile = operator
}

The operator profile should already exist. If not, create it like this:
Profile {
  name = operator
  Command ACL = *all*
  Pool ACL = *all*
}

Log in as Console test. The ".pools" command will show you all pools, including "Full".
Additional Information: The function UaContext::AclAccessOk evaluates the Console ACLs first.
It stops evaluating ACLs once it gets a positive match (which is correct).
However, the function also continues with the next ACL if 1. no information about the resource has been found, or 2. the resource was rejected. Continuing after a rejection is obviously wrong.
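
To make the described evaluation order concrete, here is a minimal standalone sketch; the types and matching rules are simplified assumptions, not the actual UaContext::AclAccessOk source:

#include <cstdio>
#include <string>
#include <vector>

enum class AclMatch { kAllowed, kRejected, kUnknown };

struct Acl {
  std::vector<std::string> entries;  // e.g. {"!Full"} or {"*all*"}
  AclMatch Match(const std::string& item) const {
    for (const auto& e : entries) {
      if (e == "*all*") return AclMatch::kAllowed;
      if (e == "!" + item) return AclMatch::kRejected;
      if (e == item) return AclMatch::kAllowed;
    }
    return AclMatch::kUnknown;  // this ACL says nothing about the item
  }
};

// The Console ACLs come first in the chain, then the Profile ACLs.
bool AclAccessOk(const std::vector<Acl>& chain, const std::string& item) {
  for (const auto& acl : chain) {
    switch (acl.Match(item)) {
      case AclMatch::kAllowed:
        return true;             // stop on a positive match (correct)
      case AclMatch::kRejected:  // reported bug: a rejection does not stop the
      case AclMatch::kUnknown:   // evaluation, so a later "*all*" wins anyway
        continue;
    }
  }
  return false;
}

int main() {
  // Console "Pool ACL = !Full" followed by Profile "Pool ACL = *all*"
  std::vector<Acl> chain = { Acl{{"!Full"}}, Acl{{"*all*"}} };
  printf("%s\n", AclAccessOk(chain, "Full") ? "allowed" : "rejected");
  // prints "allowed" -- matching the ".pools" output observed above
  return 0;
}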
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1131 [bareos-core] General major have not tried 2019-11-05 08:33 2019-11-11 18:56
Reporter: colttt Platform:  
Assigned To: OS:  
Priority: high OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: add Debian Buster Repo
Description: Hello,

Debian Buster (10) has been available for more than 4 months, and there is still no Bareos repo available.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1113 [bareos-core] installer / packages minor always 2019-09-17 10:14 2019-11-11 18:56
Reporter: DerMannMitDemHut Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 10.1  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos repository lacks debian 10 support
Description: Debian 10 (buster) was released on 6th July 2019 but there is no bareos repository yet.
I would appreciate it if Debian 10 were supported as well.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1116 [bareos-core] storage daemon major random 2019-09-23 10:14 2019-11-11 18:51
Reporter: Scorpionking83 Platform: Linux  
Assigned To: OS: RHEL  
Priority: high OS Version: 7  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Volume tape Incremental size error
Description: Dear sir,

I have also found a problem with the Incremental volume size.
The problem: after increasing the maximum Incremental volume size from 1 GB to 10 GB, the disk filled up after backing up a couple of systems, so I decreased it back to 1 GB, but Bareos still creates 10 GB Incremental volumes.

I use auto-recycle.

How can I solve this?

Tags:
Steps To Reproduce: Edit the storage config file.
Restart the storage daemon.

No luck.
Additional Information:
System Description
Attached Files:
Notes
(0003591)
Scorpionking83   
2019-10-07 09:07   
Is there someone who can help me with this problem?
(0003603)
arogge   
2019-10-16 10:49   
By default maximum volume bytes and parameters like that are stored on the volume when the volume is created. If you want changes in the pool configuration to apply to the volumes in the pool, you can copy the settings:
In bconsole run "update volume" and select "all volumes from all pools".
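
In a bconsole session this looks roughly as follows (the menu wording may differ slightly between versions; the volume name is illustrative):

*update volume
(choose "All Volumes from all Pools" from the parameter menu, then the pool whose settings should be copied)
*llist volume=Incremental-0001
(check that MaxVolBytes of the existing volume now matches the pool definition)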
(0003618)
joergs   
2019-11-11 18:51   
As @arogge has written how to solve this and there is no further feedback, I consider this ticket resolved.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1136 [bareos-core] director crash always 2019-11-11 08:38 2019-11-11 08:39
Reporter: pstorz Platform: Linux  
Assigned To: pstorz OS: any  
Priority: low OS Version: 3  
Status: confirmed Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Director crashes if DirAddress is configured to IP that is not configured on any interface (yet)
Description: When DirAddress is configured in the director resource but the IP address is not available, the director crashes.
Tags:
Steps To Reproduce: Set DirAddress in the director resource to an IP that does not exist on the system, like

DirAddress = 1.1.1.1

and start the director.
Additional Information: This problem no longer exists in current master.

This bug was created for PR 330: https://github.com/bareos/bareos/pull/330
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0003617)
pstorz   
2019-11-11 08:39   
Problem is reproducible

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
924 [bareos-core] documentation minor always 2018-03-03 15:34 2019-11-07 18:26
Reporter: tuxmaster Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 17.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Absolute Job Timeout option for the director settings is not described
Description: Neither the default setting nor the meaning of this option is documented.

Only this little text is present:
Absolute Job Timeout = <positive-integer>
Version >= 14.2.0
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003616)
b.braunger@syseleven.de   
2019-11-07 18:26   
Explained here: https://bareos-users.narkive.com/Mtv7pNGw/absolute-job-timeout

> Absolute Job Timeout was added to 14.2 to replace the Bacula hard coded
> limit of 6 days for a Job which some people ran into when doing long running
> jobs. Its now by default set to 0 which means don't care about how long a Job runs.
> But if you have the idea you have the need to automatically kill jobs that run
> longer then this is known to work well in Bacula as all jobs longer then 6 days
> got nicely killed there. Given you can properly cancel Jobs I see no real reason
> to set it that is also why the new default is 0 and not the old arbitrary 6 days.
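
For reference, a minimal sketch of the directive in the Director resource, based on the quoted explanation; the assumption here is that the positive integer is a number of seconds (518400 s = 6 days, the old hard-coded Bacula limit):

Director {
  Name = bareos-dir
  ...
  # 0 (the default) means jobs may run indefinitely
  Absolute Job Timeout = 518400
}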

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1114 [bareos-core] file daemon minor always 2019-09-20 13:24 2019-11-07 07:45
Reporter: unki Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: acknowledged Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restoring backups with data-encryption + Percona xtrabackup Plugin fails with "filed/crypto.cc:303 Decryption error"
Description: I've started encountering an issue when restoring MariaDB backups that were made with the Percona xtrabackup plugin for the file daemon together with data encryption.

The file-daemon config contains the following lines to enable data-encryption:

  PKI Signatures = yes
  PKI Encryption = yes
  PKI Master Key = "/etc/bareos/bareos.example.com_crt.pem" # ONLY the Public Key
  PKI Keypair = "/etc/bareos/backup4.pem" # Public and Private Keys

The problem only occurs with the combination of a restore + the xtrabackup plugin + a job that involves Full + Incremental backups.
Restoring MariaDB from a Full backup that was made via the plugin works without any issue.
Restoring data from the same node that was _NOT_ backed up with the xtrabackup plugin but with ordinary file backups shows no issue either, regardless of whether a Full or an Incremental backup is restored.

When this problem occurs, the restore terminates just after having successfully restored the Full backup, as it starts handling the involved Incremental backups. The log then contains:

 2019-09-18 14:09:42 backup4-sd JobId 38061: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
 2019-09-18 14:09:42 backup4-sd JobId 38061: Releasing device "FullPool1" (/srv/bareos/pool).
 2019-09-18 14:09:42 backup4-sd JobId 38061: Media Type change. New read device "IncPool8" (/srv/bareos/pool) chosen.
 2019-09-18 14:09:42 backup4-sd JobId 38061: Ready to read from volume "Incremental-Sql-1736" on device "IncPool8" (/srv/bareos/pool).
 2019-09-18 14:09:42 backup4-sd JobId 38061: Forward spacing Volume "Incremental-Sql-1736" to file:block 0:212.
 2019-09-18 14:10:04 backup4-sd JobId 38061: End of Volume at file 0 on device "IncPool8" (/srv/bareos/pool), Volume "Incremental-Sql-1736"
 2019-09-18 14:10:04 backup4-sd JobId 38061: Ready to read from volume "Incremental-Sql-4829" on device "IncPool8" (/srv/bareos/pool).
 2019-09-18 14:10:04 backup4-sd JobId 38061: Forward spacing Volume "Incremental-Sql-4829" to file:block 0:749109650.
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: filed/crypto.cc:303 Decryption error. buf_len=30903 decrypt_len=0 on file /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: Uncompression error on file /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025. ERR=Zlib data error
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025
 2019-09-18 14:10:06 backup4-sd JobId 38061: Error: lib/bsock_tcp.cc:417 Wrote 31 bytes to client:2a01:4f8:212:369f:4100:0:1c00:0:9103, but only 0 accepted.
 2019-09-18 14:10:06 backup4-sd JobId 38061: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
 2019-09-18 14:10:06 backup4-sd JobId 38061: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:2a01:4f8:212:369f:4100:0:1c00:0:9103
 2019-09-18 14:10:06 backup4-sd JobId 38061: Releasing device "IncPool8" (/srv/bareos/pool).

And one more thing to note: bareos-fd actually crashes when this happens:

Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: filed/crypto.cc:303 Decryption error. buf_len=28277 decrypt_len=0 on file /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835
Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: Uncompression error on file /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835. ERR=Zlib data error
Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835
Sep 17 12:37:02 backup4 bareos-fd[1046]: BAREOS interrupted by signal 11: Segmentation violation
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Main process exited, code=exited, status=1/FAILURE
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Unit entered failed state.
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Failed with result 'exit-code'.
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Service hold-off time over, scheduling restart.
Sep 17 12:37:02 backup4 systemd[1]: Stopped Bareos File Daemon service.


In my search I have only found this one bug with related error messages:
https://bugs.bareos.org/view.php?id=192
But that one has obviously been solved.

It all revolves around the xtrabackup-plugin - but I don't really understand why a plugin would have an effect on the data encryption that is done by the file daemon. Aren't those different layers in Bareos that are involved here?
Tags:
Steps To Reproduce: Only on my setup at the moment.
Additional Information:
System Description
Attached Files:
Notes
(0003577)
unki   
2019-09-20 13:37   
One more thing - as seen in the log lines, data compression with GZIP is enabled on that FileSet:

FileSet {
  Name = "DbClientFileSetSql"
  Ignore File Set Changes = yes
  Include {
     Options {
       compression=GZIP
       signature = MD5
     }
     File = /etc/mysql
     Plugin = "python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/bareos/bareos.my.cnf:extradumpoptions=--galera-info --tmpdir=/data/percona_tmp --ftwrl-wait-timeout=300 --check-privileges --no-backup-locks --no-lock --parallel=2 --use-memory=2GB"
  }
}

I will give it a try tonight and perform the backups without data compression enabled, just to see if it makes any difference.
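A quick way to run that test without touching anything else (a sketch; the FileSet file path is an assumption, not taken from this report):

# comment out the compression option in the FileSet (path is an assumption)
sed -i 's/^\( *\)compression=GZIP/\1# compression=GZIP/' \
  /etc/bareos/bareos-dir.d/fileset/DbClientFileSetSql.conf
# make the director pick up the change before the next run
echo "reload" | bconsole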
(0003578)
unki   
2019-09-21 12:11   
(Last edited: 2019-09-21 12:13)
Disabling compression makes no difference here. I've also tried disabling the usage of the Percona xtrabackup plugin while restoring (by commenting out 'Plugin Directory = ' and adding 'Plugin Names = ""').

The decryption error remains the same. Somehow the plugin seems to screw up the backups so that they later fail on restore.

 2019-09-21 11:48:07 backup4-sd JobId 38585: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-5549"
 2019-09-21 11:48:07 backup4-sd JobId 38585: Ready to read from volume "Full-Sql-5550" on device "FullPool1" (/srv/bareos/pool).
 2019-09-21 11:48:07 backup4-sd JobId 38585: Forward spacing Volume "Full-Sql-5550" to file:block 0:215.
 2019-09-21 11:56:21 backup4-sd JobId 38585: End of Volume at file 9 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-5550"
 2019-09-21 11:56:21 backup4-sd JobId 38585: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
 2019-09-21 11:56:21 backup4-sd JobId 38585: Releasing device "FullPool1" (/srv/bareos/pool).
 2019-09-21 11:56:21 backup4-sd JobId 38585: Media Type change. New read device "IncPool1" (/srv/bareos/pool) chosen.
 2019-09-21 11:56:21 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2164" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:21 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2164" to file:block 0:212.
 2019-09-21 11:56:32 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2164"
 2019-09-21 11:56:32 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2157" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:32 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2157" to file:block 0:621305456.
 2019-09-21 11:56:37 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2157"
 2019-09-21 11:56:37 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-3950" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:37 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-3950" to file:block 0:219.
 2019-09-21 11:56:55 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-3950"
 2019-09-21 11:56:55 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2167" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:55 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2167" to file:block 0:212.
 2019-09-21 11:56:56 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2167"
 2019-09-21 11:56:56 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2169" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:56 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2169" to file:block 0:212.
 2019-09-21 11:57:09 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2169"
 2019-09-21 11:57:09 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2171" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:303 Decryption error. buf_len=65512 decrypt_len=0 on file /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038576
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038576
 2019-09-21 11:57:09 backup4-fd JobId 38585: Fatal error: TLS read/write failure.: ERR=error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038568
 2019-09-21 11:57:09 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2171" to file:block 0:212.
 2019-09-21 11:57:12 backup4-sd JobId 38585: Error: lib/bsock_tcp.cc:417 Wrote 32 bytes to client:2a01:4f8:212:369f:4100:0:1c00:0:9103, but only 0 accepted.
 2019-09-21 11:57:12 backup4-sd JobId 38585: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
 2019-09-21 11:57:12 backup4-sd JobId 38585: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:2a01:4f8:212:369f:4100:0:1c00:0:9103
 2019-09-21 11:57:12 backup4-sd JobId 38585: Releasing device "IncPool1" (/srv/bareos/pool).

(0003592)
unki   
2019-10-10 07:03   
I have now had the chance to perform the backup & restore without PKI encryption of the backup data.
The issue with the Percona xtrabackup plugin for the file daemon also occurs without encryption.

10-Oct 06:30 bareos-dir JobId 41723: Start Restore Job RestoreFiles.2019-10-10_06.29.59_27
10-Oct 06:30 bareos-dir JobId 41723: Connected Storage daemon at backup4.example.com:9103, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 bareos-dir JobId 41723: Using Device "FullPool1" to read.
10-Oct 06:30 bareos-dir JobId 41723: Connected Client: db-fd at db.example.com:9102, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 bareos-dir JobId 41723: Handshake: Immediate TLS
10-Oct 06:30 bareos-dir JobId 41723: Encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 db-fd JobId 41723: Connected Storage daemon at backup4.example.com:9103, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8731775617207 from restore object of job 41695
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8731965444667 from restore object of job 41707
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8732041077071 from restore object of job 41712
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8732111279885 from restore object of job 41717
10-Oct 06:30 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2711" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:30 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2711" to file:block 0:206.
10-Oct 06:48 backup4-sd JobId 41723: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2711"
10-Oct 06:48 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2552" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:48 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2552" to file:block 9:407702037.
10-Oct 06:53 backup4-sd JobId 41723: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2552"
10-Oct 06:53 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2795" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:53 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2795" to file:block 0:206.
10-Oct 06:54 backup4-sd JobId 41723: End of Volume at file 0 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2795"
10-Oct 06:54 backup4-sd JobId 41723: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
10-Oct 06:54 backup4-sd JobId 41723: Releasing device "FullPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Media Type change. New read device "IncPool1" (/srv/bareos/pool) chosen.
10-Oct 06:54 backup4-sd JobId 41723: Ready to read from volume "Incremental-Sql-2873" on device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Forward spacing Volume "Incremental-Sql-2873" to file:block 0:883472630.
10-Oct 06:54 backup4-sd JobId 41723: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2873"
10-Oct 06:54 backup4-sd JobId 41723: Ready to read from volume "Incremental-Sql-2875" on device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Forward spacing Volume "Incremental-Sql-2875" to file:block 0:590294726.
10-Oct 06:54 db-fd JobId 41723: Fatal error: filed/fd_plugins.cc:1178 Second call to startRestoreFile. plugin=python-fd.so cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/bareos/bareos.my.cnf:log=/var/log/bareos/xtrabackup.log:extradumpoptions=--galera-info --ftwrl-wait-timeout=300 --check-privileges --tmpdir=/data/percona_tmp --no-backup-locks --no-lock --parallel=2 --use-memory=2GB
10-Oct 06:54 backup4-sd JobId 41723: Error: lib/bsock_tcp.cc:417 Wrote 33 bytes to client:::4100:0:1c00:0:9103, but only 0 accepted.
10-Oct 06:54 backup4-sd JobId 41723: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
10-Oct 06:54 backup4-sd JobId 41723: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:::4100:0:1c00:0:9103
10-Oct 06:54 backup4-sd JobId 41723: Releasing device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 bareos-dir JobId 41723: Error: Bareos bareos-dir 18.2.6 (13Feb19):
  Build OS: Linux-4.19.0-6-amd64 debian Debian GNU/Linux 9.11 (stretch)
  JobId: 41723
  Job: RestoreFiles.2019-10-10_06.29.59_27
  Restore Client: db-fd
  Start time: 10-Oct-2019 06:30:01
  End time: 10-Oct-2019 06:54:19
  Elapsed time: 24 mins 18 secs
  Files Expected: 20
  Files Restored: 19
  Bytes Restored: 286,059,460,373
  Rate: 196199.9 KB/s
  FD Errors: 1
  FD termination status: Fatal Error
  SD termination status: Fatal Error
  Bareos binary info:
  Termination: *** Restore Error ***

10-Oct 06:54 bareos-dir JobId 41723: Begin pruning Files.
10-Oct 06:54 bareos-dir JobId 41723: No Files found to prune.
10-Oct 06:54 bareos-dir JobId 41723: End auto prune.
(0003593)
unki   
2019-10-10 07:04   
The xtrabackup version used is v2.4.15.
(0003595)
unki   
2019-10-10 13:23   
Just to be sure, I have now tried downgrading xtrabackup to v2.4.12 - the same issue occurs:

10-Oct 13:06 backup4-sd JobId 41769: Ready to read from volume "Incremental-Sql-2885" on device "IncPool1" (/srv/bareos/pool).
10-Oct 13:06 backup4-sd JobId 41769: Forward spacing Volume "Incremental-Sql-2885" to file:block 0:212.
10-Oct 13:06 db7-fd JobId 41769: Error: Compressed header size error. comp_len=31653, message_length=36220
10-Oct 13:06 db7-fd JobId 41769: Error: python-fd: Restore command returned non-zero value: 1, message:
10-Oct 13:06 backup4-sd JobId 41769: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2885"
10-Oct 13:06 backup4-sd JobId 41769: Ready to read from volume "Incremental-Sql-2888" on device "IncPool1" (/srv/bareos/pool).
(0003615)
unki   
2019-11-06 17:36   
(Last edited: 2019-11-07 07:45)
Strange thing - the job has stored (with the help of the Percona plugin) the following files in the backup:

$ ls
xbstream.0000046633
xbstream.0000046648
xbstream.0000046665
xbstream.0000046675
xbstream.0000046683
xbstream.0000046691
xbstream.0000046698
xbstream.0000046705
xbstream.0000046717
xbstream.0000046722
xbstream.0000046729


If I restore those xbstream.* files one by one, with individual jobs into individual directories, it succeeds every time.
Restoring them all in one job causes the previously posted errors.
I am wondering whether this has a cause similar to https://bugs.bareos.org/view.php?id=192.
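For reference, the restored chunks are Percona xbstream archives, so they can also be unpacked one by one into separate directories after the restore, mirroring the per-file restores that succeed (a sketch; all paths are assumptions):

# extract each restored chunk into its own directory
for f in /data/restore/_percona/xbstream.*; do
  dir=/data/percona_unpack/$(basename "$f")
  mkdir -p "$dir"
  # xbstream -x extracts, -C selects the target directory
  xbstream -x -C "$dir" < "$f"
done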


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1134 [bareos-core] vmware plugin feature always 2019-11-06 15:32 2019-11-06 15:32
Reporter: ratacorbo Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Installing bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 python-pyvmomi error
Description: When trying to install bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 on CentOS 7, yum reports an error that the package python-pyvmomi does not exist.
Tags:
Steps To Reproduce: yum install bareos-vmware-plugin
Additional Information: Error: Package: bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 (bareos_bareos-18.2)
           Requires: python-pyvmomi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
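One possible workaround, assuming the plugin only needs the pyvmomi Python module at runtime rather than that exact RPM, is to install the module from PyPI and bypass the RPM dependency check (a sketch, not an officially supported path):

# pip comes from EPEL on CentOS 7; the package name may vary
yum install -y epel-release
yum install -y python-pip yum-utils
pip install pyvmomi
# fetch the plugin rpm and install it while ignoring the missing dependency
yumdownloader bareos-vmware-plugin
rpm -ivh --nodeps bareos-vmware-plugin-*.rpm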
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1130 [bareos-core] General major always 2019-11-03 18:58 2019-11-03 18:58
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Allow to build client without bconsole or python
Description: Building bconsole and Python support fails on Solaris 10. It should be possible to build the client without those two components.
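The kind of invocation being asked for might look like this (a sketch; the option names client-only and python are assumptions based on the documented build options of that era, not confirmed for 18.4.1 on Solaris):

cmake .. -Dclient-only=yes -Dpython=no
make && make install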
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1129 [bareos-core] General major always 2019-11-03 18:57 2019-11-03 18:57
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: OpenSSL and readline include path missing in cmake build
Description: Actually version is 18.4.1, but that is missing in the mantis version dropdown.

The new cmake based compilation correctly detects non-system OpenSSl and readline, when run e.g. with flags

      -DOPENSSL_ROOT_DIR=/path/to/openssl
      -DReadline_ROOT_DIR=/path/to/readline

But then when running make, it can't find the header files, because the respective include paths never get added.

As a workaround I was using the following patch:

--- core/CMakeLists.txt Fri Sep 28 10:30:36 2018
+++ core/CMakeLists.txt Sun Nov 3 14:55:00 2019
@@ -435,6 +435,10 @@
    set(HAVE_TLS "1")
 ENDIF()

+IF( "${HAVE_OPENSSL}")
+include_directories(${OPENSSL_INCLUDE_DIR})
+ENDIF()
+
 IF(NOT openssl)
    unset(HAVE_OPENSSL)
    unset(HAVE_TLS)
@@ -446,6 +450,7 @@
 set(got_readline "${READLINE_FOUND}" )
 if ("${READLINE_FOUND}")
    set(HAVE_READLINE 1)
+ include_directories(${Readline_INCLUDE_DIR})
 endif()

 if ("${PAM_FOUND}")

Tags: compile cmake headers openssl readline
Steps To Reproduce: Run cmake against a non-system OpenSSL and readline, and make sure the OpenSSL and readline header files are not installed in the default system header file locations. Then run make.
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1128 [bareos-core] api major always 2019-11-03 18:52 2019-11-03 18:52
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Compilation error due to existing symbol round()
Description: Actually version is 18.4.1 but that doesn't exist in the mantis version dropdown.

File core/src/lib/bsnprintf.cc defines a function round(), which leads to a compilation error on Solaris 10 Sparc, because that symbol already exists. Since the new round() is static, one can rename it, e.g. to roundit. The following patch helps:

--- core/src/lib/bsnprintf.cc Fri Sep 28 10:30:36 2018
+++ core/src/lib/bsnprintf.cc Sun Nov 3 18:07:19 2019
@@ -618,7 +618,7 @@
    return result;
 }

-static int64_t round(LDOUBLE value)
+static int64_t roundit(LDOUBLE value)
 {
    int64_t intpart;

@@ -685,7 +685,7 @@
    /* We "cheat" by converting the fractional part to integer by
     * multiplying by a factor of 10
     */
- fracpart = round((pow10(max)) * (ufvalue - intpart));
+ fracpart = roundit((pow10(max)) * (ufvalue - intpart));

    if (fracpart >= pow10(max)) {
       intpart++;
Tags: compile solaris
Steps To Reproduce: Compile on Solaris 10 Sparc.
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1127 [bareos-core] General major always 2019-11-03 18:47 2019-11-03 18:47
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: cmake function check fails due to library dependency
Description: Actually version is 18.4.1 but that is not yet available in the mantis version dropdown.

The problem is that during the cmake run on Solaris 10 Sparc certain functions that do exist are not detected, because they need "-lrt" or "-lsocket -lnsl" during compilation (see the sketch after the list):

fdatasync -lrt
nanosleep -lrt
gethostbyname_r -lnsl
getaddrinfo -lsocket -lnsl
inet_ntop -lsocket -lnsl
inet_pton -lsocket -lnsl
gai_strerror -lsocket -lnsl
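
The missing link dependency can be confirmed by hand with a tiny test program (a sketch):

cat > conftest.c <<'EOF'
#include <netdb.h>
int main(void) { return gai_strerror(0) != 0; }
EOF
cc conftest.c                  # fails to link: gai_strerror is undefined
cc conftest.c -lsocket -lnsl   # links cleanly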

As a workaround, the following patch was used:

--- cmake/BareosCheckFunctions.cmake Fri Sep 28 10:30:36 2018
+++ cmake/BareosCheckFunctions.cmake Sun Nov 3 18:06:16 2019
@@ -19,6 +19,10 @@

 INCLUDE (CheckFunctionExists)

+list(APPEND CMAKE_REQUIRED_LIBRARIES rt)
+list(APPEND CMAKE_REQUIRED_LIBRARIES socket)
+list(APPEND CMAKE_REQUIRED_LIBRARIES nsl)
+
 CHECK_FUNCTION_EXISTS(strtoll HAVE_STRTOLL)
 CHECK_FUNCTION_EXISTS(backtrace HAVE_BACKTRACE)
 CHECK_FUNCTION_EXISTS(backtrace_symbols HAVE_BACKTRACE_SYMBOLS)


But that obviously isn't the right solution, because it must be made dependent on the platform. Furthermore, it adds -lrt -lsocket -lnsl as dependencies to all binaries and libraries. That should only happen for those that actually use the respective symbols needing these libraries.
Tags: cmake compilation
Steps To Reproduce: Run cmake on Solaris 10 and notice that all of the above function lookups fail with "not found".
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1004 [bareos-core] storage daemon major always 2018-09-03 09:22 2019-10-28 13:04
Reporter: hostedpower Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 17.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: none
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact: yes
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Very high cpu usage on Debian stretch
Description: Each time consolidation runs we see a load of up to 18-20 (on a 6-core machine).

There are about 20 backup clients on this machine in total. This is a backup server with sd only, the director is somewhere else.

This seems to have started pretty recently; there are three possible causes I can think of:

- Bug in newer bareos version
- Bug because of kernel update (less likely I would think)
- Extra backup clients, but it has grown pretty slowly and I don't think we saw such high usage before


Tags:
Steps To Reproduce: Use consolidation for about 20 clients on a single SD; please see the screenshot with perf top.

htop shows load of about 18-20 and high cpu usage on a couple of bareos-sd processes.
Additional Information:
System Description
Attached Files: high-cpu-bareos.png (65,852 bytes) 2018-09-03 09:22
https://bugs.bareos.org/file_download.php?file_id=306&type=bug
png
Notes
(0003543)
arogge   
2019-07-31 15:55   
I just had a look: it is in fact CRC32 that is that slow.
There are faster implementations available and we will consider integrating one of them.
(0003545)
hostedpower   
2019-07-31 16:00   
Wow that would be terrific, we really suffer on that server, even more than before since we've grown a bit in the meanwhile :)
(0003613)
arogge   
2019-10-28 13:04   
We have replaced the CRC implementation with a faster one. This will be released in 19.2.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1108 [bareos-core] director minor always 2019-08-09 17:11 2019-10-28 12:47
Reporter: joergs Platform:  
Assigned To: franku OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 19.2.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 19.2.1  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: none
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: PAM users can be misused to directly connect to the bareos-director without password
Description: When using PAM as authentication method, the user password in the bareos-director is not used for authentication. However, the bareos configuration requires a password. In the documentation https://docs.bareos.org/TasksAndConcepts/PAM.html#pam-user the password is set to an empty string ("").

However, these users/consoles can be used to log in via bconsole *without* a password.
This is, of course, a security problem.
Tags:
Steps To Reproduce: Install bareos
Create following files:

/etc/bareos/bareos-dir.d/console/pam.conf:
Console {
  Name = pam
  Password = secret
  UsePamAuthentication = yes
}

/etc/bareos/bareos-dir.d/console/user1.conf
Console {
  Name = user1
  Password = ""
  Profile = admin
}

bconsole-user1.conf:
Director {
  Name = bareos-dir
  address = localhost
  Password = "UNUSED"
}

Console {
  Name = "user1"
  Password = ""
}

systemctl restart bareos-dir.service

bconsole -c bconsole-user1.conf
Connecting to Director localhost:9101
 Encryption: ECDHE-PSK-CHACHA20-POLY1305
1000 OK: bareos-dir Version: 19.1.2 (01 February 2019)
bareos.org build binary
bareos.org binaries are UNSUPPORTED by bareos.com.
Get official binaries and vendor support on https://www.bareos.com
You are logged in as: user1

Enter a period to cancel a command.
*
Additional Information: Temporary workaround:
create PAM users with random passwords.
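A sketch of that workaround, reusing the console definition from the reproduction steps above and generating the random password with openssl:

# give the PAM console user an unguessable password instead of ""
pw=$(openssl rand -hex 24)
cat > /etc/bareos/bareos-dir.d/console/user1.conf <<EOF
Console {
  Name = user1
  Password = "$pw"
  Profile = admin
}
EOF
systemctl restart bareos-dir.service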
Attached Files:
Notes
(0003565)
franku   
2019-09-03 09:01   
There will be a dedicated User resource that only accepts a name, description, acl and profile directives, but no password. This resource will only be used for login using pam authentication on the director.
(0003566)
franku   
2019-09-03 09:07   
The fix does not affect the issue that Console resources in general can be used without password.
(0003612)
franku   
2019-10-28 12:17   
see Story #3158

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1125 [bareos-core] director major always 2019-10-27 10:07 2019-10-27 10:07
Reporter: jstoffen Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Segmentation fault after a few days of running
Description: After a couple of days of running perfectly, the process gets killed.

A .bactrace file is produced, but it is almost empty:

cat bareos-dir.11927.bactrace
Attempt to dump current JCRs. njcrs=0

I'll try to provide a full trace the next time.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1124 [bareos-core] file daemon crash always 2019-10-26 18:58 2019-10-26 18:58
Reporter: medorna Platform: server  
Assigned To: OS: windows  
Priority: normal OS Version: 2012  
Status: new Product Version: 19.2.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: windows bareos-fd bug
Description: There is a bug in bareos-fd that prevents bareos-dir from connecting to the client after reinstalling bareos-fd on Windows Server: a configuration snippet stays hardcoded in the registry.
Tags:
Steps To Reproduce: - Install bareos-fd on windows 2012
- Uninstall it.
- Install it again.
Additional Information: The registry needs to be cleaned to manage the credentials again.
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
778 [bareos-core] director major always 2017-02-08 21:44 2019-10-19 01:08
Reporter: dpcushing Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7  
Status: new Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Always Incremental Consolidation is consolidating and deleting backup copies from a DR pool
Description: I have configured Bareos with Always Incremental backups. I have also configured a copy schedule that runs daily and copies all of the daily backup jobs to a network drive in another location. As a simple example, let's assume all odd job IDs (1,3,5,7,9,...) are backups and all even job IDs (2,4,6,8,10,...) are DR copies of those backups. On day 8 a consolidation job runs and consolidates, then purges job 1. When job 1 is purged, job 2 gets 'promoted' from copy to backup. On day 9 a consolidation job runs and consolidates jobs 2 and 3, then purges job 2. This continues daily.

The end result is that each job gets consolidated twice: once as the backup and again the next day as the 'promoted' copy of the backup. After consolidation, the backups are purged and the DR copy is lost.
Tags:
Steps To Reproduce: Set up daily Always Incremental backups with 'Always Incremental Job Retention = days' and 'Always Incremental Keep Number = 3'. Set up a copy job to run daily to copy all backup jobs to a separate storage pool. Set up an Always Incremental Consolidate job to run daily.
Additional Information: In the Jobs table in the database you'll see the original backup job with Type B and the copies with Type C. After consolidation of a backup job you'll see that job purged and its associated copy job changed to Type B. On the next day you'll see the consolidate job operate on the 'used to be a copy' job, since it is now identified as a backup. Then the job is removed from the database. So the two issues identified are: 1) the same backup set is always consolidated twice on consecutive days (or whatever the consolidation interval is); 2) the DR copies, which may be set up for a longer retention period, are purged by consolidation.
System Description
Attached Files:
Notes
(0002607)
dpcushing   
2017-03-14 01:30   
For anybody else who may be facing this issue: I wrote a shell script that I run in an admin job after running consolidate; it changes the newly 'promoted' backups back to copies. Note in the script below that the pool where the copies reside is named 'FileCopy'. Here's the content ...

#!/bin/bash
# grab the database credentials from existing configuration files
catalogFile=`find /etc/bareos/bareos-dir.d/catalog/ -type f`
dbUser=`grep dbuser $catalogFile | grep -o '".*"' | sed 's/"//g'`
dbPwd=`grep dbpassword $catalogFile | grep -o '".*"' | sed 's/"//g'`

# Make sure all DR-Copy jobs that are in the FileCopy pool are properly set in the database as Copy (C) jobs.
/usr/bin/mysql bareos -u $dbUser -p$dbPwd -se "UPDATE Job J SET J.Type = 'C' WHERE J.Type <> 'C' AND EXISTS (SELECT 1 FROM Media M, JobMedia JM WHERE JM.JobId = J.JobId AND M.MediaId = JM.MediaID AND M.MediaType = 'FileCopy');"
(0003571)
brockp   
2019-09-15 04:38   
(Last edited: 2019-09-15 04:59)
I can confirm this issue still exists and is not documented as a limitation in 18.2.5.

As far as I can tell this directly conflicts with: https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html

(0003602)
arogge   
2019-10-16 10:47   
This can be worked around as described in https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#virtual-full-jobs

"To make sure the longterm Level (Dir->Job) = VirtualFull is not taken as base for the next incrementals, the job type of the copied job is set to Type (Dir->Job) = Archive with the Run Script (Dir->Job)."

If you think the documentation needs a change to document this better, feel free to suggest something. Otherwise I'd like to close this issue.
(0003609)
dpcushing   
2019-10-19 01:08   
@arogge - The issue I have reported here is different. I do have VirtualFull backups configured for long term archiving and I do set those VirtualFull jobs to Archive via script as described in your reference link. The issue that I am describing is specific to Copy jobs. I prepare an offsite copy every night of each of the day's backups to allow me to perform an accurate recovery in the event of a complete site disaster. The problem that I've described is that when a primary AI job is consolidated and removed from the catalog, Bareos promotes the copy job to a primary job. On the next cycle that new 'primary' (used to be copy) job is consolidated again. Once that new 'primary' (used to be copy) job is consolidated it will get deleted from the catalog, making my offsite copies incomplete ;-0

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1122 [bareos-core] General major always 2019-10-18 19:32 2019-10-18 19:32
Reporter: xyros Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Consolidate queues and indefinitely orphans jobs but falsely reports status as "Consolidate OK" for last queued
Description: My Consolidate job never succeeds -- quickly terminating with "Consolidate OK" while leaving all the VirtualFull jobs it started queued and orphaned.

In the WebUI listing for the allegedly successful Consolidate run, it always lists the sequentially last (by job ID) client it queued as being the successful run; however, the level is "Incremental", nothing is actually done, and the client's VirtualFull job is actually still queued up with all the other clients.

In bconsole the status is similar to this:

Running Jobs:
Console connected at 15-Oct-19 15:06
 JobId Level Name Status
======================================================================
   636 Virtual PandoraFMS.2019-10-15_14.33.02_06 is waiting on max Storage jobs
   637 Virtual MongoDB.2019-10-15_14.33.03_09 is waiting on max Storage jobs
   638 Virtual DNS-DHCP.2019-10-15_14.33.04_11 is waiting on max Storage jobs
   639 Virtual Desktop_1.2019-10-15_14.33.05_19 is waiting on max Storage jobs
   640 Virtual Desktop_2.2019-10-15_14.33.05_20 is waiting on max Storage jobs
   641 Virtual Desktop_3.2019-10-15_14.33.06_21 is waiting on max Storage jobs
====


Given the above output, for example, the WebUI would show the following:

    642 Consolidate desktop3-fd.hq Consolidate Incremental 0 0.00 B 0 Success
    641 Desktop_3 desktop3-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    640 Desktop_2 desktop2-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    639 Desktop_1 desktop1-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    638 DNS-DHCP dns-dhcp-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    637 MongoDB mongodb-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    636 PandoraFMS pandorafms-fd.hq Backup VirtualFull 0 0.00 B 0 Queued


I don't know if this has anything to do with the fact that I have multiple storage definitions, one for each VLAN the server is on, and an additional one dedicated to the storage addressable on the default IP (see bareos-dir/storage/File.conf in the attached bareos.zip file). Technically this should not matter, but I get the impression Bareos has not been designed/tested to work elegantly in an environment where the server participates in VLANs.

The reason I'm using VLANs is so that connections do not have to go through a router to reach the clients. Therefore, the full network bandwidth of each LAN segment is available to the Bareos client/server data transfer.

I've tried debugging the Consolidate backup process using "bareos-dir -d 400 >> /var/log/bareos-dir.log"; however, I get nothing that particularly identifies the issue. I have attached a truncated log file that contains activity starting with queuing the second-to-last. I've cut off the log at the point where it is stuck in the endless cycling with output of:

bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN107 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
etc...

For convenience, I have attached all the most relevant excerpts of my configuration files (sanitized for privacy/security reasons).

I suspect there's a bug that is responsible for this; however, I'm unable to make heads or tails of what's going on.

Could someone please take a look?

Thanks
Tags: always incremental, consolidate
Steps To Reproduce: 1. Place Bareos on a network switch (virtual or actual) with tagged VLANS
2. Configure Bareos host to have connectivity on three or more VLANs
3. Make sure you have clients you can backup, on each of the VLANs
4. Use the attached config files as reference for setting up storages and jobs for testing.
Additional Information:
System Description
Attached Files: bareos.zip (9,113 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=391&type=bug
bareos-dir.log (41,361 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=392&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1121 [bareos-core] file daemon text always 2019-10-18 16:20 2019-10-18 16:20
Reporter: b.braunger@syseleven.de Platform: Linux  
Assigned To: OS: RHEL  
Priority: low OS Version: 7  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: documentation of setFileAttributes for plugins not correct
Description: The documentation says that setFileAttributes for FD plugins is not implemented yet. However, in a Python plugin I can write a corresponding function and it does get called.

https://docs.bareos.org/DeveloperGuide/pluginAPI.html#setfileattributes-bpcontext-ctx-struct-restore-pkt-rp

Is the documentation out of date or am I getting something wrong?
Tags:
Steps To Reproduce: * Create a restore job which calls a Python plugin
* Start the FD with debug output (at least level 150; see the sketch below)
* Watch the log file of the FD
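A sketch of those steps, assuming a foreground run instead of the systemd service (-f keeps the daemon in the foreground, -d sets the debug level):

bareos-fd -f -d 150 2>&1 | tee /tmp/bareos-fd-debug.log
# in another shell, trigger the restore job via bconsole, then:
grep PluginSetAttributes /tmp/bareos-fd-debug.log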
Additional Information: LOG example:
test-server (150): filed/fd_plugins.cc:1308-326 PluginSetAttributes
test-server (100): filed/python-fd.cc:2740-326 python-fd: set_file_attributes() entry point in Python called with RestorePacket(stream=1, data_stream=2, type=3, file_index=43337, linkFI=0, uid=0, statp="StatPacket(dev=0, ino=0, mode=0644, nlink=0, uid=0, gid=0, rdev=0, size=-1, atime=1571011435, mtime=1563229608, ctime=1571011439, blksize=4096, blocks=1)", attrEx="", ofname="/opt/puppetlabs/puppet/share/doc/openssl/html/man3/SSL_CTX_set_session_id_context.html", olname="", where="", RegexWhere="<NULL>", replace=97, create_status=2)
test-server (150): filed/restore.cc:517-326 Got hdr: Files=43337 FilInx=43338 size=210 Stream=1, Unix attributes.
test-server (130): filed/restore.cc:534-326 Got stream: Unix attributes len=210 extract=0
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1058 [bareos-core] General crash always 2019-02-16 11:58 2019-10-17 15:22
Reporter: tuxmaster Platform: Linux  
Assigned To: arogge OS: RHEL  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: 18.2.6 build error while cmake don't build the correct dependency's
Description: It looks like cmake produces a makefile in which ndmp_tape.cc is built before rpcgen has run for all files.
So the build fails with:
[ 28%] Generating ndmp9.h, ndmp9_xdr.c
cd /builddir/build/BUILD/bareos-Release-18.2.6/core/src/ndmp && /usr/bin/rpcgen -CM /builddir/build/BUILD/bareos-Release-18.2.6/cor
e/src/ndmp/ndmp9.x
BUILDSTDERR: In file included from /builddir/build/BUILD/bareos-Release-18.2.6/core/src/ndmp/ndmlib.h:43:0,
BUILDSTDERR: from /builddir/build/BUILD/bareos-Release-18.2.6/core/src/ndmp/ndmagents.h:100,
BUILDSTDERR: from /builddir/build/BUILD/bareos-Release-18.2.6/core/src/stored/ndmp_tape.cc:45:
BUILDSTDERR: /builddir/build/BUILD/bareos-Release-18.2.6/core/src/ndmp/ndmprotocol.h:102:19: fatal error: ndmp0.h: No such file or
directory
BUILDSTDERR: #include "ndmp0.h"
BUILDSTDERR: ^
BUILDSTDERR: compilation terminated.
[ 29%] Generating ndmp0.h, ndmp0_xdr.c
cd /builddir/build/BUILD/bareos-Release-18.2.6/core/src/ndmp && /usr/bin/rpcgen -CM /builddir/build/BUILD/bareos-Release-18.2.6/core/src/ndmp/ndmp0.x
Tags:
Steps To Reproduce:
Additional Information: See the build.log for the complete build.
System Description
Attached Files: build.log (226,170 bytes) 2019-02-16 11:58
https://bugs.bareos.org/file_download.php?file_id=352&type=bug
log.tar.xz (37,972 bytes) 2019-07-19 18:07
https://bugs.bareos.org/file_download.php?file_id=385&type=bug
Notes
(0003307)
tuxmaster   
2019-03-31 13:59   
Workaround before the build:
for X in 0 2 3 4 9; do
  rpcgen -CM ../core/src/ndmp/ndmp${X}.x
done
(0003434)
arogge   
2019-07-10 17:39   
This has never happened to me.
What version of cmake are you using?
What is your cmake and make commandline?

In short: how can I reproduce this?
(0003458)
tuxmaster   
2019-07-13 10:36   
cmake version 3.13.5 is used.
To reproduce it, simply use a Fedora/RedHat or CentOS system and try to build it the default way on these platforms:
1. Call rpmbuild -bs to create the SRPM file.
2. Build the package via mock -r centos-7-x86_64 PATH_TO_SRPM.
Then the build fails, because the files are not generated.

And to build it with my fix:
 mock -r centos-7-x86_64 PATH_TO_SRPM

Because the ticket system here is limited to 2MB, I have uploaded all needed files to my Nextcloud instance:
https://speicher.terrortux.de/s/tA5k5ZdMdjtZdbX
(0003505)
arogge   
2019-07-19 15:24   
I build on CentOS 7 all the time using cmake 3.13.3, so you must be hitting some kind of corner case.
I have, however, never tried building with "make -j2", just "make", "make -j" and "make -j8".

I'll have to look into that. Can you reliably reproduce this? Because I cannot...
(0003506)
tuxmaster   
2019-07-19 18:07   
Yes, I can reproduce it on every build, and it plays no role how many parallel build jobs are used.
Running make -j1 also fails.
I have attached all log files that were created during the build.
(0003519)
arogge   
2019-07-22 13:17   
we'll see what we can do to fix this.
(0003590)
arogge   
2019-10-02 13:22   
Fix committed to bareos master branch with changesetid 11842.
(0003605)
arogge   
2019-10-17 15:22   
Fix committed to bareos bareos-18.2 branch with changesetid 11952.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1072 [bareos-core] regression testing block always 2019-03-31 12:55 2019-10-16 14:22
Reporter: tuxmaster Platform: x86  
Assigned To: arogge OS: Fedora  
Priority: normal OS Version: 29  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: 18.2.6 build error on tests
Description: The build part for the test fails on Fedora >=29
BUILDSTDERR: /builddir/build/BUILD/bareos-Release-18.2.6/core/src/tests/lib_tests.cc: In function 'void do_get_name_from_hello_test(const char*, const char*, const string&, cons
t BareosVersionNumber&)':
BUILDSTDERR: /builddir/build/BUILD/bareos-Release-18.2.6/core/src/tests/lib_tests.cc:168:42: error: format not a string literal and no format arguments [-Werror=format-security]
BUILDSTDERR: sprintf(bashed_client_name, client_name);
BUILDSTDERR: ^
BUILDSTDERR: cc1plus: some warnings being treated as errors
make[2]: Leaving directory '/builddir/build/BUILD/bareos-Release-18.2.6/my-build'
BUILDSTDERR: make[2]: *** [core/src/tests/CMakeFiles/test_lib.dir/build.make:131: core/src/tests/CMakeFiles/test_lib.dir/lib_tests.cc.o] Error 1
BUILDSTDERR: make[1]: *** [CMakeFiles/Makefile2:686: core/src/tests/CMakeFiles/test_lib.dir/all] Error 2
BUILDSTDERR: make[1]: *** Waiting for unfinished jobs....
Tags: test
Steps To Reproduce:
Additional Information: See the build log for details.
Attached Files: build.log (561,064 bytes) 2019-03-31 12:55
https://bugs.bareos.org/file_download.php?file_id=358&type=bug
bareos-format-security.patch (594 bytes) 2019-07-13 11:41
https://bugs.bareos.org/file_download.php?file_id=381&type=bug
0001-cmake-treat-format-string-warnings-as-errors.patch (1,319 bytes) 2019-07-15 09:57
https://bugs.bareos.org/file_download.php?file_id=382&type=bug
0002-tests-fix-format-string-problem.patch (886 bytes) 2019-07-15 09:57
https://bugs.bareos.org/file_download.php?file_id=383&type=bug
Notes
(0003435)
arogge   
2019-07-10 17:43   
We do not build with -Werror=format-security yet. Could you try without it?
Once we're sure it is this flag, we can try to make sure you can build with it.
(0003459)
tuxmaster   
2019-07-13 11:41   
Yes, without it, it compiles.
But the setting is security-relevant and the default since Fedora 28, so I created a patch for it.
Here the documentation about the options:
https://src.fedoraproject.org/rpms/redhat-rpm-config/blob/master/f/buildflags.md
https://fedoraproject.org/wiki/Format-Security-FAQ
I have tested the patch on builds for CentOS 7, Fedora 29 and Fedora 30.
(0003462)
arogge   
2019-07-15 09:57   
Add patches for CMakeLists.txt and for the problematic format string.
(0003463)
arogge   
2019-07-15 09:58   
Can you please check whether my attached patches work for you (and maybe apply these to your branch to update the PR)?
Thank you.
(0003481)
tuxmaster   
2019-07-15 16:52   
I have tried both patches from you, but both are rejected against the 18.2.6 source code. :(
(0003482)
arogge_adm   
2019-07-15 16:58   
the patches are for master, you can adapt them for 18.2 yourself if you want.
(0003484)
tuxmaster   
2019-07-15 17:39   
OK.
I have backported both and tested them with 18.2.6.
They work as expected, so I will add them to the PR.
So I think we can close this ticket.
(0003568)
arogge   
2019-09-03 14:22   
Fix committed to bareos master branch with changesetid 11741.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1118 [bareos-core] installer / packages minor always 2019-10-07 10:54 2019-10-16 11:22
Reporter: wolfaba Platform: Linux  
Assigned To: OS: any  
Priority: normal OS Version: 3  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Incorrect default value for smtp_host and ugly default values for job_email and dump_email in BareosSetVariableDefaults.cmake
Description: Incorrect default value for smtp_host and ugly default values for job_email and dump_email in BareosSetVariableDefaults.cmake



Dear Bareos developers,
we have found some ugly values in BareosSetVariableDefaults.cmake.
IF(NOT DEFINED job_email)
   SET(job_email "root@localhost")
ENDIF()
IF(NOT DEFINED dump_email)
   SET(dump_email "root@localhost")
ENDIF()
IF(NOT DEFINED smtp_host)
   SET(smtp_host "root@localhost")
ENDIF()

If the bareos process crashes, it uses the btraceback script to generate a backtrace and sends the email using bsmtp.
The last line in btraceback.in is
@sbindir@/bsmtp -h @smtp_host@ -f @dump_email@ -s "Bareos ${DEBUGGER} traceback of ${PNAME}" @dump_email@
The bsmtp man page says that option -h should take mailhost:port (not an email address), so "root@localhost" for smtp_host is really incorrect. Could you replace this value with a simple localhost?

Then, if the email is generated with sender root@localhost, it will not be accepted on mail servers which check the sender address. Could you replace the sender address with a simple "root" and let the localhost MTA generate the correct domain?

The same goes for the recipient. If you use root@localhost and the email gets passed to some mail server, it can land anywhere. Please use a simple "root", which delivers email either to the local root or (if an alias is defined) to the destination address for the local alias "root".
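With the suggested values substituted into the btraceback.in template quoted above, the generated call would look like this (a sketch; DEBUGGER and PNAME filled in with typical values):

bsmtp -h localhost -f root -s "Bareos gdb traceback of bareos-dir" root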

I have created pull request 0000297 "use simple default email values".

Thank you.

Regards,

Robert Wolf.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0003604)
wolfaba   
2019-10-16 11:22   
Fix committed to bareos master branch with changesetid 11941.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1120 [bareos-core] webui trivial always 2019-10-14 14:34 2019-10-16 10:34
Reporter: tobias_stein Platform: amd64  
Assigned To: arogge OS: Debian GNU/Linux  
Priority: low OS Version: 10  
Status: assigned Product Version: 17.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: No Favicon with PHP7.3 (undefined variable "extras" in HeadLink->createDataStylesheet())
Description: Display errors on Dashboard
* favicon.ico is not delivered
* Overview of jobs just filled with spinners

Following PHP-Error message is printed in log:
[Mon Oct 14 12:54:19.526596 2019] [proxy_fcgi:error] [pid 842:tid 139696600672000] [client redacted:40176]
AH01071: Got error 'PHP message: PHP Notice:
compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403
PHP message: PHP Notice:
compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403'
Tags: webui
Steps To Reproduce: Use bareos-webui (17.2.4-15.1) on Debian 10 "Buster"
with php-fpm (7.3+69)
with Apache/2.4.38 (Debian) with mod_ssl, mod_http2, mod_proxy_fcgi and mod_mpm_event (mod_http2 doesn't support mpm_prefork).

Include /etc/apache2/conf-available/bareos-webui.conf in /etc/apache2/sites-available/default-ssl.conf.

aptitude install php-fpm
a2dismod php7.0
a2enconf php7.3-fpm
a2enmod ssl
a2enmod http2
a2enmod proxy_fcgi
a2dismod mpm_prefork
a2enmod mpm_event
systemctl restart apache2.service
Additional Information: Maybe the bundled zend-framework is no longer up to date in conjunction with current Debian Buster.
The function compact() in PHP 7.3 no longer ignores uninitialized variables.
https://www.php.net/manual/en/function.compact.php

I attached a little patch that initializes the variable `$extras` to an empty string.
I don't know if it's written in good style, but I wanted to provide it. If it's trash, get rid of it.
This at least makes favicon.ico work again.
On subsequent refreshes of the dashboard the spinners will appear again.
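Based on that description, the change presumably boils down to initializing the variable before the compact() call (a sketch only; the actual change is in the attached diff):

// in HeadLink.php, before the compact() call reported on line 403:
$extras = ''; // ensure the variable exists, so compact() under PHP 7.3 emits no notice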
Attached Files: initialize_extras.diff (113 bytes) 2019-10-14 14:34
https://bugs.bareos.org/file_download.php?file_id=390&type=bug
Notes
(0003599)
arogge   
2019-10-16 10:04   
Thanks for the time you invested. I understand the issue, but I'm not sure that we will still fix it for 17.2.
It should help to disable display_errors in php.ini (and I think this should be the default nowadays anyway).
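For reference, the suggested setting (display_errors is a standard php.ini directive, and Off is its documented production value):

; php.ini: do not print notices and errors into the HTTP response
display_errors = Off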

Having said that, the PHP documentation for display_errors https://www.php.net/manual/en/errorfunc.configuration.php#ini.display-errors reads as follows:
"This is a feature to support your development and should never be used on production systems."
(0003601)
tobias_stein   
2019-10-16 10:34   
That's absolutely no problem, I've found a way to work around it.
There is probably no need to backport a patch to v17.2, because it's a corner-case setup (Buster + php-fpm 7.3).
I guess I assigned the bug to the wrong version; zend-framework's HeadLink hasn't changed on master in this respect.
I thought it would just be a nice user experience if at least future versions of bareos-webui worked out of the box on Debian stable with the PHP version provided by the distribution (7.3), so I just reported the behavior.

I'm currently testing and studying the Bareos ecosystem.
So toggling "display_errors" is okay for me on this system, and in the end it helped me get around a problem.
Nevertheless, thanks for the tip!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
707 [bareos-core] director tweak always 2016-10-13 21:10 2019-10-14 14:40
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: low OS Version: 8  
Status: confirmed Product Version: 16.2.4rc  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact: yes
bareos-16.2: action:
bareos-15.2: impact: no
bareos-15.2: action:
bareos-14.2: impact: no
bareos-14.2: action:
bareos-13.2: impact: no
bareos-13.2: action:
bareos-12.4: impact: no
bareos-12.4: action:
Summary: 13-Oct 21:02 bareos-dir JobId 0: Error: "fileset" directive in Job "Consolidate" resource is required, but not found.
Description: http://doc.bareos.org/master/html/bareos-manual-main-reference.html#QQ2-1-516

Job {
    Name = "Consolidate"
    Type = "Consolidate"
    Accurate = "yes"
    JobDefs = "DefaultJob"
}

Why would this require a fileset and some other directives that don't seem to be used?

Documentation: "When used, it automatically triggers the consolidation of incremental jobs that need to be consolidated."

So it would be applied to any fileset?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002396)
joergs   
2016-10-14 12:34   
I recently modified the documentation in this respect; however, the DefaultJob does contain a Fileset in the default installation. Have you changed your DefaultJob?
(0002398)
hostedpower   
2016-10-14 12:48   
Well, I had another set of job defaults not containing any storage, pool, fileset or client, and I got the error.

So the fileset is not really required, but it will give an error anyway when missing. Is that the right conclusion?

Also with storage, for me it was a bit confusing that it would potentially store data in the wrong place ... but I'm not an expert of course :)
(0002399)
joergs   
2016-10-14 13:06   
> So the fileset is not really required, but it will give an error anyway when missing. Is that the right conclusion?

Yes. The reason is in the code: the default job type is Backup, and the required directives are set accordingly. For other job types some of these settings are ignored.
That is not pretty, but changing it might have a negative impact.
In the long run, we should change this.
(0003598)
b.braunger@syseleven.de   
2019-10-14 14:40   
Is there any news about this behaviour? This also affects restore jobs, which need the following dummy configuration:

  Client = dummy_client
  Fileset = dummy_fileset
  Storage = dummy_storage
  Pool = dummy_pool

That is a pain to configure and confuses operators.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1106 [bareos-core] director minor always 2019-08-01 17:50 2019-10-14 12:10
Reporter: b.braunger@syseleven.de Platform: Linux  
Assigned To: stephand OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: assigned Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Director's not sending Plugin Options to FileDaemon
Description: I want to use a fileset with a Python command plugin for multiple jobs and pass extra key-value pairs to the plugin via the "FD Plugin Options" parameter.
As far as I understand the docs, I can add options to "FD Plugin Options" in the job resource, which will then be passed to the plugin in addition to the options set in the fileset.

Sadly, nothing but the options string from the fileset is sent to the FD plugin, no matter what I define. This extends to the plugin options set via bconsole: they are all ignored.
Tags: director, fd, plugin
Steps To Reproduce: FileSet {
  Name = "testing"
  Include {
    Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=test_plugin:from=fileset"
  }
}
Job {
  Name = backup-bareos-test
  FileSet = testing
  FD Plugin Options = "python:from=job"
}

* run job=backup-bareos-test pluginoptions="from=bconsole" yes
Additional Information: Doc: https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_FdPluginOptions

With a debug level over 500, one can see the "from=fileset" option being passed to the FD and parsed by the Python plugin.

With TLS Enable = no and tcpdump, one can see that no other options string is sent to the FD.
System Description
Attached Files:
Notes
(0003552)
b.braunger@syseleven.de   
2019-08-01 17:51   
maybe related: https://bugs.bareos.org/view.php?id=733
(0003553)
b.braunger@syseleven.de   
2019-08-01 17:54   
So I worked my way through the source code and roughly patched together a version which finally sends all FD-related plugin options to the FD:
https://github.com/bareos/bareos/pull/238
(0003554)
b.braunger@syseleven.de   
2019-08-01 18:08   
So I need a bit of support here:
1) Is there any reason why the plugin options are split into "jcr->plugin_options" and "jcr->res.job->FdPluginOptions"?
2) Does anyone object to sending all plugin options to the FD?
3) Any other suggestions on the PR? I have no experience in C++ at all.
(0003596)
b.braunger@syseleven.de   
2019-10-14 12:10   
This is fixed and merged by https://github.com/bareos/bareos/pull/238

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
900 [bareos-core] file daemon feature always 2018-01-27 21:46 2019-10-10 12:59
Reporter: joergs Platform: Mac  
Assigned To: OS: MacOS X  
Priority: low OS Version: 10  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: platforms/osx/Makefile conflicts when building for homebrew
Description: make -f platforms/osx/Makefile install
creates "$(DESTDIR)/Library/LaunchDaemons/org.bareos.bareos-fd.plist"

However, this disturbs building bareos in https://github.com/Homebrew/homebrew-core/

Please adapt the Makefile so that it can be used both for locally creating a pkg file and for homebrew.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003587)
joergs   
2019-09-27 16:38   
Is this still an issue?
(0003589)
arogge   
2019-09-30 09:31   
Maybe. We're packaging for mac using cpack now, so the Makefile there is probably unmaintained right now.
We'll need to discuss whether or not we want to support this in the future.
(0003594)
joergs   
2019-10-10 12:59   
The platforms/osx path has been removed since branch 18.2.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
976 [bareos-core] General minor always 2018-06-29 12:11 2019-09-27 18:41
Reporter: frank Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bconsole.py - IndexError: string index out of range in case of gui mode on
Description: GUI MODE ON
===========

echo -e "gui on\n.storage"|/usr/bin/bconsole.py --dirname us-dir2.bacula4.com-dir --port 9101 -p 001010100100111101010101 127.0.0.1
>>Traceback (most recent call last):
File "/usr/bin/bconsole.py", line 46, in <module>
director.interactive()
File "/usr/lib/python2.7/site-packages/bareos/bsock/lowlevel.py", line 245, in interactive
self._show_result(resultmsg)
File "/usr/lib/python2.7/site-packages/bareos/bsock/lowlevel.py", line 263, in _show_result
if msg[-2] != ord(b'\n'):
IndexError: string index out of range

GUI MODE OFF
============

echo -e ".client\n.storage"|/usr/bin/bconsole.py --dirname us-dir2.bacula4.com-dir --port 9101 -p 001010100100111101010101 127.0.0.1
>>us-dir2.ba4.com-fd
newdir_ba3-bsd-dev-fd
newdir_freebsd10.3-fd
newdir_cpe12-fd
newdir_wintest-fd
newdir_ubuntu12-fd
col_onapp-us-control1-fd
max-desctop-fd
newdir_dal01-dev-cart-sata1-fd
newdir_maxvm-fd
newdir_centos5test-fd
newdir_zimbra-fd
newdir_winplesk-fd
newdir_Imunify360-fd
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003588)
joergs   
2019-09-27 18:41   
I can't reproduce the problem. However, I see that a check for the string length has already been implemented, so I assume the problem has been fixed in the meantime.
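For illustration, a length-guarded variant of the failing check could look like this (a minimal sketch following the traceback above; names are illustrative and this is not the actual lowlevel.py code):

def show_result(msg: bytes) -> None:
    """Print a result message, tolerating empty or one-byte payloads."""
    # msg[-2] raises IndexError for payloads shorter than two bytes,
    # so check the length before indexing
    if len(msg) >= 2 and msg[-2] != ord(b'\n'):
        msg += b'\n'
    print(msg.decode('utf-8', errors='replace'), end='')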

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
807 [bareos-core] director minor always 2017-04-06 14:53 2019-09-27 16:43
Reporter: TheisM Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 7  
Status: assigned Product Version: 16.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos Director Error after Update
Description: After a Bareos update from the Univention App Center, the Bareos Director service no longer starts.
We try to start the service with "service bareos-dir start"
and receive the following message:

2017-04-06T14:08:37 error message: Did not find a plugin for ccache_ops: 2
2017-04-06T14:08:37 error message: Did not find a plugin for ccache_ops: 2

Can someone help us?
Tags:
Steps To Reproduce:
Additional Information: The Config Files are attached.
Thank you for your support.
System Description
Attached Files: bareos_config_files.rar (6,671 bytes) 2017-04-06 14:53
https://bugs.bareos.org/file_download.php?file_id=248&type=bug
Notes
(0002753)
joergs   
2017-09-21 22:50   
What version of Univention are you using?

Do you remember from which Bareos version you upgraded?
(0002757)
TheisM   
2017-09-22 12:26   
UCS-Version is 4.1-4 errata435 (Vahr)
UMC-Version is 8.0.28-21.926.201611091130

Upgrade from Bareos release 15.2.2 (16 November 2015)

THX for Support

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
855 [bareos-core] director minor always 2017-10-02 04:23 2019-09-27 16:42
Reporter: Ryushin Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version: 17.2.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact: no
bareos-15.2: action:
bareos-14.2: impact: no
bareos-14.2: action:
bareos-13.2: impact: no
bareos-13.2: action:
bareos-12.4: impact: no
bareos-12.4: action:
Summary: Update Volume or Purge Volume Hangs on Using Catalog
Description: This just started happening after the last couple of nightly updates, pretty much at the time when the new MySQL schema was rolled out in the Debian packages.

I've been using the same scripts I created a couple of years ago without a problem. Now the scripts fail to run and hang when sending commands by echoing them and piping them to bconsole.

Example commands:
echo "update volume=vchanger_monthly_1.5tb_drives_0003_0001 ActionOnPurge=Truncate" | bconsole
echo "purge volume=vchanger_monthly_1.5tb_drives_0003_0001 action=truncate storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole
echo "update volume=vchanger_monthly_1.5tb_drives_0003_0001 volstatus=Append storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole

Example Output:
echo "update volume=vchanger_monthly_1.5tb_drives_0003_0001 ActionOnPurge=Truncate" | bconsoleConnecting to Director windwalker:9101
1000 OK: windwalker-dir Version: 17.2.3 (14 Aug 2017)
Enter a period to cancel a command.
update volume=vchanger_monthly_1.5tb_drives_0003_0001 ActionOnPurge=Truncate
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog" <---- hangs here


Running bconsole directly and then running the commands from inside bconsole seems to work most of the time. But after testing a few times, even inside bconsole it hangs on the command. I restart the dir and sd daemons to clear the issue (not sure if the sd daemon needs to be restarted or not).

It seems to be doing this with all of my scripts right now.
Tags:
Steps To Reproduce:
Additional Information: Script used:


bareos_purge_volumes_on_1.5tb_monthly_drive.sh:
#!/bin/bash
if [ "$1" = "" ]
then
    echo "Syntax: $0 <path where bareos volumes exist>"
    echo "I.e. $0 /mnt/vchanger/Monthly_1.5TB_01"
    exit 1
fi

echo "update slots storage=vchanger_monthly_1.5tb" | bconsole

for vol in $(ls $1/volumes/vchanger_monthly_1.5tb_drives_*)
do
    echo "update volume=$(basename $vol) ActionOnPurge=Truncate" | bconsole
    echo "purge volume=$(basename $vol) action=truncate storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole
    echo "update volume=$(basename $vol) volstatus=Append storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole
done



System Description
Attached Files: windwalker-dir.trace (214,144 bytes) 2017-10-04 02:20
https://bugs.bareos.org/file_download.php?file_id=263&type=bug
windwalker-dir.trace_20171008 (10,636 bytes) 2017-10-08 15:50
https://bugs.bareos.org/file_download.php?file_id=265&type=bug
windwalker-dir.trace_20171008_2 (57,958 bytes) 2017-10-08 16:01
https://bugs.bareos.org/file_download.php?file_id=266&type=bug
Notes
(0002768)
Ryushin   
2017-10-02 04:40   
BTW, this is occurring on two different installations.
(0002770)
joergs   
2017-10-02 16:47   
I don't think that this issue is related to the catalog. However, there have been changes to the prune/purge and ActionOnPurge commands.

Also, I'm not sure that your script, even when it runs, does what you expect.
In my test with 17.2.4-rc1, volumes don't get truncated.

I would use:
<your volume loop>
  purge storage=File pool=Full volume=$VOLUME
# truncate all volumes with volstatus=Purged in pool Full
truncate volstatus=Purged pool=Full yes

Setting volstatus to append should not be required.
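Adapted to the script from the report above, that suggestion might look roughly like this (a sketch reusing the reporter's storage and pool names; untested):

#!/bin/bash
# purge each volume, then truncate all purged volumes in one pass
for vol in $(ls $1/volumes/vchanger_monthly_1.5tb_drives_*)
do
    echo "purge storage=vchanger_monthly_1.5tb pool=Off-site_Pool volume=$(basename $vol)" | bconsole
done
echo "truncate volstatus=Purged pool=Off-site_Pool yes" | bconsole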

Back to your main problem:
no idea. Try enabling debugging before the commands; maybe the problem then becomes obvious:

setdebug director level=200
(0002772)
Ryushin   
2017-10-04 02:21   
I'll update the script with your new commands. The script used to purge and truncate the volumes just fine.

I had to wait until I could swap the backup drive for another one so I could test again. I ran the script and it hung again. I set the debug level to 200 and recorded a trace file. I think everything after line 49 could be deleted (I think the webui queried at that time), but I left it in just in case.
(0002776)
Ryushin   
2017-10-07 14:00   
Was the attached trace file useful? Is there anything else I can do?
(0002778)
joergs   
2017-10-08 11:43   
As far as I can see, the relevant part starts at line 686.

windwalker-dir (10): ua_audit.c:143-0 : Console [default] from [192.168.9.1] cmdline update volume=vchanger_monthly_1.5tb_drives_0001_0001 ActionOnPurge=Truncate

I can't see an error there. It did what it should:
windwalker-dir (100): sql_query.c:124-0 called: bool B_DB::sql_query(const char*, int) with query UPDATE Media SET VolJobs=1,VolFiles=12,VolBlocks=832203,VolBytes=53687079247,VolMounts=1,VolErrors=0,VolWrites=832204,MaxVolBytes=53687091200,VolStatus='Append',Slot=1,InChanger=1,VolReadTime=0,VolWriteTime=583967158,LabelType=0,StorageId=3,PoolId=6,VolRetention=7344000,VolUseDuration=0,MaxVolJobs=0,MaxVolFiles=0,Enabled=1,LocationId=0,ScratchPoolId=0,RecyclePoolId=0,RecycleCount=0,Recycle=1,ActionOnPurge=1,MinBlocksize=0,MaxBlocksize=0 WHERE VolumeName='vchanger_monthly_1.5tb_drives_0001_0001'


Also the follow up commands also got through:
windwalker-dir (10): ua_audit.c:143-0 : Console [default] from [192.168.9.1] cmdline purge volume=vchanger_monthly_1.5tb_drives_0001_0001 action=truncate storage=vchanger_monthly_1.5tb pool=Off-site_Pool
...

Did you run the commands again manually? Otherwise I don't see an error here. Adding timestamps to the trace might help to figure out when a command was issued.

setdebug director level=200 timestamp=1

So, I can't reproduce the problem, and the trace file looks as if everything ran through.

Without further information, I'm afraid I can't help with this.
(0002779)
Ryushin   
2017-10-08 15:46   
Newish error, I've seen it a couple of times.

Ran: ./bareos_purge_volumes_on_1.5tb_monthly_drive.sh /mnt/vchanger/Monthly_1.5TB_01
Output:
Connecting to Director windwalker:9101
1000 OK: windwalker-dir Version: 17.2.3 (14 Aug 2017)
Enter a period to cancel a command.
update slots storage=vchanger_monthly_1.5tb
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Connecting to Storage daemon vchanger_monthly_1.5tb at windwalker:9103 ...
3306 Issuing autochanger "list" command. <---hangs here

I've attached another trace file.
(0002780)
Ryushin   
2017-10-08 15:59   
Restarting bareos-dir and bareos-sd daemons got it to get past the list command.

Output:
Connecting to Director windwalker:9101
1000 OK: windwalker-dir Version: 17.2.3 (14 Aug 2017)
Enter a period to cancel a command.
update volume=vchanger_monthly_1.5tb_drives_0001_0001 ActionOnPurge=Truncate
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog" <--- hangs here

I've attached another trace file.
(0002804)
Ryushin   
2017-10-19 14:49   
Was the latest trace file helpful? Is there anything else I can provide?
(0002849)
Ryushin   
2017-12-28 00:11   
(Last edited: 2017-12-28 00:13)
I'm thinking the issue might be that the MySQL operation takes so long
to complete that the script appears to hang, since bconsole never sends back a
response. If MySQL completes its operation quickly, the script
appears to work. Only a theory at this point, based on watching mytop.

MySQL on localhost (10.0.32) load 1.78 1.23 1.15 2/1055 5752 up 19+09:45:45 [16:08:59]
 Queries: 10.6k qps: 0 Slow: 0.0 Se/In/Up/De(%): 81471/00/00/00
 Sorts: 0 qps now: 1 Slow qps: 0.0 Threads: 4 ( 2/ 6) 00/00/00/00
 Key Efficiency: 0.1% Bps in/out: 0.2/ 25.3 Now in/out: 21.3/ 3.0k

       Id User Host/IP DB Time Cmd State Query
       -- ---- ------- -- ---- --- ----- ----------
      266 snort localhost snort 738 Sleep
   468041 bareos localhost bareos 72 Query updating DELETE FROM File WHERE JobId IN (4990)
   466542 bareos localhost bareos 9 Sleep
   462612 root localhost 0 Query init show full processlist

(0002850)
Ryushin   
2017-12-28 01:55   
(Last edited: 2017-12-28 01:57)
Actually, I don't think this is a bug after all; it's a MySQL problem.
The script just takes far, far longer to run, to the point that it seems
hung. mytop is still showing a query after what must be over an hour at this point:

MySQL on localhost (10.0.32) load 2.10 2.48 2.43 1/1106 30277 up 19+11:29:46 [17:53:00]
 Queries: 15.3k qps: 0 Slow: 2.0 Se/In/Up/De(%): 56385/00/00/00
 Sorts: 0 qps now: 1 Slow qps: 0.0 Threads: 4 ( 2/ 8) 125/00/00/00
 Key Efficiency: 0.1% Bps in/out: 0.2/ 36.7 Now in/out: 21.4/ 3.1k

       Id User Host/IP DB Time Cmd State Query
       -- ---- ------- -- ---- --- ----- ----------
   468894 bareos localhost bareos 3372 Query updating DELETE FROM File WHERE JobId IN (5034,5035,5036,5037,503
      266 snort localhost snort 1342 Sleep
   466542 bareos localhost bareos 10 Sleep
   462612 root localhost 0 Query init show full processlist


Is there some kind of MySQL schema or code update that fixes how long
the DELETE FROM File WHERE JobId IN (<jobids>) takes, or some kind
of optimization I can make?

The query just takes far longer since the schema update.

(0002851)
Ryushin   
2017-12-28 13:42   
After four hours I stopped waiting for it to finish and went to bed. It had only gotten to 47
out of 140 volumes. I spent quite a bit of time trying to improve the performance
of MySQL with no noticeable improvement, as I'm definitely I/O bound right now. The
File.ibd database is over 26GB in size. I run a monthly MySQL optimize.
The MySQL database resides on a ZFS raidz2 volume made up of four 300GB 15K drives.

My mysql performance parameters are as follows:
default_storage_engine=innodb
innodb_file_per_table=1
innodb_buffer_pool_size=4096M
innodb_log_file_size=128M
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
innodb_buffer_pool_instances=8
innodb_thread_concurrency=8
innodb_checksum_algorithm=none
innodb_doublewrite = 0

Not sure what else I can try to solve this. Before the schema update, the
longest it took was about 45 minutes.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
887 [bareos-core] webui minor always 2017-12-28 15:06 2019-09-27 16:41
Reporter: crs9 Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 17.2.5  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action: will care
bareos-16.2: impact:
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action: none
bareos-14.2: impact:
bareos-14.2: action: none
bareos-13.2: impact:
bareos-13.2: action: none
bareos-12.4: impact:
bareos-12.4: action: none
Summary: Restore file list taking 20 minutes to populate
Description: When selecting a client with a large number of files, the file list takes over 20 minutes to come back. Clicking other tabs does nothing until the task is complete.
The bareos-audit.log shows:

20-Dec 11:29 : Console [admin] from [127.0.0.1] cmdline .bvfs_lsdirs jobid=778,804,829 path= limit=1000 offset=0
20-Dec 11:29 : Console [admin] from [127.0.0.1] cmdline .bvfs_update jobid=778,804,829
20-Dec 11:29 : Console [admin] from [127.0.0.1] cmdline .bvfs_lsdirs jobid=778,804,829 path= limit=1000 offset=1000

This appears over and over until it reaches:

20-Dec 11:50 : Console [admin] from [127.0.0.1] cmdline .bvfs_lsdirs jobid=778,804,829 path= limit=1000 offset=10349000
20-Dec 11:50 : Console [admin] from [127.0.0.1] cmdline status director

The Dashboard states 9,150,221 files, 617 jobs. This specific client has about 301,000 files.
If I browse away from this client and back, the count starts all over again.
Tags:
Steps To Reproduce: Upgrade from 17.1.3 to 17.2.4
Manually update db scheme
Log into webui
Select client with a large amount of files to restore from
Additional Information: joergs asked for the following:
*.bvfs_lsdirs jobid=778,804,829 path= limit=1000 offset=0
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 45383,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      }
    ]
  }
}

*.bvfs_lsdirs jobid=778,804,829 path= limit=1000 offset=1000
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 45383,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      }
    ]
  }
}

*.bvfs_lsdirs jobid=778,804,829 path= limit=1000 offset=10349000
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 45383,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      }
    ]
  }
}
System Description
Attached Files: Снимок экрана от 2018-01-15 09-51-07.png (184,466 bytes) 2018-01-15 06:56
https://bugs.bareos.org/file_download.php?file_id=277&type=bug
Notes
(0002854)
crs9   
2018-01-02 21:35   
As a note, following the 16.2.7 release notes, I cannot create the index:

bareos=# CREATE INDEX file_jpfnidpart_idx ON File(PathId,_JobId, _FilenameId) WHERE FileIndex = 0;
ERROR: column "_jobid" does not exist

Not sure if this is required or not.
(0002855)
crs9   
2018-01-02 21:55   
I stand corrected, I see the expected indexes that were created during the 2171 migration
(0002857)
crs9   
2018-01-03 15:49   
I updated the cache manually.
backup01:~$ time echo ".bvfs_update" | sudo bconsole
Connecting to Director localhost:9101
1000 OK: backup01 Version: 17.2.4 (21 Sep 2017)
Enter a period to cancel a command.
.bvfs_update
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
You have messages.

real 1m59.716s
user 0m0.053s
sys 0m0.020s

After only taking 2 minutes, the restore list is still taking 20 minutes to populate.
(0002859)
joergs   
2018-01-04 13:37   
There are at least two problems here:

1. The result delivered by .bvfs_lsdirs is not correct. It is missing an entry for "/".
2. The bareos-webui does not behave correctly. The newly introduced looping through the result set continues until it receives the full result. In your case this results in an endless loop. I assume that after 20 minutes some browser, Ajax or PHP timeout triggers.

About 2: this will be fixed soon.

About 1: I'm not sure what triggers the problem in the first place. A few questions to limit the scope:
  * does this happen for all restore jobs or just for a specific one?
  * what database backend do you use?
  * have you used bareos-dbcheck? If not, don't use it now. There is another bug in there, at least when using sqlite. This will also be resolved soon.
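Regarding point 2, for illustration: a paging loop that also stops on a short or empty page terminates even when the backend keeps returning the same dummy "." entry as above (a minimal sketch with hypothetical names, not the actual webui code):

def list_directories(fetch, limit=1000):
    """Page through .bvfs_lsdirs-style results via limit/offset."""
    results, offset = [], 0
    while True:
        page = fetch(offset)  # e.g. issues ".bvfs_lsdirs ... limit=1000 offset=<offset>"
        results.extend(page)
        if len(page) < limit:  # short or empty page: nothing more to fetch
            return results
        offset += limit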
(0002860)
crs9   
2018-01-04 14:44   
1) I have 6 clients
backup01, centos7.2 - lists fine
catalog, centos 7.2 - lists fine
dc01, windows 2016 core - lists fine
dc02, windows 2016 core - lists fine
fmp01, windows 2016 - slow - 301975 files, 294GB size
fps01, windows 2016 - slow - 181855 files, 14.11GB size

Both slow servers took 20 minutes to list.

After 20min, the webui displays an error box "Oops, something went wrong, probably too many files"

2) Postgresql 9.6.6

3) I have not used that, should I try it since I'm using postgresql?
(0002861)
joergs   
2018-01-04 14:51   
about 3:
not now. I'll first check what exactly the problem with dbcheck is.

about 1:
interesting. I had not expected that.

What you could do is to recreate the full bvfs cache. Please be aware that this will take quite a long time.

On bconsole type:
.bvfs_clear_cache yes
.bvfs_update

The first command will be quick, the second will take a lot of time and is normally not required (it will normally be called for specific jobids that you are interested in). However, you should give it a try.

For details about bvfs see http://doc.bareos.org/master/html/bareos-developer-guide.html#sec:bvfs
(0002862)
crs9   
2018-01-04 15:32   
.bvfs_clear_cache yes
ran for 2 seconds

.bvfs_update
ran for 16 minutes

Same slowness and error after the update command.
(0002863)
joergs   
2018-01-04 15:44   
Okay, what is the result of

list files jobid=<last full job of fmp01>
(0002864)
crs9   
2018-01-04 15:49   
It lists all the files on C: and D:, which I would expect.
(0002865)
crs9   
2018-01-04 15:55   
fmp01's listing took 28 seconds.

fps01's last full lists all its C: and D: files as expected, too. This listing took 27 seconds, even though it has almost twice the number of files.
(0002866)
joergs   
2018-01-04 16:04   
But how are they displayed?

/C:/...

or some other way?
(0002867)
crs9   
2018-01-04 16:08   
C:/Users/
 D:/Users/

There is a block of whitespace before each drive letter. dc01 is displayed the same way.
One other way these two servers differ: they have a D:/, while dc01 and dc02 do not.
(0002868)
frank   
2018-01-04 17:26   
Note:

A fix for the endless loop in case of receiving rubbish BVFS output has been pushed internally to the webui bareos-17.2 branch and will go public with the next push to GitHub.
(0002869)
joergs   
2018-01-05 16:56   
In my Linux-only test environment, I was able to create a similar problem by using dbcheck on my SQLite database.

Following steps did resolve this issue:

# clear cache
.bvfs_clear_cache yes

# restart the bareos-dir
systemctl restart bareos-dir

# update cache
.bvfs_update

The reason is some internal director cache.
(0002871)
frank   
2018-01-09 12:31   
Fix committed to bareos-webui bareos-17.2 branch with changesetid 7444.
(0002873)
protopopys   
2018-01-15 06:54   
Hi, it's not working for me. I have a thousand folders in a directory, and if I want to restore some files I get the error.
(0002953)
hostedpower   
2018-03-27 13:05   
Same here, cannot restore our machines :(

We're using MySQL and it's sooooo slow again. Don't know what happened.
(0002955)
hostedpower   
2018-03-27 13:15   
OK, a server reboot fixed it for us; no idea why it was so damn slow before the reboot.
(0002958)
joergs   
2018-04-05 12:39   
I assume the problems are fixed with Bareos 17.2.5. Note: updating both the webui and the director is required.

Did anybody have other experiences with bareos-17.2.5?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
857 [bareos-core] vmware plugin tweak always 2017-10-05 06:01 2019-09-27 16:32
Reporter: falk Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: assigned Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: After Upgrade to Centos 7.4 connection to Vsphere fails
Description: After an upgrade from 7.3 to 7.4, the connection to the vSphere server fails.
This is the case if you don't have an official certificate on your vSphere server.
Please add this hint to the documentation or to the FAQ, thanks!
Tags:
Steps To Reproduce: None of the configured VMware backup jobs run successfully.
Additional Information: The solution is easy: in /etc/python/cert-verification.cfg, set the verification value to disable:

#[https]
#verify=platform_default
[https]
verify=disable

or follow the instructions in https://access.redhat.com/articles/2039753 if you need this only for this Python script.

System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1063 [bareos-core] General feature always 2019-02-22 23:49 2019-09-25 01:24
Reporter: cm@p-i-u.de Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Add option to set priority to schedule Resource
Description: Currently it is still not possible to schedule weekly and monthly pool backups on Fridays, because two jobs can overlap each other with a schedule like this
(some months have 5 Fridays, others only 4):

Schedule {
  Name = "Default"
  Run = pool=Daily mon-thu at 22:30
  Run = pool=Weekly 1st-4th fri at 22:30
  Run = pool=Monthly last fri at 22:30
}

To solve this issue it would be great to have a priority keyword added to the schedule resource, similar to the one in the job resource. With this option it would be possible to define which job takes precedence if more than one run matches the schedule criteria.

For example, like this:
run Weekly, but have Monthly take precedence if both would match

Schedule {
  Name = "Default"
  Run = pool=Daily mon-thu at 22:30
  Run = pool=Weekly 1st-4th fri at 22:30 priority = 20
  Run = pool=Monthly last fri at 22:30 priority = 10
}
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003450)
arogge   
2019-07-12 10:57   
You can work around this by scheduling the monthly run five minutes early and disallowing duplicate jobs.
While I think you're right that we need something like "every Friday but the last", I would assume that priority in this context would override the job's priority (like pool and level do) and not declare the priority of this schedule entry.

Can you update your proposal so we don't have this kind of ambiguity?
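For illustration, the workaround could look like this (a sketch based on the schedule from the description; Allow Duplicate Jobs is an existing Job directive):

Schedule {
  Name = "Default"
  Run = pool=Daily mon-thu at 22:30
  Run = pool=Weekly 1st-4th fri at 22:30
  Run = pool=Monthly last fri at 22:25   # five minutes early
}
Job {
  # ... rest of the job definition ...
  Allow Duplicate Jobs = no   # the overlapping weekly run is then cancelled
}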
(0003580)
cm@p-i-u.de   
2019-09-23 22:19   
The problem with the workaround is that one job is then marked as canceled, which confuses the end user (who, in our SME scenario, is set up to control backups via the webui).
Explaining to the user that a job marked as canceled (failed) isn't actually failed does not lead in the right direction for handling failed jobs.

Regarding priority, I meant that the schedule itself has priorities, not in the sense of job priorities, but in the sense that a schedule entry with a higher priority overrides the lower-priority ones.
For example, having 5 entries scheduled at the same point in time, the one with the highest priority wins.
(0003581)
arogge   
2019-09-24 09:04   
I totally understand your request and the drawbacks of the automatic cancelling.

The main problem I have right now is that the configuration syntax you propose is ambiguous and misleading.
If you can come up with something that isn't misleading, we might consider putting this on the agenda.
(0003582)
cm@p-i-u.de   
2019-09-25 01:19   
Than maybe the best thing is to find another verb instead of priority, because with the implementation shown above there are really a lot things possible where more than one condition applies.

For example (a litte bit overdriven but just to illustrate it clearly) you schedule a job on the 30th of the month, which may also be the last day in the month and allso a friday, having schedules for fridays, last day of the month in additiion to the schedule for 30th of the month, you could control which one should take precedence.

Maybe "precendence" is also the word to be used :-)
(0003583)
cm@p-i-u.de   
2019-09-25 01:24   
"rank" comes also to mind

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1110 [bareos-core] file daemon minor always 2019-09-05 13:07 2019-09-19 15:31
Reporter: stephand Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: new Product Version: 18.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: BareosFdPluginBaseclass:parse_plugin_definition() does not allow to override plugin options using FD Plugin Options in Job
Description: Currently, the code does not allow overriding plugin options set in the FileSet by using
the FD Plugin Options parameter in a Job resource.
As a user/admin, I want to be able to override the more general settings (FileSet) with more
specific settings (Job).

See
https://github.com/bareos/bareos/blob/02f72235abaa5acacc5e672bbe6af1a9253f9479/core/src/plugins/filed/BareosFdPluginBaseclass.py#L99
Tags: fd, plugin
Steps To Reproduce: Example FileSet:

FileSet {
  Name = "vm_generic_fileset"
  Include {
    Options {
      ...
    }
    Plugin = "python:module_path=/usr/lib64/bareos/plugins/vmware_plugin:module_name=bareos-fd-vmware:dc=mydc:folder=/my/vmfolder:vmname=myvm:vcserver=myvc.example.com:vcuser=bakadm@vsphere.local:vcpass=myexamplepassword"
  }
}

And example Job definition:

Job {
  Name = "vm_test02_job"
  JobDefs = "DefaultJob"
  FileSet = "vm_generic_fileset"
  FD Plugin Options = "python:folder=/dmz:vmname=fw01"
}

The options above from "FD Plugin Options" currently do not override the options declared in the FileSet;
it is only possible to add options not yet declared in the FileSet.
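A minimal sketch of the desired precedence, using the option strings from the example above with their "python:" prefix stripped (parse_options is a hypothetical helper for illustration, not part of BareosFdPluginBaseclass):

def parse_options(option_string):
    """Turn 'key1=val1:key2=val2' into a dict."""
    return dict(pair.split('=', 1) for pair in option_string.split(':') if pair)

fileset_opts = parse_options('dc=mydc:folder=/my/vmfolder:vmname=myvm')
job_opts = parse_options('folder=/dmz:vmname=fw01')

merged = dict(fileset_opts)  # general settings from the FileSet ...
merged.update(job_opts)      # ... overridden by the more specific Job
assert merged['folder'] == '/dmz' and merged['vmname'] == 'fw01'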
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1112 [bareos-core] director minor always 2019-09-13 08:08 2019-09-17 16:22
Reporter: arogge Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 18.2.7  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: After mount/unmount of tape "status slots" shows empty list
Description: When you mount or unmount a tape to or from a slot and then immediately issue a "status slots", it will show you an empty list.
Tags: storage
Steps To Reproduce: - attach a tape library
- label some tapes
- issue "status slots" (everything looks ok)
- issue mount command
- issue "status slots" (list shows empty)
- wait a minute
- issue "status slots" again (everything good again)
Additional Information: This seems to be related to the caching. Maybe mount/unmount clears the cache but fails to reset the flag?
System Description
Attached Files:
Notes
(0003573)
arogge   
2019-09-17 16:22   
Fix committed to bareos bareos-18.2 branch with changesetid 11779.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
987 [bareos-core] director major random 2018-07-18 10:42 2019-09-15 16:17
Reporter: franku Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 17.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 17.2.8  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Canceling a job leads to a director crash (TT4200333)
Description: When canceling a job using bconsole the director can occasionally crash with a coredump.

It is likely that this is the result of a race condition where a signal is sent to a job thread: the thread_id used is a member of the JobControlRecord class, whose memory could have been deleted in the meantime.
Tags:
Steps To Reproduce: This issue appears very seldom. No way yet to reproduce reliably.
Additional Information: Excerpt from the coredump:

0000001 0x00007f4107389c14 in signal_handler (sig=11) at signal.c:240
0000002 <signal handler called>
0000003 __pthread_kill (threadid=139913685083904, signo=signo@entry=12) at ../sysdeps/unix/sysv/linux/pthread_kill.c:40
0000004 0x00007f4107377434 in JCR::my_thread_send_signal (this=this@entry=0x558a5e446318, sig=sig@entry=12) at jcr.c:682
0000005 0x0000558a5b0983ec in cancel_file_daemon_job (ua=ua@entry=0x7f3f0c00ed28, jcr=jcr@entry=0x558a5e446318) at fd_cmds.c:1080
System Description
Attached Files:
Notes
(0003074)
franku   
2018-07-18 15:13   
Current solution: Refactor the function that frees JobControlRecord (JCR) memory in order to lock the JCR mutex consecutively.
(0003572)
therm   
2019-09-15 16:17   
This one affects me also. We have a lot of copy and migrate jobs, and canceling them crashes the director about 50% of the time. If I can provide anything to help get this fixed, please let me know.
Regards,
Dennis

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1111 [bareos-core] director major always 2019-09-12 17:34 2019-09-13 08:17
Reporter: arogge Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 18.2.7  
    Target Version: 18.2.7  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: fixed
bareos-17.2: impact: no
bareos-17.2: action: none
bareos-16.2: impact: no
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: import/export and status slots commands don't see tapes loaded in drives anymore
Description: When we import a tape from an import/export slot using the import command while another tape is loaded in a drive, the import may move the tape into the slot belonging to the tape that is currently loaded in the drive.
Also, issuing the "status slots" command does not show the volumes that are loaded in a drive.
Tags: volume
Steps To Reproduce: - attach a tape library
- load a tape using the mount command
- issue "status slots" and notice that the mounted tape does not show up anymore
Additional Information:
System Description
Attached Files:
Notes
(0003570)
arogge   
2019-09-13 08:17   
The problem was introduced by one refactoring and was found and fixed during another.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1062 [bareos-core] General minor always 2019-02-19 08:48 2019-09-03 15:14
Reporter: IvanBayan Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: fixed
bareos-17.2: impact: no
bareos-17.2: action:
bareos-16.2: impact: no
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bconsole terminates if client protocol probing occurs
Description: When I try to get the status of an older client, bconsole terminates when "Probing client protocol..." occurs.
If I repeat the status request for the same client, or if I request the status of a modern client, bconsole works fine.
Tags: bconsole, broken
Steps To Reproduce: 1. Request the status of a client older than version 18
2. Hit enter or enter the next command
Additional Information: *status client=ow-c01-02-fd
Connecting to Client ow-c01-02-fd at ow-c01-02.int:9102
Probing client protocol... (result will be saved until config reload)
 Handshake: Cleartext, Encryption: None

ow-c01-02-fd Version: 17.2.4 (21 Sep 2017) x86_64-redhat-linux-gnu redhat CentOS Linux release 7.4.1708 (Core)
Daemon started 28-Sep-18 12:27. Jobs: run=122 running=0.
 Heap: heap=36,864 smbytes=115,192 max_bytes=383,233 bufs=97 max_bufs=150
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0 bwlimit=0kB/s

Running Jobs:
mia-backup03-dir (director) connected at: 19-Feb-19 07:45
No Jobs running.
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
======================================================================
 14832 Full 376 1.967 G OK 08-Feb-19 06:43 logs_ow-c01-02-job
 ...
 15178 Full 393 1.878 G OK 19-Feb-19 06:48 logs_ow-c01-02-job
====
*time
user@mia-backup03:~$
System Description
Attached Files:
Notes
(0003271)
vitalii.s   
2019-02-20 08:57   
The same issue:

*st client=pserver10-fd
Connecting to Client pserver10-fd at pserver10.internal:9102
Probing client protocol... (result will be saved until config reload)
 Handshake: Cleartext, Encryption: None

pserver10.internal-fd Version: 17.2.4 (21 Sep 2017) x86_64-redhat-linux-gnu redhat CentOS release 6.9 (Final)
Daemon started 14-Feb-19 02:06. Jobs: run=18 running=0.
 Heap: heap=40,960 smbytes=41,410 max_bytes=2,216,997 bufs=89 max_bufs=262
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0 bwlimit=0kB/s

Running Jobs:
bareos-dir (director) connected at: 20-Feb-19 15:52
No Jobs running.
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
======================================================================
1555343 Incr 55,777 1.218 G OK 19-Feb-19 19:13 pserver10-fs
1555576 Full 1,304 1.690 G OK 19-Feb-19 22:32 pserver10-mysql
1555628 Incr 49,292 590.6 M OK 19-Feb-19 23:15 pserver10-fs-mini
====
*time
root@backupserver3:/etc#
(0003414)
arogge   
2019-07-04 16:08   
Can you try to reproduce the issue with our nightly build packages from https://download.bareos.org/bareos/experimental/nightly/
Thank you!
(0003569)
arogge   
2019-09-03 15:13   
This has been fixed in
https://github.com/bareos/bareos/pull/234
https://github.com/bareos/bareos/pull/232

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
667 [bareos-core] installer / packages minor always 2016-06-14 12:22 2019-09-03 10:54
Reporter: jungingen Platform:  
Assigned To: stephand OS: Linux  
Priority: low OS Version: Ubuntu 16.04 LTS  
Status: feedback Product Version: 15.2.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact: yes
bareos-16.2: action: will care
bareos-15.2: impact: yes
bareos-15.2: action: will care
bareos-14.2: impact: yes
bareos-14.2: action: will care
bareos-13.2: impact: no
bareos-13.2: action:
bareos-12.4: impact: no
bareos-12.4: action:
Summary: Ubuntu repository uses weak digest algorithm (SHA1)
Description: Ubuntu 16.04 LTS gives an error when installing Bareos through the repositories, experimental and stable, because of the weak digest algorithm:

http://download.bareos.org/bareos/release/latest/xUbuntu_14.04/
http://download.bareos.org/bareos/experimental/nightly/xUbuntu_16.04/


Tags:
Steps To Reproduce: After adding repository and installing the key, apt-get update gives the following error:

W: http://download.bareos.org/bareos/experimental/nightly/xUbuntu_16.04/Release.gpg: Signature by key 2FC04F7E3421E21B70F3231F7A855ABDE0F8EFD4 uses weak digest algorithm (SHA1)
Additional Information:
Attached Files:
Notes
(0002407)
joergs   
2016-10-24 15:57   
We use a private instance of http://openbuildservice.org/ (OBS) to build our Linux packages. As this is only a warning, we do not consider it urgent to fix this issue. However, recent releases of OBS (>= 2.7.0) have fixed this issue by also signing with SHA256; see https://github.com/openSUSE/obs-sign/commit/688d5fa695c4756bf5c9825ed390112d23270bf0

We plan to update our build infrastructure when we find time for this.
(0002440)
monotek   
2016-11-08 19:27   
It would be nice if you could reconsider this decision, because our repos are managed by Puppet, which has problems running without errors when "apt-get update" is executed.
(0002441)
tudor   
2016-11-09 06:48   
+1, this also affects pretty much every Ubuntu user who has upgraded recently. I actively discourage my team from ignoring warnings like this, as it's a bad habit to get into and paves the way for real attacks on our security.
(0002594)
kim-sondrup   
2017-03-03 18:34   
+1 here as well, having initial troubles when using the repo with Puppet
(0003567)
stephand   
2019-09-03 10:54   
Does this Puppet-related problem still exist with the current Bareos 18.2 repos?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
385 [bareos-core] file daemon minor always 2014-12-26 11:41 2019-09-02 17:22
Reporter: tigerfoot Platform: Linux  
Assigned To: franku OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 14.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 19.2.1  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos daemon stop restart hang if bareos-tray-monitor is connected
Description: On a simple client (openSUSE 13.1), bareos-fd and tray-monitor are installed and running.

If you (as root) issue a systemctl restart/stop bareos-fd.service, the shutdown process hangs until systemd times out and finally kills it hard.

The same behavior can be found in versions 12.3 and 13.2.

Why this needs to be fixed: when a computer with a running tray-monitor is updated, the timeout takes time and hangs the update/upgrade process.
This can confuse the admin/user into thinking the installation process is stuck, and they could damage it if they stop the install process.
Tags:
Steps To Reproduce:
Additional Information: c-3po:~ # systemctl status bareos-fd.service
bareos-fd.service - Bareos File Daemon service
   Loaded: loaded (/etc/systemd/system/bareos-fd.service; enabled)
   Active: active (running) since Thu 2014-12-25 09:51:15 CET; 1 day 1h ago
     Docs: man:bareos-fd(8)
  Process: 1266 ExecStart=/usr/sbin/bareos-fd -c /etc/bareos/bareos-fd.conf (code=exited, status=0/SUCCESS)
 Main PID: 1275 (bareos-fd)
   CGroup: /system.slice/bareos-fd.service
           └─1275 /usr/sbin/bareos-fd -c /etc/bareos/bareos-fd.conf

Dec 25 09:51:15 c-3po.labaroche.ioda.net systemd[1]: Starting Bareos File Daemon service...
Dec 25 09:51:15 c-3po.labaroche.ioda.net systemd[1]: Started Bareos File Daemon service.
c-3po:~ # lsof -p 1275
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1502/gvfs
      Output information may be incomplete.
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /var/run/user/1502/gvfs
      Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bareos-fd 1275 root cwd DIR 254,4 4096 1835608 /var/lib/bareos
bareos-fd 1275 root rtd DIR 254,3 4096 2 /
bareos-fd 1275 root txt REG 254,3 191104 296534 /usr/sbin/bareos-fd
bareos-fd 1275 root mem REG 254,3 18816 414985 /lib64/libattr.so.1.1.0
bareos-fd 1275 root mem REG 254,3 92568 414949 /lib64/libgcc_s.so.1
bareos-fd 1275 root mem REG 254,3 1127824 415648 /lib64/libm-2.18.so
bareos-fd 1275 root mem REG 254,3 18904 415087 /lib64/libdl-2.18.so
bareos-fd 1275 root mem REG 254,3 424104 404514 /lib64/libssl.so.1.0.0
bareos-fd 1275 root mem REG 254,3 133344 1079141 /usr/lib64/liblzo2.so.2.0.0
bareos-fd 1275 root mem REG 254,3 18976 414880 /lib64/libcap.so.2.22
bareos-fd 1275 root mem REG 254,3 40880 415005 /lib64/libwrap.so.0.7.6
bareos-fd 1275 root mem REG 254,3 2000416 404513 /lib64/libcrypto.so.1.0.0
bareos-fd 1275 root mem REG 254,3 35464 414867 /lib64/libacl.so.1.1.0
bareos-fd 1275 root mem REG 254,3 2003811 414946 /lib64/libc-2.18.so
bareos-fd 1275 root mem REG 254,3 995512 1083038 /usr/lib64/libstdc++.so.6.0.18
bareos-fd 1275 root mem REG 254,3 141506 414947 /lib64/libpthread-2.18.so
bareos-fd 1275 root mem REG 254,3 58667 1062381 /usr/lib64/libfastlz.so.1.0.0
bareos-fd 1275 root mem REG 254,3 88216 415089 /lib64/libz.so.1.2.8
bareos-fd 1275 root mem REG 254,3 430208 1327573 /usr/lib64/bareos/libbareos-14.2.2.so
bareos-fd 1275 root mem REG 254,3 93536 1327574 /usr/lib64/bareos/libbareoscfg-14.2.2.so
bareos-fd 1275 root mem REG 254,3 81088 1327575 /usr/lib64/bareos/libbareosfind-14.2.2.so
bareos-fd 1275 root mem REG 254,3 80648 1327576 /usr/lib64/bareos/libbareoslmdb-14.2.2.so
bareos-fd 1275 root mem REG 254,3 154094 398832 /lib64/ld-2.18.so
bareos-fd 1275 root mem REG 254,3 26244 1192397 /usr/lib64/gconv/gconv-modules.cache
bareos-fd 1275 root mem REG 254,3 256356 959513 /usr/lib/locale/fr_CH.utf8/LC_CTYPE
bareos-fd 1275 root 0r CHR 1,3 0t0 1029 /dev/null
bareos-fd 1275 root 1r CHR 1,3 0t0 1029 /dev/null
bareos-fd 1275 root 2r CHR 1,3 0t0 1029 /dev/null
bareos-fd 1275 root 3u IPv6 22824 0t0 TCP *:bacula-fd (LISTEN)
bareos-fd 1275 root 4u IPv6 29886 0t0 TCP localhost6.localdomain6:bacula-fd->localhost6.localdomain6:36912 (ESTABLISHED)
c-3po:~ # systemctl stop bareos-fd.service
c-3po:~ # systemctl status bareos-fd.service
bareos-fd.service - Bareos File Daemon service
   Loaded: loaded (/etc/systemd/system/bareos-fd.service; enabled)
   Active: failed (Result: signal) since Fri 2014-12-26 11:06:07 CET; 12min ago
     Docs: man:bareos-fd(8)
  Process: 1266 ExecStart=/usr/sbin/bareos-fd -c /etc/bareos/bareos-fd.conf (code=exited, status=0/SUCCESS)
 Main PID: 1275 (code=killed, signal=KILL)

Dec 25 09:51:15 c-3po.labaroche.ioda.net systemd[1]: Starting Bareos File Daemon service...
Dec 25 09:51:15 c-3po.labaroche.ioda.net systemd[1]: Started Bareos File Daemon service.
Dec 26 11:04:37 c-3po.labaroche.ioda.net systemd[1]: Stopping Bareos File Daemon service...
Dec 26 11:04:37 c-3po.labaroche.ioda.net bareos-fd[1275]: Shutting down BAREOS service: c-3po-fd ...
Dec 26 11:06:07 c-3po.labaroche.ioda.net systemd[1]: bareos-fd.service stopping timed out. Killing.
Dec 26 11:06:07 c-3po.labaroche.ioda.net systemd[1]: bareos-fd.service: main process exited, code=killed, status=9/KILL
Dec 26 11:06:07 c-3po.labaroche.ioda.net systemd[1]: Stopped Bareos File Daemon service.
Dec 26 11:06:07 c-3po.labaroche.ioda.net systemd[1]: Unit bareos-fd.service entered failed state.


lsof -p 2694
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bareos-tr 2694 bruno cwd DIR 254,2 12288 2621441 /home/bruno
bareos-tr 2694 bruno rtd DIR 254,3 4096 2 /
bareos-tr 2694 bruno txt REG 254,3 108536 154727 /usr/bin/bareos-tray-monitor
bareos-tr 2694 bruno mem REG 254,3 23184 1192882 /usr/lib64/kde4/plugins/imageformats/kimg_xview.so
bareos-tr 2694 bruno mem REG 254,3 64472 1192880 /usr/lib64/kde4/plugins/imageformats/kimg_xcf.so
bareos-tr 2694 bruno mem REG 254,3 319192 1078609 /usr/lib64/libwebp.so.4.0.3
bareos-tr 2694 bruno mem REG 254,3 18984 1187827 /usr/lib64/kde4/plugins/imageformats/kimg_webp.so
bareos-tr 2694 bruno mem REG 254,3 27376 1192879 /usr/lib64/kde4/plugins/imageformats/kimg_tga.so
bareos-tr 2694 bruno mem REG 254,3 43960 1192877 /usr/lib64/kde4/plugins/imageformats/kimg_rgb.so
bareos-tr 2694 bruno mem REG 254,3 27320 1192876 /usr/lib64/kde4/plugins/imageformats/kimg_ras.so
bareos-tr 2694 bruno mem REG 254,3 23208 1192864 /usr/lib64/kde4/plugins/imageformats/kimg_psd.so
bareos-tr 2694 bruno mem REG 254,3 23208 1192861 /usr/lib64/kde4/plugins/imageformats/kimg_pic.so
bareos-tr 2694 bruno mem REG 254,3 35640 1192860 /usr/lib64/kde4/plugins/imageformats/kimg_pcx.so
bareos-tr 2694 bruno mem REG 254,3 340328 1078633 /usr/lib64/libjasper.so.1.0.0
bareos-tr 2694 bruno mem REG 254,3 23320 1192833 /usr/lib64/kde4/plugins/imageformats/kimg_jp2.so
bareos-tr 2694 bruno mem REG 254,3 27272 1081365 /usr/lib64/libIlmThread-2_0.so.10.0.1
bareos-tr 2694 bruno mem REG 254,3 272432 1080366 /usr/lib64/libHalf.so.10.0.1
bareos-tr 2694 bruno mem REG 254,3 133504 1082230 /usr/lib64/libIex-2_0.so.10.0.1
bareos-tr 2694 bruno mem REG 254,3 1140776 1082997 /usr/lib64/libIlmImf-Imf_2_0.so.20.0.1
bareos-tr 2694 bruno mem REG 254,3 27400 1192832 /usr/lib64/kde4/plugins/imageformats/kimg_exr.so
bareos-tr 2694 bruno mem REG 254,3 35752 1192827 /usr/lib64/kde4/plugins/imageformats/kimg_eps.so
bareos-tr 2694 bruno mem REG 254,3 31408 1192728 /usr/lib64/kde4/plugins/imageformats/kimg_dds.so
bareos-tr 2694 bruno mem REG 254,3 50200 1082559 /usr/lib64/libjbig.so.2.0
bareos-tr 2694 bruno mem REG 254,3 477168 1082182 /usr/lib64/libtiff.so.5.2.0
bareos-tr 2694 bruno mem REG 254,3 31568 1087707 /usr/lib64/qt4/plugins/imageformats/libqtiff.so
bareos-tr 2694 bruno mem REG 254,3 23136 1087704 /usr/lib64/qt4/plugins/imageformats/libqtga.so
bareos-tr 2694 bruno mem REG 254,3 23304 1087699 /usr/lib64/qt4/plugins/imageformats/libqsvg.so
bareos-tr 2694 bruno mem REG 254,3 229488 1076006 /usr/lib64/liblcms.so.1.0.19
bareos-tr 2694 bruno mem REG 254,3 548184 1077342 /usr/lib64/libmng.so.1.1.0.10
bareos-tr 2694 bruno mem REG 254,3 27544 1087696 /usr/lib64/qt4/plugins/imageformats/libqmng.so
bareos-tr 2694 bruno mem REG 254,3 264224 1083063 /usr/lib64/libjpeg.so.8.0.2
bareos-tr 2694 bruno mem REG 254,3 31680 1087695 /usr/lib64/qt4/plugins/imageformats/libqjpeg.so
bareos-tr 2694 bruno mem REG 254,3 31496 1087693 /usr/lib64/qt4/plugins/imageformats/libqico.so
bareos-tr 2694 bruno mem REG 254,3 31424 1078322 /usr/lib64/qt4/plugins/imageformats/libqgif.so
bareos-tr 2694 bruno mem REG 254,3 61615 415656 /lib64/libnss_files-2.18.so
bareos-tr 2694 bruno mem REG 254,3 56575 414912 /lib64/libnss_nis-2.18.so
bareos-tr 2694 bruno mem REG 254,3 108060 415650 /lib64/libnsl-2.18.so
bareos-tr 2694 bruno mem REG 254,3 38420 415652 /lib64/libnss_compat-2.18.so
bareos-tr 2694 bruno mem REG 254,3 741536 1976149 /usr/share/fonts/truetype/DejaVuSans.ttf
bareos-tr 2694 bruno mem REG 254,4 10547304 1049360 /var/tmp/kdecache-bruno/icon-cache.kcache
bareos-tr 2694 bruno mem REG 254,3 118032 1088312 /usr/lib64/liboxygenstyle.so.4.14.3
bareos-tr 2694 bruno mem REG 254,3 755824 1190408 /usr/lib64/kde4/plugins/styles/oxygen.so
bareos-tr 2694 bruno mem REG 254,3 138720 414890 /lib64/libselinux.so.1
bareos-tr 2694 bruno mem REG 254,3 290752 403124 /lib64/libdbus-1.so.3.8.9
bareos-tr 2694 bruno mem REG 254,3 71736 1080351 /usr/lib64/libudev.so.1.4.0
bareos-tr 2694 bruno mem REG 254,3 1479688 1076894 /usr/lib64/libxml2.so.2.9.1
bareos-tr 2694 bruno mem REG 254,3 240808 1086960 /usr/lib64/libstreams.so.0.7.8
bareos-tr 2694 bruno mem REG 254,3 1163464 1061262 /usr/lib64/libsoprano.so.4.3.0
bareos-tr 2694 bruno mem REG 254,3 172042 1086679 /usr/lib64/liblzma.so.5.0.7
bareos-tr 2694 bruno mem REG 254,3 67326 1086678 /usr/lib64/libbz2.so.1.0.6
bareos-tr 2694 bruno mem REG 254,3 22952 1079104 /usr/lib64/libXtst.so.6.1.0
bareos-tr 2694 bruno mem REG 254,3 196448 1078819 /usr/lib64/libdbusmenu-qt.so.2.6.0
bareos-tr 2694 bruno mem REG 254,3 990912 1076823 /usr/lib64/libattica.so.0.4.2
bareos-tr 2694 bruno mem REG 254,3 513208 1075072 /usr/lib64/libQtDBus.so.4.8.5
bareos-tr 2694 bruno mem REG 254,3 18816 414985 /lib64/libattr.so.1.1.0
bareos-tr 2694 bruno mem REG 254,3 35464 414867 /lib64/libacl.so.1.1.0
bareos-tr 2694 bruno mem REG 254,3 77008 1087898 /usr/lib64/libksuseinstall.so.1
bareos-tr 2694 bruno mem REG 254,3 1041912 1087891 /usr/lib64/libsolid.so.4.14.3
bareos-tr 2694 bruno mem REG 254,3 556952 1086958 /usr/lib64/libstreamanalyzer.so.0.7.8
bareos-tr 2694 bruno mem REG 254,3 361808 1077046 /usr/lib64/libQtSvg.so.4.8.5
bareos-tr 2694 bruno mem REG 254,3 289864 1076740 /usr/lib64/libQtXml.so.4.8.5
bareos-tr 2694 bruno mem REG 254,3 1330800 1078144 /usr/lib64/libQtNetwork.so.4.8.5
bareos-tr 2694 bruno mem REG 254,3 849480 1082766 /usr/lib64/libnepomuk.so.4.14.3
bareos-tr 2694 bruno mem REG 254,3 2920208 1075679 /usr/lib64/libkdecore.so.5.14.3
bareos-tr 2694 bruno mem REG 254,3 4558656 1087579 /usr/lib64/libkdeui.so.5.14.3
bareos-tr 2694 bruno mem REG 254,3 2834280 1081227 /usr/lib64/libkio.so.5.14.3
bareos-tr 2694 bruno mem REG 254,3 108476 1976244 /usr/share/fonts/truetype/LiberationMono-Regular.ttf
bareos-tr 2694 bruno mem REG 254,3 48152 1187940 /usr/lib64/kde4/plugins/gui_platform/libkde.so
bareos-tr 2694 bruno mem REG 254,4 410600 787202 /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 476216 786706 /var/cache/fontconfig/f0cb971afb730f85a4aedb9298bb8d29-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 473328 787088 /var/cache/fontconfig/17090aa38d5c6f09fb8c5c354938f1d7-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 474080 787087 /var/cache/fontconfig/df311e82a1a24c41a75c2c930223552e-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,3 22321744 134315 /usr/share/icu/51.2/icudt51l.dat
bareos-tr 2694 bruno mem REG 254,3 4936 1078666 /usr/lib64/libicudata.so.51.2
bareos-tr 2694 bruno mem REG 254,3 1542856 1076994 /usr/lib64/libicuuc.so.51.2
bareos-tr 2694 bruno mem REG 254,3 2162184 1080480 /usr/lib64/libicui18n.so.51.2
bareos-tr 2694 bruno mem REG 254,4 218224 786601 /var/cache/fontconfig/7e1fba7718b83835f0117a869412581c-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,3 16711 1192386 /usr/lib64/gconv/UTF-16.so
bareos-tr 2694 bruno mem REG 254,3 14552 1075628 /usr/lib64/libXau.so.6.0.0
bareos-tr 2694 bruno mem REG 254,3 129848 1076190 /usr/lib64/libxcb.so.1.1.0
bareos-tr 2694 bruno mem REG 254,3 170168 1082135 /usr/lib64/libexpat.so.1.6.0
bareos-tr 2694 bruno mem REG 254,3 18968 1080217 /usr/lib64/libuuid.so.1.3.0
bareos-tr 2694 bruno mem REG 254,3 31112 1079061 /usr/lib64/libffi.so.4.0.1
bareos-tr 2694 bruno mem REG 254,3 416064 1079593 /usr/lib64/libpcre.so.1.2.1
bareos-tr 2694 bruno mem REG 254,3 42594 415666 /lib64/librt-2.18.so
bareos-tr 2694 bruno mem REG 254,3 1302168 1078567 /usr/lib64/libX11.so.6.3.0
bareos-tr 2694 bruno mem REG 254,3 73456 1079183 /usr/lib64/libXext.so.6.4.0
bareos-tr 2694 bruno mem REG 254,3 248864 1080278 /usr/lib64/libfontconfig.so.1.8.0
bareos-tr 2694 bruno mem REG 254,3 10472 1078677 /usr/lib64/libXinerama.so.1.0.0
bareos-tr 2694 bruno mem REG 254,3 43736 1079334 /usr/lib64/libXcursor.so.1.0.2
bareos-tr 2694 bruno mem REG 254,3 22640 1079143 /usr/lib64/libXfixes.so.3.1.0
bareos-tr 2694 bruno mem REG 254,3 39264 1077827 /usr/lib64/libXrandr.so.2.2.0
bareos-tr 2694 bruno mem REG 254,3 39456 1079021 /usr/lib64/libXrender.so.1.3.0
bareos-tr 2694 bruno mem REG 254,3 63952 1080255 /usr/lib64/libXi.so.6.1.0
bareos-tr 2694 bruno mem REG 254,3 102536 1061341 /usr/lib64/libICE.so.6.3.0
bareos-tr 2694 bruno mem REG 254,3 31000 1076161 /usr/lib64/libSM.so.6.0.1
bareos-tr 2694 bruno mem REG 254,3 330952 1082639 /usr/lib64/libgobject-2.0.so.0.3800.2
bareos-tr 2694 bruno mem REG 254,3 597880 1079249 /usr/lib64/libfreetype.so.6.10.2
bareos-tr 2694 bruno mem REG 254,3 247768 1076653 /usr/lib64/libpng16.so.16.6.0
bareos-tr 2694 bruno mem REG 254,3 1057936 1076813 /usr/lib64/libglib-2.0.so.0.3800.2
bareos-tr 2694 bruno mem REG 254,3 1127824 415648 /lib64/libm-2.18.so
bareos-tr 2694 bruno mem REG 254,3 18904 415087 /lib64/libdl-2.18.so
bareos-tr 2694 bruno mem REG 254,3 424104 404514 /lib64/libssl.so.1.0.0
bareos-tr 2694 bruno mem REG 254,3 58667 1062381 /usr/lib64/libfastlz.so.1.0.0
bareos-tr 2694 bruno mem REG 254,3 133344 1079141 /usr/lib64/liblzo2.so.2.0.0
bareos-tr 2694 bruno mem REG 254,3 88216 415089 /lib64/libz.so.1.2.8
bareos-tr 2694 bruno mem REG 254,3 18976 414880 /lib64/libcap.so.2.22
bareos-tr 2694 bruno mem REG 254,3 40880 415005 /lib64/libwrap.so.0.7.6
bareos-tr 2694 bruno mem REG 254,3 141506 414947 /lib64/libpthread-2.18.so
bareos-tr 2694 bruno mem REG 254,3 2000416 404513 /lib64/libcrypto.so.1.0.0
bareos-tr 2694 bruno mem REG 254,3 2003811 414946 /lib64/libc-2.18.so
bareos-tr 2694 bruno mem REG 254,3 92568 414949 /lib64/libgcc_s.so.1
bareos-tr 2694 bruno mem REG 254,3 995512 1083038 /usr/lib64/libstdc++.so.6.0.18
bareos-tr 2694 bruno mem REG 254,3 3045464 1081285 /usr/lib64/libQtCore.so.4.8.5
bareos-tr 2694 bruno mem REG 254,3 11253512 1082999 /usr/lib64/libQtGui.so.4.8.5
bareos-tr 2694 bruno mem REG 254,3 430208 1327573 /usr/lib64/bareos/libbareos-14.2.2.so
bareos-tr 2694 bruno mem REG 254,3 93536 1327574 /usr/lib64/bareos/libbareoscfg-14.2.2.so
bareos-tr 2694 bruno mem REG 254,3 154094 398832 /lib64/ld-2.18.so
bareos-tr 2694 bruno mem REG 254,4 86136 786718 /var/cache/fontconfig/b5c7c63143a222d0fb41621bb05e4dd9-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 102160 786710 /var/cache/fontconfig/8d4af663993b81a124ee82e610bb31f9-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,3 256356 959513 /usr/lib/locale/fr_CH.utf8/LC_CTYPE
bareos-tr 2694 bruno mem REG 254,3 1243766 927170 /usr/lib/locale/fr_CH.utf8/LC_COLLATE
bareos-tr 2694 bruno mem REG 254,4 9608 786730 /var/cache/fontconfig/d458be102e54cf534d1eef0dcbb02d07-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 4040 786719 /var/cache/fontconfig/2f36c32c6ec3ca87de89a6eb757fa974-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 3072 786610 /var/cache/fontconfig/2a6be49ceff89c94e9adf1154f671c58-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 2544 786605 /var/cache/fontconfig/56762fdc8a452d8a54310bd64bbd8798-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,4 1576 786793 /var/cache/fontconfig/d4501469e4b92c47361ab1492f0f2d98-x86_64.cache-4
bareos-tr 2694 bruno mem REG 254,3 153231 1988341 /usr/share/locale/fr/LC_MESSAGES/libc.mo
bareos-tr 2694 bruno mem REG 254,3 54 807741 /usr/lib/locale/fr_CH.utf8/LC_NUMERIC
bareos-tr 2694 bruno mem REG 254,3 2366 959511 /usr/lib/locale/fr_CH.utf8/LC_TIME
bareos-tr 2694 bruno mem REG 254,3 294 799180 /usr/lib/locale/fr_CH.utf8/LC_MONETARY
bareos-tr 2694 bruno mem REG 254,3 58 959510 /usr/lib/locale/fr_CH.utf8/LC_MESSAGES/SYS_LC_MESSAGES
bareos-tr 2694 bruno mem REG 254,3 34 926931 /usr/lib/locale/fr_CH.utf8/LC_PAPER
bareos-tr 2694 bruno mem REG 254,3 62 951747 /usr/lib/locale/fr_CH.utf8/LC_NAME
bareos-tr 2694 bruno mem REG 254,3 127 925564 /usr/lib/locale/fr_CH.utf8/LC_ADDRESS
bareos-tr 2694 bruno mem REG 254,3 49 809549 /usr/lib/locale/fr_CH.utf8/LC_TELEPHONE
bareos-tr 2694 bruno mem REG 254,3 23 959514 /usr/lib/locale/fr_CH.utf8/LC_MEASUREMENT
bareos-tr 2694 bruno mem REG 254,3 26244 1192397 /usr/lib64/gconv/gconv-modules.cache
bareos-tr 2694 bruno mem REG 254,3 353 927102 /usr/lib/locale/fr_CH.utf8/LC_IDENTIFICATION
bareos-tr 2694 bruno 0r FIFO 0,8 0t0 20047 pipe
bareos-tr 2694 bruno 1w REG 254,2 2475484 2621637 /home/bruno/.xsession-errors-:0
bareos-tr 2694 bruno 2w REG 254,2 2475484 2621637 /home/bruno/.xsession-errors-:0
bareos-tr 2694 bruno 3u 0000 0,9 0 7427 anon_inode
bareos-tr 2694 bruno 4r FIFO 0,8 0t0 30872 pipe
bareos-tr 2694 bruno 5w FIFO 0,8 0t0 30872 pipe
bareos-tr 2694 bruno 6u unix 0x0000000000000000 0t0 30876 socket
bareos-tr 2694 bruno 7u unix 0x0000000000000000 0t0 30898 socket
bareos-tr 2694 bruno 8u unix 0x0000000000000000 0t0 30901 socket
bareos-tr 2694 bruno 9u 0000 0,9 0 7427 anon_inode
bareos-tr 2694 bruno 10u IPv6 31183 0t0 TCP localhost6.localdomain6:36912->localhost6.localdomain6:bacula-fd (ESTABLISHED)
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0001187)
pstorz   
2015-01-15 15:59   
Hello Bruno,

is this behaviour already in Version 13.2 or is it new in 14.2?
(0001189)
tigerfoot   
2015-01-15 19:21   
Philipp, honestly I have no idea.
When I started to install 13.2, most of those machines were blank computers.
Others were servers without the tray-monitor (upgraded from 12.x to 14.x).

On Windows, bareos-fd + tray-monitor worked (if I remember correctly).
I'm not 100% sure, because I knew this could be a problem and may have closed the tray-monitor manually before the upgrade.

It shows the trouble 100% for sure on a Linux desktop going from version 13.x to 14.x.
(0001245)
joergs   
2015-01-30 15:43   
Same problem on my system (bareos-master)
(0001246)
mvwieringen   
2015-01-30 17:11   
Put a debugger on bareos-fd, set a breakpoint on terminate_filed, and see what happens
when it gets the signal to exit by stepping through the code.
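
A minimal session along those lines, as a sketch (it assumes debug symbols are available and uses the -f option to keep the daemon in the foreground):

gdb --args /usr/sbin/bareos-fd -f -c /etc/bareos/bareos-fd.conf
(gdb) break terminate_filed
(gdb) run
# in a second shell: systemctl stop bareos-fd.service (or kill -TERM <pid>)
(gdb) next
# keep stepping with "next"/"step" to see where the shutdown blocks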
(0001325)
Hilario   
2015-03-19 15:26   
It is important to note that this bug shows up even when the tray-monitor is running on another computer (checked with Bareos version 14.2.2 on CentOS 7 and Fedora 20).
(0002810)
tigerfoot   
2017-11-03 11:22   
I have an update on this.
The Windows installer correctly closes the bareos-tray-monitor for the connected user (who is, obviously, doing the installation as administrator).
But if you are on a terminal server and some other normal user has a session open, there is still a chance that the bareos-tray-monitor is still open and running for them. If not all bareos-tray-monitor.exe processes are killed, the installation fails.

So the installer should (if possible) kill all instances of bareos-tray-monitor during the uninstall phase.

Seen with the 16.2.7 release from the subscription channel.
(0003564)
franku   
2019-09-02 17:22   
Fix committed to bareos master branch with changesetid 11740.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1109 [bareos-core] director trivial always 2019-08-27 18:20 2019-08-27 18:20
Reporter: Jacky Platform: Windows 7  
Assigned To: OS: Windows  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error: No Job status returned from FD.
Description: Following the configuration steps in the official documentation, adding a backup on the Windows 7 platform failed.
The specific error message is as follows:
bareos-dir JobId 40: Error: Bareos bareos-dir 18.2.5 (30Jan19):
Build OS: Linux-4.4.92-6.18-default debian Debian GNU/Linux 9.7 (stretch)
JobId: 40
Job: BackupCatalog.2019-08-27_17.56.22_37
Backup Level: Full
Client: "zhangxianseng-fd" 18.2.5 (30Jan19) Microsoft Windows 7 Ultimate Edition Service Pack 1 (build 7601), 64-bit,Cross-compile,Win64
FileSet: "Windows All Drives" 2019-08-24 00:12:06
Pool: "Full" (From command line)
Catalog: "MyCatalog" (From Client resource)
Storage: "FileStorage1" (From Job resource)
Scheduled time: 27-8月-2019 17:56:22
Start time: 27-8月-2019 17:56:25
End time: 27-8月-2019 17:58:31
Elapsed time: 2 mins 6 secs
Priority: 11
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s):
Volume Session Id: 2
Volume Session Time: 1566899611
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 1
SD Errors: 0
FD termination status: Error
SD termination status: Waiting on FD
FD Secure Erase Cmd:
Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
Termination: *** Backup Error ***

2019-08-27 17:56:31 bareos-dir JobId 40: Fatal error: Authorization key rejected by File Daemon bareos-dir.
2019-08-27 17:56:31 bareos-dir JobId 40: Fatal error: Unable to authenticate with File daemon at "192.168.128.216:9102". Possible causes:
Passwords or names not the same or
TLS negotiation failed or
Maximum Concurrent Jobs exceeded on the FD or
FD networking messed up (restart daemon).
2019-08-27 17:56:26 bareos-dir JobId 40: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
2019-08-27 17:56:26 bareos-dir JobId 40: Using Device "FileStorage1" to write.
2019-08-27 17:56:26 bareos-dir JobId 40: Probing client protocol... (result will be saved until config reload)
2019-08-27 17:56:26 bareos-dir JobId 40: Fatal error: Connect failure: ERR=error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
2019-08-27 17:56:26 bareos-dir JobId 40: TLS negotiation failed (while probing client protocol)
2019-08-27 17:56:25 bareos-dir JobId 40: Start Backup JobId 40, Job=BackupCatalog.2019-08-27_17.56.22_37
2019-08-27 17:56:24 bareos-dir JobId 40: shell command: run BeforeJob "/usr/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"
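
Not part of the report, but a hedged hint: "Authorization key rejected by File Daemon" typically indicates a password mismatch between the director's Client resource and the client's Director resource. The two passwords must be identical; the password shown here is a placeholder:

# on the director, e.g. bareos-dir.d/client/zhangxianseng-fd.conf
Client {
  Name = zhangxianseng-fd
  Address = 192.168.128.216
  Password = "samesecret"   # must match the fd side
}

# on the Windows client, bareos-fd.d/director/bareos-dir.conf
Director {
  Name = bareos-dir
  Password = "samesecret"   # must match the director side
}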
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
477 [bareos-core] file daemon minor have not tried 2015-06-08 13:02 2019-08-08 14:56
Reporter: RoyK Platform: Windows  
Assigned To: OS: any  
Priority: normal OS Version: 8  
Status: acknowledged Product Version: 14.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact: yes
bareos-15.2: action: will care
bareos-14.2: impact: yes
bareos-14.2: action: none
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos-fd fails with IPv6
Description: If bareos-fd is configured with IPv6 on Windows, the director just hangs when attempting to contact the fd. If the IPv6 config is removed, the director hangs for 0000004:0000001 minute before attempting IPv4, and things work. If the IPv4 address is hardcoded and the IPv6 config removed on the fd, things work as normal. We use IPv6 with all Linux clients and the plan was to use it with all Windows clients as well, but this does not seem to be possible at the moment.
Tags:
Steps To Reproduce: bareos-fd.conf - tested on win2k8r2 and win2k12r2, both 64bit

Director {
  Name = urd.my.tld-dir
  Password = "supersecret"
}

FileDaemon {
  Name = somewinbox.my.tld-fd
  Maximum Concurrent Jobs = 20
  FDAddresses = {ipv6 = {port = 9102}}
}
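
A hedged workaround sketch (untested on Windows here; syntax as documented for FDAddresses) is to listen on both stacks explicitly, so IPv4 stays usable while the IPv6 path is broken:

FileDaemon {
  Name = somewinbox.my.tld-fd
  Maximum Concurrent Jobs = 20
  # listen on IPv4 and IPv6 explicitly (workaround sketch, not a fix)
  FDAddresses = {
    ipv4 = { addr = 0.0.0.0; port = 9102 }
    ipv6 = { addr = ::; port = 9102 }
  }
}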
Additional Information:
System Description
Attached Files:
Notes
(0001871)
RoyK   
2015-10-07 19:30   
Just checked with 15.2 - same issue
(0002337)
eye69   
2016-08-20 10:27   
Just checked with freshly downloaded "winbareos-16.3.1.1471620779.cae6abe-postvista-64-bit-r203.1.exe" on Windows 10 Pro, build 10586.545

When trying to contact the file daemon through IPv6, it just goes to 25% CPU usage and doesn't respond.
(0002381)
tigerfoot   
2016-10-12 14:10   
eye69 or RoyK, would any of you be able to start bareos-fd in debug mode?
The following option should be added to the Windows service:
-d 200
(0002406)
Lee   
2016-10-24 13:11   
Hi there,

Please find the trace for the IPv6 listener below:



bareos-fd (100): lib/parse_conf.c:151-0 config file = C:\ProgramData\Bareos\bareos-fd.d/*/*.conf
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
bareos-fd (90): filed/filed_conf.c:445-0 Inserting Director res: bareos-dir
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
id48269-fd (100): lib/jcr.c:141-0 read_last_jobs seek to 192
id48269-fd (100): lib/jcr.c:148-0 Read num_items=0
id48269-fd (150): filed/fd_plugins.c:1664-0 plugin dir is NULL
id48269-fd (10): filed/socket_server.c:96-0 filed: listening on port 9102
id48269-fd (100): lib/bnet_server_tcp.c:170-0 Addresses host[ipv6;0.0.0.0;9102]




************************

For comparison with the IPv4 trace, I've provided that also (with a successful connect from the director):



bareos-fd (100): lib/parse_conf.c:151-0 config file = C:\ProgramData\Bareos\bareos-fd.d/*/*.conf
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
bareos-fd (90): filed/filed_conf.c:445-0 Inserting Director res: Ziff-dir
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
id48269-fd (100): lib/jcr.c:141-0 read_last_jobs seek to 192
id48269-fd (100): lib/jcr.c:148-0 Read num_items=0
id48269-fd (150): filed/fd_plugins.c:1664-0 plugin dir is NULL
id48269-fd (10): filed/socket_server.c:96-0 filed: listening on port 9102
id48269-fd (100): lib/bnet_server_tcp.c:170-0 Addresses host[ipv4;0.0.0.0;9102]
id48269-fd (110): filed/socket_server.c:63-0 Conn: Hello Director Ziff-dir calling

id48269-fd (110): filed/socket_server.c:69-0 Got a DIR connection at 24-Oct-2016 12:07:53
id48269-fd (120): filed/dir_cmd.c:630-0 Calling Authenticate
id48269-fd (200): lib/bsys.c:191-0 pthread_cond_timedwait sec=0 usec=100
id48269-fd (50): lib/cram-md5.c:68-0 send: auth cram-md5 <29108.1477307273@id48269-fd> ssl=0
id48269-fd (100): lib/cram-md5.c:123-0 cram-get received: auth cram-md5 <1182722634.1477307273@Ziff-dir> ssl=0
id48269-fd (99): lib/cram-md5.c:143-0 sending resp to challenge: w7k/nCcyQCwlj+R79x/A2D
id48269-fd (200): lib/bsys.c:191-0 pthread_cond_timedwait sec=0 usec=100
id48269-fd (120): filed/dir_cmd.c:632-0 OK Authenticate
id48269-fd (100): filed/dir_cmd.c:495-0 <dird: status
id48269-fd (100): filed/dir_cmd.c:506-0 Executing status command.
id48269-fd (200): lib/runscript.c:149-0 runscript: running all RUNSCRIPT object (ClientAfterJob) JobStatus=C
id48269-fd (150): filed/fd_plugins.c:327-0 No bplugin_list: generate_plugin_event ignored.
id48269-fd (100): filed/dir_cmd.c:568-0 Done with free_jcr
(0002577)
RoyK   
2017-02-20 23:01   
Seems this is still an issue in 16.2. By the way, this is not a 'minor' issue, since IPv6 should be pretty important by now.
(0002946)
kjetilho   
2018-03-15 00:05   
Still an issue in 17.2.4-8.1 on Windows 2012R2 (64-bit). Same symptoms; the last message in the trace is

foo.net-fd (100): lib/bnet_server_tcp.c:174-0 Addresses host[ipv6;0.0.0.0;9102]

and when the director tries to connect, one CPU core starts spinning. Please let me know if I can assist in debugging this.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1080 [bareos-core] director minor have not tried 2019-04-24 12:49 2019-08-07 08:39
Reporter: unki Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Error on 'JobBytes sum select' during backup-job was running
Description: While a backup job was running - a Differential backup that was just about to finish - `bareos-dir` crashed (segmentation violation), leaving behind only these log lines:

```
24-Apr 10:17 backupsrv-sd JobId 19706: Releasing device "DiffPool4" (/srv/bareos/pool).
24-Apr 10:17 backupsrv-sd JobId 19706: Elapsed time=35:10:48, Transfer rate=516.4 K Bytes/second
24-Apr 10:17 bareos-dir JobId 19706: Insert of attributes batch table with 197406 entries start
24-Apr 10:17 bareos-dir JobId 19706: Insert of attributes batch table done
24-Apr 10:17 bareos-dir JobId 19706: Fatal error: cats/sql_get.cc:1481 cats/sql_get.cc:1481 query SELECT SUM(JobBytes) FROM Job WHERE ClientId = 119 AND JobId != 19706 AND SchedTime > TIMESTAMP '1577823450-01-00 03:38:02' failed:
ERROR: date/time field value out of range: "1577823450-01-00 03:38:02"
LINE 1: ...AND JobId != 19706 AND SchedTime > TIMESTAMP '157782345...
                                                             ^
HINT: Perhaps you need a different "datestyle" setting.

24-Apr 10:17 bareos-dir JobId 19706: Error: JobBytes sum select failed: ERR=
24-Apr 10:17 bareos-dir JobId 19706: Warning: Error getting Quota value: ERR=JobBytes sum select failed: ERR=
```

followed by

```
Apr 24 10:17:51 backup2 bareos-dir[6647]: BAREOS interrupted by signal 11: Segmentation violation
```

Somehow this UNIX timestamp made it into the datetime string.

It's Bareos v18.2.5 using a PostgreSQL database (9.6.11) on Debian Stretch (9.8).
Tags:
Steps To Reproduce:
Additional Information: It is possibly caused by this SQL query, defined in `core/src/cats/postgresql_queries.inc`:

```
/* 0059_get_quota_jobbytes.postgresql */
"SELECT SUM(JobBytes) "
  "FROM Job "
 "WHERE ClientId = %s "
   "AND JobId != %s "
   "AND SchedTime > TIMESTAMP '%s' "
,
```
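
For illustration only - an assumption about the failure mode, not a confirmed diagnosis: a string like "1577823450-01-00 03:38:02" is what "%Y-%m-%d %H:%M:%S" formatting produces when the raw time_t lands in the year field of an otherwise empty struct tm, as in this standalone sketch:

```
#include <stdio.h>
#include <time.h>

int main(void) {
  time_t ts = 1577823450;       /* the raw UNIX timestamp seen in the log */
  struct tm tm = {0};           /* zeroed instead of being filled by localtime_r() */
  tm.tm_year = (int)ts - 1900;  /* bug sketch: the timestamp ends up in the year field */
  char buf[64];
  strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &tm);
  printf("%s\n", buf);          /* prints "1577823450-01-00 00:00:00" */
  return 0;
}
```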
System Description
Attached Files:
Notes
(0003448)
arogge   
2019-07-12 10:31   
This is one of those "I looked at the code and this cannot happen" issues.

The timestamp is calculated and then converted to a datetime string before being injected into the query, so what you observed should be impossible.
Is this somehow reproducible?
(0003547)
unki   
2019-08-01 07:14   
No, luckily this never occurred again - I guess you can feel free to close this issue :)
(0003558)
unki   
2019-08-07 08:39   
Just this morning this issue hit me again - unbelievable :)

It's still on bareos v18.2.5.

This message was logged to syslog:

Aug 07 08:20:45 backup2 bareos-dir[663]: BAREOS interrupted by signal 11: Segmentation violation

and this to /var/log/bareos/bareos.log:

07-Aug 08:20 backup-sd JobId 33591: Releasing device "DiffPool4" (/srv/bareos/pool).
07-Aug 08:20 backup-sd JobId 33591: Elapsed time=129:15:39, Transfer rate=1.745 M Bytes/second
07-Aug 08:20 bareos-dir JobId 33591: Insert of attributes batch table with 7017 entries start
07-Aug 08:20 bareos-dir JobId 33591: Insert of attributes batch table done
07-Aug 08:20 bareos-dir JobId 33591: Fatal error: cats/sql_get.cc:1481 cats/sql_get.cc:1481 query SELECT SUM(JobBytes) FROM Job WHERE ClientId = 119 AND JobId != 33591 AND SchedTime > TIMESTAMP '1577823450-01-00 01:40:56' failed:
ERROR: date/time field value out of range: "1577823450-01-00 01:40:56"
LINE 1: ...AND JobId != 33591 AND SchedTime > TIMESTAMP '157782345...
                                                             ^
HINT: Perhaps you need a different "datestyle" setting.

07-Aug 08:20 bareos-dir JobId 33591: Error: JobBytes sum select failed: ERR=
07-Aug 08:20 bareos-dir JobId 33591: Warning: Error getting Quota value: ERR=JobBytes sum select failed: ERR=

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1039 [bareos-core] webui major always 2019-01-30 07:40 2019-08-06 13:21
Reporter: alex-dvv Platform: Linux  
Assigned To: frank OS: Debian  
Priority: high OS Version: 9  
Status: assigned Product Version: 18.2.4-rc2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Can't log in to webui
Description: Hi! Can't log in to the webui.
Error: Sorry, can not authenticate. Wrong username and/or password.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Unbenannt.png (122,585 bytes) 2019-02-01 07:20
https://bugs.bareos.org/file_download.php?file_id=345&type=bug
png

webuiproblem.png (95,893 bytes) 2019-02-12 14:02
https://bugs.bareos.org/file_download.php?file_id=350&type=bug
png

Captura de pantalla de 2019-02-13 10-56-37.png (13,807 bytes) 2019-02-13 10:57
https://bugs.bareos.org/file_download.php?file_id=351&type=bug
png
Notes
(0003228)
noone   
2019-01-31 15:44   
Hello,

The same problem on my SLES 12.3, Version 18.2.5-136.1 (SUSE Repository).

I tracked the problem down to the following log messages (debug mode 100 and gdb running with breakpoint at lib/try_tls_handshake_as_a_server.cc:69):

"""
uranus.mcservice.eu-dir (100): lib/bsock.cc:81-0 Construct BareosSocket
[New Thread 0x7fffe3fff700 (LWP 19998)]
uranus.mcservice.eu-dir (100): include/jcr.h:320-0 Construct JobControlRecord
uranus.mcservice.eu-dir (200): lib/bsock.cc:631-0 Identified from Bareos handshake: admin-R_CONSOLE recognized version: 18.2
[Switching to Thread 0x7fffe3fff700 (LWP 19998)]

Thread 16 "bareos-dir" hit Breakpoint 1, GetHandshakeMode (config=0x6f4530, bs=0x7fffec001fe8, bs@entry=0x6d63a0 <std::string::_Rep::_S_empty_rep_storage>)
    at /usr/src/debug/bareos-18.2.5/src/lib/try_tls_handshake_as_a_server.cc:69
69 Dmsg1(200, "Connection to %s will be denied due to configuration mismatch\n", client_name.c_str());
(gdb) c
Continuing.
uranus.mcservice.eu-dir (200): lib/try_tls_handshake_as_a_server.cc:69-0 Connection to admin will be denied due to configuration mismatch
uranus.mcservice.eu-dir (100): lib/bsock.cc:129-0 Destruct BareosSocket
uranus.mcservice.eu-dir (100): include/jcr.h:324-0 Destruct JobControlRecord
[Thread 0x7fffe3fff700 (LWP 19998) exited]
"""

The relevant part of the code returns ConnectionHandshakeMode::PerformCleartextHandshake for Bareos >= 18.2 only if "tls_policy == kBnetTlsNone".
(0003230)
noone   
2019-01-31 16:03   
I could resolve the problem by following the suggestion in "/etc/bareos/bareos-dir.d/console/admin.conf.example" to disable TLS or use certificates. (TLS should only be disabled if the director is listening on localhost only, because otherwise passwords might be transferred unencrypted over the network.)

So for me it looks like a configuration problem resulting from old configurations that were not adapted.
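
For completeness, a hedged sketch of the certificate-based alternative mentioned above (directive names as documented for the 18.2 Console resource; all paths and the password are placeholders):

Console {
  Name = admin
  Password = "secret"
  Profile = "webui-admin"
  TLS Enable = yes
  TLS Verify Peer = no
  # placeholder paths - point these at your own CA and console certificate/key
  TLS CA Certificate File = /etc/bareos/tls/ca.pem
  TLS Certificate = /etc/bareos/tls/console-admin.pem
  TLS Key = /etc/bareos/tls/console-admin.key
}

The webui side would then need the matching TLS settings in /etc/bareos-webui/directors.ini as well.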
(0003232)
teka74   
2019-02-01 07:20   
Same problem here after updating from 17.4 to 18.2.5; solved it by adding "TLS Enable = No" to the console config.

But now I can't access the webui; the GUI is reporting:

Sorry, it seems you are not authorized to run this module. If you think this is an error, please contact your local administrator.
Please read the Bareos documentation for any additional information on how to configure the Command ACL directive of your Console/Profile resources. Following is a list of required commands which need to be in your Command ACL to run this module properly:

list
llist


I checked my webui-admin profile; it contains the lines from the 18.2.5 documentation....
(0003233)
alex-dvv   
2019-02-01 07:25   
It still doesn't work anyway; here's the config:
Console {
  Name = admin
  Password = ******
  Profile = webui-admin
  TLS Enable = No
}
(0003234)
alex-dvv   
2019-02-01 07:31   
I got in!!! But the error is the same: Sorry, it seems you are not authorized to run this module. If you think this is an error, please contact your local administrator.
(0003238)
alex-dvv   
2019-02-01 12:44   
It worked! I do not really like it!
(0003240)
teka74   
2019-02-01 23:05   
Alex, is it working on your server? What did you change?
(0003241)
teka74   
2019-02-01 23:59   
Uhh, just looked at my system, and the webui is now working without any changes! Nice self-healing!!
(0003244)
murrdyn   
2019-02-04 18:27   
I have the blank page after login like teka74 did. It did self-heal eventually, but closing the webui window and trying to log back in brought the issue back.

httpd error log shows:
[Mon Feb 04 11:20:53.319812 2019] [:error] [pid 3414] [client x.x.x.x:52713] PHP Notice: Undefined variable: form in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login
[Mon Feb 04 11:20:53.319849 2019] [:error] [pid 3414] [client x.x.x.x:52713] PHP Fatal error: Call to a member function prepare() on a non-object in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login

When it works correctly, those errors do not show up.
(0003245)
teka74   
2019-02-04 23:24   
Agree with murrdyn; I can partially log in to the webui, but sometimes a blank window appears...
(0003254)
c0r3dump3d   
2019-02-07 16:07   
Hi, same problem on CentOS 7.6.1810, fresh install, bareos-dir version 18.2.5:

[Thu Feb 07 15:27:29.244019 2019] [:error] [pid 25046] [client 10.141.1.90:37769] admin, referer: http://bareosdir00.mgmt/bareos-webui/
[Thu Feb 07 15:27:29.244068 2019] [:error] [pid 25046] [client 10.141.1.90:37769] console_name: admin, referer: http://bareosdir00.mgmt/bareos-webui/
[Thu Feb 07 15:27:29.245627 2019] [:error] [pid 25046] [client 10.141.1.90:37769] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://bareosdir00.mgmt/bareos-webui/
(0003257)
IvanBayan   
2019-02-12 14:02   
I have a similar problem; if I try to log in, I get the following message:
(0003258)
c0r3dump3d   
2019-02-13 10:57   
This issue seems to be the same as issue 0001033 (https://bugs.bareos.org/view.php?id=1033), which was closed and resolved in version 18.2.4rc2-76.1.

In my CentOS 7.6 installations with PHP version 7.2.15 and Bareos 18.2.5 the error persists ...

[Wed Feb 13 10:35:08.142474 2019] [php7:notice] [pid 9012] [client 10.141.1.90:21152] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer
in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219

I'm sure that the credentials are correct.
(0003259)
xyros   
2019-02-13 19:00   
A possibly helpful observation I have made concerning this bug:

Typically, if you remain logged in and your session expires by the time you try to perform an action, you have to log back in. This is when you encounter this bug.

Following a long idle period, if you avoid performing any action, so as to avoid being notified that your session has expired, and instead click your username and properly logout from the drop-down, you can log back in successfully without triggering this bug.

In fact, I have found that if I always deliberately logout, such that I avoid triggering the session expiry notice, I can always successfully login on the next attempt.

I have not tested a scenario of closing all browser windows then trying to login. But so far it seems that deliberately logging out -- even after session expiry (but without doing anything to trigger a session expiry notification) -- avoids triggering this bug.

Hope that helps with figuring out where the bug resides.
(0003261)
c0r3dump3d   
2019-02-14 09:46   
In my case the error occurs on a fresh install and I have no previous sessions ...; I have also tested on a fresh Debian 9 install and I have the same problem!!
(0003263)
c0r3dump3d   
2019-02-15 10:32   
From a similar issue that previously happened in the Docker version of Bareos, jukito showed me that putting the option TLS Enable = No in the /etc/bareos/bareos-dir.d/console/admin.conf file:

Console {
  Name = admin
  Password = *****
  Profile = webui-admin
  TLS Enable = No
}

corrects the problem.
(0003264)
c0r3dump3d   
2019-02-15 10:35   
Sorry, I forgot to include the link for the same issue with the Bareos Docker image:
https://github.com/barcus/bareos/issues/24
(0003291)
gslongo   
2019-03-14 10:31   
Hi,

The error remains even with the "TLS Enable = No" setting here

PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://XX/bareos-webui/


[root@baloo bareos]# grep -rn webui
bareos-dir.d/console/admin.conf:2:# Restricted console used by bareos-webui
bareos-dir.d/console/admin.conf:7: Profile = "webui-admin"
bareos-dir.d/profile/webui-admin.conf:2:# bareos-webui webui-admin profile resource
bareos-dir.d/profile/webui-admin.conf:5: Name = "webui-admin"
bareos-dir.d/profile/webui-admin.conf:18:# bareos-webui default profile resource
bareos-dir.d/profile/webui-admin.conf:21: Name = webui

[root@baloo bareos]# cat bareos-dir.d/console/admin.conf
#
# Restricted console used by bareos-webui
#
Console {
  Name = admin
  Password = "********"
  Profile = "webui-admin"


  # As php does not support TLS-PSK,
  # and the director has TLS enabled by default,
  # we need to either disable TLS or setup
  # TLS with certificates.
  #
  # For testing purposes we disable it here
  TLS Enable = No
}


[root@baloo bareos]# cat bareos-dir.d/profile/webui-admin.conf
#
# bareos-webui webui-admin profile resource
#
Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

#
# bareos-webui default profile resource
#
Profile {
  Name = webui
  CommandACL = status, messages, show, version, run, rerun, cancel, .api, .bvfs_*, list, llist, use, restore, .jobs, .filesets, .clients
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}


Any idea ?

Thank you !
(0003343)
Schroeffu   
2019-04-23 15:14   
I have had the same issue after upgrading from 17.2.4 to 18.2.5, and the best result for me was editing these two files:

/etc/bareos-webui/directors.ini
tls_verify_peer = false
server_can_do_tls = true # it was false before
server_requires_tls = false
client_can_do_tls = false (with true, login in webui is not possible for me)

and in /etc/bareos/bareos-dir.d/console/admin.conf add
TLS Enable = No

Now WebUI login is okay (it runs on localhost, so ignoring TLS is OK for me); plus, all backup-fd 18.2.5 clients are reachable via TLS-PSK according to the log (WebUI > Clients > Status icon > the first two lines in the log window say 'Handshake: Immediate TLS, Encryption: ECDHE-PSK-CHACHA20-POLY1305')

with "client_can_do_tls" i have a similar php error, this one:

Exception
File:
/usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php:542
Message:
Error in TLS handshake
Stack trace:
#0 /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php(101): Bareos\BSock\BareosBSock->connect()
#1 /usr/share/bareos-webui/module/Auth/src/Auth/Controller/AuthController.php(93): Bareos\BSock\BareosBSock->connect_and_authenticate()
#2 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Auth\Controller\AuthController->loginAction()
#3 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch(Object(Zend\Mvc\MvcEvent))
#4 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent))
#5 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#6 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#7 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch(Object(Zend\Http\PhpEnvironment\Request), Object(Zend\Http\PhpEnvironment\Response))
#8 [internal function]: Zend\Mvc\DispatchListener->onDispatch(Object(Zend\Mvc\MvcEvent))
#9 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent))
#10 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#11 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#12 /usr/share/bareos-webui/public/index.php(24): Zend\Mvc\Application->run()
#13 {main}
(0003380)
gslongo   
2019-05-23 16:15   
Hi,

Even with the fix you suggest, we still have the same issue, but not on the same line in the code:


[Thu May 23 16:13:23.853505 2019] [:error] [pid 16836] [client 172.16.38.128:42814] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://baloo/bareos-webui/auth/login?req=/bareos-webui/dashboard/
(0003556)
gslongo   
2019-08-06 09:07   
Hi,

Any update on this issue?

Thank you
(0003557)
gslongo   
2019-08-06 13:21   
Additional information when setting: setdebug level=200 trace=1 dir


bareos-dir (100): lib/bsock.cc:81-0 Construct BareosSocket
bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
bareos-dir (200): lib/bsock.cc:631-0 Identified from Bareos handshake: webui-admin-R_CONSOLE recognized version: 18.2
bareos-dir (100): lib/parse_conf.cc:1056-0 Could not find foreign tls resource: R_CONSOLE-webui-admin
bareos-dir (100): lib/parse_conf.cc:1076-0 Could not find foreign tls resource: R_CONSOLE-webui-admin
bareos-dir (200): lib/try_tls_handshake_as_a_server.cc:54-0 Could not read out cleartext configuration
bareos-dir (100): lib/bsock.cc:129-0 Destruct BareosSocket
bareos-dir (100): include/jcr.h:324-0 Destruct JobControlRecord

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1107 [bareos-core] director major always 2019-08-02 17:26 2019-08-02 18:22
Reporter: egedwillo Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 17.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backup Job Stuck on Bareos 17.2
Description: Hi,

I have a problem with the backup of my database files in bareos 17.2.

The backup job gets stuck when the volume size reaches approximately 6-7 GB and displays the message "Elapsed time = 00:10:37, Transfer rate = 11.39 M Bytes / second". After several hours it shows a timeout error and the backup job fails.

I have another 20 backup jobs with the same configuration and procedure; this is the only one that fails.

The size of the backup files is 18 GB in total; after compression, the size in the volumes is approximately 11-12 GB.

Bareos sd storage config and device config:

Storage {
  Name = hanab1-carp-sd
  Maximum Concurrent Jobs = 20
  SDAddress = 10.0.1.50
}

Device {
  Name = StorageHanaCarp
  Media Type = File
  Archive Device = /hana/backup/backupbareos
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Bareos Dir Pool config and storage config:


Storage {
  Name = hanab1-carp-sd
  Address = -------------
  Password = -------------
  Device = StorageHanaCarp
  Media Type = File
}

Pool {
  Name = carpFullHanaDB
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 14 days # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 200G # Limit Volume size to something reasonable
  Maximum Volumes = 100 # Limit number of Volumes in Pool
  Label Format = carp # Volumes will be labeled "Full-<volume-id>"
}
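
An aside, purely as a hedged hypothesis: a job that stalls right after the "Elapsed time / Transfer rate" message and fails hours later with a timeout is often caused by an idle control connection being dropped (e.g. by a firewall or NAT). The documented Heartbeat Interval directive keeps such connections alive, e.g. in the SD Storage resource (and correspondingly in the client's FileDaemon resource):

Storage {
  Name = hanab1-carp-sd
  Maximum Concurrent Jobs = 20
  SDAddress = 10.0.1.50
  Heartbeat Interval = 60   # keep-alive every 60 seconds (assumes idle connections are dropped)
}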
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos-1.JPG (71,503 bytes) 2019-08-02 17:26
https://bugs.bareos.org/file_download.php?file_id=386&type=bug
jpg

bareos-2.JPG (25,969 bytes) 2019-08-02 17:26
https://bugs.bareos.org/file_download.php?file_id=387&type=bug
jpg

bareos-3.JPG (62,018 bytes) 2019-08-02 18:22
https://bugs.bareos.org/file_download.php?file_id=388&type=bug
jpg
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1095 [bareos-core] webui minor always 2019-07-02 21:37 2019-08-01 12:22
Reporter: joergs Platform:  
Assigned To: astoorangi OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 19.2.1  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: webui: when logging in as a user without permission for the ".api" command, the webui shows a wrong and ugly error message
Description: When logging in to the webui as a user without permission for the ".api" command (e.g. without a profile setting), the webui shows:

(1)
Error: API 2 not available on director. Please upgrade to version 15.2.2 or greater and/or compile with jansson support.

Even when using the current webui and Bareos director.

When reloading the page, as better error will be display.
It shows the dashboard, with following hint:

(2)
Sorry, it seems you are not authorized to run this module. If you think this is an error, please contact your local administrator.

It would be good if only the second message were displayed.
Tags:
Steps To Reproduce: Create a user without a profile using bconsole:
* configure add console=test1 password=secret tlsenable=false

Login to the WebUI as user test1
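
For contrast, a console created with a profile whose Command ACL includes ".api" does not trigger the first message; a sketch using the same bconsole command (webui-admin is the profile name from the stock webui configuration):

* configure add console=test2 password=secret profile=webui-admin tlsenable=false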
Additional Information:
Attached Files:
Notes
(0003426)
readonly   
2019-07-09 13:22   
Fix committed to bareos master branch with changesetid 11571.
(0003550)
astoorangi   
2019-08-01 12:22   
Fix committed to bareos bareos-18.2 branch with changesetid 11625.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1059 [bareos-core] webui minor always 2019-02-16 19:16 2019-08-01 12:22
Reporter: billg Platform: Linux  
Assigned To: astoorangi OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 19.2.1  
    Target Version:  
bareos-master: impact:
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Webui spams Apache error_log with bconsole messages
Description: The additional output when logging in to bconsole using the community packages (and builds compiled from source) - the notice that it is the community version - results in Bareos spamming the Apache error_log whenever the webui is in use.

Example of the spam now appearing in Apache's error_log file:
[root@backup ~]# >/var/log/httpd/error_log
[root@backup ~]# tail -f /var/log/httpd/error_log
[Sat Feb 16 12:10:27.343815 2019] [:error] [pid 6028] [client 127.0.0.1:60788] admin, referer: https://backup/
[Sat Feb 16 12:10:27.343879 2019] [:error] [pid 6028] [client 127.0.0.1:60788] console_name: admin, referer: https://backup/
[Sat Feb 16 12:10:27.387309 2019] [:error] [pid 6028] [client 127.0.0.1:60788] 1002\x1ebareos.org build binary\nbareos.org binaries are UNSUPPORTED by bareos.com.\nGet official binaries and vendor support on https://www.bareos.com\nYou are logged in as: admin, referer: https://backup/
[Sat Feb 16 12:10:28.062552 2019] [:error] [pid 5808] [client 127.0.0.1:60820] console_name: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:28.104348 2019] [:error] [pid 5808] [client 127.0.0.1:60820] 1002\x1ebareos.org build binary\nbareos.org binaries are UNSUPPORTED by bareos.com.\nGet official binaries and vendor support on https://www.bareos.com\nYou are logged in as: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:28.206117 2019] [:error] [pid 6024] [client 127.0.0.1:60822] console_name: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:28.248259 2019] [:error] [pid 6024] [client 127.0.0.1:60822] 1002\x1ebareos.org build binary\nbareos.org binaries are UNSUPPORTED by bareos.com.\nGet official binaries and vendor support on https://www.bareos.com\nYou are logged in as: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:29.868490 2019] [:error] [pid 6027] [client 127.0.0.1:60824] console_name: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:29.910374 2019] [:error] [pid 6027] [client 127.0.0.1:60824] 1002\x1ebareos.org build binary\nbareos.org binaries are UNSUPPORTED by bareos.com.\nGet official binaries and vendor support on https://www.bareos.com\nYou are logged in as: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:30.009492 2019] [:error] [pid 6028] [client 127.0.0.1:60788] console_name: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:30.051350 2019] [:error] [pid 6028] [client 127.0.0.1:60788] 1002\x1ebareos.org build binary\nbareos.org binaries are UNSUPPORTED by bareos.com.\nGet official binaries and vendor support on https://www.bareos.com\nYou are logged in as: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:34.021259 2019] [:error] [pid 6028] [client 127.0.0.1:60788] console_name: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:34.062302 2019] [:error] [pid 6028] [client 127.0.0.1:60788] 1002\x1ebareos.org build binary\nbareos.org binaries are UNSUPPORTED by bareos.com.\nGet official binaries and vendor support on https://www.bareos.com\nYou are logged in as: admin, referer: https://backup/dashboard/
[Sat Feb 16 12:10:34.480549 2019] [:error] [pid 6028] [client 127.0.0.1:60788] console_name: admin, referer: https://backup/schedule//
[Sat Feb 16 12:10:34.522273 2019] [:error] [pid 6028] [client 127.0.0.1:60788] 1002\x1ebareos.org build binary\nbareos.org binaries are UNSUPPORTED by bareos.com.\nGet official binaries and vendor support on https://www.bareos.com\nYou are logged in as: admin, referer: https://backup/schedule/
Tags:
Steps To Reproduce: # Install Bareos 17.2.x on Centos7 with Apache

# Update to 18.2.5
wget http://download.bareos.org/bareos/release/latest/CentOS_7/bareos.repo -O /etc/yum.repos.d/bareos.repo
yum update -y
reboot
tail -f /var/log/httpd/error_log

# Use the Bareos web UI
Additional Information:
System Description
Attached Files:
Notes
(0003422)
readonly   
2019-07-09 11:22   
Fix committed to bareos master branch with changesetid 11568.
(0003551)
astoorangi   
2019-08-01 12:22   
Fix committed to bareos bareos-18.2 branch with changesetid 11627.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1013 [bareos-core] director crash have not tried 2018-09-30 14:52 2019-08-01 10:20
Reporter: progserega Platform: amd64  
Assigned To: OS: Debian  
Priority: urgent OS Version: 9.4  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-dir crash after start job from webui
Description: bareos-dir crash after start job from webui.

Tags:
Steps To Reproduce: 1. Add new client and job for him
2. Reload bareos-dir
3. Enter webui
4. start job for backup new client
5. bareos-dir crash
Additional Information: Proxmox 5
lxc container with debian 9

ii bareos-bconsole 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-director 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-webui 17.2.4-15.1 all Backup Archiving Recovery Open Sourced - webui

Linux bareos 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64 GNU/Linux

in dmesg:

[ 6.569348] input: PC Speaker as /devices/platform/pcspkr/input/input5
[ 6.571505] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input6
[ 6.571535] ACPI: Power Button [PWRF]
[ 6.730165] EXT4-fs (vda1): mounting ext2 file system using the ext4 subsystem
[ 6.754072] EXT4-fs (vda1): mounted filesystem without journal. Opts: (null)
[ 6.805224] Adding 2097148k swap on /dev/mapper/debian--vg-swap_1. Priority:-1 extents:1 across:2097148k FS
[ 105.056668] random: crng init done
[2929435.899835] INFO: task jbd2/dm-0-8:158 blocked for more than 120 seconds.
[2929435.909741] Not tainted 4.9.0-6-amd64 #1 Debian 4.9.82-1+deb9u3
[2929435.910333] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[2929435.910696] jbd2/dm-0-8 D 0 158 2 0x00000000
[2929435.910700] ffff950778b2f800 0000000000000000 ffff95073689e0c0 ffff95077fc18940
[2929435.910703] ffffffff93211500 ffffadecc0523b30 ffffffff92c0c649 0000000000000001
[2929435.910705] 00ffffff928fdbc4 ffff95077fc18940 ffff950700000000 ffff95073689e0c0
[2929435.910707] Call Trace:
[2929435.910713] [<ffffffff92c0c649>] ? __schedule+0x239/0x6f0
[2929435.910715] [<ffffffff92c0d2f0>] ? bit_wait+0x50/0x50
[2929435.910717] [<ffffffff92c0cb32>] ? schedule+0x32/0x80
[2929435.910719] [<ffffffff92c0febd>] ? schedule_timeout+0x1dd/0x380
[2929435.910721] [<ffffffff926a9613>] ? update_load_avg+0x73/0x360
[2929435.910722] [<ffffffff926a9613>] ? update_load_avg+0x73/0x360
[2929435.910725] [<ffffffff92658e5a>] ? kvm_clock_get_cycles+0x1a/0x20
[2929435.910736] [<ffffffff926ed74e>] ? ktime_get+0x3e/0xb0
[2929435.910738] [<ffffffff92c0d2f0>] ? bit_wait+0x50/0x50
[2929435.910739] [<ffffffff92c0c3ad>] ? io_schedule_timeout+0x9d/0x100
[2929435.910741] [<ffffffff926b9887>] ? prepare_to_wait+0x57/0x80
[2929435.910743] [<ffffffff92c0d307>] ? bit_wait_io+0x17/0x60
[2929435.910744] [<ffffffff92c0cec5>] ? __wait_on_bit+0x55/0x80
[2929435.910746] [<ffffffff9269e4c6>] ? finish_task_switch+0x76/0x200
[2929435.910748] [<ffffffff92c0d2f0>] ? bit_wait+0x50/0x50
[2929435.910749] [<ffffffff92c0d02e>] ? out_of_line_wait_on_bit+0x7e/0xa0
[2929435.910751] [<ffffffff926b9cf0>] ? wake_atomic_t_function+0x60/0x60
[2929435.910758] [<ffffffffc04bbf85>] ? jbd2_journal_commit_transaction+0xf55/0x17b0 [jbd2]
[2929435.910760] [<ffffffff9269e4c6>] ? finish_task_switch+0x76/0x200
[2929435.910764] [<ffffffffc04c0c02>] ? kjournald2+0xc2/0x260 [jbd2]
[2929435.910765] [<ffffffff926b9c50>] ? prepare_to_wait_event+0xf0/0xf0
[2929435.910768] [<ffffffffc04c0b40>] ? commit_timeout+0x10/0x10 [jbd2]
[2929435.910771] [<ffffffff926970c9>] ? kthread+0xd9/0xf0
[2929435.910773] [<ffffffff92696ff0>] ? kthread_park+0x60/0x60
[2929435.910775] [<ffffffff9267c3d0>] ? SyS_exit_group+0x10/0x10
[2929435.910776] [<ffffffff92c11537>] ? ret_from_fork+0x57/0x70

in syslog:

Sep 30 21:51:15 bareos systemd-timesyncd[370]: Synchronized to time server 94.100.192.29:123 (3.debian.pool.ntp.org).
Sep 30 22:09:01 bareos CRON[10341]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Sep 30 22:09:10 bareos systemd[1]: Starting Clean php session files...
Sep 30 22:09:10 bareos systemd[1]: Started Clean php session files.
Sep 30 22:12:39 bareos puppet-agent[10618]: Applied catalog in 21.57 seconds
Sep 30 22:17:01 bareos CRON[11401]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Sep 30 22:22:06 bareos bareos-dir: BAREOS interrupted by signal 7: BUS error
Sep 30 22:33:34 bareos systemd[1]: Stopping Puppet agent...
Sep 30 22:33:34 bareos puppet-agent[16966]: Caught TERM; exiting
Sep 30 22:33:34 bareos systemd[1]: Stopped Puppet agent.
Sep 30 22:39:01 bareos CRON[12635]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
System Description
Attached Files: bareos-dir.20528.bactrace (1,596 bytes) 2018-09-30 14:52
https://bugs.bareos.org/file_download.php?file_id=309&type=bug
bareos.14463.traceback (685 bytes) 2018-09-30 15:43
https://bugs.bareos.org/file_download.php?file_id=310&type=bug
Notes
(0003125)
progserega   
2018-09-30 15:04   
Starting program: /usr/sbin/bareos-dir -f
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff41bc700 (LWP 13925)]
[New Thread 0x7fffebfff700 (LWP 13927)]
[New Thread 0x7fffeb7fe700 (LWP 13928)]
[New Thread 0x7fffeaffd700 (LWP 13929)]
[New Thread 0x7fffea7fc700 (LWP 13937)]
[New Thread 0x7fffe9ffb700 (LWP 13945)]
[Thread 0x7fffe9ffb700 (LWP 13945) exited]
[Thread 0x7fffea7fc700 (LWP 13937) exited]
[New Thread 0x7fffea7fc700 (LWP 13949)]
[New Thread 0x7fffe9ffb700 (LWP 13950)]
[New Thread 0x7fffe97fa700 (LWP 13960)]
[New Thread 0x7fffe8ff9700 (LWP 13962)]
[New Thread 0x7fffcbfff700 (LWP 13964)]
[Thread 0x7fffea7fc700 (LWP 13949) exited]
[Thread 0x7fffcbfff700 (LWP 13964) exited]
[Thread 0x7fffe8ff9700 (LWP 13962) exited]
[Thread 0x7fffe97fa700 (LWP 13960) exited]
[Thread 0x7fffe9ffb700 (LWP 13950) exited]
[New Thread 0x7fffe9ffb700 (LWP 13968)]
[Thread 0x7fffe9ffb700 (LWP 13968) exited]
[New Thread 0x7fffe9ffb700 (LWP 13969)]
[Thread 0x7fffe9ffb700 (LWP 13969) exited]
[New Thread 0x7fffe9ffb700 (LWP 13971)]
[Thread 0x7fffe9ffb700 (LWP 13971) exited]
[New Thread 0x7fffe9ffb700 (LWP 13972)]
[New Thread 0x7fffcbfff700 (LWP 13974)]
[Thread 0x7fffe9ffb700 (LWP 13972) exited]
[Thread 0x7fffcbfff700 (LWP 13974) exited]
[New Thread 0x7fffcbfff700 (LWP 13977)]
[New Thread 0x7fffe9ffb700 (LWP 13980)]
[Thread 0x7fffe9ffb700 (LWP 13980) exited]
[Thread 0x7fffcbfff700 (LWP 13977) exited]
[New Thread 0x7fffcbfff700 (LWP 13984)]
[New Thread 0x7fffe9ffb700 (LWP 13985)]
[Thread 0x7fffcbfff700 (LWP 13984) exited]
[New Thread 0x7fffcbfff700 (LWP 13989)]
[New Thread 0x7fffe8ff9700 (LWP 13990)]
[Thread 0x7fffe8ff9700 (LWP 13990) exited]
[New Thread 0x7fffe8ff9700 (LWP 14001)]
[New Thread 0x7fffe97fa700 (LWP 14010)]
[New Thread 0x7fffea7fc700 (LWP 14012)]
[New Thread 0x7fffcb7fe700 (LWP 14016)]
[Thread 0x7fffe97fa700 (LWP 14010) exited]
[Thread 0x7fffcb7fe700 (LWP 14016) exited]
[Thread 0x7fffea7fc700 (LWP 14012) exited]
[Thread 0x7fffe8ff9700 (LWP 14001) exited]
[New Thread 0x7fffe8ff9700 (LWP 14032)]
[Thread 0x7fffe8ff9700 (LWP 14032) exited]
[New Thread 0x7fffe8ff9700 (LWP 14086)]
[Thread 0x7fffe8ff9700 (LWP 14086) exited]
[New Thread 0x7fffe8ff9700 (LWP 14144)]
[New Thread 0x7fffcb7fe700 (LWP 14146)]
[New Thread 0x7fffea7fc700 (LWP 14148)]
[Thread 0x7fffea7fc700 (LWP 14148) exited]
[Thread 0x7fffcb7fe700 (LWP 14146) exited]
[New Thread 0x7fffea7fc700 (LWP 14161)]
[Thread 0x7fffea7fc700 (LWP 14161) exited]
[Thread 0x7fffe8ff9700 (LWP 14144) exited]
[New Thread 0x7fffe8ff9700 (LWP 14191)]
[Thread 0x7fffe8ff9700 (LWP 14191) exited]

Thread 5 "bareos-dir" received signal SIGUSR2, User defined signal 2.
[Switching to Thread 0x7fffeaffd700 (LWP 13929)]
0x00007ffff56f57dd in nanosleep () from /lib/x86_64-linux-gnu/libpthread.so.0
(gdb) bt full
#0 0x00007ffff56f57dd in nanosleep () from /lib/x86_64-linux-gnu/libpthread.so.0
No symbol table info available.
#1 0x00007ffff6ea51f4 in bmicrosleep(int, int) () from /usr/lib/bareos/libbareos-17.2.4.so
No symbol table info available.
#2 0x00007ffff6ecf0c8 in register_watchdog(s_watchdog_t*) () from /usr/lib/bareos/libbareos-17.2.4.so
No symbol table info available.
#3 0x00007ffff6ea8107 in start_thread_timer(JCR*, unsigned long, unsigned int) () from /usr/lib/bareos/libbareos-17.2.4.so
No symbol table info available.
#4 0x00007ffff6ea37c0 in BSOCK_TCP::connect(JCR*, int, long, long, char const*, char*, char*, int, bool) () from /usr/lib/bareos/libbareos-17.2.4.so
No symbol table info available.
#5 0x00005555555a74cd in ?? ()
No symbol table info available.
#6 0x00005555555aa48c in ?? ()
No symbol table info available.
#7 0x00007ffff6eb5d9f in lmgr_thread_launcher () from /usr/lib/bareos/libbareos-17.2.4.so
No symbol table info available.
#8 0x00007ffff56ec494 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
No symbol table info available.
#9 0x00007ffff498dacf in clone () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
(gdb)
(0003126)
progserega   
2018-09-30 15:28   
This happens when the client resource in bareos-dir contains a wrong IP address for the client, which prevents bareos-dir from connecting to it.
(0003127)
progserega   
2018-09-30 15:37   
After fixing the client's IP the job runs successfully, but then the director crashes again:

Sep 30 23:07:56 bareos systemd[1]: Starting Bareos Director Daemon service...
Sep 30 23:07:56 bareos systemd[1]: bareos-director.service: PID file /var/lib/bareos/bareos-dir.9101.pid not readable (yet?) after start: No such file or directory
Sep 30 23:07:56 bareos systemd[1]: Started Bareos Director Daemon service.
Sep 30 23:24:56 bareos bareos-dir[14463]: bsock_tcp.c:407 Write error sending -1 bytes to client:127.0.0.1:9101: ERR=Broken pipe
Sep 30 23:27:01 bareos bareos-dir[14463]: bsock_tcp.c:407 Write error sending -1 bytes to client:127.0.0.1:9101: ERR=Broken pipe
Sep 30 23:27:56 bareos bareos-dir[14463]: BAREOS interrupted by signal 7: BUS error
(0003128)
progserega   
2018-09-30 15:46   
Link to bareos-dir.core.14463.gz
https://yadi.sk/d/VN-HBejjny-Dhw
(0003549)
arogge   
2019-08-01 10:20   
If you still want this investigated, we will need a meaningful traceback.
For this you need to install the debug-packages (bareos-dbg on Debian) in addition to gdb.
The traceback file will then contain a lot more information and will allow us to investigate the issue further.
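
On Debian that boils down to something like the following (a sketch; the package name and traceback path are the ones mentioned in this tracker):

apt-get install bareos-dbg gdb
# reproduce the crash, then attach the newest generated traceback:
ls -lt /var/lib/bareos/*.traceback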

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1073 [bareos-core] director major always 2019-04-05 12:29 2019-08-01 10:14
Reporter: bratkartoffel Platform: Linux  
Assigned To: arogge OS: Alpine  
Priority: normal OS Version: 3.8+  
Status: resolved Product Version: 17.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 19.2.1  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: none
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact: yes
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bsmtp and bareos-dir crashing with segfaults due to undefined behaviour in pthread usage
Description: Using Alpine 3.8+ (with musl libc) bsmtp crashes when invoked and bareos-dir crashes when running a job or getting a connection with bconsole.
Bug at alpine: https://bugs.alpinelinux.org/issues/10156

The problem may still be present in the latest Bareos release.
Proposed patches are attached. I'm not that deep into C programming and I'm absolutely not sure my checks are correct. Please adjust as needed.

Thanks,
Simon
Tags:
Steps To Reproduce: Simple reproduction of bsmtp crash:
---
$> docker run -it alpine:3.9 /bin/ash
/ # apk add bareos
[...]
/ # bsmtp
Segmentation fault (core dumped)
--
Additional Information: bsmtp:
There should be a thread key set called jcr_key, but its initial value is undefined. This is passed into tss_get which is a C11 wrapper for pthread_getspecific.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_getspecific.html
-> The effect of calling pthread_getspecific() or pthread_setspecific() with a key value not obtained from pthread_key_create() or after key has been deleted with pthread_key_delete() is undefined.
---
0x00007ffff7fbe2c3 in tss_get () from /lib/ld-musl-x86_64.so.1
(gdb) bt
#0 0x00007ffff7fbe2c3 in tss_get () from /lib/ld-musl-x86_64.so.1
#1 0x00007ffff7f2d94f in get_jobid_from_tsd () at jcr.c:739
#2 0x00007ffff7f34279 in p_msg (file=0x555555558006 "bsmtp.c", line=364,
    level=0, fmt=0x555555558568 "Fatal error: no recipient given.\n")
    at message.c:1343
#3 0x00005555555564d5 in main (argc=0, argv=<optimized out>) at bsmtp.c:364
---
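
A minimal sketch of the usual way to avoid this class of bug (this is an illustration of the pattern, not the actual Bareos fix): create the key exactly once before any lookup, so pthread_getspecific() is never called with an uninitialized key.

/* Sketch: initialize the TSD key once, before any thread reads it. */
#include <pthread.h>

static pthread_key_t jcr_key;
static pthread_once_t jcr_key_once = PTHREAD_ONCE_INIT;

static void create_jcr_key(void) { pthread_key_create(&jcr_key, NULL); }

void *get_jcr_from_tsd_safe(void)
{
    pthread_once(&jcr_key_once, create_jcr_key); /* key is valid after this */
    return pthread_getspecific(jcr_key);         /* NULL if nothing was set */
}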

bareos-dir (when bconsole connects):
The issue is that the code tries to do a pthread_detach() on an already detached thread. This leads to undefined behaviour and a crash.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_detach.html
-> The behavior is undefined if the value specified by the thread argument to pthread_detach() does not refer to a joinable thread.
---
(gdb) bt
#0 __pthread_timedjoin_np (t=0x7f9c93e91b10, res=res@entry=0x0, at=at@entry=0x0) at src/thread/pthread_join.c:11
#1 0x00007f9c95899383 in __pthread_join (t=<optimized out>, res=res@entry=0x0) at src/thread/pthread_join.c:24
#2 0x00007f9c9589918c in __pthread_detach (t=<optimized out>) at src/thread/pthread_detach.c:9
#3 __pthread_detach (t=<optimized out>) at src/thread/pthread_detach.c:4
#4 0x00005559ea34ebd7 in handle_UA_client_request (user=user@entry=0x5559ea99d448) at ua_server.c:76
#5 0x00005559ea329325 in handle_connection_request (arg=0x5559ea99d448) at socket_server.c:98
#6 0x00007f9c957d5bc6 in workq_server (arg=arg@entry=0x5559ea38eae0 <socket_workq>) at workq.c:336
#7 0x00007f9c957c0116 in lmgr_thread_launcher (x=0x5559ea99d588) at lockmgr.c:928
#8 0x00007f9c95898c4c in start (p=<optimized out>) at src/thread/pthread_create.c:144
#9 0x00007f9c9589acf2 in __clone () at src/thread/x86_64/clone.s:22
Backtrace stopped: frame did not save the PC
---

bareos-dir (when job starts):
The issue is that the code tries to do a pthread_detach() on an already detached thread. This leads to undefined behaviour and a crash.
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_detach.html
-> The behavior is undefined if the value specified by the thread argument to pthread_detach() does not refer to a joinable thread.
---
(gdb) bt
#0 __pthread_timedjoin_np (t=0x7f7daf1ffb10, res=res@entry=0x0, at=at@entry=0x0) at src/thread/pthread_join.c:11
#1 0x00007f7db0c2a383 in __pthread_join (t=<optimized out>, res=res@entry=0x0) at src/thread/pthread_join.c:24
#2 0x00007f7db0c2a18c in __pthread_detach (t=<optimized out>) at src/thread/pthread_detach.c:9
#3 __pthread_detach (t=<optimized out>) at src/thread/pthread_detach.c:4
#4 0x000055e353dce058 in job_thread (arg=0x55e35419f3e8) at job.c:423
#5 0x000055e353dd2f2b in jobq_server (arg=arg@entry=0x55e353e3f880 <job_queue>) at jobq.c:485
#6 0x00007f7db0b51116 in lmgr_thread_launcher (x=0x55e3541a1248) at lockmgr.c:928
#7 0x00007f7db0c29c4c in start (p=<optimized out>) at src/thread/pthread_create.c:144
#8 0x00007f7db0c2bcf2 in __clone () at src/thread/x86_64/clone.s:22
Backtrace stopped: frame did not save the PC
---
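
A minimal sketch of one way to avoid the double detach (not the proposed patch itself): create the worker thread detached from the start, so no later pthread_detach() call, and no risk of detaching twice, is needed.

/* Sketch: a thread created detached must never be passed to pthread_detach(). */
#include <pthread.h>
#include <stddef.h>

static void *worker(void *arg) { (void)arg; return NULL; }

int start_detached_worker(pthread_t *tid)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    int rc = pthread_create(tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    return rc; /* the thread reaps itself on exit */
}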
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: fix-bsmtp-segfault.patch (1,449 bytes) 2019-04-05 12:29
https://bugs.bareos.org/file_download.php?file_id=359&type=bug
pthread-double-detach.patch (1,238 bytes) 2019-04-05 12:29
https://bugs.bareos.org/file_download.php?file_id=360&type=bug
Notes
(0003329)
arogge   
2019-04-11 09:42   
First of all: thank you very much for putting this amount of effort into this.

To get your patches into Bareos easily, you should clone the git repository, apply your changes against master and then either create a github pull-request (this is the preferred way) or git format-patch the commits and send them to the bareos-devel mailing-list.
see also: https://docs.bareos.org/DeveloperGuide/generaldevel.html#contributions

The director-patch will need work. It is the exact same code twice, so it should be moved into its own function.
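
A sketch of that workflow (the branch name is a placeholder):

git clone https://github.com/bareos/bareos.git
cd bareos
git checkout -b fix-pthread-detach
# apply the changes and commit, then either open a pull request on github or:
git format-patch origin/master
# ...and send the generated patch files to the bareos-devel mailing-list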
(0003333)
bratkartoffel   
2019-04-11 18:18   
Thank you for taking a look at the patches. Yes, I'll create a PR at github; I just wanted to make sure the patches and the approach are correct.
(0003342)
bratkartoffel   
2019-04-18 15:39   
PR open: https://github.com/bareos/bareos/pull/169
(0003436)
arogge   
2019-07-10 17:46   
Can you please take a look at my branch (as I already suggested in your PR)? I guess we can merge the change, but I cannot test on Alpine.
(0003439)
bratkartoffel   
2019-07-11 10:36   
Thanks for your change, I've seen the PR at github.
I'll compile and install it on Saturday and test it next week. As far as I can see you haven't changed that much, so I expect it to work without problems.
(0003441)
arogge   
2019-07-11 14:08   
The new PR is https://github.com/bareos/bareos/pull/220
Most changes concern CMake.
(0003461)
bratkartoffel   
2019-07-15 09:35   
The first batch of backups ran without a problem yesterday. I'll play with bconsole over the next days and give you feedback here,
but as far as I can see it looks very good.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1020 [bareos-core] webui major always 2018-10-10 09:37 2019-07-31 18:03
Reporter: linkstat Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: assigned Product Version: 18.2.4-rc1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Webui can not restore a client, if it contains spaces in its name
Description: All my clients have names with spaces in them, like "client-fd using Catalog-XXX". Handled correctly (i.e., enclosing the name in quotation marks, or escaping the space with \), this has never been a problem... until now. Webui can even perform backup tasks (previously defined in the configuration files) and has had no problems with the spaces. But when it came time to restore something, it simply does not seem able to handle strings that contain spaces: apparently it cuts the string at the first space found (as you can see in the attached image).
Tags:
Steps To Reproduce: Define a client whose name contains spaces, such as "hostname-fd Testing Client".
Try to restore a backup from Webui to that client (it does not matter whether the backup was originally made on that client or whether the newly defined client is a new destination for restoring a backup made on another client).
Webui will fail, saying "invalid client argument: hostname-fd". As you can see, Webui "cuts" the client's name at the first space, and since there is no client named hostname-fd, the task fails; or worse, if there is a client whose name matches the string before the first space, Webui will restore to the wrong client.
Additional Information: bconsole has no problem with clients whose names contain spaces (provided, of course, that the spaces are correctly handled by the human operator typing the commands, either by enclosing the name in quotation marks or by escaping the spaces with a backslash).
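
For example, in bconsole either form works (the client name is hypothetical):

restore client="hostname-fd Testing Client"
restore client=hostname-fd\ Testing\ Client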
System Description
Attached Files: Bareos - Can not restore when a client name has spaces in their name.jpg (139,884 bytes) 2018-10-10 09:37
https://bugs.bareos.org/file_download.php?file_id=311&type=bug
jpg
Notes
(0003546)
linkstat   
2019-07-31 18:03   
Hello!

Any news regarding this problem? (Or any ideas on how to patch it temporarily so that the webui can be used in the case described?)
Sometimes it is tedious to use bconsole all the time instead of the webui...

Regards!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
985 [bareos-core] api feature have not tried 2018-07-15 19:38 2019-07-31 15:57
Reporter: enikq Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: acknowledged Product Version: 17.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos Configuration in JSON format
Description: I need the bareos-dir, bareos-fd and bareos-sd configuration in JSON output.
I noticed you can set the bconsole output to JSON with `.api 2`, but when doing so `show all` still outputs the configuration in the standard format and not in JSON.

Bacula has a CLI tool to output the config in JSON, see: https://fossies.org/linux/bacula/src/dird/bdirjson.c

Does Bareos have such an implementation hidden somewhere too? If not, could you add it?
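
For reference, `.api 2` does affect query commands such as list, which then return JSON; it is only the configuration output of show that stays in the standard format (bconsole sketch, output abbreviated):

*.api 2
*list jobs
(JSON result)
*show all
(still the plain-text configuration)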
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003544)
arogge   
2019-07-31 15:57   
Updated this to be a feature request.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1103 [bareos-core] storage daemon text N/A 2019-07-22 11:19 2019-07-31 15:53
Reporter: Schroeffu Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: low OS Version: 18.04  
Status: feedback Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos with warnings of bareos-fd Incremental
Description: Since upgrading Bareos 17 to 18.2.5 on Ubuntu 18.04, I get two notifications with every daily backup:

1. 26-Jun 21:00 bareos-sd JobId 3786: Error: Bareos cannot write on disk Volume "Incremental-0017" because: The sizes do not match! Volume=641 Catalog=625999556

2. 27-Jun 00:20 bareos-sd JobId 3788: Warning: For Volume "Incremental-0103":
The sizes do not match! Volume=1073696951 Catalog=18148 Correcting Catalog
27-Jun 00:20 bareos-sd JobId 3788: Job vmlxgitlab01bs-Default.2019-06-27_00.20.00_05 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
    Storage: "FileStorage-vmbaksrv01bs" (/bareos-backup-dir)
    Pool: Incremental
    Media type: File

It looks like only a warning, because it says "Correcting Catalog" in the messages. A test restore of something on the host was also OK.

I can't stop these messages, so how can I fix this for the future so that I don't get them anymore?
Tags:
Steps To Reproduce: Not sure. I just upgraded to the latest Bareos version available for Ubuntu 18.04; the issue has existed since then.
Additional Information:
Attached Files:
Notes
(0003516)
Schroeffu   
2019-07-22 11:29   
Ignore the extended error message in 2.; I copied the wrong one (the appendable volumes really were full).
The daily warning I want to resolve is:

20-Jul 00:20 bareos-sd JobId 4077: Warning: For Volume "Incremental-0091":
The sizes do not match! Volume=1073702232 Catalog=641 Correcting Catalog
(0003518)
arogge   
2019-07-22 13:02   
If you get "The sizes do not match!" this means the volume information in the catalog has not been updated.
This can happen when a previous job has failed miserably.
Does it happen with different volumes or just with Incremental-0017?
Can you provide the joblog for the job that wrote to the same volume before (list joblog)?
(0003533)
Schroeffu   
2019-07-30 09:59   
I see what you mean. It seems to be the SelfTest fileset job backup-bareos-fd, which runs at 21:00 beforehand:

*list joblog jobid=4185
 2019-07-29 21:00:02 bareos-dir JobId 4185: Start Backup JobId 4185, Job=backup-bareos-fd.2019-07-29_21.00.00_45
 2019-07-29 21:00:02 bareos-dir JobId 4185: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:00:03 bareos-dir JobId 4185: Using Device "FileStorage" to write.
 2019-07-29 21:00:03 bareos-dir JobId 4185: Connected Client: bareos-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:00:03 bareos-dir JobId 4185: Handshake: Immediate TLS
 2019-07-29 21:00:03 bareos-dir JobId 4185: Encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:00:03 vmlxbareos01bs-fd JobId 4185: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:00:03 bareos-sd JobId 4185: Warning: Volume "Incremental-0163" not on device "FileStorage" (/var/lib/bareos/storage).
 2019-07-29 21:00:03 vmlxbareos01bs-fd JobId 4185: Extended attribute support is enabled
 2019-07-29 21:00:03 vmlxbareos01bs-fd JobId 4185: ACL support is enabled
 2019-07-29 21:00:03 bareos-sd JobId 4185: Marking Volume "Incremental-0163" in Error in Catalog.
 2019-07-29 21:00:03 bareos-sd JobId 4185: Warning: Volume "Incremental-0163" not on device "FileStorage" (/var/lib/bareos/storage).
 2019-07-29 21:00:03 bareos-sd JobId 4185: Marking Volume "Incremental-0163" in Error in Catalog.
 2019-07-29 21:00:03 bareos-sd JobId 4185: Warning: stored/mount.cc:270 Open device "FileStorage" (/var/lib/bareos/storage) Volume "Incremental-0163" failed: ERR=stored/dev.cc:731 Could not open: /var/lib/bareos/storage/Incremental-0163, ERR=No such file or directory

 2019-07-29 21:00:03 bareos-dir JobId 4185: There are no more Jobs associated with Volume "Incremental-0020". Marking it purged.
 2019-07-29 21:00:03 bareos-dir JobId 4185: All records pruned from Volume "Incremental-0020"; marking it "Purged"
 2019-07-29 21:00:03 bareos-dir JobId 4185: Recycled volume "Incremental-0020"
 2019-07-29 21:00:03 bareos-sd JobId 4185: Recycled volume "Incremental-0020" on device "FileStorage" (/var/lib/bareos/storage), all previous data lost.
 2019-07-29 21:00:03 bareos-sd JobId 4185: Releasing device "FileStorage" (/var/lib/bareos/storage).
 2019-07-29 21:00:03 bareos-sd JobId 4185: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
 2019-07-29 21:00:03 bareos-dir JobId 4185: Bareos bareos-dir 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default ubuntu Ubuntu 18.04 LTS
  JobId: 4185
  Job: backup-bareos-fd.2019-07-29_21.00.00_45
  Backup Level: Incremental, since=2019-07-27 21:00:02
  Client: "bareos-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,ubuntu,Ubuntu 18.04 LTS,xUbuntu_18.04,x86_64
  FileSet: "SelfTest" 2018-05-04 16:53:55
  Pool: "Incremental" (From Job IncPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Job resource)
  Scheduled time: 29-Jul-2019 21:00:00
  Start time: 29-Jul-2019 21:00:03
  End time: 29-Jul-2019 21:00:03
  Elapsed time: 0 secs
  Priority: 10
  FD Files Written: 0
  SD Files Written: 0
  FD Bytes Written: 0 (0 B)
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s): Incremental-0020
  Volume Session Id: 84
  Volume Session Time: 1563781778
  Last Volume Bytes: 641 (641 B)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: Backup OK

*

But /var/lib/bareos/storage looks writable to me; it has content from another backup job (BackupCatalog, fileset Catalog):

ls -rtlha /var/lib/bareos/storage/
(...)
-rw-r----- 1 bareos bareos 641 Jul 16 21:00 Incremental-0213
-rw-r----- 1 bareos bareos 10G Jul 16 21:10 Full-0083
-rw-r----- 1 bareos bareos 641 Jul 17 21:00 Incremental-0137
-rw-r----- 1 bareos bareos 641 Jul 18 21:00 Incremental-0088
-rw-r----- 1 bareos bareos 641 Jul 19 21:00 Incremental-0091
-rw-r----- 1 bareos bareos 705K Jul 22 21:00 Incremental-0010
-rw-r----- 1 bareos bareos 641 Jul 23 21:00 Incremental-0264
-rw-r----- 1 bareos bareos 641 Jul 24 21:00 Incremental-0099
-rw-r----- 1 bareos bareos 641 Jul 25 21:00 Incremental-0156
drwxr-x--- 2 bareos bareos 4.0K Jul 26 21:00 .
-rw-r----- 1 bareos bareos 641 Jul 26 21:00 Incremental-0018
-rw-r----- 1 bareos bareos 836K Jul 27 21:00 Differential-0164
-rw-r----- 1 bareos bareos 641 Jul 29 21:00 Incremental-0020
-rw-r----- 1 bareos bareos 7.8G Jul 29 21:10 Full-0084

Log of this job:

*list joblog jobid=4186
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2019-07-29 21:10:02 bareos-dir JobId 4186: shell command: run BeforeJob "/usr/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"
 2019-07-29 21:10:32 bareos-dir JobId 4186: Start Backup JobId 4186, Job=BackupCatalog.2019-07-29_21.10.00_46
 2019-07-29 21:10:32 bareos-dir JobId 4186: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:10:32 bareos-dir JobId 4186: Using Device "FileStorage" to write.
 2019-07-29 21:10:33 bareos-dir JobId 4186: Connected Client: bareos-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:10:33 bareos-dir JobId 4186: Handshake: Immediate TLS
 2019-07-29 21:10:33 bareos-dir JobId 4186: Encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:10:33 vmlxbareos01bs-fd JobId 4186: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-29 21:10:33 vmlxbareos01bs-fd JobId 4186: Extended attribute support is enabled
 2019-07-29 21:10:33 vmlxbareos01bs-fd JobId 4186: ACL support is enabled
 2019-07-29 21:10:33 bareos-sd JobId 4186: Volume "Full-0084" previously written, moving to end of data.
 2019-07-29 21:10:33 bareos-sd JobId 4186: Ready to append to end of Volume "Full-0084" size=7481695337
 2019-07-29 21:10:49 bareos-sd JobId 4186: Releasing device "FileStorage" (/var/lib/bareos/storage).
 2019-07-29 21:10:49 bareos-sd JobId 4186: Elapsed time=00:00:16, Transfer rate=49.89 M Bytes/second
 2019-07-29 21:10:49 bareos-dir JobId 4186: Insert of attributes batch table with 81 entries start
 2019-07-29 21:10:49 bareos-dir JobId 4186: Insert of attributes batch table done
 2019-07-29 21:10:49 bareos-dir JobId 4186: Bareos bareos-dir 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default ubuntu Ubuntu 18.04 LTS
  JobId: 4186
  Job: BackupCatalog.2019-07-29_21.10.00_46
  Backup Level: Full
  Client: "bareos-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,ubuntu,Ubuntu 18.04 LTS,xUbuntu_18.04,x86_64
  FileSet: "Catalog" 2018-05-04 21:10:00
  Pool: "Full" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Job resource)
  Scheduled time: 29-Jul-2019 21:10:00
  Start time: 29-Jul-2019 21:10:33
  End time: 29-Jul-2019 21:10:49
  Elapsed time: 16 secs
  Priority: 11
  FD Files Written: 81
  SD Files Written: 81
  FD Bytes Written: 798,320,279 (798.3 MB)
  SD Bytes Written: 798,330,193 (798.3 MB)
  Rate: 49895.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s): Full-0084
  Volume Session Id: 85
  Volume Session Time: 1563781778
  Last Volume Bytes: 8,280,620,204 (8.280 GB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: Backup OK

 2019-07-29 21:10:49 bareos-dir JobId 4186: shell command: run AfterJob "/usr/lib/bareos/scripts/delete_catalog_backup"
You have messages.

And then the other warning job is next:


*list joblog jobid=4187
 2019-07-30 00:20:01 bareos-dir JobId 4187: Start Backup JobId 4187, Job=vmlxgitlab01bs-Default.2019-07-30_00.20.00_47
 2019-07-30 00:20:01 bareos-dir JobId 4187: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-30 00:20:01 bareos-dir JobId 4187: Using Device "FileStorage-vmbaksrv01bs" to write.
 2019-07-30 00:20:01 bareos-dir JobId 4187: Connected Client: vmlxgitlab01bs-fd at gitlab.kuechenaktuell.de:9102, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-30 00:20:01 bareos-dir JobId 4187: Handshake: Immediate TLS
 2019-07-30 00:20:01 bareos-dir JobId 4187: Encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-30 00:20:01 vmlxgitlab01bs-fd JobId 4187: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
 2019-07-30 00:20:01 vmlxgitlab01bs-fd JobId 4187: Extended attribute support is enabled
 2019-07-30 00:20:01 vmlxgitlab01bs-fd JobId 4187: ACL support is enabled
 2019-07-30 00:20:01 bareos-sd JobId 4187: Volume "Incremental-0020" previously written, moving to end of data.
 2019-07-30 00:20:01 bareos-sd JobId 4187: Warning: For Volume "Incremental-0020":
The sizes do not match! Volume=1073737890 Catalog=641
Correcting Catalog
 2019-07-30 00:20:30 bareos-sd JobId 4187: User defined maximum volume capacity 1,073,741,824 exceeded on device "FileStorage-vmbaksrv01bs" (/bareos-backup-dir).
 2019-07-30 00:20:30 bareos-sd JobId 4187: End of medium on Volume "Incremental-0020" Bytes=1,073,737,890 Blocks=1 at 30-Jul-2019 00:20.
 2019-07-30 00:20:30 bareos-dir JobId 4187: There are no more Jobs associated with Volume "Incremental-0038". Marking it purged.
 2019-07-30 00:20:30 bareos-dir JobId 4187: All records pruned from Volume "Incremental-0038"; marking it "Purged"
 2019-07-30 00:20:30 bareos-dir JobId 4187: Recycled volume "Incremental-0038"
 2019-07-30 00:20:30 bareos-sd JobId 4187: Recycled volume "Incremental-0038" on device "FileStorage-vmbaksrv01bs" (/bareos-backup-dir), all previous data lost.
 2019-07-30 00:20:30 bareos-sd JobId 4187: New volume "Incremental-0038" mounted on device "FileStorage-vmbaksrv01bs" (/bareos-backup-dir) at 30-Jul-2019 00:20.
 2019-07-30 00:21:02 vmlxgitlab01bs-fd JobId 4187: /var/lib/lxcfs is a different filesystem. Will not descend from / into it.
 2019-07-30 00:21:18 vmlxgitlab01bs-fd JobId 4187: /boot/efi is a different filesystem. Will not descend from / into it.
 2019-07-30 00:21:20 bareos-sd JobId 4187: Releasing device "FileStorage-vmbaksrv01bs" (/bareos-backup-dir).
 2019-07-30 00:21:20 bareos-sd JobId 4187: Elapsed time=00:01:19, Transfer rate=3.982 M Bytes/second
 2019-07-30 00:21:20 bareos-dir JobId 4187: Insert of attributes batch table with 1702 entries start
 2019-07-30 00:21:20 bareos-dir JobId 4187: Insert of attributes batch table done
 2019-07-30 00:21:20 bareos-dir JobId 4187: Bareos bareos-dir 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default ubuntu Ubuntu 18.04 LTS
  JobId: 4187
  Job: vmlxgitlab01bs-Default.2019-07-30_00.20.00_47
  Backup Level: Incremental, since=2019-07-29 00:20:00
  Client: "vmlxgitlab01bs-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,ubuntu,Ubuntu 18.04 LTS,xUbuntu_18.04,x86_64
  FileSet: "LinuxDefault" 2018-11-28 00:20:00
  Pool: "Incremental" (From Job IncPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File-vmbaksrv01bs" (From Job resource)
  Scheduled time: 30-Jul-2019 00:20:00
  Start time: 30-Jul-2019 00:20:01
  End time: 30-Jul-2019 00:21:20
  Elapsed time: 1 min 19 secs
  Priority: 10
  FD Files Written: 1,702
  SD Files Written: 1,702
  FD Bytes Written: 314,373,356 (314.3 MB)
  SD Bytes Written: 314,635,737 (314.6 MB)
  Rate: 3979.4 KB/s
  Software Compression: 57.3 % (lz4)
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s): Incremental-0038
  Volume Session Id: 86
  Volume Session Time: 1563781778
  Last Volume Bytes: 315,005,362 (315.0 MB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: Backup OK

 2019-07-30 00:21:20 bareos-dir JobId 4187: Begin pruning Jobs older than 6 months .
 2019-07-30 00:21:20 bareos-dir JobId 4187: No Jobs found to prune.
 2019-07-30 00:21:20 bareos-dir JobId 4187: Begin pruning Files.
 2019-07-30 00:21:20 bareos-dir JobId 4187: Pruned Files from 1 Jobs for client vmlxgitlab01bs-fd from catalog.
 2019-07-30 00:21:20 bareos-dir JobId 4187: End auto prune.


--

Do you see any reason for that error? I am confused as to why it could happen.

 2019-07-29 21:00:03 bareos-sd JobId 4185: Warning: stored/mount.cc:270 Open device "FileStorage" (/var/lib/bareos/storage) Volume "Incremental-0163" failed: ERR=stored/dev.cc:731 Could not open: /var/lib/bareos/storage/Incremental-0163, ERR=No such file or directory
(0003538)
arogge   
2019-07-31 09:40   
The error is obvious: the file Bareos wants to use does not exist. While the volume is still in the catalog, it no longer exists on disk.

Concerning the mismatching size for the volume: I guess you have found a (minor) bug.
The SelfTest job does recycle the volume, but does not write to it (because there are no files to back up).
This means the volume has been truncated and relabeled on disk (by the recycle step), but its catalog information has not been updated (because the job did not write to it).

Is that the same behaviour you're seeing every night?
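
If you want to see the mismatch directly, you can compare the catalog with the on-disk file (a sketch; volume name and path are taken from the joblog above):

*list volume=Incremental-0020
(VolBytes as recorded in the catalog)

ls -l /var/lib/bareos/storage/Incremental-0020
(actual size on disk)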
(0003541)
Schroeffu   
2019-07-31 10:54   
Thank you very much for the explanation, that makes sense. Yes, exactly: this is the same behaviour I get every night.

For example and comparison, below is one of the alerts I already got in the middle of June:

(Subject: Bareos: Backup OK of bareos-fd Incremental)
18-Jun 21:00 bareos-sd JobId 3689: Warning: Volume "Incremental-0006" not on device "FileStorage" (/var/lib/bareos/storage).
18-Jun 21:00 bareos-sd JobId 3689: Warning: Volume "Incremental-0006" not on device "FileStorage" (/var/lib/bareos/storage).
18-Jun 21:00 bareos-sd JobId 3689: Warning: stored/mount.cc:270 Open device "FileStorage" (/var/lib/bareos/storage) Volume "Incremental-0006" failed: ERR=stored/dev.cc:731 Could not open: /var/lib/bareos/storage/Incremental-0006, ERR=No such file or directory

(Subject: Bareos: Backup OK of vmlxgitlab01bs-fd Incremental)
19-Jun 00:20 bareos-sd JobId 3691: Warning: For Volume "Incremental-0089":
The sizes do not match! Volume=1073737134 Catalog=641 Correcting Catalog
(0003542)
arogge   
2019-07-31 15:53   
So the question is: why are your volume files disappearing?
But I guess that's not a Bareos issue, so I'd like to close the bug if that is OK with you.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
733 [bareos-core] director minor always 2016-12-02 19:01 2019-07-30 16:48
Reporter: hk298 Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: new Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Plugin Options (mssqlvdi) in Job definitions
Description: The manual states that there is a field called "FD Plugin Options" that can be used in the job configuration. I'm trying to specify options for the mssqlvdi plugin. However, when I run a restore job from the console, I always see plugin options = *none*.

The SQL authentication options (serveraddress, instance, database, user, password) are read properly from the file set configuration that I specify as part of the job. However, options like "replace=yes" are ignored when I add them in the file set, and they don't seem to be read from the job configuration either. You have to type them in the console using the modify function. This is slightly annoying, and it becomes a real issue when you try to restore from the webui because in that case there's no way to specify, for example, the "replace" option.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
933 [bareos-core] director major always 2018-03-22 12:44 2019-07-29 17:31
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: acknowledged Product Version: 16.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error: fd_plugins.c:1237 Unbalanced call to createFile=0 0
Description: 2018-03-22 12:27:28 example-dir JobId 15968: Error: Bareos example-dir 16.2.7 (09Oct17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 15968
 Job: RestoreFiles.2018-03-22_12.26.32_04
 Restore Client: customer.xxx.com
 Start time: 22-Mar-2018 12:26:34
 End time: 22-Mar-2018 12:27:28
 Elapsed time: 54 secs
 Files Expected: 126,634
 Files Restored: 126,601
 Bytes Restored: 4,869,318,236
 Rate: 90172.6 KB/s
 FD Errors: 1
 FD termination status: Fatal Error
 SD termination status: OK
 Termination: *** Restore Error ***

 
2018-03-22 12:27:28 example-dir JobId 15968: Begin pruning Files.
 
2018-03-22 12:27:28 example-dir JobId 15968: No Files found to prune.
 
2018-03-22 12:27:28 example-dir JobId 15968: End auto prune.

 
2018-03-22 12:27:27 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0447"
 
2018-03-22 12:27:27 bareos-sd JobId 15968: Ready to read from volume "vol-cons-0486" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:27:27 bareos-sd JobId 15968: Forward spacing Volume "vol-cons-0486" to file:block 0:218.
 
2018-03-22 12:27:27 customer.xxx.com JobId 15968: Fatal error: fd_plugins.c:1237 Unbalanced call to createFile=0 0
 
2018-03-22 12:27:26 bareos-sd JobId 15968: End of Volume at file 8 on device "example-incr" (/home/example/bareos), Volume "vol-cons-0513"
 
2018-03-22 12:27:26 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0447" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:27:26 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0447" to file:block 2:769145852.
 
2018-03-22 12:26:38 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0446"
 
2018-03-22 12:26:38 bareos-sd JobId 15968: Ready to read from volume "vol-cons-0513" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:38 bareos-sd JobId 15968: Forward spacing Volume "vol-cons-0513" to file:block 7:417218068.
 
2018-03-22 12:26:37 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0456"
 
2018-03-22 12:26:37 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0446" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:37 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0446" to file:block 2:786923100.
 
2018-03-22 12:26:36 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0423"
 
2018-03-22 12:26:36 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0456" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:36 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0456" to file:block 2:691729567.
 
2018-03-22 12:26:35 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0423" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:35 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0423" to file:block 2:606078578.
 
2018-03-22 12:26:34 example-dir JobId 15968: Start Restore Job RestoreFiles.2018-03-22_12.26.32_04
 
2018-03-22 12:26:34 example-dir JobId 15968: Using Device "example-incr" to read.
Tags:
Steps To Reproduce: Restore files from an always incremental schedule, including a mysql backup

We tried separate restores afterwards and this works without errors (so first a restore of the files and then a separate restore job for the database).
Additional Information:
System Description
Attached Files:
Notes
(0002960)
joergs   
2018-04-05 12:54   
So, what plugins have you configured? This error should only occur if a plugin is used.
(0002964)
hostedpower   
2018-04-05 13:44   
Hello Joergs,

We use the MySQL plugin of course :)
(0002965)
joergs   
2018-04-05 16:47   
Well, there are still multiple MySQL plugins and different possible configurations.
(0002966)
hostedpower   
2018-04-05 16:50   
We used this install procedure:


cd /tmp
git clone https://github.com/bareos/bareos-contrib
cp -R ./bareos-contrib/fd-plugins/mysql-python/*.py /usr/lib/bareos/plugins/
rm -rf bareos-contrib/
service bareos-fd restart

It's strange that separate restores work perfectly; it's only when we combine the SQL dump file restore and regular files that we run into issues.

Please let me know exactly which info you need!
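
For reference, the fileset side of that setup typically looks something like this (a sketch; module_path and module_name are assumptions based on the copy step above, not verified against your config):

FileSet {
  Name = "mysql-fileset"
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-mysql"
  }
}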
(0003530)
hostedpower   
2019-07-29 12:03   
Today this is still an issue.

If we restore a MySQL database and files together, it goes wrong :(



2019-07-29 11:59:57 xxxx JobId 88755: Fatal error: filed/fd_plugins.cc:1247 Unbalanced call to createFile=0 0
 
2019-07-29 11:59:57 backupxxx JobId 88755: Error: lib/bsock_tcp.cc:417 Wrote 29242 bytes to client:94.237.42.21:9103, but only 0 accepted.
 
2019-07-29 11:59:57 backupxxx JobId 88755: Fatal error: stored/read.cc:164 Error sending to File daemon. ERR=Connection reset by peer
 
2019-07-29 11:59:57 backupxxx JobId 88755: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:94.237.42.21:9103
 
2019-07-29 11:59:57 backupxxx JobId 88755: Releasing device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:57 director JobId 88755: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-3373" as Used.
 
2019-07-29 11:59:57 director JobId 88755: Error: Bareos director 18.2.6 (13Feb19):
 Build OS: Linux-4.4.92-6.18-default debian Debian GNU/Linux 9.7 (stretch)
 JobId: 88755
 Job: RestoreFiles.2019-07-29_11.09.05_18
 Restore Client: xxxxx
 Start time: 29-Jul-2019 11:58:44
 End time: 29-Jul-2019 11:59:57
 Elapsed time: 1 min 13 secs
 Files Expected: 186,501
 Files Restored: 15,767
 Bytes Restored: 5,625,032,491
 Rate: 77055.2 KB/s
 FD Errors: 1
 FD termination status: Fatal Error
 SD termination status: Fatal Error
 Bareos binary info: official Bareos subscription
 Termination: *** Restore Error ***

 
2019-07-29 11:59:57 director JobId 88755: Begin pruning Files.
 
2019-07-29 11:59:57 director JobId 88755: No Files found to prune.
 
2019-07-29 11:59:57 director JobId 88755: End auto prune.

 
2019-07-29 11:59:56 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3338"
 
2019-07-29 11:59:56 backupxxx JobId 88755: Ready to read from volume "vol-cons-3373" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:56 backupxxx JobId 88755: Forward spacing Volume "vol-cons-3373" to file:block 0:241.
 
2019-07-29 11:59:53 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3325"
 
2019-07-29 11:59:53 backupxxx JobId 88755: Ready to read from volume "vol-incr-3338" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:53 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3338" to file:block 0:241.
 
2019-07-29 11:59:51 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3314"
 
2019-07-29 11:59:51 backupxxx JobId 88755: Ready to read from volume "vol-incr-3325" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:51 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3325" to file:block 0:241.
 
2019-07-29 11:59:49 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3307"
 
2019-07-29 11:59:49 backupxxx JobId 88755: Ready to read from volume "vol-incr-3314" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:49 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3314" to file:block 0:241.
 
2019-07-29 11:59:40 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3283"
 
2019-07-29 11:59:40 backupxxx JobId 88755: Ready to read from volume "vol-incr-3295" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:40 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3295" to file:block 0:241.
 
2019-07-29 11:59:40 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3295"
 
2019-07-29 11:59:40 backupxxx JobId 88755: Ready to read from volume "vol-incr-3307" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:40 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3307" to file:block 0:241.
 
2019-07-29 11:59:30 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3276"
 
2019-07-29 11:59:30 backupxxx JobId 88755: Ready to read from volume "vol-incr-3283" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:30 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3283" to file:block 0:241.
 
2019-07-29 11:59:02 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3265"
 
2019-07-29 11:59:02 backupxxx JobId 88755: Ready to read from volume "vol-incr-3276" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:02 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3276" to file:block 0:241.
 
2019-07-29 11:59:01 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3258"
 
2019-07-29 11:59:01 backupxxx JobId 88755: Ready to read from volume "vol-incr-3265" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:01 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3265" to file:block 0:241.
 
2019-07-29 11:59:00 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3252"
 
2019-07-29 11:59:00 backupxxx JobId 88755: Ready to read from volume "vol-incr-3258" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:00 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3258" to file:block 0:241.
 
2019-07-29 11:58:57 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3244"
 
2019-07-29 11:58:57 backupxxx JobId 88755: Ready to read from volume "vol-incr-3252" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:57 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3252" to file:block 0:241.
 
2019-07-29 11:58:56 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3230"
 
2019-07-29 11:58:56 backupxxx JobId 88755: Ready to read from volume "vol-incr-3244" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:56 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3244" to file:block 0:241.
 
2019-07-29 11:58:54 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3224"
 
2019-07-29 11:58:54 backupxxx JobId 88755: Ready to read from volume "vol-incr-3230" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:54 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3230" to file:block 0:241.
 
2019-07-29 11:58:51 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3218"
 
2019-07-29 11:58:51 backupxxx JobId 88755: Ready to read from volume "vol-incr-3224" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:51 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3224" to file:block 0:241.
 
2019-07-29 11:58:49 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3212"
 
2019-07-29 11:58:49 backupxxx JobId 88755: Ready to read from volume "vol-incr-3218" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:49 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3218" to file:block 0:241.
 
2019-07-29 11:58:48 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3364"
 
2019-07-29 11:58:48 backupxxx JobId 88755: Ready to read from volume "vol-incr-3212" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:48 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3212" to file:block 0:241.
 
2019-07-29 11:58:47 backupxxx JobId 88755: Ready to read from volume "vol-incr-3364" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:47 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3364" to file:block 0:241.
 
2019-07-29 11:58:46 xxxx JobId 88755: Connected Storage daemon at backupxxx:9103, encryption: PSK-AES256-CBC-SHA
 
2019-07-29 11:58:44 director JobId 88755: Start Restore Job RestoreFiles.2019-07-29_11.09.05_18
 
2019-07-29 11:58:44 director JobId 88755: Connected Storage daemon at backupxxx:9103, encryption: ECDHE-PSK-CHACHA20-POLY1305
 
2019-07-29 11:58:44 director JobId 88755: Using Device "serverxx-incr" to read.
 
2019-07-29 11:58:44 director JobId 88755: Connected Client: xxxx at xxxx:9102, encryption: PSK-AES256-CBC-SHA
 
2019-07-29 11:58:44 director JobId 88755: Handshake: Immediate TLS
2019-07-29 11:58:44 director JobId 88755: Encryption: PSK-AES256-CBC-SHA
(0003532)
hostedpower   
2019-07-29 17:31   
To be clear:

Restoring the files and the MySQL db together, result:

Files: not complete
MySQL db: restore looks OK

Restoring the files and the db in separate jobs:

Files: OK
MySQL db: OK

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1101 [bareos-core] director crash have not tried 2019-07-17 03:10 2019-07-29 17:31
Reporter: progserega Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-dir segfault after start new job to run
Description: Jul 15 12:01:51 bareos kernel: bareos-dir[889]: segfault at 0 ip 00007f3b7b8fcda1 sp 00007f3b6f7fc658 error 4 in libbareos-17.2.4.so[7f3b7b8e2000+6b000]
Tags:
Steps To Reproduce: 1. create new backup job
2. add to director
3. reload director (all good),
4. start job
5. see in webui that job started and work
6. some time after that bareos-dir is segfault
Additional Information: I got the development files for bareos and tried to start it in a debugger, but could not:

Starting program: /usr/sbin/bareos-dir
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[Inferior 1 (process 31748) exited normally]
(gdb) r
Starting program: /usr/sbin/bareos-dir
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[Inferior 1 (process 31796) exited normally]
(gdb)
System Description
Attached Files: bareos.867.traceback (636 bytes) 2019-07-19 03:24
https://bugs.bareos.org/file_download.php?file_id=384&type=bug
Notes
(0003496)
arogge   
2019-07-17 11:20   
Please install the debug package (bareos-dbg on debian) and attach the latest generated traceback (/var/lib/bareos/*.traceback).
Thank you!
(0003498)
progserega   
2019-07-19 03:24   
bareos-dbg was installed. I attached the traceback to this bug.
(0003499)
arogge   
2019-07-19 08:04   
That did not (yet) work as expected.
Does the bareos-dbg package match your other installed bareos packages?
gdb in the traceback doesn't seem to load the symbol table, so there's no meaningful traceback information.

You can try this with "btraceback test <pid-of-bareos-process> /tmp". This should write a traceback for a running bareos program to /tmp/bareos.<pid-of-bareos-process>.traceback and it should contain debug symbols and the stack traces for each running thread.
Once you get that working (usually just by installing gdb and the right version of bareos-dbg) crashes will generate a meaningful traceback that we can look at.
(0003529)
progserega   
2019-07-29 08:09   
dpkg-query -l|grep 'bareos\|gdb'
ii bareos-bconsole 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-dbg 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - debugging symbols
ii bareos-devel 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - development files
ii bareos-director 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-webui 17.2.4-15.1 all Backup Archiving Recovery Open Sourced - webui
ii gdb 7.12-6 amd64 GNU Debugger
ii libgdbm3:amd64 1.8.3-14 amd64 GNU dbm database routines (runtime version)

'"btraceback test <pid-of-bareos-process> /tmp ' succsess create file.

Sorry, I did not attach the core file for bareos.867.traceback; it is available at:
https://yadi.sk/d/i2s-Rs1mOjxxuA (it is more than 5 MB)
(0003531)
arogge   
2019-07-29 17:31   
Thanks, but we really need the traceback much more than the core file.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1102 [bareos-core] file daemon major sometimes 2019-07-20 14:13 2019-07-26 12:21
Reporter: twdragon Platform: x86_64  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 18.04 LTS  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: FD hangs up when connected by OpenVPN channel
Description: This problem was encountered in a backup infrastructure consisting of a VMware-based VDS running Ubuntu 18.04 LTS (kernel 4.18.0-25-generic) and a Debian 9 hardware storage server with a RAID6 disk-based DSS. The same Bareos version is installed on both machines from the community builds repository. The machines are interconnected using OpenVPN (this is necessary because the VDS is located within a DMZ). Backups are collected from the remote VDS to the local storage server.

If the 'Maximum Network Buffer Size' option appears in any config file, the remote File Daemon on the VDS randomly hangs up completely. After that, strange behaviour emerges: the File Daemon keeps the running job in its job list indefinitely, ignoring even 'cancel' commands issued by the Director. No information about this situation is written to logs, backtraces, etc., and the syslog and kernel logs contain no error reports either.

After long experiments it became clear that deleting the 'Maximum Network Buffer Size' option from every config file used by Bareos works around the problem. I think this should be considered a compatibility issue.
Tags: config, fd, network, vpn
Steps To Reproduce: - Interconnect machines using OpenVPN channel.
- Limit networks addresses for Bareos daemons to VPN intranet.
- Set 'Maximum Network Buffer Size' to any value.
Additional Information:
System Description
Attached Files:
Notes
(0003513)
arogge   
2019-07-22 10:17   
I'm curious: why did you configure Maximum Network Buffer Size at all? And to what value did you set it? The default is usually fine.
(0003514)
twdragon   
2019-07-22 10:53   
(Last edited: 2019-07-22 10:55)
@arogge the previous backup system we used was Bacula 7.7.4, and it did not work without the Maximum Network Buffer Size parameter set to 32768 on both sides of the VPN channel. Without it, the channel was blocked whenever Bacula ran, until the OpenVPN daemon was restarted. This error was literally what forced us to migrate to Bareos (the Bacula vendors deleted old versions from their repository, and from version 9.0 on Bacula produces the same FD error we are discussing here). I was curious too, because the FD started hanging up without any updates or observable preconditions. After migrating we saw that the true source of the hang is the File Daemon (Bacula reported Storage Daemon errors, but it was apparent that in fact the remote File Daemon hangs). In prolonged tests, turning parameters off one by one, we identified Maximum Network Buffer Size as the source of the problem.

(0003515)
arogge   
2019-07-22 11:00   
Thank you very much for the insight.
(0003517)
arogge   
2019-07-22 12:59   
Do you have any suggestion as to what we can do to improve the situation?
I can only imagine better documentation noting that setting this parameter might break things.
(0003523)
twdragon   
2019-07-26 12:21   
I tried to dig into the deeper reasons for this behaviour, but have not succeeded so far. Overview of the situation: when `Maximum Network Buffer Size` is set, the connection the Bareos File Daemon establishes over OpenVPN produces jumbo packets of enormous length (e.g. 80.6 MiB is the highest value I observed in tests) at random times. Such a packet is then lost over the VPN connection. The File Daemon does not receive an answer for this packet and then hangs in a state that looks as if it cannot finish reading some random file. The jumbo-packet anomaly always appears after a large amount of data (1.2 - 2.6 GiB) has been transported successfully. I think it may be a compatibility issue between Bacula/Bareos and OpenVPN. For now, it would be a good idea to add a statement about this issue to the documentation, I think. But I will keep attempting to discover the true reason.
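For reference, a minimal sketch of the workaround described above, i.e. leaving the directive unset so the daemon's built-in default applies; the file path and resource name are placeholders, not taken from the report:

# /etc/bareos/bareos-fd.d/client/myself.conf (placeholders)
Client {
  Name = vds-fd
  # Maximum Network Buffer Size = 32768   # per the notes above, removing/commenting this avoids the FD hang over OpenVPN
}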

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1104 [bareos-core] file daemon minor N/A 2019-07-24 06:54 2019-07-24 11:42
Reporter: isi Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: LDAP Plugin gives error
Description: An LDAP backup fails with the following error:
  File "/usr/lib/bareos/plugins/BareosFdPluginLDAP.py", line 152, in plugin_io
    IOP.buf = bytearray(self.ldap.ldif)
TypeError: unicode argument without an encoding

/usr/lib/bareos/plugins/BareosFdPluginLDAP.py
Please change Line 152 from old:
IOP.buf = bytearray(self.ldap.ldif)

to new:

IOP.buf = bytearray((self.ldap.ldif), 'utf8')

Thanks
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003521)
arogge   
2019-07-24 10:56   
If you have a patch, please see the developer guide on how to contribute:
https://docs.bareos.org/DeveloperGuide/generaldevel.html#patches

We will then decide whether or not we want to apply your proposed change to the codebase.
(0003522)
isi   
2019-07-24 11:42   
Sorry, I'm not a developer and I'm unfortunately not used to GitHub.
That's why I submitted this bug report.
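
For illustration, a minimal standalone sketch of the failure and of the one-line fix proposed in the description above. It assumes Python 2 semantics (the message "unicode argument without an encoding" is Python 2's), and the LDIF string is a made-up stand-in for self.ldap.ldif:

# -*- coding: utf-8 -*-
ldif = u"dn: cn=example,dc=org\n"   # stand-in for self.ldap.ldif (a unicode object)

try:
    buf = bytearray(ldif)           # old line 152: bytearray(self.ldap.ldif)
except TypeError as err:
    print(err)                      # -> unicode argument without an encoding

buf = bytearray(ldif, 'utf8')       # proposed fix: pass an explicit encoding
print(repr(buf))                    # -> bytearray(b'dn: cn=example,dc=org\n')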

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1037 [bareos-core] storage daemon crash always 2019-01-24 18:26 2019-07-22 13:28
Reporter: r7 Platform: Windows  
Assigned To: OS: Windows  
Priority: normal OS Version: 7  
Status: feedback Product Version: 18.2.4-rc2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Windows bareos-fd is crashing during backup
Description: After a successful full backup, incremental jobs always fail. The backup scheme is Always Incremental.
To begin, please take a look here: https://groups.google.com/forum/?fromgroups=#!topic/bareos-users/fbu6iz-Ugbc

Attached are traces, reports, memory dumps and dir logs for one such Windows 7 host with failing FD
- helium-WER.7z.* - contents of C:\ProgramData\Microsoft\Windows\WER\ReportQueue
- helium-traces.7z.* - crashing FD job traces (setdebug client=lab-helium level=200 trace=1)
- helium-dir.log - job logs from director
Tags: always incremental, crash, fd
Steps To Reproduce: No specific steps. FD is just crashing during backup.
Additional Information: Director: 18.2.4rc2
Storage: 18.2.4rc2
FD: Both 17.2.4 & 18.2.4rc2 are failing, no matter which Windows version 7 or 10
System Description
Attached Files: helium-dir.log (75,985 bytes) 2019-01-24 18:26
https://bugs.bareos.org/file_download.php?file_id=327&type=bug
helium-WER.7z.001 (1,966,080 bytes) 2019-01-24 19:07
https://bugs.bareos.org/file_download.php?file_id=328&type=bug
helium-WER.7z.002 (1,966,080 bytes) 2019-01-24 19:07
https://bugs.bareos.org/file_download.php?file_id=329&type=bug
helium-WER.7z.003 (1,966,080 bytes) 2019-01-24 22:36
https://bugs.bareos.org/file_download.php?file_id=330&type=bug
helium-WER.7z.004 (1,966,080 bytes) 2019-01-24 22:36
https://bugs.bareos.org/file_download.php?file_id=331&type=bug
helium-WER.7z.005 (1,966,080 bytes) 2019-01-24 22:36
https://bugs.bareos.org/file_download.php?file_id=332&type=bug
helium-WER.7z.006 (1,966,080 bytes) 2019-01-24 22:36
https://bugs.bareos.org/file_download.php?file_id=333&type=bug
helium-WER.7z.007 (1,966,080 bytes) 2019-01-24 22:36
https://bugs.bareos.org/file_download.php?file_id=334&type=bug
helium-WER.7z.008 (1,966,080 bytes) 2019-01-24 22:36
https://bugs.bareos.org/file_download.php?file_id=335&type=bug
helium-WER.7z.009 (487,341 bytes) 2019-01-24 22:36
https://bugs.bareos.org/file_download.php?file_id=336&type=bug
helium-traces.7z.001 (2,097,152 bytes) 2019-01-24 22:53
https://bugs.bareos.org/file_download.php?file_id=337&type=bug
helium-traces.7z.002 (2,097,152 bytes) 2019-01-24 22:53
https://bugs.bareos.org/file_download.php?file_id=338&type=bug
helium-traces.7z.003 (2,097,152 bytes) 2019-01-24 22:53
https://bugs.bareos.org/file_download.php?file_id=339&type=bug
helium-traces.7z.004 (2,097,152 bytes) 2019-01-25 00:07
https://bugs.bareos.org/file_download.php?file_id=340&type=bug
helium-traces.7z.005 (2,097,152 bytes) 2019-01-25 00:07
https://bugs.bareos.org/file_download.php?file_id=341&type=bug
helium-traces.7z.006 (700,997 bytes) 2019-01-25 00:07
https://bugs.bareos.org/file_download.php?file_id=342&type=bug
joblog.1835.txt (6,060 bytes) 2019-02-05 16:36
https://bugs.bareos.org/file_download.php?file_id=346&type=bug
Notes
(0003207)
r7   
2019-01-24 18:29   
It is almost impossible to upload files to mantis with your limits.

APPLICATION ERROR 0000500

File upload failed. This is likely because the filesize was larger than is currently allowed by this PHP installation.

Please use the "Back" button in your web browser to return to the previous page. There you can correct whatever problems were identified in this error or select another action. You can also click an option from the menu bar to go directly to a new section.
(0003208)
r7   
2019-01-24 18:37   
It is really a quest to attach files. What are the limits for attachments?

APPLICATION ERROR #2800

Invalid form security token. This could be caused by a session timeout, or accidentally submitting the form twice.

Please use the "Back" button in your web browser to return to the previous page. There you can correct whatever problems were identified in this error or select another action. You can also click an option from the menu bar to go directly to a new section.
(0003209)
r7   
2019-01-24 19:07   
- helium-WER.7z.* - contents of C:\ProgramData\Microsoft\Windows\WER\ReportQueue. There are reports with memory dumps.

P.S.: I've been struggling with your Mantis timeouts, its attachment limits and PHP post limits for 3 hours to complete this report. I've repacked the crash report files 3 or 4 times to get through the attachment limits.
(0003210)
r7   
2019-01-25 13:41   
Sorry, the category is incorrect. Please edit it to "[All Projects] file daemon".
(0003246)
teka74   
2019-02-05 01:49   
Now I have the same problem; I switched to Always Incremental backup after updating to 18.2.5.

The first backup automatically changed to a full one, and worked. The next evening the AI job was scheduled normally, and nothing happened.

bconsole output:
Connecting to Client server01-fd at server01:9102
 Handshake: Cleartext, Encryption: None

server01-fd Version: 17.2.4 (21 Sep 2017) VSS Linux Cross-compile Win64
Daemon started 07-Jan-19 07:55. Jobs: run=29 running=1.
Microsoft Windows Server 2008 R2 Small Business Server Service Pack 1 (build 7601), 64-bit
 Heap: heap=0 smbytes=57,056,510 max_bytes=57,056,704 bufs=208 max_bufs=369
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=1 bwlimit=0kB/s

Running Jobs:
server01-mon (director) connected at: 07-Jan-19 08:02
server01-mon (director) connected at: 17-Jan-19 15:05
JobId 616 Job server01-ai.2019-02-05_01.00.00_19 is running.
    Incremental System or Console Job started: 05-Feb-19 01:00
    Files=0 Bytes=0 Bytes/sec=0 Errors=0
    Bwlimit=0
    Files Examined=0
    SDReadSeqNo=3 fd=720
bareos-dir (director) connected at: 05-Feb-19 01:40
====

The daemon is still running, but doing nothing. On my Linux machine (the director), MySQL has been at 40% CPU since the backup started, and the webui is lagging.
(0003247)
teka74   
2019-02-05 02:52   
After waiting for more than an hour, I got an email from the Bareos director:

05-Feb 02:43 bareos-dir: ERROR in dird/authenticate_console.cc:393 Number of console connections exceeded MaximumConsoleConnections


I cancelled the backup job and am now switching back to normal backups.
(0003250)
r7   
2019-02-05 16:36   
After upgrading DIR & SD to 18.2.5, the error in the DIR logs for a job with the crashing 18.2.4rc2 FD changed.

2019-02-05 14:29:23 lab-helium: ABORTING due to ERROR in lib/smartall.cc:229
Overrun buffer: len=41300 addr=34a00a8 allocated: filed/accurate_htable.cc:49 called from /home/abuild/rpmbuild/BUILD/bareos-18.2.4rc2/src/filed/accurate_htable.cc:193
 2019-02-05 14:29:23 bareos-dir JobId 1835: Fatal error: Network error with FD during Backup: ERR=Connection reset by peer
 2019-02-05 14:29:23 fs32-sd JobId 1835: Fatal error: stored/append.cc:173 Error reading data header from FD. ERR=Connection reset by peer
 2019-02-05 14:29:23 fs32-sd JobId 1835: Releasing device "disk-fs32-r6s3" (/_bareos).
 2019-02-05 14:29:23 bareos-dir JobId 1835: Fatal error: No Job status returned from FD.
 2019-02-05 14:29:23 bareos-dir JobId 1835: Error: Bareos bareos-dir 18.2.5 (30Jan19):

You can find the full job log attached.
(0003520)
arogge   
2019-07-22 13:28   
Does this happen with 18.2.5 or 18.2.6?
Are you able to retry with the nightly build from https://download.bareos.org/bareos/experimental/nightly/?

Can you check whether the problem persists once you fix the TLS configuration problem (i.e. your client seems to be misconfigured, as TLS-PSK fails; maybe the names mismatch)?
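
For context on the TLS-PSK hint above, a minimal sketch of the two resources whose names and passwords have to agree; in 18.2 the daemons derive the PSK identity and key from these, so a mismatch makes the TLS-PSK handshake fail. The address and password here are placeholders, and only the client name is taken from the report:

# on the director: bareos-dir.d/client/lab-helium.conf (placeholders)
Client {
  Name = lab-helium
  Address = helium.example.com
  Password = "secret"
}

# on the client: bareos-fd.d/director/bareos-dir.conf (placeholders)
Director {
  Name = bareos-dir
  Password = "secret"   # must match the Password in the director's Client resource
}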

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1041 [bareos-core] webui crash always 2019-01-31 15:00 2019-07-21 13:53
Reporter: beckzg Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: 16.04  
Status: assigned Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: BareOS-WebUI partially working
Description: Hi,

  I just upgraded to version 18.2.5 (with apt-get upgrade); everything looks okay, but the webui gives me the following error on most of the links (Jobs, Volumes, ...):

"Error: API 2 not available on director. Please upgrade to version 15.2.2 or greater and/or compile with jansson support."

In the Apache log I have a lot of error lines like:

"[Thu Jan 31 14:54:56.229653 2019] [:error] [pid 32236] [client 192.168.30.14:62781] console_name: admin, referer: http://backup.local/backup/dashboard/
[Thu Jan 31 14:54:56.229945 2019] [:error] [pid 32236] [client 192.168.30.14:62781] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://backup.local/backup/dashboard/
[Thu Jan 31 14:55:06.847200 2019] [:error] [pid 31585] [client 192.168.30.14:62782] console_name: admin, referer: http://backup.local/backup/dashboard/
[Thu Jan 31 14:55:06.847706 2019] [:error] [pid 31585] [client 192.168.30.14:62782] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://backup.local/backup/dashboard/"
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003231)
beckzg   
2019-01-31 17:08   
Resolved :)

console/admin.conf: TLS Enable = No
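
For anyone hitting the same error, a minimal sketch of where that directive lives; the console name and password are placeholders:

# bareos-dir.d/console/admin.conf (placeholders)
Console {
  Name = admin
  Password = "secret"
  TLS Enable = No   # workaround from the note above
}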
(0003509)
danwie   
2019-07-21 13:53   
Thanks beckzg, this worked after upgrading from 17.x to 18.2.5!
We were despairing; how did you debug this?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1029 [bareos-core] director major have not tried 2018-11-26 17:07 2019-07-19 15:15
Reporter: frank Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: feedback Product Version: 17.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Deadlock on DIRD
Description: Encountered a situation where there were 12 threads waiting for a wait-condition that could not be signaled due to a deadlock.
Tags:
Steps To Reproduce: n/a
Additional Information: backup02:/var/log/bareos# pidof bareos-dir
10412
backup02:/var/log/bareos# gdb -p 10412
GNU gdb (Debian 7.7.1+dfsg-5) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 10412
Reading symbols from /usr/sbin/bareos-dir...Reading symbols from /usr/lib/debug//usr/sbin/bareos-dir...done.
done.
Reading symbols from /usr/lib/bareos/libbareosndmp-17.2.7.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/libbareosndmp-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/libbareosndmp-17.2.7.so
Reading symbols from /usr/lib/bareos/libbareosfind-17.2.7.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/libbareosfind-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/libbareosfind-17.2.7.so
Reading symbols from /lib/x86_64-linux-gnu/libacl.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libacl.so.1
Reading symbols from /usr/lib/bareos/libbareossql-17.2.7.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/libbareossql-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/libbareossql-17.2.7.so
Reading symbols from /usr/lib/bareos/libbareoscats-17.2.7.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/libbareoscats-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/libbareoscats-17.2.7.so
Reading symbols from /usr/lib/bareos/libbareoscfg-17.2.7.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/libbareoscfg-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/libbareoscfg-17.2.7.so
Reading symbols from /usr/lib/bareos/libbareos-17.2.7.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/libbareos-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/libbareos-17.2.7.so
Reading symbols from /lib/x86_64-linux-gnu/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libz.so.1
Reading symbols from /lib/x86_64-linux-gnu/liblzo2.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/liblzo2.so.2
Reading symbols from /usr/lib/libfastlz.so.1...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libfastlz.so.1
Reading symbols from /usr/lib/x86_64-linux-gnu/libjansson.so.4...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libjansson.so.4
Reading symbols from /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
Reading symbols from /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0
Reading symbols from /lib/x86_64-linux-gnu/libwrap.so.0...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libwrap.so.0
Reading symbols from /lib/x86_64-linux-gnu/libcap.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libcap.so.2
Reading symbols from /usr/lib/bareos/libbareoslmdb-17.2.7.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/libbareoslmdb-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/libbareoslmdb-17.2.7.so
Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libpthread-2.19.so...done.
done.
[New LWP 19389]
[New LWP 9093]
[New LWP 6509]
[New LWP 1955]
[New LWP 2796]
[New LWP 30244]
[New LWP 29643]
[New LWP 26418]
[New LWP 22171]
[New LWP 28589]
[New LWP 29068]
[New LWP 29031]
[New LWP 9474]
[New LWP 9458]
[New LWP 10417]
[New LWP 10416]
[New LWP 10413]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Loaded symbols for /lib/x86_64-linux-gnu/libpthread.so.0
Reading symbols from /lib/x86_64-linux-gnu/libdl.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libdl-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libdl.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/libstdc++.so.6...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libstdc++.so.6
Reading symbols from /lib/x86_64-linux-gnu/libm.so.6...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libm-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libm.so.6
Reading symbols from /lib/x86_64-linux-gnu/libgcc_s.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libgcc_s.so.1
Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libc-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libc.so.6
Reading symbols from /lib/x86_64-linux-gnu/libattr.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libattr.so.1
Reading symbols from /lib/x86_64-linux-gnu/libnsl.so.1...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libnsl-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libnsl.so.1
Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/ld-2.19.so...done.
done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /usr/lib/bareos/backends/libbareoscats-postgresql.so...Reading symbols from /usr/lib/debug//usr/lib/bareos/backends/libbareoscats-postgresql-17.2.7.so...done.
done.
Loaded symbols for /usr/lib/bareos/backends/libbareoscats-postgresql.so
Reading symbols from /usr/lib/x86_64-linux-gnu/libpq.so.5...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libpq.so.5
Reading symbols from /lib/x86_64-linux-gnu/libcrypt.so.1...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libcrypt-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libcrypt.so.1
Reading symbols from /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/libkrb5.so.3...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libkrb5.so.3
Reading symbols from /usr/lib/x86_64-linux-gnu/libk5crypto.so.3...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libk5crypto.so.3
Reading symbols from /lib/x86_64-linux-gnu/libcom_err.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libcom_err.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/libkrb5support.so.0...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libkrb5support.so.0
Reading symbols from /lib/x86_64-linux-gnu/libkeyutils.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/x86_64-linux-gnu/libkeyutils.so.1
Reading symbols from /lib/x86_64-linux-gnu/libresolv.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libresolv-2.19.so...done.
done.
Loaded symbols for /lib/x86_64-linux-gnu/libresolv.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/libsasl2.so.2...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libsasl2.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/libgnutls-deb0.so.28...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libgnutls-deb0.so.28
Reading symbols from /usr/lib/x86_64-linux-gnu/libp11-kit.so.0...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libp11-kit.so.0
Reading symbols from /usr/lib/x86_64-linux-gnu/libtasn1.so.6...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libtasn1.so.6
Reading symbols from /usr/lib/x86_64-linux-gnu/libnettle.so.4...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libnettle.so.4
Reading symbols from /usr/lib/x86_64-linux-gnu/libhogweed.so.2...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libhogweed.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/libgmp.so.10...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libgmp.so.10
Reading symbols from /usr/lib/x86_64-linux-gnu/libffi.so.6...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/x86_64-linux-gnu/libffi.so.6
0x00007f8dcf92014d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) info thread
  Id Target Id Frame
  18 Thread 0x7f8dce722700 (LWP 10413) "bareos-dir" 0x00007f8dcf92014d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
  17 Thread 0x7f8dc71df700 (LWP 10416) "bareos-dir" 0x00007f8dcec1faed in poll () at ../sysdeps/unix/syscall-template.S:81
  16 Thread 0x7f8dc69de700 (LWP 10417) "bareos-dir" 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
  15 Thread 0x7f8daffff700 (LWP 9458) "bareos-dir" 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
  14 Thread 0x7f8daf7fe700 (LWP 9474) "bareos-dir" 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
  13 Thread 0x7f8daeffd700 (LWP 29031) "bareos-dir" 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
  12 Thread 0x7f8dc51db700 (LWP 29068) "bareos-dir" 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
  11 Thread 0x7f8dc61dd700 (LWP 28589) "bareos-dir" pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
  10 Thread 0x7f8dc59dc700 (LWP 22171) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  9 Thread 0x7f8dc49da700 (LWP 26418) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  8 Thread 0x7f8dae7fc700 (LWP 29643) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  7 Thread 0x7f8dadffb700 (LWP 30244) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  6 Thread 0x7f8dad7fa700 (LWP 2796) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  5 Thread 0x7f8dacff9700 (LWP 1955) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  4 Thread 0x7f8d8bfff700 (LWP 6509) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  3 Thread 0x7f8d8b7fe700 (LWP 9093) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  2 Thread 0x7f8d8affd700 (LWP 19389) "bareos-dir" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
* 1 Thread 0x7f8dd2168740 (LWP 10412) "bareos-dir" 0x00007f8dcf92014d in nanosleep () at ../sysdeps/unix/syscall-template.S:81

(gdb) thread 10
[Switching to thread 10 (Thread 0x7f8dc59dc700 (LWP 22171))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8da804f5b8) at watchdog.c:194
0000004 0x00007f8dd10308b7 in start_thread_timer (jcr=0x1c75308, tid=140246882567936, wait=1800) at btimers.c:148
0000005 0x00007f8dd102be6c in BSOCK_TCP::connect (this=0x7f8da80e9dd8, jcr=0x1c75308, retry_interval=10, max_retry_time=140247049556047, max_retry_time@entry=1800, heart_beat=140247076130304, heart_beat@entry=0,
    name=0x47c6e1 "Storage daemon", host=0x7f8d940b9d68 "backup02.bni", service=0x0, port=9103, verbose=true) at bsock_tcp.c:112
0000006 0x00000000004452ce in connect_to_storage_daemon (jcr=0x1c75308, retry_interval=<optimized out>, max_retry_time=<optimized out>, verbose=verbose@entry=true) at sd_cmds.c:118
0000007 0x00000000004453a8 in connect_to_storage_daemon (jcr=jcr@entry=0x1c75308, retry_interval=retry_interval@entry=10, max_retry_time=<optimized out>, verbose=verbose@entry=true) at sd_cmds.c:136
0000008 0x000000000041442a in do_native_backup (jcr=jcr@entry=0x1c75308) at backup.c:434
0000009 0x0000000000429654 in job_thread (arg=0x1c75308) at job.c:514
0000010 0x000000000042eb71 in jobq_server (arg=0x6b7240 <job_queue>) at jobq.c:485
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x1bce9f8) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8dc59dc700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 11
[Switching to thread 11 (Thread 0x7f8dc61dd700 (LWP 28589))]
#0 pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory.
(gdb) bt
#0 pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
0000001 0x00007f8dd103ec9c in bthread_cond_timedwait_p (cond=0x6b7860 <wait_for_next_run_cond>, m=0x6b78a0 <_ZL5mutex>, abstime=0x7f8dc61dcd60, file=0x48ddca "stats.c", line=120) at lockmgr.c:813
0000002 0x0000000000447f2e in wait_for_next_run () at stats.c:120
0000003 statistics_thread (arg=0x6b7864 <wait_for_next_run_cond+4>, arg@entry=0x0) at stats.c:332
0000004 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8d94208158) at lockmgr.c:928
0000005 0x00007f8dcf919064 in start_thread (arg=0x7f8dc61dd700) at pthread_create.c:309
0000006 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 9
[Switching to thread 9 (Thread 0x7f8dc49da700 (LWP 26418))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8db00a57a8) at watchdog.c:194
0000004 0x00007f8dd10308b7 in start_thread_timer (jcr=0x1bb3d68, tid=140246865782528, wait=1800) at btimers.c:148
0000005 0x00007f8dd102be6c in BSOCK_TCP::connect (this=0x7f8db0062368, jcr=0x1bb3d68, retry_interval=10, max_retry_time=140247049556047, max_retry_time@entry=1800, heart_beat=140247076130304, heart_beat@entry=0,
    name=0x47c6e1 "Storage daemon", host=0x7f8d940b9d68 "backup02.bni", service=0x0, port=9103, verbose=true) at bsock_tcp.c:112
0000006 0x00000000004452ce in connect_to_storage_daemon (jcr=0x1bb3d68, retry_interval=<optimized out>, max_retry_time=<optimized out>, verbose=verbose@entry=true) at sd_cmds.c:118
0000007 0x00000000004453a8 in connect_to_storage_daemon (jcr=jcr@entry=0x1bb3d68, retry_interval=retry_interval@entry=10, max_retry_time=<optimized out>, verbose=verbose@entry=true) at sd_cmds.c:136
0000008 0x000000000041442a in do_native_backup (jcr=jcr@entry=0x1bb3d68) at backup.c:434
0000009 0x0000000000429654 in job_thread (arg=0x1bb3d68) at job.c:514
0000010 0x000000000042eb71 in jobq_server (arg=0x6b7240 <job_queue>) at jobq.c:485
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x1b9f498) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8dc49da700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 8
[Switching to thread 8 (Thread 0x7f8dae7fc700 (LWP 29643))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8d94007308) at watchdog.c:194
0000004 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc0001168, wait=600) at btimers.c:184
0000005 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc0001168, jcr=0x0, what=0x47c76e "Console", name=0x47c762 "*UserAgent*", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
0000006 0x000000000041118a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
0000007 authenticate_user_agent (uac=0x7f8d9403dfa8) at authenticate.c:262
0000008 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc0001168) at ua_server.c:84
0000009 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc0001168) at socket_server.c:98
0000010 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc0001378) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8dae7fc700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 7
[Switching to thread 7 (Thread 0x7f8dadffb700 (LWP 30244))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8da48f0828) at watchdog.c:194
0000004 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc0001378, wait=600) at btimers.c:184
0000005 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc0001378, jcr=0x0, what=0x47c76e "Console", name=0x47c762 "*UserAgent*", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
0000006 0x000000000041118a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
0000007 authenticate_user_agent (uac=0x7f8da4118768) at authenticate.c:262
0000008 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc0001378) at ua_server.c:84
0000009 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc0001378) at socket_server.c:98
0000010 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc00017d8) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8dadffb700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 6
[Switching to thread 6 (Thread 0x7f8dad7fa700 (LWP 2796))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8d840262f8) at watchdog.c:194
0000004 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc00017d8, wait=600) at btimers.c:184
0000005 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc00017d8, jcr=0x0, what=0x47c76e "Console", name=0x7f8dad7f9c30 "thomas.fachtan", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
0000006 0x000000000041121a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
0000007 authenticate_user_agent (uac=0x7f8d840249f8) at authenticate.c:269
0000008 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc00017d8) at ua_server.c:84
0000009 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc00017d8) at socket_server.c:98
0000010 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc0002a38) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8dad7fa700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 5
[Switching to thread 5 (Thread 0x7f8dacff9700 (LWP 1955))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8d7c003a78) at watchdog.c:194
0000004 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc0002bc8, wait=600) at btimers.c:184
0000005 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc0002bc8, jcr=0x0, what=0x47c76e "Console", name=0x47c762 "*UserAgent*", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
0000006 0x000000000041118a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
0000007 authenticate_user_agent (uac=0x7f8d7c002cd8) at authenticate.c:262
0000008 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc0002bc8) at ua_server.c:84
0000009 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc0002bc8) at socket_server.c:98
0000010 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc0003de8) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8dacff9700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 4
[Switching to thread 4 (Thread 0x7f8d8bfff700 (LWP 6509))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8d8c00b4e8) at watchdog.c:194
0000004 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc0003f78, wait=600) at btimers.c:184
0000005 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc0003f78, jcr=0x0, what=0x47c76e "Console", name=0x47c762 "*UserAgent*", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
0000006 0x000000000041118a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
0000007 authenticate_user_agent (uac=0x7f8d8c002708) at authenticate.c:262
0000008 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc0003f78) at ua_server.c:84
0000009 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc0003f78) at socket_server.c:98
0000010 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc0005198) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8d8bfff700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 3
[Switching to thread 3 (Thread 0x7f8d8b7fe700 (LWP 9093))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8da00f6108) at watchdog.c:194
0000004 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc0005328, wait=600) at btimers.c:184
0000005 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc0005328, jcr=0x0, what=0x47c76e "Console", name=0x47c762 "*UserAgent*", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
0000006 0x000000000041118a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
0000007 authenticate_user_agent (uac=0x7f8da001d958) at authenticate.c:262
0000008 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc0005328) at ua_server.c:84
0000009 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc0005328) at socket_server.c:98
0000010 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc0006548) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8d8b7fe700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) thread 2
[Switching to thread 2 (Thread 0x7f8d8affd700 (LWP 19389))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
0000001 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
0000002 0x00007f8dd105733e in wd_lock () at watchdog.c:329
0000003 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8d80003b88) at watchdog.c:194
0000004 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc00066d8, wait=600) at btimers.c:184
0000005 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc00066d8, jcr=0x0, what=0x47c76e "Console", name=0x47c762 "*UserAgent*", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
0000006 0x000000000041118a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
0000007 authenticate_user_agent (uac=0x7f8d80002de8) at authenticate.c:262
0000008 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc00066d8) at ua_server.c:84
0000009 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc00066d8) at socket_server.c:98
0000010 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
0000011 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc0007b48) at lockmgr.c:928
0000012 0x00007f8dcf919064 in start_thread (arg=0x7f8d8affd700) at pthread_create.c:309
0000013 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) thread 1
[Switching to thread 1 (Thread 0x7f8dd2168740 (LWP 10412))]
#0 0x00007f8dcf92014d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) bt
#0 0x00007f8dcf92014d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007f8dd102d8f4 in bmicrosleep (sec=60, usec=0) at bsys.c:171
0000002 0x000000000044718f in wait_for_next_job (one_shot_job_to_run=0x5bf290e4 <error: Cannot access memory at address 0x5bf290e4>) at scheduler.c:173
0000003 0x000000000040f976 in main (argc=<optimized out>, argv=<optimized out>) at dird.c:437
(gdb)


(gdb) thread 12
[Switching to thread 12 (Thread 0x7f8dc51db700 (LWP 29068))]
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) bt
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007f8dd102bb00 in BSOCK_TCP::read_nbytes (this=0x7f8d9c002188, ptr=<optimized out>, nbytes=4) at bsock_tcp.c:978
0000002 0x00007f8dd102b37f in BSOCK_TCP::recv (this=0x7f8d9c002188) at bsock_tcp.c:550
0000003 0x0000000000426b9f in bget_dirmsg (bs=0x7f8d9c002188, allow_any_message=false) at getmsg.c:154
0000004 0x0000000000433f6c in msg_thread (arg=0xb, arg@entry=0x1bb30d8) at msgchan.c:434
0000005 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8d9c001138) at lockmgr.c:928
0000006 0x00007f8dcf919064 in start_thread (arg=0x7f8dc51db700) at pthread_create.c:309
0000007 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 13
[Switching to thread 13 (Thread 0x7f8daeffd700 (LWP 29031))]
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
81 in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007f8dd102bb00 in BSOCK_TCP::read_nbytes (this=0x7f8d9c058e08, ptr=<optimized out>, nbytes=4) at bsock_tcp.c:978
0000002 0x00007f8dd102b37f in BSOCK_TCP::recv (this=0x7f8d9c058e08) at bsock_tcp.c:550
0000003 0x0000000000426b9f in bget_dirmsg (bs=0x7f8d9c058e08, allow_any_message=false) at getmsg.c:154
0000004 0x0000000000412632 in wait_for_job_termination (jcr=0x1bb30d8, timeout=0) at backup.c:715
0000005 0x0000000000414799 in do_native_backup (jcr=jcr@entry=0x1bb30d8) at backup.c:650
0000006 0x0000000000429654 in job_thread (arg=0x1bb30d8) at job.c:514
0000007 0x000000000042eb71 in jobq_server (arg=0x6b7240 <job_queue>) at jobq.c:485
0000008 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x1d45a28) at lockmgr.c:928
0000009 0x00007f8dcf919064 in start_thread (arg=0x7f8daeffd700) at pthread_create.c:309
0000010 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) thread 14
[Switching to thread 14 (Thread 0x7f8daf7fe700 (LWP 9474))]
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
81 in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007f8dd102bb00 in BSOCK_TCP::read_nbytes (this=0x7f8d901754c8, ptr=<optimized out>, nbytes=4) at bsock_tcp.c:978
0000002 0x00007f8dd102b37f in BSOCK_TCP::recv (this=0x7f8d901754c8) at bsock_tcp.c:550
0000003 0x0000000000426b9f in bget_dirmsg (bs=0x7f8d901754c8, allow_any_message=false) at getmsg.c:154
0000004 0x0000000000433f6c in msg_thread (arg=0x8, arg@entry=0x1c73708) at msgchan.c:434
0000005 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8d901824d8) at lockmgr.c:928
0000006 0x00007f8dcf919064 in start_thread (arg=0x7f8daf7fe700) at pthread_create.c:309
0000007 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) thread 15
[Switching to thread 15 (Thread 0x7f8daffff700 (LWP 9458))]
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
81 in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007f8dd102bb00 in BSOCK_TCP::read_nbytes (this=0x7f8d900f2188, ptr=<optimized out>, nbytes=4) at bsock_tcp.c:978
#2 0x00007f8dd102b37f in BSOCK_TCP::recv (this=0x7f8d900f2188) at bsock_tcp.c:550
#3 0x0000000000426b9f in bget_dirmsg (bs=0x7f8d900f2188, allow_any_message=false) at getmsg.c:154
#4 0x0000000000412632 in wait_for_job_termination (jcr=0x1c73708, timeout=0) at backup.c:715
#5 0x0000000000414799 in do_native_backup (jcr=jcr@entry=0x1c73708) at backup.c:650
#6 0x0000000000429654 in job_thread (arg=0x1c73708) at job.c:514
#7 0x000000000042eb71 in jobq_server (arg=0x6b7240 <job_queue>) at jobq.c:485
#8 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x1bcabd8) at lockmgr.c:928
#9 0x00007f8dcf919064 in start_thread (arg=0x7f8daffff700) at pthread_create.c:309
#10 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) thread 16
[Switching to thread 16 (Thread 0x7f8dc69de700 (LWP 10417))]
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
81 in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0 0x00007f8dcf91fa9d in read () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007f8dd102bb00 in BSOCK_TCP::read_nbytes (this=0x7f8db8001838, ptr=<optimized out>, nbytes=4) at bsock_tcp.c:978
#2 0x00007f8dd102b37f in BSOCK_TCP::recv (this=0x7f8db8001838) at bsock_tcp.c:550
#3 0x00007f8dd1033294 in cram_md5_respond (bs=0x7f8db8001838, password=0x7f8da8144e08 "469bf773b1ee39b955b8636641a55856", tls_remote_need=0x7f8dc69ddb0c, compatible=0x7f8dc69ddb0b) at cram-md5.c:118
#4 0x00007f8dd102a695 in BSOCK::two_way_authenticate (this=0x7f8db8001838, jcr=0x1d46f18, what=0x47c705 "File Daemon", name=0x7f8da8146598 "domino03.bni", password=..., tls=..., initiated_by_remote=false) at bsock.c:365
#5 0x0000000000410c7a in authenticate_outbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:146
#6 authenticate_with_file_daemon (jcr=0xe, jcr@entry=0x1d46f18) at authenticate.c:148
#7 0x00000000004247e0 in connect_to_file_daemon (jcr=0x1d46f18, retry_interval=retry_interval@entry=10, max_retry_time=<optimized out>, verbose=verbose@entry=true) at fd_cmds.c:183
#8 0x00000000004265a3 in cancel_file_daemon_job (ua=ua@entry=0x7f8db8001148, jcr=jcr@entry=0x1c73708) at fd_cmds.c:1065
#9 0x0000000000429eff in cancel_job (ua=0x7f8db8001148, jcr=0x1c73708) at job.c:697
#10 0x000000000042a225 in job_monitor_watchdog (self=0xe) at job.c:773
#11 0x00007f8dd1057497 in watchdog_thread (arg=arg@entry=0x0) at watchdog.c:282
#12 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x1d50498) at lockmgr.c:928
#13 0x00007f8dcf919064 in start_thread (arg=0x7f8dc69de700) at pthread_create.c:309
#14 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) thread 17
[Switching to thread 17 (Thread 0x7f8dc71df700 (LWP 10416))]
#0 0x00007f8dcec1faed in poll () at ../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) bt
#0 0x00007f8dcec1faed in poll () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007f8dd10227c3 in bnet_thread_server_tcp (addr_list=addr_list@entry=0x1b9eaa8, max_clients=<optimized out>, sockfds=<optimized out>, client_wq=client_wq@entry=0x6b7540 <socket_workq>, nokeepalive=<optimized out>,
    handle_client_request=handle_client_request@entry=0x442150 <handle_connection_request(void*)>) at bnet_server_tcp.c:306
#2 0x00000000004423df in connect_thread (arg=arg@entry=0x1b9eaa8) at socket_server.c:115
#3 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x1d46ec8) at lockmgr.c:928
#4 0x00007f8dcf919064 in start_thread (arg=0x7f8dc71df700) at pthread_create.c:309
#5 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) thread 18
[Switching to thread 18 (Thread 0x7f8dce722700 (LWP 10413))]
#0 0x00007f8dcf92014d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) bt
#0 0x00007f8dcf92014d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
#1 0x00007f8dd102d8f4 in bmicrosleep (sec=sec@entry=30, usec=usec@entry=0) at bsys.c:171
#2 0x00007f8dd103e76c in check_deadlock () at lockmgr.c:568
#3 0x00007f8dcf919064 in start_thread (arg=0x7f8dce722700) at pthread_create.c:309
#4 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)

(gdb) thread 20
[Switching to thread 20 (Thread 0x7f8d89ffb700 (LWP 1954))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
#2 0x00007f8dd105733e in wd_lock () at watchdog.c:329
#3 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8d78003818) at watchdog.c:194
#4 0x00007f8dd10309c4 in start_bsock_timer (bsock=0x7f8dc0009088, wait=600) at btimers.c:184
#5 0x00007f8dd102a4f0 in BSOCK::two_way_authenticate (this=0x7f8dc0009088, jcr=0x0, what=0x47c76e "Console", name=0x47c762 "*UserAgent*", password=..., tls=..., initiated_by_remote=true) at bsock.c:340
#6 0x000000000041118a in authenticate_inbound_connection (tls=..., password=..., name=<optimized out>, what=<optimized out>, jcr=<optimized out>, this=<optimized out>) at ../lib/bsock.h:151
#7 authenticate_user_agent (uac=0x7f8d78002a78) at authenticate.c:262
#8 0x000000000046f3fe in handle_UA_client_request (user=user@entry=0x7f8dc0009088) at ua_server.c:84
#9 0x00000000004421d3 in handle_connection_request (arg=0x7f8dc0009088) at socket_server.c:98
#10 0x00007f8dd1057f15 in workq_server (arg=arg@entry=0x6b7540 <socket_workq>) at workq.c:336
#11 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x7f8dc000a2a8) at lockmgr.c:928
#12 0x00007f8dcf919064 in start_thread (arg=0x7f8d89ffb700) at pthread_create.c:309
#13 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111


(gdb) thread 10
[Switching to thread 10 (Thread 0x7f8dc59dc700 (LWP 22171))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f8dd104ba9e in rwl_writelock_p (rwl=0x7f8dd1274e80 <lock>, file=<optimized out>, line=329) at rwlock.c:236
#2 0x00007f8dd105733e in wd_lock () at watchdog.c:329
#3 0x00007f8dd1057a4c in register_watchdog (wd=0x7f8da804f5b8) at watchdog.c:194
#4 0x00007f8dd10308b7 in start_thread_timer (jcr=0x1c75308, tid=140246882567936, wait=1800) at btimers.c:148
#5 0x00007f8dd102be6c in BSOCK_TCP::connect (this=0x7f8da80e9dd8, jcr=0x1c75308, retry_interval=10, max_retry_time=140247049556047, max_retry_time@entry=1800, heart_beat=140247076130304, heart_beat@entry=0,
    name=0x47c6e1 "Storage daemon", host=0x7f8d940b9d68 "backup02.bni", service=0x0, port=9103, verbose=true) at bsock_tcp.c:112
#6 0x00000000004452ce in connect_to_storage_daemon (jcr=0x1c75308, retry_interval=<optimized out>, max_retry_time=<optimized out>, verbose=verbose@entry=true) at sd_cmds.c:118
#7 0x00000000004453a8 in connect_to_storage_daemon (jcr=jcr@entry=0x1c75308, retry_interval=retry_interval@entry=10, max_retry_time=<optimized out>, verbose=verbose@entry=true) at sd_cmds.c:136
#8 0x000000000041442a in do_native_backup (jcr=jcr@entry=0x1c75308) at backup.c:434
#9 0x0000000000429654 in job_thread (arg=0x1c75308) at job.c:514
#10 0x000000000042eb71 in jobq_server (arg=0x6b7240 <job_queue>) at jobq.c:485
#11 0x00007f8dd103e7ff in lmgr_thread_launcher (x=0x1bce9f8) at lockmgr.c:928
#12 0x00007f8dcf919064 in start_thread (arg=0x7f8dc59dc700) at pthread_create.c:309
#13 0x00007f8dcec2862d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) p lock
$1 = {mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 12, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 12 times>, "\f", '\000' <repeats 26 times>,
    __align = 0}, read = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, write = {
    __data = {__lock = 0, __futex = 12, __total_seq = 12, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x7f8dd1274e80 <lock>, __nwaiters = 24, __broadcast_seq = 0},
    __size = "\000\000\000\000\f\000\000\000\f", '\000' <repeats 23 times>, "\200N'э\177\000\000\030\000\000\000\000\000\000", __align = 51539607552}, writer_id = 140246899353344, priority = 0, valid = 16435934, r_active = 0,
  w_active = 1, r_wait = 0, w_wait = 12}
(gdb)

(gdb) thread 9
[Switching to thread 9 (Thread 0x7f8dc49da700 (LWP 26418))]
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
(gdb) p lock
$2 = {mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 12, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 12 times>, "\f", '\000' <repeats 26 times>,
    __align = 0}, read = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, write = {
    __data = {__lock = 0, __futex = 12, __total_seq = 12, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x7f8dd1274e80 <lock>, __nwaiters = 24, __broadcast_seq = 0},
    __size = "\000\000\000\000\f\000\000\000\f", '\000' <repeats 23 times>, "\200N'э\177\000\000\030\000\000\000\000\000\000", __align = 51539607552}, writer_id = 140246899353344, priority = 0, valid = 16435934, r_active = 0,
  w_active = 1, r_wait = 0, w_wait = 12}
(gdb)
System Description
Attached Files:
Notes
(0003504)
arogge   
2019-07-19 15:15   
Looks like the watchdog got stuck.

How long did this situation persist?
Is this reproducible somehow?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1091 [bareos-core] director minor always 2019-06-11 13:41 2019-07-15 13:22
Reporter: isi Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: NDMP to NDMP Copy Job Fails
Description: Backup to NDMPFile Pool works.
Backup to NDMPTape Pool works.

Copy from NDMPTape to NDMPCopy Pool gives this error:

Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 bareos-dir JobId 253: Start Copying JobId 253, Job=copy-asterix-lka-copypool.2019-06-11_13.23.24_16
 bareos-dir JobId 253: Fatal error: Connect failure: ERR=error:1408F10B:SSL routines:ssl3_get_record:wrong version number
 bareos-dir JobId 253: Error: TLS shutdown failure.: ERR=error:140E0197:SSL routines:SSL_shutdown:shutdown while in init
 bareos-dir JobId 253: Fatal error: TLS negotiation failed
 bareos-dir JobId 253: Error: Bareos bareos-dir 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default debian Debian GNU/Linux 9.7 (stretch)
  Prev Backup JobId: 242
  Prev Backup Job: asterix-lka-ndmp-job.2019-06-11_11.53.23_48
  New Backup JobId: 254
  Current JobId: 253
  Current Job: copy-asterix-lka-copypool.2019-06-11_13.23.24_16
  Backup Level: Full
  Client: asterix-ndmp
  FileSet: "asterix-lka-ndmp-fs"
  Read Pool: "NDMPTape" (From Job resource)
  Read Storage: "NDMPTape" (From Pool resource)
  Write Pool: "NDMPCopy" (From Job Pool's NextPool resource)
  Write Storage: "NDMPCopy" (From Storage from Pool's NextPool resource)
  Next Pool: "NDMPCopy" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Default catalog)
  Start time: 11-Jun-2019 13:23:26
  End time: 11-Jun-2019 13:23:26
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Copying Error ***

No TLS settings are present in any config file; the TLS configuration is at its defaults.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003421)
arogge   
2019-07-09 09:04   
Is this a local (on the same sd) or a remote (to a different sd) copy?

Can you please check whether the issue still exists in our nightly build: https://download.bareos.org/bareos/experimental/nightly/
(0003428)
isi   
2019-07-09 16:24   
It's local, on the same sd. Will try the nightly and report back. Thanks.
(0003429)
isi   
2019-07-10 12:29   
No difference with the nightly version.

09-Jul 16:31 bareos-dir JobId 431: Copying using JobId=413 Job=asterix-lka-ndmp-job.2019-07-07_09.00.00_16
09-Jul 16:31 bareos-dir JobId 431: Bootstrap records written to /var/lib/bareos/bareos-dir.restore.1.bsr
09-Jul 16:31 bareos-dir JobId 431: Start Copying JobId 431, Job=copy-asterix-lka-copypool.2019-07-09_16.31.30_10
09-Jul 16:31 bareos-dir JobId 431: Fatal error: Connect failure: ERR=error:1408F10B:SSL routines:ssl3_get_record:wrong version number
09-Jul 16:31 bareos-dir JobId 431: Error: TLS shutdown failure.: ERR=error:140E0197:SSL routines:SSL_shutdown:shutdown while in init
09-Jul 16:31 bareos-dir JobId 431: Fatal error: TLS negotiation failed
09-Jul 16:31 bareos-dir JobId 431: Error: Bareos bareos-dir 19.1.2 (01Feb19):
  Build OS: Linux-4.4.175-89-default debian Debian GNU/Linux 9.9 (stretch)
  Prev Backup JobId: 413
  Prev Backup Job: asterix-lka-ndmp-job.2019-07-07_09.00.00_16
  New Backup JobId: 432
  Current JobId: 431
  Current Job: copy-asterix-lka-copypool.2019-07-09_16.31.30_10
  Backup Level: Full
  Client: asterix-ndmp
  FileSet: "asterix-lka-ndmp-fs"
  Read Pool: "NDMPTape" (From Job resource)
  Read Storage: "NDMPTape" (From Pool resource)
  Write Pool: "NDMPCopy" (From Job Pool's NextPool resource)
  Write Storage: "NDMPCopy" (From Storage from Pool's NextPool resource)
  Next Pool: "NDMPCopy" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Default catalog)
  Start time: 09-Jul-2019 16:31:32
  End time: 09-Jul-2019 16:31:32
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Copying Error ***
(0003430)
arogge   
2019-07-10 17:02   
That's weird. Our testsuite should have tested this before release.
Can you provide your storage configuration for the two storages and paired storages, so I can test this?
(0003437)
isi   
2019-07-11 06:31   
Storage {
  Name = NDMPCopy
  Address = 10.210.0.88 # N.B. Use a fully qualified name here
  Port = 10000
  Protocol = NDMPv4 # Need to specify protocol before password as protocol determines password encoding used.
  Auth Type = Clear # Clear == Clear Text, MD5 == Challenge protocol
  Username = ndmp # username of the NDMP user on the TAPE AGENT e.g. the Bareos SD but accessed via the NDMP protocol.
  Password = xxxxxed # password of the NDMP user on the TAPE AGENT e.g. the Bareos SD but accessed via the NDMP protocol.
  Device = scalar-i500
  Media Type = LTO-5
  PairedStorage = i500
  Maximum Concurrent Jobs = 6
}
Storage {
  Name = NDMPTape
  Address = 10.210.0.88 # N.B. Use a fully qualified name here
  Port = 10000
  Protocol = NDMPv4 # Need to specify protocol before password as protocol determines password encoding used.
  Auth Type = Clear # Clear == Clear Text, MD5 == Challenge protocol
  Username = ndmp # username of the NDMP user on the TAPE AGENT e.g. the Bareos SD but accessed via the NDMP protocol.
  Password = xxxxxed # password of the NDMP user on the TAPE AGENT e.g. the Bareos SD but accessed via the NDMP protocol.
  Device = scalar-i500
  Media Type = LTO-5
  PairedStorage = i500
  Maximum Concurrent Jobs = 6
}

Storage {
  Name = i500
  Address = bb8.elkb.de # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = xxxxxxed
  Device = scalar-i500
  Media Type = LTO-5
  AutoChanger = yes
  Maximum Concurrent Jobs = 6
}
(0003438)
isi   
2019-07-11 06:39   
OK, now that I look at it, I will change the IP addresses into names and test again.
(0003443)
arogge   
2019-07-11 14:43   
As far as I can tell, you should have only two storages: the Bareos storage "i500" and *one* paired NDMP storage.
I think the second NDMP storage leads to confusion inside Bareos.
(0003445)
isi   
2019-07-12 06:41   
When I try to copy my tape backups to a copy tape for transfer to the vault,
I get this:

12-Jul 06:22 bareos-dir JobId 452: Fatal error: JobId 413 cannot Copying using the same read and write storage.
12-Jul 06:22 bareos-dir JobId 452: Error: Bareos bareos-dir 19.1.2 (01Feb19):
 
That's why I added a second NDMP Storage in the first place.

I tested with an i500-copy storage paired to NDMPCopy. It also gives the TLS error.
So it looks like it's not possible at the moment to have an NDMP copy pool? Or am I doing it wrong?
(0003454)
arogge   
2019-07-12 13:32   
Our testsuite uses the following:
Configure two NDMP storages in the director (copy source and copy target), each with its own Bareos storage as Paired Storage.
The Bareos storages may point to the same Device on the same SD.

This is what our automated test does. As 18.2.5 has of course been tested with the testsuite before release, I'm quite sure this is some kind of configuration issue (and maybe a lack of proper documentation for NDMP).
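
For illustration, a minimal sketch of that testsuite layout; every name, address, and password below is a placeholder, not taken from this report:

# Director: two NDMP storages, each paired with its own Bareos storage
Storage {
  Name = NDMPSource
  Address = sd-host.example.com
  Port = 10000
  Protocol = NDMPv4
  Auth Type = Clear
  Username = ndmp
  Password = ndmp-secret
  Device = FileDevice
  Media Type = File
  PairedStorage = BareosSource
}
Storage {
  Name = NDMPTarget
  Address = sd-host.example.com
  Port = 10000
  Protocol = NDMPv4
  Auth Type = Clear
  Username = ndmp
  Password = ndmp-secret
  Device = FileDevice      # may be the same device on the same SD
  Media Type = File
  PairedStorage = BareosTarget
}
Storage {
  Name = BareosSource
  Address = sd-host.example.com
  Password = sd-secret
  Device = FileDevice
  Media Type = File
}
Storage {
  Name = BareosTarget
  Address = sd-host.example.com
  Password = sd-secret
  Device = FileDevice
  Media Type = File
}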
(0003456)
isi   
2019-07-12 13:47   
Thanks for your time and your support. I will try as you suggested.
Case can be closed at your convenience.
(0003457)
arogge   
2019-07-12 13:53   
I'd really appreciate your feedback on whether you got it working or not, so we know exactly what to add to the documentation.
(0003460)
isi   
2019-07-15 08:37   
Can't get this to work. If I disable TLS I get different error messages like:

15-Jul 08:27 bareos-dir JobId 531: Fatal error: Password encoding is not MD5. You are probably restoring a NDMP Backup with a restore job not configured for NDMP protocol.
15-Jul 08:27 bareos-dir JobId 531: Fatal error: Director unable to authenticate with Storage daemon at "bb8-storage.elkb.de:10000". Possible causes:
Passwords or names not the same or TLS negotiation problem or Maximum Concurrent Jobs exceeded on the SD or SD networking messed up (restart daemon).

Changed the encoding to MD5, double-checked Username, Password and Maximum Concurrent Jobs.

Have to give up for now.
(0003464)
arogge   
2019-07-15 10:00   
Could you also provide your Pool, Job (and if applicable JobDefs) configuration?
This should be simple and work out of the box.
(0003468)
isi   
2019-07-15 10:13   
sure:
Pool {
  Name = NDMPTape
     Pool Type = Backup
     Recycle = yes
     AutoPrune = yes
     Storage = NDMPTape
     Recycle Pool = Scratch
     Maximum Block Size = 1048576
     Volume Retention = 6 Month
     Next Pool = NDMPCopy
}
Pool {
  Name = NDMPCopy
     Pool Type = Backup
     Recycle = yes
     AutoPrune = yes
     Storage = NDMPCopy
     Recycle Pool = Scratch
     Maximum Block Size = 1048576
     Volume Retention = 6 Month
}

Job {
  Name = copy-asterix-lka-copypool
  client = None
  Type = Copy
  Messages = Standard
  Pool = NDMPTape
  File Set = None
  Schedule = sun-22
  Selection Type = SQL Query
  Selection Pattern = "
    SELECT MAX(jobid)
          FROM job
          WHERE name='asterix-lka-ndmp-job'
      AND type='B'
      AND level='F'
      AND jobstatus='T';
"
  Enabled = no
}
# Fake client for copy jobs
Client {
  Name = None
  Address = localhost
  Protocol = NDMPv4
  Auth Type = Clear
  Username = ndmp-copy
  Password = xxxxed
  Catalog = MyCatalog
  Enabled = yes
}

# Fake fileset for copy jobs
Fileset {
  Name = None
  Include {
    Options {
    }
  }
}
# ndmp copy agent in bareos-sd.d/ndmp
Ndmp {
  Name = None
  Username = ndmp-copy
  Password = xxxxed
  AuthType = Clear
}
(0003469)
arogge   
2019-07-15 10:31   
A copy Job doesn't require a Client, so you can remove the directive.
If you want to keep the dummy, remove the NDMP settings. Even though this is an NDMP Backup copying it is not a NDMP operation.
(0003473)
isi   
2019-07-15 11:06   
(Last edited: 2019-07-15 11:59)
>> Even though this is an NDMP Backup copying it is not a NDMP operation. <<

This was the right hint!!! If it's not NDMP, the copy storage must look like this:

Storage {
  Name = NDMPCopy
  Address = bb8.elkb.de # N.B. Use a fully qualified name here
  Password = "xxxxxed"
  Device = scalar-i500
  Media Type = LTO-5
  PairedStorage = i500-copy
  Maximum Concurrent Jobs = 6
}

You made my day!!!
It's loading tapes and copying blocks. So cool, thank you very much.

(0003474)
isi   
2019-07-15 12:13   
One doesn't even need a different paired storage; it also works with:
PairedStorage = i500
(0003475)
arogge   
2019-07-15 12:28   
Do you have any idea what we could add to the documentation so nobody else runs into this issue?
(0003477)
isi   
2019-07-15 12:52   
I was so focused on NDMP that it was not clear to me that copying
an NDMP backup is not an NDMP operation and that the copy storage
is just a "normal" Bareos storage.

No need for a fake Client and a fake Fileset, and no need for a 2nd
ndmp_tape_agent. No NDMP is involved here at all.
(0003478)
arogge   
2019-07-15 13:13   
Added a note to the NDMP documentation so this case becomes clearer.
(0003479)
arogge   
2019-07-15 13:22   
Fix committed to bareos master branch with changesetid 11595.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
988 [bareos-core] General major always 2018-07-19 11:57 2019-07-15 12:52
Reporter: IvanBayan Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Truncate command does nothing
Description: Looks like bareos simply ignores the truncate command. Since the truncate option was removed from the purge command, it's impossible to truncate a volume from the CLI:
Tags:
Steps To Reproduce: 1. Purge volumes
2. Check that they were purged.
# echo "list volumes pool=ow-backup03_veeam_full"|bconsole|tail -n 4
| 390 | ow-backup01-fd-ow-backup03_veeam_full-20170711-2119-423 | Purged | 1 | 10,737,377,563 | 2 | 1,296,000 | 0 | 0 | 0 | File_ow-backup03_dev-03 | 2017-07-11 21:20:58 | ow-backup03_dev-03-sd |
| 391 | ow-backup01-fd-ow-backup03_veeam_full-20170711-2120-423 | Purged | 1 | 4,858,174,249 | 1 | 1,296,000 | 0 | 0 | 0 | File_ow-backup03_dev-03 | 2017-07-11 21:21:47 | ow-backup03_dev-03-sd |
+---------+---------------------------------------------------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-------------------------+---------------------+-----------------------+
3. Invoke truncate
# echo "truncate volstatus=Purged pool=ow-backup03_veeam_full yes"|bconsole
Connecting to Director localhost:9101
1000 OK: mia-backup03-dir Version: 17.2.4 (21 Sep 2017)
Enter a period to cancel a command.
truncate volstatus=Purged pool=ow-backup03_veeam_full yes
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
No results to list.
You have messages.
4. Check that the volumes stay untouched:
# ls -lh /mnt/storage-dev-03/bareos/storage/ow-backup01-fd-ow-backup03_veeam_full-20170711-21{19,20}*
-rw-r----- 1 bareos bareos 10G Jul 11 2017 /mnt/storage-dev-03/bareos/storage/ow-backup01-fd-ow-backup03_veeam_full-20170711-2119-423
-rw-r----- 1 bareos bareos 4.6G Jul 11 2017 /mnt/storage-dev-03/bareos/storage/ow-backup01-fd-ow-backup03_veeam_full-20170711-2120-423
Additional Information:
System Description
Attached Files:
Notes
(0003078)
aron_s   
2018-07-20 13:20   
(Last edited: 2018-07-20 13:20)
Could you try to specify each of the volumes when truncating them?
The usage of the truncate command is: truncate volstatus=Purged [storage=<storage>] [pool=<pool>] [volume=<volume>] [yes], so try

truncate volstatus=Purged pool=ow-backup03_veeam_full volume=ow-backup01-fd-ow-backup03_veeam_full-20170711-2119-423 yes

truncate volstatus=Purged pool=ow-backup03_veeam_full volume=ow-backup01-fd-ow-backup03_veeam_full-20170711-2120-423 yes

(0003079)
IvanBayan   
2018-07-20 13:42   
I've already deleted those volumes, but I ran a test for you with a different volume and got the same result:
# ls -lh mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00
-rw-r----- 1 bareos bareos 3.7G Sep 30 2017 mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00


*purge volume=mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00

This command can be DANGEROUS!!!
.....
This command requires full access to all resources.
1 File on Volume "mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00" purged from catalog.
There are no more Jobs associated with Volume "mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00". Marking it purged.

*list volume=mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00
+---------+-----------------------------------------------------------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------------------------+---------------------+---------------------------+
| mediaid | volumename | volstatus | enabled | volbytes | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten | storage |
+---------+-----------------------------------------------------------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------------------------+---------------------+---------------------------+
| 413 | mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00 | Purged | 1 | 3,903,535,449 | 0 | 5,184,000 | 0 | 0 | 0 | File_mia-backup03_storage03 | 2017-09-30 21:00:36 | mia-backup03_storage03-sd |
+---------+-----------------------------------------------------------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------------------------+---------------------+---------------------------+
# ls -lh mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00
-rw-r----- 1 bareos bareos 3.7G Sep 30 2017 mia-backup03-fd-mia-backup03_mixed_differential-20170930-568-00

So, specifying the volume in the truncate command makes no difference.
(0003080)
aron_s   
2018-07-20 13:43   
As a proof of functionality, I tried this myself with a file storage and it worked flawlessly.

$ echo "list volumes pool=truncate-test"|bconsole|tail -n 4
| mediaid | volumename | volstatus | enabled | volbytes | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten | storage |
+---------+--------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-----------+---------------------+--------------------------+
| 3 | Truncate-001 | Purged | 1 | 22,947,582 | 0 | 31,536,000 | 1 | 0 | 0 | File | 2018-07-20 13:28:00 | ubuntu1604-VirtualBox-sd |
+---------+--------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-----------+---------------------+--------------------------+
$ ls -lh /var/lib/bareos/storage/Truncate-001
-rw-r----- 1 bareos bareos 22M Jul 20 13:28 /var/lib/bareos/storage/Truncate-001
$ echo "truncate volstatus=Purged pool=truncate-test yes"|bconsole
[...]
Sending relabel command from "Truncate-001" to "Truncate-001" ...
3000 OK label. VolBytes=231 Volume="Truncate-001" Device="FileStorage" (/var/lib/bareos/storage)
The volume 'Truncate-001' has been truncated.
$ ls -lh /var/lib/bareos/storage/Truncate-001
-rw-r----- 1 bareos bareos 231 Jul 20 13:38 /var/lib/bareos/storage/Truncate-001
(0003081)
IvanBayan   
2018-07-20 15:08   
So, is there a way I can help you with debugging?
*version
mia-backup03-dir Version: 17.2.4 (21 Sep 2017) x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS xUbuntu_16.04 x86_64
*status storage=mia-backup03_storage03-sd
Connecting to Storage daemon mia-backup03_storage03-sd at mia-backup03.int:9103

mia-backup03_storage03-sd Version: 17.2.4 (21 Sep 2017) x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
(0003082)
aron_s   
2018-07-20 23:18   
I will look into the changes between Bareos 18.2 and 17.2 when it comes to truncating. If there are none, it must be something outside of Bareos.
(0003476)
arogge   
2019-07-15 12:52   
Please try to reproduce with the latest nightly build from https://download.bareos.org/bareos/experimental/nightly/

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
795 [bareos-core] director minor have not tried 2017-03-10 07:15 2019-07-15 10:50
Reporter: wilful Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7.1  
Status: feedback Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Invalid name for volume
Description: Hi,

I have several servers with customized Bareos clients.
Recently I noticed that during a night backup, one file incorrectly creates a volume file.

Installed packages:

bareos-bconsole-16.2.4-12.1.el7.x86_64
bareos-filedaemon-16.2.4-12.1.el7.x86_64
bareos-database-common-16.2.4-12.1.el7.x86_64
bareos-director-16.2.4-12.1.el7.x86_64
bareos-common-16.2.4-12.1.el7.x86_64
bareos-storage-16.2.4-12.1.el7.x86_64
bareos-client-16.2.4-12.1.el7.x86_64
bareos-database-mysql-16.2.4-12.1.el7.x86_64
bareos-database-tools-16.2.4-12.1.el7.x86_64
bareos-16.2.4-12.1.el7.x86_64
bareos-webui-16.2.4-31.1.el7.noarch
python-bareos-0.3.1487336620.94de78e-42.1.el7.x86_64

Installed from official repositories

Clients installed via bconsole

For example:
configure add client name=$NAME address=$IP password=$PW
configure add job name=$NAME-mysql client=$NAME jobdefs=StandardLinuxJobMysql

client]# cat client1.conf client2.conf
Client {
  Name = client1
  Address = ip1
  Password = ***
}
Client {
  Name = client2
  Address = ip2
  Password = ***
}

job]# cat *client1* *client2*
Job {
  Name = client2-common
  Client = client2
  JobDefs = StandardLinuxJobCommon
}
Job {
  Name = client2-content
  Client = client2
  JobDefs = StandardLinuxJobContent
}
Job {
  Name = client2-mysql
  Client = client2
  JobDefs = StandardLinuxJobMysql
}
Job {
  Name = client1-common
  Client = client1
  JobDefs = StandardLinuxJobCommon
  Storage = another-storage
}
Job {
  Name = client1-content
  Client = client1
  JobDefs = StandardLinuxJobContent
  Storage = another-storage
}
Job {
  Name = client1-mysql
  Client = client1
  JobDefs = StandardLinuxJobMysql
  Storage = another-storage
}

jobdefs]# cat StandardLinuxJob*
JobDefs {
    Name = StandardLinuxJobCommon
    Type = Backup
    FileSet = StandardLinuxFileSetCommon
    Storage = first-storage
    Pool = DefaultCommonPool
    Full Backup Pool = FullCommonPool
    Incremental Backup Pool = IncCommonPool
    Differential Backup Pool = DiffCommonPool
    Prefer Mounted Volumes = no
    Write Bootstrap = "%c_%n.bsr"
    Schedule = PrimaryBackupCycle
    Priority = 7
    Messages = Standard
    Maximum Bandwidth = 10Mb/s
    Maximum Concurrent Jobs = 2
}
JobDefs {
    Name = StandardLinuxJobContent
    Type = Backup
    FileSet = StandardLinuxFileSetContent
    Storage = first-storage
    Pool = DefaultContentPool
    Full Backup Pool = FullContentPool
    Incremental Backup Pool = IncContentPool
    Differential Backup Pool = DiffContentPool
    Prefer Mounted Volumes = no
    Write Bootstrap = "%c_%n.bsr"
    Schedule = PrimaryBackupCycle
    Priority = 10
    Messages = Standard
    Maximum Bandwidth = 10Mb/s
    Maximum Concurrent Jobs = 2
}
JobDefs {
    Name = StandardLinuxJobMysql
    Client Run Before Job = /root/scripts/templates/bacula/mysqldump.sh
    Type = Backup
    FileSet = StandardLinuxFileSetMysql
    Storage = first-storage
# Level = Full
    Pool = DefaultMysqlPool
    Full Backup Pool = FullMysqlPool
    Incremental Backup Pool = IncMysqlPool
    Differential Backup Pool = DiffMysqlPool
    Prefer Mounted Volumes = no
    Write Bootstrap = "%c_%n.bsr"
    Schedule = PrimaryBackupCycle
    Priority = 10
    Messages = Standard
    Maximum Bandwidth = 10Mb/s
}
See what happens:
JobId 1355 is a backup for the client "client1", yet it creates a volume named "client2-IncMysqlPool-20170310-0012-1360"

bareos-dir JobId 1355: Bareos bareos-dir 16.2.4 (01Jul16):
Build OS: x86_64-redhat-linux-gnu redhat Red Hat Enterprise Linux Server release 7.0 (Maipo)
JobId: 1355
Job: client1-mysql.2017-03-10_00.05.14_19
Backup Level: Incremental, since=2017-03-09 00:11:56
Client: "client1" 16.2.4 (01Jul16) x86_64-redhat-linux-gnu,redhat,Red Hat Enterprise Linux Server release 6.5 (Santiago),RHEL_6,x86_64
FileSet: "StandardLinuxFileSetMysql" 2017-02-16 10:53:08
Pool: "IncMysqlPool" (From Job IncPool override)
Catalog: "MyCatalog" (From Client resource)
Storage: "another-storage" (From Job resource)
Scheduled time: 10-Mar-2017 00:05:14
Start time: 10-Mar-2017 00:12:23
End time: 10-Mar-2017 00:12:28
Elapsed time: 5 secs
Priority: 10
FD Files Written: 3
SD Files Written: 3
FD Bytes Written: 35,636,512 (35.63 MB)
SD Bytes Written: 35,636,896 (35.63 MB)
Rate: 7127.3 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s): client2-IncMysqlPool-20170310-0012-1360
Volume Session Id: 26
Volume Session Time: 1488542340
Last Volume Bytes: 35,664,110 (35.66 MB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK

bareos-sd JobId 1355: Labeled new Volume "client2-IncMysqlPool-20170310-0012-1360" on device "storage-device-01" (/mnt/backup/bareos-dir)

After that, Client2 "creates" a file with the same name:

bareos-dir JobId 1360: Bareos bareos-dir 16.2.4 (01Jul16):
Build OS: x86_64-redhat-linux-gnu redhat Red Hat Enterprise Linux Server release 7.0 (Maipo)
JobId: 1360
Job: client2-mysql.2017-03-10_00.05.14_24
Backup Level: Incremental, since=2017-03-09 00:20:51
Client: "client2" 16.2.4 (01Jul16) x86_64-redhat-linux-gnu,redhat,Red Hat Enterprise Linux Server release 6.5 (Santiago),RHEL_6,x86_64
FileSet: "StandardLinuxFileSetMysql" 2017-02-16 10:53:08
Pool: "IncMysqlPool" (From Job IncPool override)
Catalog: "MyCatalog" (From Client resource)
Storage: "first-storage" (From Job resource)
Scheduled time: 10-Mar-2017 00:05:14
Start time: 10-Mar-2017 00:20:36
End time: 10-Mar-2017 00:23:09
Elapsed time: 2 mins 33 secs
Priority: 10
FD Files Written: 4
SD Files Written: 4
FD Bytes Written: 1,458,860,749 (1.458 GB)
SD Bytes Written: 1,458,861,269 (1.458 GB)
Rate: 9535.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s): client2-IncMysqlPool-20170310-0020-1360
Volume Session Id: 260
Volume Session Time: 1488775384
Last Volume Bytes: 1,459,943,955 (1.459 GB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK

bareos-sd JobId 1360: Labeled new Volume "client2-IncMysqlPool-20170310-0020-1360" on device "storage-device-02" (/mnt/backup/bareos-dir).

I cannot understand how this is possible, why it happens, or how to deal with it, and I would appreciate your help.

This happens only at night, according to the planned schedule. If I start the jobs manually, everything is named correctly.
Schedule {
  Name = "PrimaryBackupCycle"
  run = Full 1st Sun at 00:05
  run = Differential 2nd-5th Sun at 00:05
  run = Incremental Mon-Sat at 00:05
}
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002613)
wilful   
2017-03-20 05:58   
I found the problem.

If a job fails, the next job uses the prelabeled volume from the failed job.

Can I change this behavior? A new label and volume are needed.
(0003472)
arogge   
2019-07-15 10:50   
These two are not the same (0012 vs. 0020):
client2-IncMysqlPool-20170310-0012-1360
client2-IncMysqlPool-20170310-0020-1360

So what is the issue?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
767 [bareos-core] file daemon minor always 2017-01-16 11:57 2019-07-15 10:35
Reporter: rhillmann Platform: Windows  
Assigned To: arogge OS: 2012 R2  
Priority: high OS Version: 8  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: not fixable  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: VSS Snapshot timeouts - VSS_WS_FAILED_AT_FREEZE
Description: Since the update from 14.2 to 16.2, the VSS writers (SQL, Exchange) run into a timeout when preparing the VSS snapshot.
Tags:
Steps To Reproduce:
Additional Information:  2017-01-14 12:00:02 heliumguest1.a2.rz1.intelliad.com JobId 10019: Start Backup JobId 10019, Job=Backup_exchange-1.2017-01-14_12.00.00_59
 2017-01-14 12:00:02 heliumguest1.a2.rz1.intelliad.com JobId 10019: Using Device "FileStorage1" to write.
 2017-01-14 12:00:03 silicon.b3.rz1.intelliad.com JobId 10019: Recycled volume "Full-0032" on device "FileStorage1" (/data/bareos), all previous data lost.
 2017-01-14 12:00:03 heliumguest1.a2.rz1.intelliad.com JobId 10019: Max Volume jobs=1 exceeded. Marking Volume "Full-0032" as Used.
 2017-01-14 12:00:02 exchange-1 JobId 10019: Created 25 wildcard excludes from FilesNotToBackup Registry key
 2017-01-14 12:00:11 exchange-1 JobId 10019: Generate VSS snapshots. Driver="Win64 VSS", Drive(s)="CD"
 2017-01-14 12:00:12 exchange-1 JobId 10019: VolumeMountpoints are not processed as onefs = yes.
 2017-01-14 12:00:13 exchange-1 JobId 10019: VolumeMountpoints are not processed as onefs = yes.
 2017-01-14 12:00:47 exchange-1 JobId 10019: Warning: VSS Writer "Cluster Shared Volume VSS Writer" has invalid state. ERR=The writer vetoed the shadow copy creation process during the PrepareForSnapshot state.
 2017-01-14 12:00:47 exchange-1 JobId 10019: Warning: VSS Writer "Cluster Database" has invalid state. ERR=The writer vetoed the shadow copy creation process during the freeze state.
 2017-01-14 12:00:47 exchange-1 JobId 10019: VSS Writer (PrepareForBackup): "Cluster Shared Volume VSS Writer", State: 0x8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
 2017-01-14 12:00:47 exchange-1 JobId 10019: VSS Writer (PrepareForBackup): "Cluster Database", State: 0x9 (VSS_WS_FAILED_AT_FREEZE)
 2017-01-14 14:49:53 exchange-1 JobId 10019: Warning: VSS Writer "Cluster Shared Volume VSS Writer" has invalid state. ERR=The writer vetoed the shadow copy creation process during the PrepareForSnapshot state.
 2017-01-14 14:49:53 exchange-1 JobId 10019: Warning: VSS Writer "Cluster Database" has invalid state. ERR=The writer vetoed the shadow copy creation process during the freeze state.
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "Task Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "VSS Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "System Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "Microsoft Exchange Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: Warning: VSS Writer (BackupComplete): "Cluster Shared Volume VSS Writer", State: 0x8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "ASR Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "IIS Config Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "Registry Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "BITS Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "MSMQ Writer (MSMQ)", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "WMI Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "IIS Metabase Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: VSS Writer (BackupComplete): "COM+ REGDB Writer", State: 0x1 (VSS_WS_STABLE)
 2017-01-14 14:50:52 exchange-1 JobId 10019: Warning: VSS Writer (BackupComplete): "Cluster Database", State: 0x9 (VSS_WS_FAILED_AT_FREEZE)
 2017-01-14 14:50:55 silicon.b3.rz1.intelliad.com JobId 10019: Elapsed time=02:50:52, Transfer rate=52.03 M Bytes/second
 2017-01-14 14:51:15 heliumguest1.a2.rz1.intelliad.com JobId 10019: Bareos heliumguest1.a2.rz1.intelliad.com 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 14.04 LTS
  JobId: 10019
  Job: Backup_exchange-1.2017-01-14_12.00.00_59
  Backup Level: Full
  Client: "exchange-1" 16.2.4 (01Jul16) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
  FileSet: "Exchange2013" 2015-11-11 17:00:25
  Pool: "Full" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "silicon.b3.rz1.intelliad.com" (From Job resource)
  Scheduled time: 14-Jan-2017 12:00:00
  Start time: 14-Jan-2017 12:00:02
  End time: 14-Jan-2017 14:51:15
  Elapsed time: 2 hours 51 mins 13 secs
  Priority: 10
  FD Files Written: 399,355
  SD Files Written: 399,355
  FD Bytes Written: 533,371,993,313 (533.3 GB)
  SD Bytes Written: 533,465,108,884 (533.4 GB)
  Rate: 51919.8 KB/s
  Software Compression: None
  VSS: yes
  Encryption: no
  Accurate: no
  Volume name(s): Full-0032
  Volume Session Id: 293
  Volume Session Time: 1483445651
  Last Volume Bytes: 533,873,789,876 (533.8 GB)
  Non-fatal FD errors: 2
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: Backup OK -- with warnings

 2017-01-14 14:51:15 heliumguest1.a2.rz1.intelliad.com JobId 10019: Begin pruning Jobs older than 6 months .
 2017-01-14 14:51:15 heliumguest1.a2.rz1.intelliad.com JobId 10019: No Jobs found to prune.
 2017-01-14 14:51:15 heliumguest1.a2.rz1.intelliad.com JobId 10019: Begin pruning Files.
 2017-01-14 14:51:15 heliumguest1.a2.rz1.intelliad.com JobId 10019: No Files found to prune.
 2017-01-14 14:51:15 heliumguest1.a2.rz1.intelliad.com JobId 10019: End auto prune.
System Description
Attached Files:
Notes
(0003470)
arogge   
2019-07-15 10:35   
This is a problem with VSS. You need to check your Windows Event Log and make sure the Writer works correctly.
There's not much we can do to make this work.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1006 [bareos-core] storage daemon crash always 2018-09-12 12:45 2019-07-15 10:33
Reporter: Martin Svec Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Storage daemon segfaults in update_job_statistics when starting scheduled jobs
Description: If job statistics are turned on and multiple backup jobs are scheduled and started at the same time, bareos-sd 17.2.4 almost always segfaults. When started in gdb, I got the following bareos-sd backtrace:

(gdb) bt
#0 0x000055555556c991 in update_job_statistics (jcr=0x7fffbc001078, now=1536531334) at sd_stats.c:296
#1 0x000055555556cc93 in statistics_thread_runner (arg=arg@entry=0x0) at sd_stats.c:386
#2 0x00007ffff74add9f in lmgr_thread_launcher (x=0x5555557b7328) at lockmgr.c:928
#3 0x00007ffff6309494 in start_thread (arg=0x7fffeeffd700) at pthread_create.c:333
#4 0x00007ffff519aacf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97

As a workaround, it suffices to set "Collect Job Statistics = no".
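
A minimal sketch of the workaround, assuming the directive belongs in the Storage resource of the bareos-sd configuration (the resource name and comment are placeholders):

Storage {
  Name = bareos-sd
  # Workaround: disable job statistics collection so the
  # statistics thread no longer enters the crashing code path.
  Collect Job Statistics = no
}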
Tags:
Steps To Reproduce: Turn on job statistics, schedule multiple backup-to-disk jobs using one common Schedule, and let the director start all the jobs according to the Schedule. Note that the concurrency is important. If the jobs are started manually, one by one, the bug doesn't occur.
Additional Information: Our setup: Bareos 17.2.4 installed from bareos.org Debian packages (http://download.bareos.org/bareos/release/latest/Debian_9.0/). Director, storage daemon and webui are all on one physical server, catalog is in MySQL database. Backups are stored on a local disk device. Storage daemon has twenty identical disk Device resources, pointing to the same location:

Device {
  Name = Disk-Bareos-01-000
  Media Type = Disk
  Device Type = File
  Archive Device = /backup
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

...

Device {
  Name = Disk-Bareos-01-019
  Media Type = Disk
  Device Type = File
  Archive Device = /backup
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}


We have 88 jobs and four different nightly Schedules; Maximum Concurrent Jobs is set to 20 both in the director and in the storage daemon. That is, the director typically starts up to 20 jobs at the same time.
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1031 [bareos-core] file daemon crash always 2018-12-09 22:55 2019-07-15 10:11
Reporter: RoyK Platform: Ubuntu  
Assigned To: arogge OS: Linux  
Priority: high OS Version: 18.04  
Status: resolved Product Version: 18.4.1  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: If attempting to back up to a bareos 16 director (running debian stretch), fd segfaults
Description: An attempt to back up Ubuntu 18.04 with bareos-fd 18.x (both release and daily) to Bareos 16 on Debian Stretch first results in an error establishing TLS; after disabling TLS in the FD, the backup kills the FD with a SIGSEGV. I had this on one VM some time back, but thought 'oh well'; now I have another, and this one is more important.
Tags:
Steps To Reproduce: Try to back up Ubuntu 18.04 with bareos-fd v18 to Debian running bareos-dir 16. I have both dir and sd running on this machine.
Additional Information:
I would guess the problem is between bareos versions and not directly related to distro version, although I really don't know. I haven't been able to create a dump/backtrace.
Attached Files:
Notes
(0003467)
arogge   
2019-07-15 10:11   
A newer Director and SD with an older FD is OK; a newer FD with an older Director and SD is not.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1032 [bareos-core] General minor always 2018-12-20 17:40 2019-07-15 10:09
Reporter: ghost Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: End Time is always 01-Jan-1970 01:00:00
Description: Hi,
I'm on a fresh install of Debian and I get a summary like this on every job.
See the lines "End time", "FD Files Written" and "FD Bytes Written".

2018-12-20 16:54:08 bareos-dir JobId 4: Bareos bareos-dir 17.2.4 (21Sep17):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 9.3 (stretch)
  JobId: 4
  Job: backup-backup001-fd.2018-12-20_16.51.43_17
  Backup Level: Full (upgraded from Incremental)
  Client: "backup001-fd" 17.2.4 (21Sep17) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 9.3 (stretch),Debian_9.0,x86_64
  FileSet: "VollBackup-Set" 2018-12-20 16:51:43
  Pool: "Full" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Job resource)
  Scheduled time: 20-Dec-2018 16:51:43
  Start time: 20-Dec-2018 16:51:45
  End time: 01-Jan-1970 01:00:00
  Elapsed time: -49 years -715827882 months -20 days -15 hours -5
  Priority: 10
  FD Files Written: 0
  SD Files Written: 73,100
  FD Bytes Written: 0 (0 B)
  SD Bytes Written: 1,536,694,542 (1.536 GB)
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s): Full-0001
  Volume Session Id: 6
  Volume Session Time: 1545313482
  Last Volume Bytes: 1,540,116,160 (1.540 GB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK

The other point, and I think there might be a connection: in the web-ui under restore I don't see any files.
With bconsole "list files jobid=X" I can see the files.
Tags:
Steps To Reproduce:
Additional Information: I've already recreated the catalog, i.e. dropped the DB and recreated it.

environment:

- all clients are Debian 9 in a LAN (no encryption)
- I changed the locale on the director and some clients from de_DE.UTF-8 to en_US.UTF-8 for testing; no success
- the database is on a MariaDB (10.1.37) Galera cluster behind a MaxScale (2.2.17) with read-write split
System Description
Attached Files:
Notes
(0003466)
arogge   
2019-07-15 10:09   
Please try to reproduce the issue with the latest build from https://download.bareos.org/bareos/experimental/nightly/

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1043 [bareos-core] storage daemon major unable to reproduce 2019-02-01 13:58 2019-07-15 10:08
Reporter: alex-dvv Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: high OS Version: 9  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: No connection to the Storage with the Director
Description: No connection between the Director (18.2.5) and the Storage (16.2).
Error on the side of the Storage: bsock_tcp.c:570 Packet size too big from "client:172.0.0.169:9103. Terminating connection.
Error on the side of the Director: TLS shutdown failure.: ERR=error:140E0197:SSL routines:SSL_shutdown:shutdown while in init
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003465)
arogge   
2019-07-15 10:08   
This is expected. Director, SD and bconsole must be upgraded to 18.x at once as stated in the documentation: https://docs.bareos.org/Appendix/ReleaseNotes.html#backward-compatibility

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1051 [bareos-core] director feature N/A 2019-02-11 11:01 2019-07-12 13:36
Reporter: colttt Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: acknowledged Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: use context based auto complete in bconsole
Description: It would be very nice if you added context-based auto-completion to bconsole. Example:
with "configure add client <TAB><TAB>" I get 213 possibilities, but normally I only need name, address and password for a simple client config, so offering just those 3 possibilities with tab completion would be great.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1003 [bareos-core] file daemon minor always 2018-08-31 20:39 2019-07-12 11:42
Reporter: Pedro Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 14.04  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error: bpipe-fd: Error closing stream for pseudo file /home/backup.sql: 268435457
Description: The fileset.conf has this configuration directive:
  Plugin = "bpipe:/home/backup.sql:mysqldump -u root -ppassword --databases MyDataBase:dd of=/home/restore.sql"

After running a restore command, bconsole shows these error messages:
 "...
  FD Errors: 1
  FD termination status: Fatal Error
  SD termination status: OK
  Termination: *** Restore Error ***"
But the restore actually completed successfully.

If the bareos-filedaemon is run with the "/usr/sbin/bareos-fd -d 100" command, no error message is displayed by bconsole:
 "...
  FD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: Restore OK"
Tags:
Steps To Reproduce:
Additional Information: Server operating system: Ubuntu server 16.04.5 LTS.
Client operating system: Ubuntu desktop 14.04.5 LTS.
System Description
Attached Files:
Notes
(0003114)
Shodan   
2018-09-17 10:54   
Try wrapping the commands in shell scripts:

Plugin = "bpipe:/home/backup.sql:backup.sh backup:backup.sh restore"
(0003115)
Pedro   
2018-09-17 20:27   
(Last edited: 2018-09-18 14:04)
Now I cannot back up or restore.

"...
Fatal error: bpipe-fd: Error closing stream for pseudo file /home/backup.sql: 268435664
...
Error: Bareos bareos-dir 17.2.4 (21Sep17):
...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Canceled
  Termination: *** Backup Error ***"

(0003116)
Shodan   
2018-09-18 08:31   
Hope it helps

Plugin = "bpipe:file=/home/backup.sql:reader=backup.sh backup:writer=backup.sh restore"
(0003117)
Pedro   
2018-09-18 13:47   
Still not solved.

"...
Fatal error: bpipe-fd: Error closing stream for pseudo file /home/backup.sql: 268435664
...
Error: Bareos bareos-dir 17.2.4 (21Sep17):
...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Canceled
  Termination: *** Backup Error ***"
(0003393)
hostedpower   
2019-06-20 17:18   
Was this ever resolved?

We use:

FileSet {
    Name = "mngdb1.hivecpq.com-mongo"
    Include {
        Options {
            signature = SHA1
            compression = GZIP3
        }
        # Backup mongodb
        Plugin = "bpipe:file=/_mongobackups_/restore.archive:reader=mongodump --host haset0/192.168.140.51,192.168.140.52,192.168.140.53 -u xxxx -p xxxx --readPreference primary --oplog --archive:writer=dd of=/home/mongo_restore.archive"
    }
    IgnoreFileSetChanges = yes
}

Backup seems to work, but whatever we tried, we can never restore :(

2019-06-20 17:11:49 xxxxx JobId 79267: Fatal error: bpipe-fd: Error closing stream for pseudo file /_mongobackups_/restore.archive: 268435457
 
2019-06-20 17:11:49 bareos-dir JobId 79267: Error: Bareos bareos-dir 18.2.6 (13Feb19):
 Build OS: Linux-4.4.92-6.18-default debian Debian GNU/Linux 9.7 (stretch)
 JobId: 79267
 Job: RestoreFiles.2019-06-20_17.11.44_13
 Restore Client: xxxx
 Start time: 20-Jun-2019 17:11:46
 End time: 20-Jun-2019 17:11:49
 Elapsed time: 3 secs
 Files Expected: 1
 Files Restored: 1
 Bytes Restored: 2,899
 Rate: 1.0 KB/s
 FD Errors: 1
 FD termination status: Fatal Error
 SD termination status: OK
 Bareos binary info: official Bareos subscription
 Termination: *** Restore Error ***
(0003394)
hostedpower   
2019-06-20 17:39   
Well, for anyone looking for a solution: I used this writer and it works now:

writer=dd of=/tmp/mongo_restore.archive bs=16M oflag=direct iflag=fullblock status=none

Not sure which of these made the difference:
oflag=direct
iflag=fullblock
status=none

But in any case it is 100% successful now!! :)

Too bad it doesn't use the path selected in the restore GUI; it's now hardcoded to /tmp.
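Putting the pieces together, the FileSet line that finally worked would look roughly like this (assembled from the FileSet in note 0003393 and the writer above; credentials elided as in the original):

Plugin = "bpipe:file=/_mongobackups_/restore.archive:reader=mongodump --host haset0/192.168.140.51,192.168.140.52,192.168.140.53 -u xxxx -p xxxx --readPreference primary --oplog --archive:writer=dd of=/tmp/mongo_restore.archive bs=16M oflag=direct iflag=fullblock status=none"

Of the three writer options, iflag=fullblock makes dd wait for full 16M input blocks instead of returning short reads from the pipe; which of the three actually fixed the error remains unclear, as the reporter notes.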
(0003418)
arogge   
2019-07-08 12:08   
Does this happen with different software or only with mongodb?
Can you reproduce the issue with the latest nightly-build?
Could you take the time to provide debug-traces from the filedaemon that did the failed restore?
(0003453)
hostedpower   
2019-07-12 11:42   
Hello arogge

It was resolved with different dd settings; I'm not sure whether the bug is in Bareos. During all our testing it seemed very hard to find a workable solution :)

It would be great if this plugin supported a few more features out of the box, such as specifying the target file path for a built-in default writer.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1047 [bareos-core] webui major sometimes 2019-02-06 20:40 2019-07-12 11:02
Reporter: murrdyn Platform: Linux  
Assigned To: OS: RHEL  
Priority: normal OS Version: 7  
Status: feedback Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Webui returns a blank page when logging in.
Description: Sometimes when logging into the webui, or attempting to log in as a different user, I get a blank page. The following errors appear in the httpd error log when this occurs.

[Wed Feb 06 13:28:14.347786 2019] [:error] [pid 3627] [client x.x.x.x:63602] PHP Notice: Undefined variable: form in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login
[Wed Feb 06 13:28:14.347819 2019] [:error] [pid 3627] [client x.x.x.x:63602] PHP Fatal error: Call to a member function prepare() on a non-object in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login

If I refresh the blank page, I then get the page shown in the attached image.

Occasionally this will self-heal, and the error does not appear in the logs.
Tags: broken, webui
Steps To Reproduce: Fresh install of 18.2.5 on RHEL 7.6. After the first successful login into the webui with the created admin account (TLS Enable = no for the console profile), the error and a blank page appear on the next login attempt. It will self-heal within a day and then be fine until the current session times out and you log back in, or until you force a logout and log back in.
Additional Information:
System Description
Attached Files: bareos-error.PNG (23,787 bytes) 2019-02-06 20:40
https://bugs.bareos.org/file_download.php?file_id=348&type=bug
png
Notes
(0003251)
murrdyn   
2019-02-06 20:43   
Additional information: I created a second admin account. I can temporarily work around the issue by switching accounts in the webui when the error is encountered.
(0003252)
teka74   
2019-02-07 02:02   
Same problem here: 18.2.5 updated from 17.2.4, on Ubuntu 16.04 LTS.
(0003262)
xyros   
2019-02-14 17:00   
I originally posted this in another bug report, but I believe it applies better here:

My observation regarding this issue is that intentionally logging out (before doing anything that triggers a session-expiry response/notification) avoids triggering this bug on the subsequent login.

Typically, if you remain logged in and your session expires by the time you try to perform an action, you have to log back in. This is when you encounter this bug.

Following a long idle period, if you avoid performing any action, so as to avoid being notified that your session has expired, and instead click your username and properly log out from the drop-down, you can log back in successfully without triggering this bug.

In fact, I have found that if I always deliberately log out, such that I avoid triggering the session-expiry notice, I can always log in successfully on the next attempt.

I have not yet tested the scenario of closing all browser windows without logging out and then trying to log in again. However, so far it seems that deliberately logging out, even after session expiry (but without doing anything to trigger a session-expiry notification), avoids triggering this bug.

Hope that helps with figuring out where the bug resides.
(0003452)
arogge   
2019-07-12 11:02   
Can you please check whether deleting your cookies for the bareos-webui actually helps?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1057 [bareos-core] vmware plugin feature always 2019-02-14 10:00 2019-07-12 10:59
Reporter: alex.kanaykin Platform: Linux  
Assigned To: OS: RHEL  
Priority: normal OS Version: 6  
Status: acknowledged Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Option to disable quiesced snapshot
Description: Hi,
I have a VM running RHEL 6 (Cisco UCM) to back up. When I try to back it up I get the error msg = "An error occurred while quiescing the virtual machine. See the virtual machine's event log for details."

That VM doesn't support quiesced snapshots.

The question is: can you please add an option to the vmware-plugin to skip quiescing, like Bacula's *quiesce_host=no*?

Anyway thank you for your great software!

with best regards, Alex.
Tags: plugin
Steps To Reproduce: Start a backup job on a VM that can't do quiesced snapshots.
Additional Information: Fatal error: python-fd: Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 38, in handle_plugin_event
    return bareos_fd_plugin_object.handle_plugin_event(context, event)
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 549, in handle_plugin_event
    return self.start_backup_job(context)
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 179, in start_backup_job
    return self.vadp.prepare_vm_backup(context)
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 764, in prepare_vm_backup
    if not self.create_vm_snapshot(context):
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 930, in create_vm_snapshot
    self.vmomi_WaitForTasks([self.create_snap_task])
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 1219, in vmomi_WaitForTasks
    raise task.info.error
vim.fault.ApplicationQuiesceFault: (vim.fault.ApplicationQuiesceFault) {
  dynamicType = ,
  dynamicProperty = (vmodl.DynamicProperty) [],
  msg = "An error occurred while quiescing the virtual machine. See the virtual machine's event log for details.",
  faultCause = ,
  faultMessage = (vmodl.LocalizableMessage) [
    (vmodl.LocalizableMessage) {
      dynamicType = ,
      dynamicProperty = (vmodl.DynamicProperty) [],
      key = 'msg.checkpoint.save.fail2.std3',
      arg = (vmodl.KeyAnyValue) [
        (vmodl.KeyAnyValue) {
          dynamicType = ,
          dynamicProperty = (vmodl.DynamicProperty) [],
          key = '1',
          value = 'msg.snapshot.error-QUIESCINGERROR'
        }
      ],
      message = 'An error occurred while saving the snapshot: Failed to quiesce the virtual machine.'
    },
    (vmodl.LocalizableMessage) {
      dynamicType = ,
      dynamicProperty = (vmodl.DynamicProperty) [],
      key = 'msg.snapshot.vigor.take.error',
      arg = (vmodl.KeyAnyValue) [
        (vmodl.KeyAnyValue) {
          dynamicType = ,
          dynamicProperty = (vmodl.DynamicProperty) [],
          key = '1',
          value = 'msg.snapshot.error-QUIESCINGERROR'
        }
      ],
      message = 'An error occurred while taking a snapshot: Failed to quiesce the virtual machine.'
    }
  ]
System Description
Attached Files:
Notes
(0003451)
arogge   
2019-07-12 10:59   
Sounds reasonable, but I don't know when we'll be able to implement it.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
843 [bareos-core] director crash always 2017-08-10 22:39 2019-07-12 10:33
Reporter: vshkolin Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: feedback Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-dir crashes with a segfault when parsing config files with a job of type Consolidate without pool and storage definitions
Description: Job {
  Name = "Consolidate"
  Type = "Consolidate"
  Accurate = "yes"
  Max Full Consolidations = 1
  Client = bareos.home.loc
  FileSet = "SelfTest"

# Storage = File
# Pool = Incremental
}
 
[root@bareos ai]# bareos-dir -u bareos -g bareos -t -v
BAREOS interrupted by signal 11: Segmentation violation
Kaboom! bareos-dir, bareos-dir got signal 11 - Segmentation violation. Attempting traceback.
Kaboom! exepath=/etc/bareos/bareos-dir.d/ai
Calling: /etc/bareos/bareos-dir.d/ai/btraceback /etc/bareos/bareos-dir.d/ai/bareos-dir 8216 /var/lib/bareos
execv: /etc/bareos/bareos-dir.d/ai/btraceback failed: ERR=No such file or directory
The btraceback call returned 255
Dumping: /var/lib/bareos/bareos-dir.8216.bactrace
Tags:
Steps To Reproduce: 1. Install and configure bareos-dir
2. Add consolidate job w/o pool and storage definition
3. Run bareos-dir -> Segmentation violation
4. Uncomment storage and pool definition
5. Run bareos-dir -> config is successfully parsed with diagnostic '"Messages" directive in Job "Consolidate" resource is required'
6. Comment storage and pool definition
7. Run bareos-dir -> Segmentation violation

Checked on two systems with same result.
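Until the crash is fixed, a workaround sketch based on steps 4-5 (keep Storage and Pool explicit and add the required Messages directive; resource names are taken from the example above):

Job {
  Name = "Consolidate"
  Type = "Consolidate"
  Accurate = "yes"
  Max Full Consolidations = 1
  Client = bareos.home.loc
  FileSet = "SelfTest"
  Messages = Standard   # required, per the diagnostic in step 5
  Storage = File        # keeping these two set avoids the segfault
  Pool = Incremental
}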
Additional Information:
System Description
Attached Files:
Notes
(0002939)
MarceloRuiz   
2018-03-09 02:56   
(Last edited: 2018-03-09 03:04)
I have the same problem...
The backtrace is:

 ==== bactrace output ====

Attempt to dump locks
Attempt to dump current JCRs. njcrs=0
 ==== End baktrace output ====

(0003273)
IvanBayan   
2019-02-27 08:56   
Today the same happened to me: I hit reload and Bareos crashed:
Feb 27 02:52:48 mia-backup03 bareos-dir[35840]: BAREOS interrupted by signal 11: Segmentation violation
root@mia-backup03:/var/log/bareos# apt-cache policy bareos-director
bareos-director:
  Installed: 18.2.5-139.1
  Candidate: 18.2.5-139.1
  Version table:
 *** 18.2.5-139.1 500
        500 http://download.bareos.org/bareos/release/latest/xUbuntu_16.04 Packages
        100 /var/lib/dpkg/status
     14.2.6-3 500
        500 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
(0003274)
IvanBayan   
2019-02-27 08:58   
$ sudo dmesg|tail -n1
[71272674.289931] bareos-dir[20454]: segfault at 7fec1b7fe9d0 ip 00007fec278f08d9 sp 00007fec1a7fa858 error 4 in libpthread-2.23.so[7fec278e8000+18000]
(0003449)
arogge   
2019-07-12 10:33   
Can you please reproduce this with our latest nightly-build from https://download.bareos.org/bareos/experimental/nightly/ ?
Thank you!
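On the xUbuntu 16.04 machine from note 0003273 that would mean pointing apt at the nightly repository, roughly like this (an assumption on my part: the nightly tree is taken to mirror the layout of the release tree shown in the apt-cache output above):

deb https://download.bareos.org/bareos/experimental/nightly/xUbuntu_16.04 /

followed by apt-get update && apt-get install bareos-director.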

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1075 [bareos-core] webui major always 2019-04-10 05:10 2019-07-12 09:03
Reporter: jasonhuang Platform: Linux  
Assigned To: OS: CentOS  
Priority: urgent OS Version: 7  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Does not list files in bareos-webui when restoring
Description: When I open the Restore menu and select any backup job, no files are shown in the "Select files to be restored" window. A restore job via the bconsole menu shows the files correctly.
The bconsole file listing shows that the files exist, but the webui simply shows the spinning wheel and "Loading ..." for a long, long time until it shows an "Oops" error, presumably because it thinks there are too many files. The fileset is not that big.
Tags: WebUI
Steps To Reproduce:
Additional Information: What causes the problem? Please help to deal with this problem. Thanks!
System Description
Attached Files: 222.jpg (152,960 bytes) 2019-04-10 05:10
https://bugs.bareos.org/file_download.php?file_id=361&type=bug
jpg
Notes
(0003447)
arogge   
2019-07-12 09:03   
This might be an issue with the BVFS cache not being filled correctly.
Can you try to run ".bvfs_update jobid=<the-jobid-you-want-to-look-at>" in bconsole? Afterwards it should display correctly in the webui.
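For example (the jobid is illustrative):

*.bvfs_update jobid=42
*.bvfs_lsdirs jobid=42 path=/

The first command populates the BVFS cache for that job; the second should then return the directory listing that the webui displays.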

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
975 [bareos-core] api major always 2018-06-28 15:44 2019-07-11 17:22
Reporter: frank Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: .bvfs_lsdirs limit offset command parameters do not work properly
Description: .bvfs_lsdirs limit offset command parameters do not work properly.

Tags:
Steps To Reproduce: *.bvfs_lsdirs jobid=2,111,112,122,130,138 path=/etc/
123 11500 138 P0A BAABB EHt BM A A A CAA BAA Y BbNJKO BbNLO2 BbNLO2 A A C .
809 0 0 A A A A A A A A A A A A A A ..
112 1426 2 P0A DBG22 EHt H A A A CG BAA A Ba+/we BbB9X7 BbB9X7 A A C NetworkManager/
96 1478 2 P0A BADLX EHt F A A A 5 BAA A BazZZL BbB85h BbB85h A A C X11/
231 1192 2 P0A BAA8i EHt C A A A Ds BAA A BbB9YK BbB9X7 BbB9X7 A A C alternatives/
239 3476 2 P0A DBcGD EHo D A A A r BAA A BazZQQ BbB853 BbB853 A A C audisp/
66 3482 2 P0A BBLZn EHo D A A A BT BAA A BazZQQ BbB8/P BbB8/P A A C audit/
41 11499 138 P0A ClO8 EHt H A Pj A Cr BAA A BbNLO1 BbNLO1 BbNLO1 A A C bareos/
251 1481 2 P0A J82 EHt C A A A s BAA A BazYwT BbB9a/ BbB9a/ A A C bash_completion.d/
19 1290 2 P0A DA17e EHt C A A A G BAA A BbB8/O BazbsQ BbB85h A A C binfmt.d/
46 1193 2 P0A CAKIj EHt C A A A G BAA A BZhHqO BZhHqO BbB84m A A C chkconfig.d/
188 1737 2 P0A DBG5Y EHt C A A A V BAA A BbB8/P BbB85o BbB85o A A C cron.d/
111 1724 2 P0A BAear EHt C A A A q BAA A BbB+MV BbB85y BbB85y A A C cron.daily/
148 1671 2 P0A BA704 EHt C A A A W BAA A BbB9DN BTljHH BbB85o A A C cron.hourly/
88 1750 2 P0A CBPGb EHt C A A A G BAA A BbB+x1 BTljHH BbB85o A A C cron.monthly/
200 1673 2 P0A DBG5f EHt C A A A G BAA A BbB+fF BTljHH BbB85o A A C cron.weekly/
223 1378 2 P0A DA17c EHt E A A A BO BAA A Bazbfp BbB85j BbB85j A A C dbus-1/
7 1279 2 P0A ILy EHt C A A A s BAA A BazG5z BbB891 BbB891 A A C default/
70 1264 2 P0A DA17I EHt C A A A X BAA A BbB86T BbB85f BbB85f A A C depmod.d/
202 3447 2 P0A BFVS EHo E A A A 1 BAA A Ba+vUZ Ba+vUZ BbB9X9 A A C dhcp/
10 1268 2 P0A 6be EHt C A A A G BAA A BbB86p BazbmK BbB85f A A C dracut.conf.d/
61 1945 2 P0A CBStG EHo H A A A CF BAA A BazZR8 BbB85y BbB85y A A C firewalld/
47 1677 2 P0A CAfA9 EHt C A A A G BAA A BZgfW+ BZgfW+ BbB842 A A C gcrypt/
106 3452 2 P0A DBSa8 EHt C A A A G BAA A BYHgOf BYHgOf BbB85v A A C gnupg/
170 1709 2 P0A BASuH EHt E A A A o BAA A BTlhZM BbB849 BbB849 A A C groff/
250 1173 2 P0A CAKAT EHA C A A A C2 BAA A BbB892 BbB86S BbB86S A A C grub.d/
174 1684 2 P0A SAS EHt D A A A U BAA A Ba8vw+ Ba8vw+ BbB9YG A A C gss/
43 1721 2 P0A DAObA EHt C A A A Cf BAA A BazcKB BbB85E BbB85E A A C iproute2/
128 3500 2 P0A DBi+O EHt D A A A Y BAA A BbB86C BbB86C BbB86C A A C kernel/
254 1675 2 P0A CAfAw EHt C A A A G BAA A Ba8vw+ Ba8vw+ BbB9YG A A C krb5.conf.d/
195 10811 111 P0A CAeXE EHt C A A A Ck BAA A BbKjdw BbJ9aM BbJ9aM A A C ld.so.conf.d/
172 1226 2 P0A CAfA+ EHt C A A A j BAA A BZg34Z BbB843 BbB843 A A C libnl/
232 11444 138 P0A DAnhS EHt C A A A Bk BAA A BbM3Hi BbNLOz BbNLOz A A C logrotate.d/
131 1404 2 P0A DBSjo EHt G A A A Bk BAA A BbB87A BbB85z BbB85z A A C lvm/
57 10807 111 P0A 6cc EHt C A A A BR BAA A BbMLpP BbJ9aM BbJ9aM A A C modprobe.d/
115 1292 2 P0A 6eK EHt C A A A G BAA A BbB87B BazbsQ BbB85h A A C modules-load.d/
539 1753 2 P0A BAo2a EHt C A A A f BAA A BZhL1I BbB85Z BbB85Z A A C my.cnf.d/
42 1747 2 P0A CBM68 EHt D A A A k BAA A Ba/ABV BbB9X8 BbB9X8 A A C openldap/
5 1482 2 P0A BADLY EHt C A A A G BAA A BazZZL BazZZL BbB84r A A C opt/
98 1261 2 P0A SC/ EHt C A A A BAA BAA I BazX9d BbB9zP BbB9zP A A C pam.d/
163 1635 2 P0A BALKP EHt D A A A V BAA A BZhQUN BbB84y BbB84y A A C pkcs11/
198 1542 2 P0A CAKIu EHt K A A A B0 BAA A BazZZL BbB85y BbB85y A A C pki/
74 3451 2 P0A BImx EHt C A A A c BAA A Ba0Khn BbB85u BbB85u A A C plymouth/
94 1546 2 P0A DACnD EHt F A A A 0 BAA A BazZZL BbB84r BbB84r A A C pm/
177 1461 2 P0A CBNcL EHt F A A A BI BAA A BazVZ5 BbB85k BbB85k A A C polkit-1/
63 1631 2 P0A R+F EHt C A A A G BAA A BbB85Y BTloOK BbB84w A A C popt.d/
479 3492 2 P0A DBcGU EHt C A A A Ca BAA A BbB8/S BbB854 BbB855 A A C postfix/
636 1470 2 P0A BA7yu EHt D A A A B7 BAA A BazZih BbB85k BbB85k A A C ppp/
92 1275 2 P0A BAA9U EHt C A A A BO BAA A Ba+3OX BbB9X6 BbB9X6 A A C prelink.conf.d/
136 1444 2 P0A DACm/ EHt C A A A EY BAA A BbB9wW BbB9a/ BbB9a/ A A C profile.d/
516 1711 2 P0A CAg/e EHt C A A A j BAA A BazbsB BbB84/ BbB84/ A A C python/
441 3495 2 P0A DBi9r EHt D A A A y BAA A BZhCM/ BbB86B BbB86B A A C qemu-ga/
152 1223 2 P0A DAASf EHt K A A A B/ BAA A BZhHqO BbB85h BbB85h A A C rc.d/
226 1640 2 P0A R/l EHt C A A A s BAA A BbB9p2 BbB9a3 BbB9a3 A A C rpm/
109 1288 2 P0A BAo6t EHt C A A A Z BAA A BbB9YN Ba/BWx BbB9YH A A C rsyslog.d/
235 1726 2 P0A CAyML EHt C A A A X BAA A BazZih BazZih BbB85k A A C rwtab.d/
113 1648 2 P0A BALKY EHt C A A A Y BAA A BazY0n BbB854 BbB854 A A C sasl2/
25 1703 2 P0A BALT6 EHt G A A A BAA BAA I BazX9d BbB847 BbB847 A A C security/
21 3442 2 P0A CBM79 EHt F A A A BR BAA A Ba8xHa Ba8xHa BbB9X9 A A C selinux/
134 1178 2 P0A CAKIC EHt C A A A + BAA A BbB896 BazZZL BbB84r A A C skel/
99 1763 2 P0A BFVI EHt C A A A Dh BAA A BazY1J BbB8/P BbB8/P A A C ssh/
173 1637 2 P0A CAfAH EHt C A A A T BAA A Ba+7UQ BbB9YH BbB9YH A A C ssl/
215 1632 2 P0A DBG27 EHt C A A A G BAA A BazZih BazZih BbB85k A A C statetab.d/
6 3506 2 P0A Be+V EHo C A A A G BAA A BazY9P BazY9P BbB86D A A C sudoers.d/
13 10809 111 P0A DACnE EHt G A A A BAA BAA I BbKjdw BbJ9aS BbJ9aS A A C sysconfig/
53 1381 2 P0A BAo62 EHt C A A A c BAA A BbB87B BbB85k BbB85k A A C sysctl.d/
176 1351 2 P0A CBM/Z EHt E A A A CX BAA A BazbsQ BbB85h BbB85h A A C systemd/
190 1174 2 P0A Bx EHt C A A A G BAA A BZsHHM BZsHHM BbB84j A A C terminfo/
220 1383 2 P0A BAo63 EHt C A A A G BAA A BbB85j BazbsQ BbB85h A A C tmpfiles.d/
635 3459 2 P0A BIrs EHt D A A A Bt BAA A BbB8/S BbB85x BbB85x A A C tuned/
213 1388 2 P0A CBNAA EHt D A A A 2 BAA A BazbsR BbB8/O BbB8/O A A C udev/
143 3449 2 P0A BA927 EHt C A A A h BAA A BazZwI BbB85r BbB85r A A C wpa_supplicant/
182 1617 2 P0A J84 EHt E A A A m BAA A BazZZL BbB85h BbB85h A A C xdg/
119 1618 2 P0A CAKIw EHt C A A A G BAA A BazZZL BazZZL BbB84r A A C xinetd.d/
130 10814 111 P0A BALKk EHt C A A A D4 BAA A BbKQb2 BbKQbX BbKQbX A A C yum.repos.d/
32 1663 2 P0A CAfAM EHt G A A A Bk BAA A BbB8+0 BbB85w BbB85w A A C yum/
*

*.bvfs_lsdirs jobid=2,111,112,122,130,138 path=/etc/ limit=12 offset=0
123 11500 138 P0A BAABB EHt BM A A A CAA BAA Y BbNJKO BbNLO2 BbNLO2 A A C .
809 0 0 A A A A A A A A A A A A A A ..
112 1426 2 P0A DBG22 EHt H A A A CG BAA A Ba+/we BbB9X7 BbB9X7 A A C NetworkManager/
96 1478 2 P0A BADLX EHt F A A A 5 BAA A BazZZL BbB85h BbB85h A A C X11/
231 1192 2 P0A BAA8i EHt C A A A Ds BAA A BbB9YK BbB9X7 BbB9X7 A A C alternatives/
239 3476 2 P0A DBcGD EHo D A A A r BAA A BazZQQ BbB853 BbB853 A A C audisp/
66 3482 2 P0A BBLZn EHo D A A A BT BAA A BazZQQ BbB8/P BbB8/P A A C audit/
*

*.bvfs_lsdirs jobid=2,111,112,122,130,138 path=/etc/ limit=3 offset=0
123 11500 138 P0A BAABB EHt BM A A A CAA BAA Y BbNJKO BbNLO2 BbNLO2 A A C .
*

*.bvfs_lsdirs jobid=2,111,112,122,130,138 path=/etc/ limit=5
123 11500 138 P0A BAABB EHt BM A A A CAA BAA Y BbNJKO BbNLO2 BbNLO2 A A C .
*

Additional Information: bareos-dir (10): bvfs.c:642-0 ls_dirs(123)
bareos-dir (100): sql_query.c:96-0 called: void B_DB::fill_query_va_list(POOL_MEM&, B_DB_QUERY_ENUM_CLASS::SQL_QUERY_ENUM, __va_list_tag*) with query name bvfs_ls_special_dirs_3 (73)
bareos-dir (100): sql_query.c:102-0 called: void B_DB::fill_query_va_list(POOL_MEM&, B_DB_QUERY_ENUM_CLASS::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT 'D', SpecialDir.PathId, SpecialDir.Path, JobId, LStat, FileId FROM ( SELECT 123 AS PathId, '.' AS Path UNION SELECT PPathId AS PathId, '..' AS Path FROM PathHierarchy WHERE PathId = 123 ) AS SpecialDir LEFT JOIN ( SELECT PathId, JobId, LStat, FileId FROM File WHERE File.Name = '' AND File.JobId IN (2,111,112,122,130,138) ) AS DirAttribute ON (SpecialDir.PathId = DirAttribute.PathId)
bareos-dir (100): sql_query.c:96-0 called: void B_DB::fill_query_va_list(POOL_MEM&, B_DB_QUERY_ENUM_CLASS::SQL_QUERY_ENUM, __va_list_tag*) with query name bvfs_ls_sub_dirs_5 (74)
bareos-dir (100): sql_query.c:102-0 called: void B_DB::fill_query_va_list(POOL_MEM&, B_DB_QUERY_ENUM_CLASS::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT 'D', PathId, Path, JobId, LStat, FileId FROM ( SELECT Path1.PathId AS PathId, Path1.Path AS Path, lower(Path1.Path) AS lpath, listfile1.JobId AS JobId, listfile1.LStat AS LStat, listfile1.FileId AS FileId FROM ( SELECT listpath1.PathId AS PathId FROM ( SELECT DISTINCT PathHierarchy1.PathId AS PathId FROM PathHierarchy AS PathHierarchy1 INNER JOIN Path AS Path2 ON (PathHierarchy1.PathId = Path2.PathId) INNER JOIN PathVisibility AS PathVisibility1 ON (PathHierarchy1.PathId = PathVisibility1.PathId) WHERE PathHierarchy1.PPathId = 123 AND PathVisibility1.JobId IN (2,111,112,122,130,138) ) AS listpath1 LEFT JOIN ( SELECT PVD1.PathId AS PathId FROM ( SELECT PV1.PathId AS PathId, MAX(JobId) AS MaxJobId FROM PathVisibility AS PV1 WHERE JobId IN (2,111,112,122,130,138) GROUP BY PathId ) AS PVD1 INNER JOIN File AS F2 ON (F2.PathId = PVD1.PathId AND F2.JobId = PVD1.MaxJobId AND F2.FileIndex = 0 AND F2.Name = '') ) AS listpath2 ON (listpath1.PathId = listpath2.PathId) WHERE listpath2.PathId IS NULL ) AS listpath3 INNER JOIN Path AS Path1 ON (listpath3.PathId = Path1.PathId) LEFT JOIN ( SELECT File1.PathId AS PathId, File1.JobId AS JobId, File1.LStat AS LStat, File1.FileId AS FileId FROM File AS File1 WHERE File1.Name = '' AND File1.JobId IN (2,111,112,122,130,138) ) AS listfile1 ON (listpath3.PathId = listfile1.PathId) ) AS A
bareos-dir (100): sql_query.c:96-0 called: void B_DB::fill_query_va_list(POOL_MEM&, B_DB_QUERY_ENUM_CLASS::SQL_QUERY_ENUM, __va_list_tag*) with query name bvfs_lsdirs_4 (63)
bareos-dir (100): sql_query.c:102-0 called: void B_DB::fill_query_va_list(POOL_MEM&, B_DB_QUERY_ENUM_CLASS::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT 'D', SpecialDir.PathId, SpecialDir.Path, JobId, LStat, FileId FROM ( SELECT 123 AS PathId, '.' AS Path UNION SELECT PPathId AS PathId, '..' AS Path FROM PathHierarchy WHERE PathId = 123 ) AS SpecialDir LEFT JOIN ( SELECT PathId, JobId, LStat, FileId FROM File WHERE File.Name = '' AND File.JobId IN (2,111,112,122,130,138) ) AS DirAttribute ON (SpecialDir.PathId = DirAttribute.PathId) UNION SELECT 'D', PathId, Path, JobId, LStat, FileId FROM ( SELECT Path1.PathId AS PathId, Path1.Path AS Path, lower(Path1.Path) AS lpath, listfile1.JobId AS JobId, listfile1.LStat AS LStat, listfile1.FileId AS FileId FROM ( SELECT listpath1.PathId AS PathId FROM ( SELECT DISTINCT PathHierarchy1.PathId AS PathId FROM PathHierarchy AS PathHierarchy1 INNER JOIN Path AS Path2 ON (PathHierarchy1.PathId = Path2.PathId) INNER JOIN PathVisibility AS PathVisibility1 ON (PathHierarchy1.PathId = PathVisibility1.PathId) WHERE PathHierarchy1.PPathId = 123 AND PathVisibility1.JobId IN (2,111,112,122,130,138) ) AS listpath1 LEFT JOIN ( SELECT PVD1.PathId AS PathId FROM ( SELECT PV1.PathId AS PathId, MAX(JobId) AS MaxJobId FROM PathVisibility AS PV1 WHERE JobId IN (2,111,112,122,130,138) GROUP BY PathId ) AS PVD1 INNER JOIN File AS F2 ON (F2.PathId = PVD1.PathId AND F2.JobId = PVD1.MaxJobId AND F2.FileIndex = 0 AND F2.Name = '') ) AS listpath2 ON (listpath1.PathId = listpath2.PathId) WHERE listpath2.PathId IS NULL ) AS listpath3 INNER JOIN Path AS Path1 ON (listpath3.PathId = Path1.PathId) LEFT JOIN ( SELECT File1.PathId AS PathId, File1.JobId AS JobId, File1.LStat AS LStat, File1.FileId AS FileId FROM File AS File1 WHERE File1.Name = '' AND File1.JobId IN (2,111,112,122,130,138) ) AS listfile1 ON (listpath3.PathId = listfile1.PathId) ) AS A ORDER BY Path ASC,JobId DESC LIMIT 5 OFFSET 0
bareos-dir (15): bvfs.c:371-0 q=SELECT 'D', SpecialDir.PathId, SpecialDir.Path, JobId, LStat, FileId FROM ( SELECT 123 AS PathId, '.' AS Path UNION SELECT PPathId AS PathId, '..' AS Path FROM PathHierarchy WHERE PathId = 123 ) AS SpecialDir LEFT JOIN ( SELECT PathId, JobId, LStat, FileId FROM File WHERE File.Name = '' AND File.JobId IN (2,111,112,122,130,138) ) AS DirAttribute ON (SpecialDir.PathId = DirAttribute.PathId) UNION SELECT 'D', PathId, Path, JobId, LStat, FileId FROM ( SELECT Path1.PathId AS PathId, Path1.Path AS Path, lower(Path1.Path) AS lpath, listfile1.JobId AS JobId, listfile1.LStat AS LStat, listfile1.FileId AS FileId FROM ( SELECT listpath1.PathId AS PathId FROM ( SELECT DISTINCT PathHierarchy1.PathId AS PathId FROM PathHierarchy AS PathHierarchy1 INNER JOIN Path AS Path2 ON (PathHierarchy1.PathId = Path2.PathId) INNER JOIN PathVisibility AS PathVisibility1 ON (PathHierarchy1.PathId = PathVisibility1.PathId) WHERE PathHierarchy1.PPathId = 123 AND PathVisibility1.JobId IN (2,111,112,122,130,138) ) AS listpath1 LEFT JOIN ( SELECT PVD1.PathId AS PathId FROM ( SELECT PV1.PathId AS PathId, MAX(JobId) AS MaxJobId FROM PathVisibility AS PV1 WHERE JobId IN (2,111,112,122,130,138) GROUP BY PathId ) AS PVD1 INNER JOIN File AS F2 ON (F2.PathId = PVD1.PathId AND F2.JobId = PVD1.MaxJobId AND F2.FileIndex = 0 AND F2.Name = '') ) AS listpath2 ON (listpath1.PathId = listpath2.PathId) WHERE listpath2.PathId IS NULL ) AS listpath3 INNER JOIN Path AS Path1 ON (listpath3.PathId = Path1.PathId) LEFT JOIN ( SELECT File1.PathId AS PathId, File1.JobId AS JobId, File1.LStat AS LStat, File1.FileId AS FileId FROM File AS File1 WHERE File1.Name = '' AND File1.JobId IN (2,111,112,122,130,138) ) AS listfile1 ON (listpath3.PathId = listfile1.PathId) ) AS A ORDER BY Path ASC,JobId DESC LIMIT 5 OFFSET 0
bareos-dir (100): sql_query.c:140-0 called: bool B_DB::sql_query(const char*, int (*)(void*, int, char**), void*) with query SELECT 'D', SpecialDir.PathId, SpecialDir.Path, JobId, LStat, FileId FROM ( SELECT 123 AS PathId, '.' AS Path UNION SELECT PPathId AS PathId, '..' AS Path FROM PathHierarchy WHERE PathId = 123 ) AS SpecialDir LEFT JOIN ( SELECT PathId, JobId, LStat, FileId FROM File WHERE File.Name = '' AND File.JobId IN (2,111,112,122,130,138) ) AS DirAttribute ON (SpecialDir.PathId = DirAttribute.PathId) UNION SELECT 'D', PathId, Path, JobId, LStat, FileId FROM ( SELECT Path1.PathId AS PathId, Path1.Path AS Path, lower(Path1.Path) AS lpath, listfile1.JobId AS JobId, listfile1.LStat AS LStat, listfile1.FileId AS FileId FROM ( SELECT listpath1.PathId AS PathId FROM ( SELECT DISTINCT PathHierarchy1.PathId AS PathId FROM PathHierarchy AS PathHierarchy1 INNER JOIN Path AS Path2 ON (PathHierarchy1.PathId = Path2.PathId) INNER JOIN PathVisibility AS PathVisibility1 ON (PathHierarchy1.PathId = PathVisibility1.PathId) WHERE PathHierarchy1.PPathId = 123 AND PathVisibility1.JobId IN (2,111,112,122,130,138) ) AS listpath1 LEFT JOIN ( SELECT PVD1.PathId AS PathId FROM ( SELECT PV1.PathId AS PathId, MAX(JobId) AS MaxJobId FROM PathVisibility AS PV1 WHERE JobId IN (2,111,112,122,130,138) GROUP BY PathId ) AS PVD1 INNER JOIN File AS F2 ON (F2.PathId = PVD1.PathId AND F2.JobId = PVD1.MaxJobId AND F2.FileIndex = 0 AND F2.Name = '') ) AS listpath2 ON (listpath1.PathId = listpath2.PathId) WHERE listpath2.PathId IS NULL ) AS listpath3 INNER JOIN Path AS Path1 ON (listpath3.PathId = Path1.PathId) LEFT JOIN ( SELECT File1.PathId AS PathId, File1.JobId AS JobId, File1.LStat AS LStat, File1.FileId AS FileId FROM File AS File1 WHERE File1.Name = '' AND File1.JobId IN (2,111,112,122,130,138) ) AS listfile1 ON (listpath3.PathId = listfile1.PathId) ) AS A ORDER BY Path ASC,JobId DESC LIMIT 5 OFFSET 0
System Description
Attached Files:
Notes
(0003444)
arogge   
2019-07-11 17:22   
Fix committed to bareos master branch with changesetid 11581.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1070 [bareos-core] storage daemon major always 2019-03-27 12:31 2019-07-10 17:48
Reporter: guidoilbaldo Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7  
Status: assigned Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: DIR lost connection with SD - Broken pipe
Description: Hello,
I recently installed BareOS on a single machine (DIR + SD) and installed droplet plugin to send data to AWS S3.
The configuration looks fine, as bconsole tells me the storage status is OK:
```
Device status:

Device "AWS_S3_Object_Storage" (AWS S3 Storage) is not open.
Backend connection is working.
Inflight chunks: 0
No pending IO flush requests.
==
====
```

However, when launching a FULL backup job, after 15 minutes it fails with the following error:
```
27-Mar 11:03 rabbit-1-fd JobId 3: Error: lib/bsock_tcp.cc:417 Wrote 9611 bytes to Storage daemon:172.17.0.60:9103, but only 0 accepted.
27-Mar 11:03 rabbit-1-fd JobId 3: Fatal error: filed/backup.cc:1033 Network send error to SD. ERR=Connection timed out
27-Mar 11:03 backup-1 JobId 3: Fatal error: Director's comm line to SD dropped.
27-Mar 11:03 backup-1 JobId 3: Error: Bareos backup-1 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 3
  Job: backup-rabbit-1.2019-03-27_10.34.06_14
  Backup Level: Full
  Client: "rabbit-1-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) ,CentOS_7,x86_64
  FileSet: "RabbitMQFileSet" 2019-03-27 09:42:15
  Pool: "Full" (From command line)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "S3_Object" (From Job resource)
  Scheduled time: 27-Mar-2019 10:34:06
  Start time: 27-Mar-2019 10:48:17
  End time: 27-Mar-2019 11:03:51
  Elapsed time: 15 mins 34 secs
  Priority: 10
  FD Files Written: 24
  SD Files Written: 0
  FD Bytes Written: 95,320 (95.32 KB)
  SD Bytes Written: 691 (691 B)
  Rate: 0.1 KB/s
  Software Compression: 24.3 % (lz4)
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 2
  Volume Session Time: 1553682684
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Backup Error ***
```

The connection between the FD on the backup client and the Bareos server looks fine; I can telnet to the DIR and SD on ports 9101 and 9103 correctly.
Tags: aws, s3;droplet;aws;storage, storage
Steps To Reproduce: Install bareos stack 18.2.5 and try to perform a FULL backup job, sending data to S3
Additional Information: DIR config:

Client {
  Name = rabbit-1-fd
  Address = 172.17.0.170
  Password = ****
  Heartbeat Interval = 60
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes
}

Job {
  Name = "backup-rabbit-1"
  Client = rabbit-1-fd
  JobDefs = "RabbitMQJobDef"
}

JobDefs {
  Name = "RabbitMQJobDef"
  Type = Backup
  Level = Incremental

  Client = bareos-fd

  FileSet = "RabbitMQFileSet"
  Schedule = "RabbitMQCycle"
  Storage = S3_Object

  Messages = Standard

  Pool = Incremental

  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = Full
  Differential Backup Pool = Differential
  Incremental Backup Pool = Incremental
}

Storage {
  Name = S3_Object
  Address = 172.17.0.60
  Password = ****
  Device = AWS_S3_Object_Storage
  Media Type = S3_Object1
  Heartbeat Interval = 60
}

SD config:

Device {
  Name = "AWS_S3_Object_Storage"
  Media Type = "S3_Object1"
  Archive Device = "AWS S3 Storage"
  Device Type = droplet
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.droplet.profile,bucket=bucket,location=eu-central-1,chunksize=100M,iothreads=10,retries=0"
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 10
  Maximum File Size = 500M
  Maximum Spool Size = 15000M
}

Droplet profile (/etc/bareos/bareos-sd.d/device/droplet/aws.droplet.profile):

use_https = true
host = s3.eu-central-1.amazonaws.com
access_key = <access_key>
secret_key = <secret_key>
pricing_dir = ""
backend = s3
aws_region = eu-central-1
aws_auth_sign_version = 4
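For completeness: heartbeats can also be sent from the client side. A sketch of the FD's own client resource (an assumption, not a confirmed fix for this report):

# bareos-fd.d/client/myself.conf on the client (sketch, untested)
Client {
  Name = rabbit-1-fd
  Heartbeat Interval = 60   # keep idle data connections alive during long S3 chunk uploads
}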
System Description
Attached Files: backup-1.trace (27,106 bytes) 2019-03-27 16:02
https://bugs.bareos.org/file_download.php?file_id=357&type=bug
Notes
(0003301)
arogge   
2019-03-27 12:43   
Can you please enable tracing of the sd (maybe level 200) and reproduce this?
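In bconsole that would be along these lines (the storage name is taken from the DIR config above):

*setdebug storage=S3_Object level=200 trace=1

The trace output then goes to a .trace file in the storage daemon's working directory.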
(0003302)
guidoilbaldo   
2019-03-27 15:22   
Sure, could you help me with that or point me somewhere in the docs where I can find how?
(0003303)
guidoilbaldo   
2019-03-27 15:26   
OK, I found where to look.

I launched the job again and will keep you posted; in the meantime this is the device status:

Device status:

Device "AWS_S3_Object_Storage" (AWS S3 Storage) is mounted with:
    Volume: Full-0001
    Pool: Full
    Media type: S3_Object1
Backend connection is working.
Inflight chunks: 0
No pending IO flush requests.
Configured device capabilities:
  EOF BSR BSF FSR FSF EOM !REM RACCESS AUTOMOUNT LABEL !ANONVOLS !ALWAYSOPEN
Device state:
  OPENED !TAPE LABEL !MALLOC APPEND !READ EOT !WEOT !EOF !NEXTVOL !SHORT MOUNTED
  num_writers=1 reserves=0 block=0
Attached Jobs: 4
Device parameters:
  Archive name: AWS S3 Storage Device name: AWS_S3_Object_Storage
  File=0 block=64712
  Min block=64512 Max block=64512
    Total Bytes=64,712 Blocks=0 Bytes/block=64,712
    Positioned at File=0 Block=64,712
(0003304)
guidoilbaldo   
2019-03-27 16:02   
This time the error was "Connection timed out". I attached the trace file for the SD, and here I paste the Bareos log for completeness:

27-Mar 14:45 backup-1 JobId 6: Using Device "AWS_S3_Object_Storage" to write.
27-Mar 14:45 backup-1 JobId 6: Probing client protocol... (result will be saved until config reload)
27-Mar 14:45 backup-1 JobId 6: Connected Client: rabbit-1-fd at 172.17.0.170:9102, encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 backup-1 JobId 6: Handshake: Immediate TLS 27-Mar 14:45 backup-1 JobId 6: Encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 rabbit-1-fd JobId 6: Connected Storage daemon at 172.17.0.60:9103, encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 backup-1: info: src/droplet.c:127: dpl_init: PRNG has been seeded with enough data
27-Mar 14:45 rabbit-1-fd JobId 6: Extended attribute support is enabled
27-Mar 14:45 rabbit-1-fd JobId 6: ACL support is enabled
27-Mar 14:45 backup-1 JobId 6: Volume "Full-0001" previously written, moving to end of data.
27-Mar 14:45 backup-1 JobId 6: Ready to append to end of Volume "Full-0001" size=64712
27-Mar 15:00 rabbit-1-fd JobId 6: Error: lib/bsock_tcp.cc:417 Wrote 22084 bytes to Storage daemon:172.17.0.60:9103, but only 0 accepted.
27-Mar 15:00 rabbit-1-fd JobId 6: Fatal error: filed/backup.cc:1033 Network send error to SD. ERR=Connection timed out
27-Mar 15:00 backup-1 JobId 6: Fatal error: Director's comm line to SD dropped.
27-Mar 15:00 backup-1 JobId 6: Error: Bareos backup-1 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 6
  Job: backup-rabbit-1.2019-03-27_14.44.59_07
  Backup Level: Full
  Client: "rabbit-1-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) ,CentOS_7,x86_64
  FileSet: "RabbitMQFileSet" 2019-03-27 09:42:15
  Pool: "Full" (From command line)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "S3_Object" (From Job resource)
  Scheduled time: 27-Mar-2019 14:44:59
  Start time: 27-Mar-2019 14:45:01
  End time: 27-Mar-2019 15:00:29
  Elapsed time: 15 mins 28 secs
  Priority: 10
  FD Files Written: 27
  SD Files Written: 0
  FD Bytes Written: 164,032 (164.0 KB)
  SD Bytes Written: 1,852 (1.852 KB)
  Rate: 0.2 KB/s
  Software Compression: 27.4 % (lz4)
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 1
  Volume Session Time: 1553697872
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Backup Error ***
(0003305)
arogge   
2019-03-28 07:53   
According to the trace, your director told the sd to cancel the job:

backup-1 (110): stored/socket_server.cc:97-0 Conn: Hello Director backup-1 calling
backup-1 (110): stored/socket_server.cc:115-0 Got a DIR connection at 27-Mar-2019 15:00:29
backup-1 (100): include/jcr.h:320-0 Construct JobControlRecord
backup-1 (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <461601013.1553698829@backup-1> ssl=1
backup-1 (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <950524310.1553698829@backup-1> ssl=1
backup-1 (99): lib/cram_md5.cc:135-0 sending resp to challenge: XVkIXSZZLjpi2/+sNi/X8A
backup-1 (90): stored/dir_cmd.cc:289-0 Message channel init completed.
backup-1 (199): stored/dir_cmd.cc:300-0 <dird: cancel Job=backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (200): stored/dir_cmd.cc:318-0 Do command: cancel

Do you have any idea why this might have happened?
(0003306)
guidoilbaldo   
2019-03-29 08:56   
No, unfortunately not. I just launched the job from bareos-webui and left it to complete.
Do you think it might be some misconfiguration in the DIR or SD configs I posted in the "Additional Information" part?
(0003308)
guidoilbaldo   
2019-04-01 17:31   
Hi,
If you have any hint as to why our configuration is not working, I would like to know, because I'm in the process of deciding whether to stick with Bareos or find an alternative.
Thank you very much,
Stefano
(0003316)
guidoilbaldo   
2019-04-10 15:23   
Is it possible to get any hint on this?
(0003317)
arogge   
2019-04-10 16:37   
Sorry, I have really no idea.
Maybe you can take this to the mailing list and somebody can help?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1077 [bareos-core] file daemon minor always 2019-04-11 22:46 2019-07-10 17:47
Reporter: santachago Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: normal OS Version: 18.04  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Missing bareos-filedaemon-glusterfs-plugin
Description: In the Bareos 18.2 repository there is no bareos-filedaemon-glusterfs-plugin package available for Ubuntu.
Tags:
Steps To Reproduce: Install the Bareos repository.
Install bareos-filedaemon.

Try to install the GlusterFS plugin:

#apt-get install bareos-filedaemon-glusterfs-plugin
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package bareos-filedaemon-glusterfs-plugin
Additional Information:
Attached Files:
Notes
(0003341)
arogge   
2019-04-18 12:05   
You're right, there is no such package right now.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1082 [bareos-core] director crash sometimes 2019-04-30 14:23 2019-07-10 17:45
Reporter: jurgengoedbloed Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: acknowledged Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos director crashes with a segfault when restarting or reloading from the console
Description: After a config change, I reloaded the bareos director and it crashed with a segfault.

After I restart the Bareos director, it seems to run for about a minute, but then it crashes again. Sometimes almost immediately, sometimes after a couple of minutes.

At startup, Bareos doesn't complain about a config error. After a while it just stops.

I already had the same issue in the past, but then managed to get the system up and running after waiting a considerable amount of time (> 1 day) and then restarting the director. It had then been running and backing up for over two weeks.

The director and storage daemon are running 18.2.5; all clients run 17.2.4 or 18.2.5, all running on CentOS 7. The director and storage (both on the same machine) run on a fully patched CentOS 7 machine.

I've had the same issue with the director on version 17.2.4 and a self-compiled 17.2.5

I suspect that it has to do with the fact that all clients use the 'client initiated connection' and that something goes wrong as soon as clients reconnect after a restart of the director. A race condition, a lack of resources?
Tags:
Steps To Reproduce: When the crash occurs:
- Start the bareos director
- Within a minute, the director will crash again.
Additional Information: As requested by Andreas, I created this bug and attached the traceback file.
System Description
Attached Files: bareos-dir.1640.bactrace (1,079 bytes) 2019-04-30 14:23
https://bugs.bareos.org/file_download.php?file_id=367&type=bug
bareos.1640.traceback (1,657 bytes) 2019-05-03 10:12
https://bugs.bareos.org/file_download.php?file_id=370&type=bug
bareos.63319.traceback (12,707 bytes) 2019-05-03 11:29
https://bugs.bareos.org/file_download.php?file_id=371&type=bug
Notes
(0003349)
arogge   
2019-04-30 15:06   
Does the problem persist if you disable statistics collection?
(0003352)
jurgengoedbloed   
2019-05-02 17:12   
Yes. Statistics collection was already turned off.
The database tables 'devicestats' and 'jobstats' are also empty.
(0003353)