View Issue Details
ID: 1202
Category: [bareos-core] storage daemon
Severity: major
Reproducibility: always
Date Submitted: 2020-02-26 00:15
Last Update: 2020-02-26 00:29
Reporter: stephand
Priority: high
Status: new
Product Version: 19.2.6
Resolution: open
Projection: none
ETA: none
Summary: droplet backend (S3): Using MaximumConcurrentJobs > 1 causes restore errors although backup job terminated with "Backup OK"
Description: When using the droplet S3 bareos-sd backend with MaximumConcurrentJobs > 1 and running concurrent jobs, the backup terminates with "Backup OK", but problems occur on restore.

This issue can be mitigated by using multiple storage devices each with MaximumConcurrentJobs = 1 as described in
https://docs.bareos.org/TasksAndConcepts/StorageBackends.html#configuration or
https://docs.bareos.org/Configuration/StorageDaemon.html#autochanger-resource
Tags:
Steps To Reproduce: 1. Configure as in the attached config
2. Run two concurrent jobs, can be done like this:
    echo -e "run job=S3Data1 yes\nrun job=S3Data2 yes" |bconsole
3. Try to restore
Additional Information: The problematic part seems to happen when the first job terminates while the second is still writing:

25-Feb 17:42 bareos-sd JobId 238: Releasing device "S3_Dev1" (S3).
25-Feb 17:42 bareos-sd JobId 239: End of Volume "S3Full-0" at 1:4189926 on device "S3_Dev1" (S3). Write of 64512 bytes got -1.
25-Feb 17:42 bareos-sd JobId 239: End of medium on Volume "S3Full-0" Bytes=4,299,157,223 Blocks=66,642 at 25-Feb-2020 17:42.
25-Feb 17:42 bareos-dir JobId 239: Created new Volume "S3Full-1" in catalog.

Restore of the first job restores the files but never terminates; the bareos-sd debug log repeats the following every 10 s until the process is terminated:
25-Feb-2020 22:22:08.359365 bareos-sd (100): backends/droplet_device.cc:949-250 get chunked_remote_volume_size(S3Full-0)
25-Feb-2020 22:22:08.363639 bareos-sd (100): backends/droplet_device.cc:240-250 chunk /S3Full-0/0000 exists. Calling callback.
25-Feb-2020 22:22:08.367575 bareos-sd (100): backends/droplet_device.cc:240-250 chunk /S3Full-0/0001 exists. Calling callback.
...
25-Feb-2020 22:22:08.568750 bareos-sd (100): backends/droplet_device.cc:240-250 chunk /S3Full-0/0040 exists. Calling callback.
25-Feb-2020 22:22:08.572301 bareos-sd (100): backends/droplet_device.cc:257-250 chunk /S3Full-0/0041 does not exists. Exiting.
25-Feb-2020 22:22:08.572366 bareos-sd (100): backends/droplet_device.cc:960-250 Size of volume /S3Full-0: 4246192871
25-Feb-2020 22:22:08.572402 bareos-sd (100): backends/chunked_device.cc:1256-250 volume: S3Full-0, chunked_remote_volume_size = 4246192871, VolCatInfo.VolCatBytes = 4299157223
25-Feb-2020 22:22:08.572436 bareos-sd (100): backends/chunked_device.cc:1262-250 volume S3Full-0 is pending, as 'remote volume size' = 4246192871 < 'catalog volume size' = 4299157223
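The trace above boils down to a size comparison: the SD keeps the volume "pending" while the size of the chunks found in S3 is smaller than the size recorded in the catalog. As a plain-arithmetic illustration (using the exact values from the trace; this is not Bareos code):

```shell
# Reproduce the "volume is pending" decision from the trace above.
remote=4246192871    # chunked_remote_volume_size
catalog=4299157223   # VolCatInfo.VolCatBytes
if [ "$remote" -lt "$catalog" ]; then
  echo "volume S3Full-0 is pending"
fi
```

The last chunk written by the still-running second job is apparently missing from S3, so the remote size never catches up to the catalog size and the restore loops forever.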

Restoring the second job fails with

25-Feb 22:28 bareos2-fd JobId 253: Error: findlib/attribs.cc:441 File size of restored file /tmp/bareos-restores/data/2/testfile4.txt not correct. Original 545837338, restored 492912238.

When larger chunk sizes are configured, e.g. chunksize=1000M, the following error messages also appear before the "File size not correct" error:

21-Feb 17:36 bareos-sd JobId 13: Error: stored/block.cc:1127 Volume data error at 0:3145669842! Short block of 58157 bytes on device "S3_Wasabi1" (S3) discarded.
21-Feb 17:36 bareos-sd JobId 13: Error: stored/read_record.cc:256 stored/block.cc:1127 Volume data error at 0:3145669842! Short block of 58157 bytes on device "S3_Wasabi1" (S3) discarded.
Attached Files: S3_Dev1.conf (1,516 bytes) 2020-02-26 00:24
https://bugs.bareos.org/file_download.php?file_id=407&type=bug
S3_Storage1.conf (253 bytes) 2020-02-26 00:25
https://bugs.bareos.org/file_download.php?file_id=408&type=bug
S3Job.conf (282 bytes) 2020-02-26 00:26
https://bugs.bareos.org/file_download.php?file_id=409&type=bug
S3testjobs.conf (189 bytes) 2020-02-26 00:26
https://bugs.bareos.org/file_download.php?file_id=410&type=bug
testfilesets.conf (221 bytes) 2020-02-26 00:27
https://bugs.bareos.org/file_download.php?file_id=411&type=bug
gen_testfile.py (228 bytes) 2020-02-26 00:27
https://bugs.bareos.org/file_download.php?file_id=412&type=bug
Notes
(0003858)
stephand   
2020-02-26 00:29   
To reproduce with the attached config, create testfiles like this:
mkdir -p /data/1
mkdir -p /data/2
./gen_testfile.py > /data/1/testfile1.txt
./gen_testfile.py > /data/1/testfile2.txt
./gen_testfile.py > /data/2/testfile1.txt
./gen_testfile.py > /data/2/testfile2.txt
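The contents of the attached gen_testfile.py are not shown in this report; as a stand-in (an assumption, since any generator of a large text file should do), something like this produces a comparable test file:

```shell
# Stand-in test-file generator (illustrative; NOT the attached script).
# Writes ~1.4 MB of base64 text to a temp dir; scale count up for
# multi-hundred-MB files like the ones in the report.
outdir=$(mktemp -d)
dd if=/dev/urandom bs=1024 count=1024 2>/dev/null | base64 > "$outdir/testfile1.txt"
wc -c < "$outdir/testfile1.txt"
```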

View Issue Details
ID: 1134
Category: [bareos-core] vmware plugin
Severity: feature
Reproducibility: always
Date Submitted: 2019-11-06 15:32
Last Update: 2020-02-26 00:23
Reporter: ratacorbo
Assigned To: stephand
Platform: Linux
OS: CentOS
OS Version: 7
Priority: normal
Status: feedback
Product Version: 18.2.5
Resolution: open
Projection: none
ETA: none
Summary: Installing bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 python-pyvmomi error
Description: When trying to install bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 on CentOS 7, yum fails with an error that python-pyvmomi does not exist.
Tags:
Steps To Reproduce: yum install bareos-vmware-plugin
Additional Information: Error: Package: bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 (bareos_bareos-18.2)
           Requires: python-pyvmomi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Attached Files:
Notes
(0003856)
stephand   
2020-02-26 00:23   
The python-pyvmomi package for CentOS 7/RHEL 7 is available in EPEL.
On CentOS 7, the EPEL repo can be added by running

yum install epel-release

For RHEL 7 see https://fedoraproject.org/wiki/EPEL

Does it work after the EPEL repo has been added to your system?
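For reference, the full sequence on CentOS 7 (assuming the Bareos 18.2 repo is already configured, as in the report) would be:

```shell
# Add EPEL first; bareos-vmware-plugin then pulls python-pyvmomi from it.
yum install -y epel-release
yum install -y bareos-vmware-plugin
```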

View Issue Details
ID: 1192
Category: [bareos-core] director
Severity: major
Reproducibility: random
Date Submitted: 2020-02-12 22:45
Last Update: 2020-02-25 13:38
Reporter: hostedpower
Assigned To: arogge
Platform: Linux
OS: Debian
OS Version: 9
Priority: normal
Status: confirmed
Product Version: 19.2.6
Resolution: open
Projection: none
ETA: none
Summary: Authorization key rejected by Storage daemon since upgrading director and storage daemons
Description: Hello,

Since we upgraded all server components to 19.2.6 (coming from 18.2.7), we again get "Authorization key rejected by Storage daemon" for random hosts.

There is no obvious reason: those hosts have not changed in a long time, and they worked for months with 18.2.x with zero issues. Very strange to see this error. It affects only about 5-10% of the hosts.

Example:

web.xxxx.com JobId 9: Fatal error: Authorization key rejected by Storage daemon .
Please see http://doc.bareos.org/master/html/bareos-manual-main-reference.html#AuthorizationErrors for help.
2020-02-12 22:27:47 hostedpower-dir JobId 9: TLS negotiation failed (while probing client protocol)
2020-02-12 22:27:47 hostedpower-dir JobId 9: Connected Client: web.xxxx.com at web.xxxx.com:9102, encryption: None

I think the affected hosts are all older ones that don't support encryption, but on the other hand we have other hosts not supporting encryption which work 100% fine.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: bareos-sd.trace (32,485 bytes) 2020-02-18 16:51
https://bugs.bareos.org/file_download.php?file_id=403&type=bug
bareos-dir.trace (120,750 bytes) 2020-02-18 16:51
https://bugs.bareos.org/file_download.php?file_id=404&type=bug
backup_log.txt (2,461 bytes) 2020-02-18 16:51
https://bugs.bareos.org/file_download.php?file_id=405&type=bug
win10_1910-fd.trace (238,161 bytes) 2020-02-18 16:51
https://bugs.bareos.org/file_download.php?file_id=406&type=bug
Notes
(0003803)
ojehle   
2020-02-13 07:15   
(Last edited: 2020-02-13 07:29)
I get the same error without TLS. I have 10 jobs on the same Windows FD connecting to the same Linux SD; 2 of the jobs fail, and restarting them afterwards works, so it looks like a temporary error on the SD. I tried adding a delay between jobs, but that did not help. I have now increased the timeout between jobs from 30 to 60 seconds.

Upgraded from 18.2.6 -> 19.2.4 (windows FD)
Upgraded Server 18.2.7 -> 19.2.5 (Linux SD/Director)

Compiling 19.2.6 now and will install it...



2020-02-12 22:31:47 linux-dir JobId 13446: Start Backup JobId 13446, Job=BackupWindows.2020-02-12_22.31.45_43
2020-02-12 22:31:47 linux-dir JobId 13446: Connected Storage daemon at arena8:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
2020-02-12 22:31:47 linux-dir JobId 13446: Using Device "WindowsRemote" to write.
2020-02-12 22:31:47 linux-dir JobId 13446: Connected Client: windows-fd at windows:9102, encryption: None
2020-02-12 22:31:47 linux-dir JobId 13446: Handshake: Cleartext
2020-02-12 22:31:47 linux-dir JobId 13446: Encryption: None
2020-02-12 22:31:46 windows-fd JobId 13446: Fatal error: Authorization key rejected windows-fd.
2020-02-12 22:31:46 windows-fd JobId 13446: Fatal error: Failed to authenticate Storage daemon.

(0003806)
arogge   
2020-02-13 10:04   
You can just install 19.2.6 from the repositories. Binaries have been released to the public.
(0003807)
buloosh   
2020-02-13 11:24   
(Last edited: 2020-02-13 11:40)
Got the same problem on 19.2.6 as well.
Running all daemons in debug mode, I figured out that the problem lies on the server side: when the director connects to the storage daemon, a TLS handshake is sometimes expected but there is none in the request, so it fails.
My setup:
server: CentOS 7 dir/sd (19.2.6)
clients: Debian 8-9 / CentOS 7 (17.2-19.2.6)

(0003808)
ojehle   
2020-02-13 11:29   
I compiled and installed 19.2.6, and with the Windows FD at 19.2.4 I got the same error. I did not see errors in the SD 19.2.6 trace logs, so I suspect the Windows FD release 19.2.4. I will retry the testing with the 18.2.5 Windows FD, where I never got such errors.

Will post the results...
(0003810)
ojehle   
2020-02-13 13:01   
(Last edited: 2020-02-13 13:02)
Tested now:

With the downgraded FD on Windows I get no errors anymore; I started my jobs approx. 25 times.

git 19.2.6 director/storage, Ubuntu Linux 18 LTS
Windows FD 18.2.5 (exe) from download

18.2.5 (30Jan19) Microsoft Windows Server 2012 Datacenter Edition (build 9200), 64-bit,Cross-compile,Win64

With the 19.2.4 version (exe) from download I got the error sometimes (in the same 10-job sequence, I always got it once).

(0003811)
ojehle   
2020-02-13 13:13   
Test running now with Windows FD 19.2.6 from download...
(0003812)
ojehle   
2020-02-13 14:10   
With Windows FD 19.2.6 I got the same error after several backups; the error occurred once, and the next backup worked again.

I will now downgrade the Windows FD to 18.2.5 again and rerun the tests.
(0003813)
ojehle   
2020-02-13 14:28   
Got it now also with the 18.2.5 FD:


linux-dir JobId 13553: Error: Bareos linux-dir 19.2.6 (11Feb20):
Build OS: Linux-4.15.0-76-generic ubuntu Ubuntu 18.04.4 LTS
JobId: 13553
Job: BackupWindows.2020-02-13_14.20.05_21
Backup Level: Incremental, since=2020-02-13 14:15:16
Client: "windows-fd" 18.2.5 (30Jan19) Microsoft Windows Server 2012 Datacenter Edition (build 9200), 64-bit,Cross-compile,Win64
(0003826)
jkk   
2020-02-18 14:54   
We have pretty much the same problem after upgrading to Bareos 19.2.6.
Clients are mostly CentOS 7 using bacula-client, so unencrypted fd<->sd connections are used.
The failing clients seem to be random - rerunning the job eventually succeeds, sometimes after 5 tries.
This definitely looks like an issue in bareos-sd 19.2, maybe handshake/authentication-related.
It is pretty annoying, since every day around half of our scheduled backups fail.
(0003827)
arogge   
2020-02-18 15:16   
Could somebody enable tracing on the components (at least debuglevel 100), reproduce the issue and attach the tracefiles here?
(0003830)
ojehle   
2020-02-18 16:51   
(Last edited: 2020-02-18 20:57)
Could reproduce it by installing a Linux VM and a Windows VM, setting up a small job, and setting on the client:

tls enable = no
tls require = no

I could reproduce it every 5th or 6th run of the backup.

(0003831)
ojehle   
2020-02-18 17:29   
It looks like it's the

tls enable = no
tls require = no

After I removed this from the client and the director, I could start 40 backups against the file daemon without any backup error. If I switch back to

tls enable = no
tls require = no

the error occurs again after the 5th backup.
(0003832)
arogge   
2020-02-18 17:36   
Just to make sure i can reproduce this correctly: where do you set these? In the Client resource on the director and in the Director resource on the filedaemon?
(0003833)
ojehle   
2020-02-18 17:42   
Yes, I set it in the Client resource of the director, and on the client in the Director resource and in myself.conf.
(0003836)
arogge   
2020-02-19 09:19   
In my minimal test-case I cannot make it fail (yet). Can you reproduce the problem when dir, fd and sd are on the same machine, or do you need networking in between?
(0003837)
ojehle   
2020-02-19 09:30   
I had a Linux VM (DIR, SD) and a Windows VM (FD).

Yesterday I ran my "simple backup" test repeatedly: approx. 200 successful backups with 19.2.6 on my test setup.

After the successful test, I switched our production system again from 18.6 to 19.2.6, without the "tls enable = no" settings anywhere, and it has been working since then without any error.
(0003838)
arogge   
2020-02-19 09:44   
It looks like sometimes when a non-TLS client connects to the SD the authentication fails. I'm just not sure how this can fail only sometimes.
(0003839)
ojehle   
2020-02-19 09:48   
It's really like this: if you set "tls enable = no", after starting the same job 5-6 times you get the error. The same 2 VMs (VirtualBox), same resources, the only thing running on my laptop, not touching anything.

Could it be a timing issue, if you cannot reproduce it with localhost?
(0003841)
arogge   
2020-02-19 11:41   
Seems like a restart of the SD between backups works around the issue. I made the test not do this anymore and now I can reproduce the issue.

Thanks for taking the time to reproduce this. I'll keep you updated as soon as I know anything new.
(0003850)
arogge   
2020-02-21 20:14   
When debugging the SD with debuglevel 200 or above, you should see "lib/try_tls_handshake_as_a_server.cc:77-0 Connection to %s will be denied due to configuration mismatch". This message occurs when the connection does not satisfy the TlsPolicy.
The TlsPolicy that is checked should be set by ConfiguredTlsPolicyGetter::GetConfiguredTlsPolicyFromCleartextHello(). Seems like there are cases where it isn't set correctly or maybe the value gets corrupted.
(0003855)
arogge   
2020-02-25 13:38   
Seems like the value was never initialized in the first place. We now have a proposed fix for this.

View Issue Details
ID: 1190
Category: [bareos-core] director
Severity: minor
Reproducibility: always
Date Submitted: 2020-02-12 11:36
Last Update: 2020-02-25 12:49
Reporter: arogge
Assigned To: arogge
Priority: normal
Status: assigned
Product Version: 19.2.6
Resolution: reopened
Projection: none
ETA: none
Target Version: 19.2.7
Summary: Schedules without a client will not be run
Description: When you configure a job without a client (only possible for some job types, like Copy and Migrate), the job will not be scheduled.
Tags:
Steps To Reproduce: 1. Configure a job like this:
Job {
  Name = "backup-bareos-fd"
  Type = Copy
  Messages = "Standard"
  Pool = "Full"
  Schedule = "MySchedule"
  SelectionType = PoolUncopiedJobs
}

The job will not be scheduled even though other jobs referencing MySchedule will be scheduled.
I would expect the job to be scheduled.
Additional Information: This happens since an upgrade to 19.2.5.
Attached Files:
Notes
(0003800)
arogge   
2020-02-12 11:36   
Tested and confirmed.
(0003816)
normic   
2020-02-16 02:17   
I'm not sure if this really happens only with 19.2.5, or whether it is a bug at all.
I had the same issue with an 18.2.5 installation. But after investigating further I noticed that the exact same behavior occurs if the Schedule is _not_ referenced by any Job.

I did not recheck this with 19.x.
(0003848)
arogge   
2020-02-20 14:03   
Brock Palen from the mailing list confirmed that the issue only occurs when no client is configured on a job.
(0003852)
franku   
2020-02-25 12:22   
Fix committed to bareos master branch with changesetid 12909.

View Issue Details
ID: 1187
Category: [bareos-core] General
Severity: minor
Reproducibility: always
Date Submitted: 2020-02-11 17:23
Last Update: 2020-02-25 12:49
Reporter: arogge
Assigned To: arogge_adm
Priority: normal
Status: confirmed
Product Version: 19.2.6
Resolution: open
Projection: none
ETA: none
Target Version: 19.2.7
Summary: Release Bareos 19.2.7
Description: This is a tracker ticket for everything going into 19.2.7
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 613
Category: [bareos-core] director
Severity: minor
Reproducibility: always
Date Submitted: 2016-02-03 15:38
Last Update: 2020-02-24 11:08
Reporter: dpearceFL
Platform: Linux
OS: CentOS
OS Version: 6
Priority: normal
Status: confirmed
Product Version: 14.2.6
Resolution: open
Projection: none
ETA: none
bareos-master: impact: yes
bareos-15.2: impact: yes
bareos-14.2: impact: yes
bareos-13.2: impact: yes
bareos-12.4: impact: yes
bareos-12.4: action: none
Summary: PID file is not cleared when hard reset occurs
Description: I am running Bareos on a server that, about once a month, has a hardware event that causes an instantaneous reboot with no proper shutdown. This is my problem.

However I have noticed that the processes bareos-fd and bareos-sd will recover on reboot even though the PID file for each process still exists.

The process bareos-dir will not start because the PID file still exists but the process is NOT running.

So I'm looking at the code in ./src/lib/bsys.c line 597, and it mentions bug 0000797, which I cannot find.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0002182)
dpearceFL   
2016-02-03 15:56   
(Last edited: 2016-02-03 19:04)
If I stop Bareos properly, I notice that the PID file for bareos-dir does not get removed. Even if the PID file were removed on process exit, my issue would still exist.

(0003851)
jurgengoedbloed   
2020-02-24 11:08   
I think that on CentOS, at least on CentOS 7, the pidfile should move to /run.
This directory is a temporary filesystem that is recreated at each reboot.

I've had a hard reboot of several machines, and the following occurred:
- After the hard reboot, another process was running with the process ID listed in the pidfile.
- bareos-fd was not started, as systemd thought bareos-fd was already running.
- When restarting bareos-fd (with the command systemctl restart bareos-fd), systemd actually killed the other process and then started bareos-fd.

On machines running e.g. haproxy, we have had cases on multiple servers where haproxy was killed during a restart of bareos-fd.

I think this also applies to other Bareos components, like bareos-dir and bareos-sd.
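A sketch of the systemd-side approach suggested above: keep the PID file under /run (a tmpfs cleared at boot) so a stale file cannot survive a hard reboot. Paths and names here are illustrative, not the shipped unit file:

```
[Service]
Type=forking
# RuntimeDirectory makes systemd create /run/bareos at service start;
# /run is a tmpfs, so a stale PID file cannot survive a reboot.
RuntimeDirectory=bareos
PIDFile=/run/bareos/bareos-fd.pid
ExecStart=/usr/sbin/bareos-fd
```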

View Issue Details
ID: 1200
Category: [bareos-core] webui
Severity: major
Reproducibility: always
Date Submitted: 2020-02-18 19:24
Last Update: 2020-02-21 01:33
Reporter: HiFlyer
Platform: Dell xps15
OS: Ubuntu
OS Version: 18.04
Priority: high
Status: new
Product Version: 19.2.6
Resolution: open
Projection: none
ETA: none
Summary: fails to restore files
Description: I select a file from Webui on a client. Webui shows the file is selected. I then ask it to restore the file. No run is created and no file is created in the /tmp directory. Bconsole will restore the file.
Tags:
Steps To Reproduce: open Webui
open restore tab
select client
select a file
hit restore button.
Additional Information: The restore section appears as if it is working, but no run is created.
Attached Files:
Notes
(0003849)
HiFlyer   
2020-02-21 01:33   
I just realized that after I select the file, I hit Restore, accepting the default items on the left-hand side of the restore window. What I noticed is that under 19.2.6-2 the "Restore Job" item is greyed out and does not allow a selection. When running bconsole, I actually have to select a restore file. Is this potentially the problem?

View Issue Details
ID: 1182
Category: [bareos-core] webui
Severity: minor
Reproducibility: always
Date Submitted: 2020-02-09 20:30
Last Update: 2020-02-20 09:23
Reporter: zendx
Assigned To: stephand
Platform: Linux
OS: Debian
OS Version: 9
Priority: normal
Status: resolved
Product Version: 19.2.5
Resolution: fixed
Projection: none
ETA: none
bareos-19.2: impact: yes
bareos-19.2: action: fixed
Summary: Files list in restore dialog is empty after updating 18.2.5 to 19.2.5
Description: I've upgraded bareos 18.2.5 -> 19.2.5
PostgreSQL.

/usr/lib/bareos/scripts/update_bareos_tables

DB migrated to version 2192

.bvfs_clear_cache yes done without errors.


After .bvfs_update, some error messages appear in the logfile:

09-Feb 13:08 bareos-dir JobId 0: Fatal error: cats/bvfs.cc:244 cats/bvfs.cc:244 query INSERT INTO PathVisibility (PathId, JobId) SELECT DISTINCT PathId, JobId FROM (SELECT PathId, JobId FROM File WHERE JobId = 13066 UNION SELECT PathId, BaseFiles.JobId FROM BaseFiles JOIN File AS F USING (FileId) WHERE BaseFiles.JobId = 13066) AS B failed: ERROR: duplicate key value violates unique constraint "pathvisibility_pkey"


Restore dialog in WebUI shows all jobs for clients, but files list is empty for all of them :(
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003794)
stephand   
2020-02-12 08:35   
Thanks for reporting this. I verified that it is a race condition that unfortunately happens sometimes when multiple .bvfs_update runs execute in parallel. It is perfectly OK to run multiple .bvfs_update in parallel, and using the Bareos webui also triggers .bvfs_update runs. In most cases the code prevents running a bvfs update on the same JobId in parallel (see https://github.com/bareos/bareos/blob/master/core/src/cats/bvfs.cc#L191), but I was able to reproduce the error nevertheless, both in 18.2 and 19.2. So this is not related to the upgrade. However, the error does not impact the consistency of the bvfs cache data; please consider it a warning for the time being. It needs to be fixed in the future.
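For the PathVisibility race described above, a common PostgreSQL pattern (an illustration of the general technique, not necessarily the fix Bareos will adopt) is to make the insert idempotent so a concurrent duplicate is silently skipped instead of violating the primary key:

```sql
-- Illustrative only: tolerate concurrent inserts of the same
-- (PathId, JobId) pair instead of failing on pathvisibility_pkey.
INSERT INTO PathVisibility (PathId, JobId)
SELECT DISTINCT PathId, JobId
FROM File
WHERE JobId = 13066
ON CONFLICT (PathId, JobId) DO NOTHING;
```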

The other problem ("Restore dialog in WebUI shows all jobs for clients, but files list is empty for all of them") is related to the upgrade. It is fixed with https://github.com/bareos/bareos/pull/411, which is included in 19.2.6; please update.
(0003795)
stephand   
2020-02-12 08:38   
Could you please check if it works with Bareos 19.2.6?
(0003843)
mfulz   
2020-02-19 23:36   
I can confirm the empty files list is fixed with 19.2.6.
I had the same issue with the 19.2.5 version.

THX
(0003844)
stephand   
2020-02-20 09:23   
Issue is fixed in 19.2.6

View Issue Details
ID: 1181
Category: [bareos-core] director
Severity: crash
Reproducibility: always
Date Submitted: 2020-02-08 02:47
Last Update: 2020-02-19 22:51
Reporter: teka74
Assigned To: arogge
Platform: Dual-CPU XEON
OS: Ubuntu
OS Version: 18.04
Priority: high
Status: assigned
Product Version: 19.2.5
Resolution: open
Projection: none
ETA: none
Summary: director failed to start
Description: After upgrading my database with the hack from issue 1176, I could start my director and it made a scheduled backup.

After this backup, I tried to reboot my computer, and the director failed to start.


root@backup:~# service bareos-director start
Job for bareos-director.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status bareos-director.service" and "journalctl -xe" for details.
root@backup:~# service bareos-director status
● bareos-director.service - Bareos Director Daemon service
   Loaded: loaded (/lib/systemd/system/bareos-director.service; enabled; vendor preset: enabled)
   Active: failed (Result: protocol) since Sat 2020-02-08 02:41:02 CET; 14s ago
     Docs: man:bareos-dir(8)
  Process: 4522 ExecStart=/usr/sbin/bareos-dir (code=exited, status=1/FAILURE)
  Process: 4521 ExecStartPre=/usr/sbin/bareos-dir -t -f (code=exited, status=1/FAILURE)

Feb 08 02:41:02 backup bareos-dir[4521]: Config error: expected an equals, got: {
Feb 08 02:41:02 backup bareos-dir[4521]: : line 1, col 9 of file /etc/bareos/bareos-dir.d/fileset/WindowsAll.conf
Feb 08 02:41:02 backup bareos-dir[4521]: FileSet {
Feb 08 02:41:02 backup bareos-dir[4522]: bareos-dir: ERROR TERMINATION at lib/parse_conf_state_machine.cc:128
Feb 08 02:41:02 backup bareos-dir[4522]: Config error: expected an equals, got: {
Feb 08 02:41:02 backup bareos-dir[4522]: : line 1, col 9 of file /etc/bareos/bareos-dir.d/fileset/WindowsAll.conf
Feb 08 02:41:02 backup bareos-dir[4522]: FileSet {
Feb 08 02:41:02 backup systemd[1]: bareos-director.service: Can't open PID file /var/lib/bareos/bareos-dir.9101.pid (yet?) after start: No such file or directory
Feb 08 02:41:02 backup systemd[1]: bareos-director.service: Failed with result 'protocol'.
Feb 08 02:41:02 backup systemd[1]: Failed to start Bareos Director Daemon service.
root@backup:~#


Are there any changes in the FileSet syntax?

I found nothing in the current documentation.



Thomas
Tags:
Steps To Reproduce: reboot

start director

failed
Additional Information:
Attached Files:
Notes
(0003759)
arogge   
2020-02-10 10:59   
Does "sudo -u bareos bareos-dir -t" test the configuration OK?
(0003802)
teka74   
2020-02-13 00:51   
root@backup:~# sudo -u bareos bareos-dir -t
bareos-dir: ERROR TERMINATION at lib/parse_conf_state_machine.cc:128
Config error: expected an equals, got: {
            : line 1, col 9 of file /etc/bareos/bareos-dir.d/fileset/WindowsAll.conf
FileSet {

root@backup:~#
(0003804)
arogge   
2020-02-13 09:34   
Thank you for the feedback!

It seems like the parser thinks it is inside of a resource definition and reads "FileSet" as a keyword, expecting it to be followed by an equals sign ("=").
Could you attach that configuration file (as it is) to this bug, so I can try to reproduce this and find out what is going wrong? Would you also take a look at the other configuration files in the fileset directory?
(0003814)
teka74   
2020-02-14 02:58   
(Last edited: 2020-02-19 00:46)
Here is my FileSet; it was working with 18.2.5 for a year:



FileSet {
  Name= "WindowsAll"
  Enable VSS = No
  Include {
    Options {
      Signature = MD5
      Compression = GZIP
      Drive Type = fixed
      IgnoreCase = yes
      aclsupport = no
      xattrsupport = no
      WildFile = "[A-Z]:/pagefile.sys"
      WildFile = "[A-Z]:/hiberfil.sys"
      WildFile = "*.mp4"
      WildDir = "[A-Z]:/RECYCLER"
      WildDir = "[A-Z]:/$RECYCLE.BIN"
      WildDir = "[A-Z]:/System Volume Information"
      Exclude = yes
    }
    File = "c:/Freigaben/"
    File = "c:/Lampey/"
  }
}


I checked my other filesets as well; all of them are unchanged since the first install.


Are there any problems with old clients? On my MS Server 2011, 17.2.5 is running... Edit: after updating the Windows machine to 19.2.6, same error.

Thomas

(0003824)
RSmit   
2020-02-18 14:09   
Seems to me that you did not close the "FileSet" section with a "}". This should be on the last line.
(0003834)
teka74   
2020-02-19 00:44   
@RSmit

No, I checked it; I think I forgot one } in the copy&paste.

3 times { and 3 times }

I edited my older post.
(0003835)
arogge   
2020-02-19 08:43   
I will need an example to trigger this, e.g. your director configuration (i.e. /etc/bareos/bareos-dir.d) in a tarball, preferably with the passwords removed.
If you don't want to upload it here (which I can totally understand) you can send it to me via e-mail: andreas.rogge@bareos.com

If you can reduce the configuration to the bare minimum that is required to reproduce the problem, that would save a lot of my time.
(0003840)
RSmit   
2020-02-19 10:12   
I have tried your config file in one of my Directors, but it doesn't fail... So the problem might be somewhere else...
The syntax you're using has not changed as far as I can tell...
Some things to try for general troubleshooting:
- Have you tried to run your Director in debug 200 mode (bareos-dir -d 200)? This helps me a lot when I have configuration problems.
- What happens if you remove the ".conf" from your WindowsAll config file and start the director? Note: there can be another message from a Job missing this config...
- Might there be a /etc/bareos/bareos-dir.conf file from an earlier version of Bareos? (long shot, but you never know...)
- Can you start your Director if you remove the ".conf" from all your config files in [bareos]/client/, [bareos]/fileset/, [bareos]/schedule and [bareos]/job (where [bareos] = /etc/bareos/bareos-dir.d/)? If so, re-add ".conf" to the config files one by one and try to restart the director... If not, you could send the remaining config to Andreas...
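The stray-file check implied by the steps above can be sketched in shell. This is a hedged illustration, not an official tool: `CONF_DIR` and its default are placeholders, and in practice you would point it at /etc/bareos (or /etc/bareos/bareos-dir.d) to list files the director might still try to parse:

```shell
# List files under a config tree that do NOT end in .conf.
# CONF_DIR defaults to the current directory purely for safe illustration;
# the real Bareos path (/etc/bareos) is an assumption about your layout.
CONF_DIR="${CONF_DIR:-.}"
find "$CONF_DIR" -type f ! -name '*.conf'
```

Anything this prints is a candidate to move out of the tree before restarting the director.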
(0003842)
teka74   
2020-02-19 22:51   
@arogge @RSmit

After several tests and your tips, I got my director running.

I tested it with gdb and debug level 200; the debug output listed the config files being read. I checked all of these configs and couldn't find any syntax errors. Then I noticed I had some older test configs under /etc/bareos, all of which had been renamed so they no longer ended in ".conf". I moved these files to my home dir, and the director started!

My conclusion: the director tries to read these files, which it shouldn't do.

Perhaps check this in coming versions, and/or add a hint to the manual for other users.


Thomas

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1194 [bareos-core] file daemon major always 2020-02-13 08:53 2020-02-18 15:42
Reporter: Int Platform: x86  
Assigned To: arogge OS: Windows  
Priority: high OS Version: 2016  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Incremental backup does backup all files instead of changed files
Description: On 30.1.2020 the incremental backup on my Windows Server 2016 (64bit) suddenly did not backup the changed files anymore but did backup up all files instead. This happened ever since except one time on 4.2.2020 where it did a correct incremental backup.
This change of behaviour does not correlate with Windows updates - the last time I installed Windows updates on the server was on 17.1.2020.
Also the Bareos server was not changed.
I am using the Accurate option for my backups.

in this bug https://bugs.bareos.org/view.php?id=907
I found the hint to use only MD5 hash for the accurate option so I tried this and changed my fileset configuration to "accurate = 5"
but it did not change the behaviour. The incremental backup is still backing up all files.

Backups of clients with Windows 10 still work correctly, so I assume that issues lies with the file daemon on Windows Server 2016.

Any ideas what this could be caused by?
Or any hint where to debug?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003805)
arogge   
2020-02-13 09:37   
First of all: please don't run 17.2 if you can help it.

As you have accurate enabled, it could be that the accurate flags are not transferred to the client correctly.

- can you successfully open the restore browser in bconsole using the restore command and then selecting option 5, and the client in question?
- how many files are "all files"?
(0003809)
Int   
2020-02-13 12:05   
Thank you for the quick feedback!

Upgrading from 17.2 is on my todo list but I didn't have the time to tackle that so far.
I can restore successfully for this client with bconsole.
"All files" means 8,786,819 files at the moment. The number of files didn't increase drastically; it was always about this amount for this client.

It would be great if 17.2 worked again; that would give me time to plan and execute the upgrade to 19.2 in a relaxed manner.
If you think an upgrade to 19.2 is the only solution for this problem, I will have to reorganize my tasks and give it the highest priority.
(0003821)
Int   
2020-02-18 09:08   
Since I didn't get any further feedback about how to proceed, I went ahead myself and worked over the weekend to update Bareos to the latest version, 19.2.6.

So far (for the last two nights) the incremental and differential backups work correctly again, so the update seems to have solved the issue.

But I still don't understand why Bareos changed its behaviour all of a sudden in the first place...
(0003822)
arogge   
2020-02-18 10:32   
I don't understand it either - but I'm also not eager to dig into old code to find an issue that has been fixed in the meantime.
While 19.2 has its own issues (and we're working on that) it is probably a much better choice than 17.2.

Is your problem fixed now? Can I close the issue?
(0003823)
Int   
2020-02-18 10:43   
Let me monitor the backup behaviour until end of the week just to be sure ...
I will give you feedback on Friday.

Thanks!
(0003829)
arogge   
2020-02-18 15:42   
As the problem doesn't seem to occur in 19.2 anymore, I'll close the issue.
You can just reopen it if the issue resurfaces. Thank you!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1198 [bareos-core] installer / packages minor always 2020-02-16 12:42 2020-02-18 15:21
Reporter: Int Platform:  
Assigned To: OS: Windows  
Priority: normal OS Version:  
Status: new Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Upgrade of Bareos client on Windows gives error when trying to copy libbareos.dll
Description: The upgrade fails to copy libbareos.dll because
"Bareos Tray Monitor" is still running in the background. The installer does not stop this process.

To continue the upgrade process the "Bareos Tray Monitor" process has to be killed manually in the Task Manager.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003817)
Int   
2020-02-16 13:41   
The error did not happen every time, but it occurred on 3 out of 5 Windows clients.
(0003825)
RSmit   
2020-02-18 14:17   
I had the same problem when there was another Bareos Tray Monitor running in another session. Please check your Task Manager for that...
(0003828)
Int   
2020-02-18 15:21   
You are right. A Bareos Tray Monitor running in another session might have been the case here too.
I didn't pay attention to the user account that the "Bareos Tray Monitor" process was running under when I killed the task manually in the Task Manager, so I can't confirm it, but it sounds likely.
All my Windows clients are updated now, so I cannot reproduce it at the moment.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1199 [bareos-core] webui feature always 2020-02-18 10:06 2020-02-18 10:06
Reporter: Int Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: possibility to configure default restore destination path in WebUI
Description: The default restore destination path in WebUI is set to /tmp/bareos-restores/

It would be nice if one could change this default restore path to something else in the configuration of the WebUI.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1047 [bareos-core] webui major sometimes 2019-02-06 20:40 2020-02-17 14:11
Reporter: murrdyn Platform: Linux  
Assigned To: arogge OS: RHEL  
Priority: normal OS Version: 7  
Status: feedback Product Version: 18.2.5  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Webui login returns a blank page when logging in.
Description: Sometimes when logging into Webui or attempting to log in with a different users, I am getting a blank page. The following errors appear in the httpd error log when this occurs.

[Wed Feb 06 13:28:14.347786 2019] [:error] [pid 3627] [client x.x.x.x:63602] PHP Notice: Undefined variable: form in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login
[Wed Feb 06 13:28:14.347819 2019] [:error] [pid 3627] [client x.x.x.x:63602] PHP Fatal error: Call to a member function prepare() on a non-object in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login

If I refresh the blank page, I then get the page shown in the attached image.

Occasionally this will self heal, and the error does not appear in the logs.
Tags: broken, webui
Steps To Reproduce: Fresh install of 18.2.5 on RHEL 7.6. After first successful login into Webui with created admin account (TLS Enable = no for the console profile), error and blank page are received on next login attempt. Will self-heal within a day and then be fine until current session times out and you log back in or force the logout and back in.
Additional Information:
System Description
Attached Files: bareos-error.PNG (23,787 bytes) 2019-02-06 20:40
https://bugs.bareos.org/file_download.php?file_id=348&type=bug
png
Notes
(0003251)
murrdyn   
2019-02-06 20:43   
Additional information: I created a second admin account. I can temporarily work around the issue by switching accounts in the webui when the error is encountered.
(0003252)
teka74   
2019-02-07 02:02   
same problem here, 18.2.5 updated from 17.2.4, system ubuntu 16.04 lts
(0003262)
xyros   
2019-02-14 17:00   
I originally posted this in another bug report, but I believe it applies better here:

The observation I have found, regarding this issue, is that intentionally logging out (before doing anything that triggers a session expiry response/notification) avoids triggering this bug on subsequent login.

Typically, if you remain logged in and your session expires by the time you try to perform an action, you have to log back in. This is when you encounter this bug.

Following a long idle period, if you avoid performing any action, so as to avoid being notified that your session has expired, and instead click your username and properly logout from the drop-down, you can log back in successfully without triggering this bug.

In fact, I have found that if I always deliberately logout, such that I avoid triggering the session expiry notice, I can always successfully login on the next attempt.

I have not yet tested a scenario of closing all browser windows, without logging out, then trying to login again. However, so far it seems that deliberately logging out -- even after session expiry (but without doing anything to trigger a session expiry notification) -- avoids triggering this bug.

Hope that helps with figuring out where the bug resides.
(0003452)
arogge   
2019-07-12 11:02   
Can you please check whether deleting your cookies for the bareos-webui actually helps?
(0003686)
arogge   
2019-12-18 15:32   
closing due to no response on feedback request
(0003819)
b0zzo   
2020-02-17 14:04   
Hello,

I'm facing the exact same issue, and I can confirm that cleaning the cookies does not help.
I tried using multiple consoles: admin, user1, user2... The "admin" (using the webui-admin profile) worked well, but when I started switching to a different account the problem appeared.
I thought it was because of the ACL (as the error would eventually suggest), but even with a similar profile the bug still appeared, and trying to log back in as admin also failed.


bareos-webui-18.2.6-58.1.el7.noarch
redhat 7.7

Thank you
(0003820)
arogge   
2020-02-17 14:11   
So the webui-admin works, but the others don't?
Do you have "TLS Enable = no" in the other configurations? If not, this might be the cause (and we will need a better way to detect and report that problem).

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
958 [bareos-core] General major have not tried 2018-06-01 16:22 2020-02-17 04:30
Reporter: MarceloRuiz Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: high OS Version: 18.04  
Status: new Product Version: 17.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Always incremental backups always upgraded to full
Description: After upgrading bareos from 16.2 to version 17.2, each "always incremental" backup job is upgraded to a full backup. This happens all the time and the unmodified configuration used to work fine with 16.2.
Client: Ubuntu 18.04
Server (Director, Storage Daemon): FreeNas 11.1 (Freebsd 11.1-RELEASE-p10)
Tags: always incremental, freebsd
Steps To Reproduce: On a fresh installation, create an always incremental job. The job will be upgraded to full during the first backup (expected) and subsequent ones (not expected).
Additional Information: I figured maybe the script that upgraded the tables had a problem, so I dropped the whole database and recreated it again and reset bareos to zero (deleting all the stored data and status on every single component including the 2 clients I have). In this way I ruled out problems with the upgrade from one version to another, but the problem remains.
Looking at the logs, I see confirmation that the first full backup (upgraded from incremental) finished with status OK, but the next run of the same job (which should run as incremental) is upgraded "from Full to Full". The stated reason is that a prior failed full job was found in the catalog, yet the database table holds the correct OK status.
I am attaching a sanitized version of the log file with the two JobIds involved. The rest of the logs belong to failing jobs from the second client (which I turned off on purpose) to make it easier to see what was going on.
Attached Files: bareos.log (6,342 bytes) 2018-06-01 16:22
https://bugs.bareos.org/file_download.php?file_id=294&type=bug
Notes
(0003818)
MarceloRuiz   
2020-02-17 04:30   
Problem still present in version 19.2.6 and Postgres 11.
The message "Prior failed job found in catalog. Upgrading to Full." is not very helpful. At the least it should show the previous JobId it is referencing and that job's status (in my latest test that would be JobId 1 with status T).
Is there any way to debug this behavior?
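One hedged way to start debugging is to look directly at what the catalog records for the job; the query below is only a sketch ('MyAIJob' is a placeholder job name) against the standard Job table:

```sql
-- Show the most recent runs of the job, so the JobStatus that the
-- "Prior failed job found" decision is presumably based on can be
-- checked by hand (T = terminated normally).
SELECT JobId, Level, JobStatus, StartTime, EndTime
  FROM Job
 WHERE Name = 'MyAIJob'
 ORDER BY StartTime DESC
 LIMIT 10;
```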

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1197 [bareos-core] installer / packages minor always 2020-02-16 11:25 2020-02-16 11:25
Reporter: Int Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: An upgrade of Bareos packages creates default configuration files
Description: The upgrade of Bareos packages of an existing installation creates default configuration files.
This pollutes the existing installation with unused and unwanted storages, clients, ...
For example I now have a storage named "File" in my database that I don't need and don't use.

In case of an upgrade, the default configuration files should either not be created at all, or at least be created as .conf.example files.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1196 [bareos-core] General minor always 2020-02-14 11:10 2020-02-14 15:09
Reporter: okonobeev Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: oVirt plugin: restore doesn't work if a template other than 'Blank' is used
Description: Hello.

There is a problem with a restore process if backed up VM using different from 'Blank' template.
During backup process everything is OK and in ovirt-sdk log you can find that there is correct template ID in use:

DEBUG:root:> <original_template href="/ovirt-engine/api/templates/e5b10b2a-a982-437b-934b-2e3dda570c27" id="e5b10b2a-a982-437b-934b-2e3dda570c27"/>
DEBUG:root:> <template href="/ovirt-engine/api/templates/e5b10b2a-a982-437b-934b-2e3dda570c27" id="e5b10b2a-a982-437b-934b-2e3dda570c27"/>


But during the restore process of the same VM, the 'Blank' template is always used instead of the correct one:

DEBUG:root:> <original_template href="/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"/>
DEBUG:root:> <template href="/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"/>

In the end the restore job finishes with a success status, because the VM was successfully created and the disk was uploaded too, but in the oVirt manager we can see the following error:
VDSM bl1-3.ovirt.***.local command VerifyUntrustedVolumeVDS failed: Image verification failed: "reason=Image backing file u'2cd41bd5-ab2f-4212-887e-33ef4e88b73e' does not match volume parent uuid '00000000-0000-0000-0000-000000000000'"
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
907 [bareos-core] file daemon major have not tried 2018-02-09 10:59 2020-02-13 09:37
Reporter: mbr Platform: Linux  
Assigned To: OS: Windows  
Priority: normal OS Version: 2016  
Status: acknowledged Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Incremental Job backups same files
Description: Director is running on CentOS 7, FD on Windows Server 2016.
Jobtype: always incremental;
Bug: Some files (~ 180) were copied on every job.

Example:

Jobs:
 22401 Incr 2,187 105.1 G OK 04-Feb-18 01:24 FS01-AI
 22421 Incr 1,143 103.8 G OK 05-Feb-18 08:35 FS01-AI
 22495 Incr 2,515 105.3 G OK 09-Feb-18 07:13 FS01-AI


In the database you can see that the files have not changed between backups:

bareos=# select * from file where pathid='5305263' and name='foo.bar' and jobid=22495;
  fileid | fileindex | jobid | pathid | deltaseq | markid | fhinfo | fhnode | lstat | md5 | name
-----------+-----------+-------+---------+----------+--------+--------+--------+----------------------------------------------------------+------------------------+--------------------
 252600154 | 2292 | 22495 | 5305263 | 0 | 0 | 0 | 0 | A A IH/ A A A BAAAYg ePLa A A BOh02e BN3PJg BZFeCr A A f | VsVr09AEXA85Ywpk6aW+8w | foo.bar
 252163936 | 1010 | 22421 | 5305263 | 0 | 0 | 0 | 0 | A A IH/ A A A BAAAYg ePLa A A BOh02e BN3PJg BZFeCr A A f | VsVr09AEXA85Ywpk6aW+8w | foo.bar
 252090699 | 1934 | 22401 | 5305263 | 0 | 0 | 0 | 0 | A A IH/ A A A BAAAYg ePLa A A BOh02e BN3PJg BZFeCr A A f | VsVr09AEXA85Ywpk6aW+8w | foo.bar
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002903)
joergs   
2018-02-09 19:58   
Is there anything special you know about the affected files?

Some filesystems behave oddly, e.g. react to access time ...

You can configure which attributes are evaluated in accurate mode via the fileset include option "accurate"; see http://doc.bareos.org/master/html/bareos-manual-main-reference.html#FileSetIncludeRessource

If you are only interested in the file contents, setting

accurate = 5

might be an option.
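As a sketch, such a fileset include could look like the following (illustrative only: the path is a placeholder, and Signature = MD5 is assumed so that a checksum is stored in the catalog for the comparison to use):

```
Include {
  Options {
    Signature = MD5   # store an MD5 checksum in the catalog
    Accurate = "5"    # "5" = compare only the MD5 checksum
  }
  File = "/data"      # placeholder path
}
```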
(0002906)
mbr   
2018-02-12 14:38   
Joerg, thanks for your reply.
Affected files were PSTs, media files and binaries, but no MS Office files (which make up most of the files on this server/fileset). The file system is NTFS on a Server 2016.

I understand that reducing the attribute set which decides whether a file has changed is a possible workaround. I will give it a try.

Please correct me if I am wrong, but from my understanding of Bareos the decision whether to back up a file is based on the catalog entries. The catalog entries (see my database dump in the bug report) for the files that were backed up every time are identical, so there must be a bug in the decision process.
(0002907)
joergs   
2018-02-12 16:39   

It depends.
A normal backup is based only on timestamps (back up all files that have been modified after the last backup).
As you are using Always Incremental, I assume you are using accurate backup, where the backup decision really is based on the catalog data.

My proposal can be seen as a workaround, but it was intended as a way to track down the problem. If it works with some settings, we can determine which attributes are treated wrongly.

Anyhow, there is also another possible reason:
maybe there were job runs in between where the data was not present (volume not mounted, ...). In that case the in-between job found that foo.bar was not available. As it was available again afterwards, it was backed up again.
(0002938)
romor   
2018-03-05 16:17   
Hi, I have the same problem on two servers with Ubuntu 16.04 and BareOS 17.2.4.
I have 2 million files, which are all backed up again by the incremental job after updating Bareos from version 15.2 to 17.2.
But this happens only on BIG backups (2 and 3 million files).
I saw that the database upgrade from version 2004 to the new 2171 somehow merges the File and Filename tables. Maybe the problem is there.
In an environment restored back to version 15.2 I have 2,848,506 rows (size 1.2 GiB) in table File and 2,297,827 rows (337 MiB) in table Filename. After the upgrade to 17.2, table File is 2 GiB and the incremental backup backed up everything.
In another environment on version 17.2 I have 14,600,000 rows (size 3.6 GiB) - there I have the problem.
(0003062)
beckzg   
2018-07-05 23:25   
I have the same problem: a simple incremental backup job for a web server's almost static files (php, js, pictures). Every time, all files are backed up. There are 1.7 million files.

The environment is:

OS: Debian GNU/Linux 9.4 (stretch)
Bareos: Version: 17.2.4 (21 Sep 2017)
(0003152)
kilroy   
2018-11-20 13:25   
Same here.

We have 99 Jobs on quite identical machines. All Linux. All ext3 or ext4.

*Two* of these jobs misbehave and do a full backup on every incremental "accurate" backup. Both have over 4 million files. All other 97 jobs have < 2 million files and work just fine!

Disabling accurate fixes this.

I have looked into the source code (filed/accurate.c) and found that the init for the MDB accurate database is called and the return code (which could be false) is then *completely ignored*:

   jcr->file_list->init(jcr, nb);
   jcr->accurate = true;

It should read:

  if (!jcr->file_list->init(jcr, nb))
  {
      // emit an error message, possibly informing the director: dir->fsend(_("2991 Failed to allocate memory"));
      return false;
  }

I don't know if this fixes the issue; it just points to missing error handling...

OS: Gentoo Linux, Ubuntu Linux, Debian Linux, ...
Bareos: Various versions, most of them 17.2.4 and 17.2.7

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1193 [bareos-core] webui major sometimes 2020-02-13 07:02 2020-02-13 07:02
Reporter: fanxp Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 18.04  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Need to clear cookies about bareos-webui when log in, otherwise a blank page will appear
Description: Sometimes when I log in, I get a blank page. The following errors appear in the bareos-webui log when this occurs.

2020/02/13 13:31:30 [error] 29794#29794: *42931 FastCGI sent in stderr: "PHP message: PHP Notice: Undefined variable: form in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45
PHP message: PHP Fatal error: Uncaught Error: Call to a member function prepare() on null in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml:45
Stack trace:
#0 /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Renderer/PhpRenderer.php(501): include()
0000001 /usr/share/bareos-webui/vendor/zendframework/zend-view/src/View.php(205): Zend\View\Renderer\PhpRenderer->render(NULL)
0000002 /usr/share/bareos-webui/vendor/zendframework/zend-view/src/View.php(233): Zend\View\View->render(Object(Zend\View\Model\ViewModel))
0000003 /usr/share/bareos-webui/vendor/zendframework/zend-view/src/View.php(198): Zend\View\View->renderChildren(Object(Zend\View\Model\ViewModel))
0000004 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/View/Http/DefaultRenderingStrategy.php(103): Zend\View\View->render(Object(Zend\View\Model\ViewModel))
0000005 [internal function]: Zend\Mvc\View\Http\DefaultRenderingStrategy->render(Object(Zend\Mvc\MvcEvent))
0000006 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventMa" while reading response header from upstream, client: 10.9.1.1, server: bareos, request: "POST /auth/login HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "bareos.yfish.x", referrer: "http://bareos.yfish.x/auth/login"

And when I refresh the blank page, the following errors appear in the web page.

Access denied
Permission to execute the following commands is required:
list,llist,use,version,.api,.clients,.help
Read the Bareos documentation on how to configure ACL settings in your Console/Profile resources.

At the same time, the following errors appear in the bareos-webui log.

2020/02/13 13:36:17 [error] 29794#29794: *43062 FastCGI sent in stderr: "PHP message: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /usr/share/bareos-webui/module/Application/src/Application/Controller/Plugin/CommandACLPlugin.php on line 53
PHP message: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /usr/share/bareos-webui/module/Application/src/Application/Controller/Plugin/CommandACLPlugin.php on line 53
PHP message: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /usr/share/bareos-webui/module/Application/src/Application/Controller/Plugin/CommandACLPlugin.php on line 53
PHP message: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /usr/share/bareos-webui/module/Application/src/Application/Controller/Plugin/CommandACLPlugin.php on line 53
PHP message: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /usr/share/bareos-webui/module/Application/src/Application/Controller/Plugin/CommandACLPlugin.php on line 53
PHP message: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /usr/share/bareos-webui/module/Application/src/Application/Controller/Plugin/CommandACLPlugin.php on line 53
PHP message: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /usr/share/bareos-webui/module/Application/src/Application/Controller/Plugin/CommandACLPlugin.php on line 53" while reading response header from upstream, client: 10.9.1.1, server: bareos, request: "GET /dashboard/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "bareos.yfish.x", referrer: "http://bareos.yfish.x/auth/login"

But if I delete the bareos-webui cookies, the webui works well again.
I have found a similar issue (https://bugs.bareos.org/view.php?id=1047), but I can't find a solution to my problem in that issue. Basically I hit this issue almost every time I log in, and having to delete cookies each time is troublesome.
Tags: webui
Steps To Reproduce: Install bareos and bareos-webui (version 19.2.5-1) on Ubuntu 18.04 using apt. Configure bareos and bareos-webui, and log in to the webui. Sometimes the login succeeds, sometimes you get a blank page.
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1189 [bareos-core] configuration gui minor always 2020-02-12 09:58 2020-02-12 18:46
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restore windows takes very long before loading, then times out
Description: When we go to https://backupmaster.xxx/restore/ , the page keeps loading forever and ends up with an empty screen.

Afterwards we can restore by selecting the correct date of a specific server...

However this takes a really, really long time, since we have to wait for the timeout first... :(

So all restores take much more time than before.

It looks like it selects some invalid client or something when opening the screen for the first time.

Afterwards (after waiting a long time), we can successfully go to a client and start a restore
Tags:
Steps To Reproduce: * using the always incremental scheme
* Click on restore in the bareos web-ui
* The screen shows loading; this takes up to several minutes
* The screen ends up empty (stuck on "Please choose a client")
* Now finally we can go to a client and start the restore

* Bonus: on Internet Explorer we can no longer restore at all
Additional Information:
System Description
Attached Files:
Notes
(0003801)
hostedpower   
2020-02-12 18:46   
PS: I think it runs this query over and over when opening the restore window:

2901 | bareos | localhost | bareos | Query | 43 | Sending data | DELETE FROM PathVisibility WHERE NOT EXISTS (SELECT 1 FROM Job WHERE JobId=PathVisibility.JobId) | 0 | 53298921 |

This is apparently slow with a lot of data.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1191 [bareos-core] webui crash always 2020-02-12 15:40 2020-02-12 15:40
Reporter: khvalera Platform: Linux  
Assigned To: OS: Arch Linux  
Priority: high OS Version: x64  
Status: new Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: The web interface runs under any login and password
Description: Logging in to the web interface succeeds with any arbitrary username and password.
How can this be fixed?
Tags: webui
Steps To Reproduce: /etc/bareos/bareos-dir.d/console/web-admin.conf

Console {
  Name = web-admin
  Password = "123"
  Profile = "webui-admin"
}

/etc/bareos/bareos-dir.d/profile/webui-admin.conf

Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

/etc/bareos-webui/directors.ini

[bareos_dir]
enabled = "yes"
diraddress = "localhost"
dirport = 9101
;UsePamAuthentication = yes
pam_console_name = "web-admin"
pam_console_password = "123"
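If PAM authentication is the intent here, note that the UsePamAuthentication directive above is still commented out while the pam_console_* credentials are set; a sketch of the same directors.ini with it active (same values as reported; whether this changes the login behaviour is an assumption, not verified):

```
[bareos_dir]
enabled = "yes"
diraddress = "localhost"
dirport = 9101
; PAM only takes effect when this line is uncommented:
UsePamAuthentication = yes
pam_console_name = "web-admin"
pam_console_password = "123"
```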
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
657 [bareos-core] storage daemon crash sometimes 2016-05-09 16:56 2020-02-12 11:28
Reporter: barbugs Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: high OS Version: 14.04  
Status: feedback Product Version: 15.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: SD crashes almost every time when scheduled jobs are starting
Description: Almost every night I get a Bareos GDB traceback. After that I have to restart the SD and start some jobs by hand again. The following scheduled jobs are usually OK then.

I found this bug here:

https://bugs.bareos.org/view.php?id=414

That seems to be my problem, but it is fixed already, isn't it? My version is bareos-storage 15.2.2-37.1.

Should I provide you with a traceback and core as well?
Tags:
Steps To Reproduce: Let scheduled jobs run a couple of times. After the 3rd or 4th attempt it occurs. Often just right away.
Additional Information: I installed bareos-dbg and gdb. Thus I have tracebacks and core files.

Something strange I spotted in the traceback:

0x00007fc7ecb4a215 in bnet_host2ipaddrs (host=host@entry=0xaaaaaaaaaaaaaaaa <error: Cannot access memory at address 0xaaaaaaaaaaaaaaaa>, family=family@entry=0, errstr=errstr@entry=0x7fc7e37fcf78) at bnet.c:414

System Description
Attached Files:
Notes
(0002262)
mvwieringen   
2016-05-09 19:01   
At least the traceback, as that shows the stack traces. The above seems to
indicate a freed buffer, as it contains the 0xaaaa... pattern, which is the
fill pattern for freed data.
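To illustrate the note above: debug allocators commonly overwrite freed blocks with a fill byte such as 0xAA, so a pointer later read from such a block shows exactly the 0xaaaaaaaaaaaaaaaa value seen in the traceback's host= argument. A minimal sketch (the specific fill byte is an assumption about Bareos' memory checking, not taken from its source):

```python
# Simulate a freed 64-bit pointer slot overwritten with the 0xAA fill byte.
freed_fill = bytes([0xAA]) * 8                        # contents of the "freed" block
pointer_value = int.from_bytes(freed_fill, "little")  # read it back as a pointer
assert hex(pointer_value) == "0xaaaaaaaaaaaaaaaa"     # the host= value in the traceback
```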
(0002263)
mvwieringen   
2016-05-09 19:03   
I honestly cannot tie the two bugs together, i.e. this one and the one you
mention. Do you get an overrun buffer ASSERT? Next to the traceback, it
might also make sense to paste the exact error you get in the log.
(0002268)
barbugs   
2016-05-10 15:12   
OK, that was just a hunch.

There are different unpleasant things in the logs:

- JobId 474: Error: sql_create.c:450 Volume "client2-2016-5-6-474" already exists.
- Storage daemon didn't accept Device "V3-RAID-20" command.
- JobId 411: Fatal error: filed/dir_cmd.c:2397 Comm error with SD. bad response to Append Data. ERR=Unknown error
- JobId 411: Error: sql_create.c:450 Volume "client7-2016-4-29-411" already exists.
- JobId 411: Job client7-differential.2016-04-29_01.00.00_36 is waiting. Cannot find any appendable volumes.
- Fatal error: bsock_tcp.c:134 Unable to connect to Storage daemon on <IP>:9103. ERR=Connection refused (after it died)


It seems the whole concept of having one Volume file per client and job, accomplished with many storage devices (because you want to write more than one Volume at a time), is crap.

However here is the full traceback:

Created /var/lib/bareos/bareos-dir.core.10802 for doing postmortem debugging
[New LWP 10804]
[New LWP 10806]
[New LWP 10807]
[New LWP 10808]
[New LWP 10994]
[New LWP 10802]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/bareos-dir'.
#0 0x00007fc7ec336b9d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
$1 = 0x6a3ca0 <my_name> "vampire3-dir"
$2 = 0x2058058 "bareos-dir"
$3 = 0x2058098 "/usr/sbin/bareos-dir"
$4 = 0x7fc7d4011b98 "MySQL"
$5 = 0x7fc7ecb83c3a "15.2.2 (16 November 2015)"
$6 = 0x7fc7ecb83c0e "x86_64-pc-linux-gnu"
$7 = 0x7fc7ecb83c07 "ubuntu"
$8 = 0x7fc7ecb83c29 "Ubuntu 14.04 LTS"
$9 = "vampire3", '\000' <repeats 247 times>
$10 = 0x7fc7ecb83c22 "ubuntu Ubuntu 14.04 LTS"
Environment variable "TestName" not defined.
#0 0x00007fc7ec336b9d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007fc7ecb545a4 in bmicrosleep (sec=sec@entry=30, usec=usec@entry=0) at bsys.c:171
0000002 0x00007fc7ecb6486c in check_deadlock () at lockmgr.c:566
0000003 0x00007fc7ec32f182 in start_thread (arg=0x7fc7e9e25700) at pthread_create.c:312
0000004 0x00007fc7ebd5847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 6 (Thread 0x7fc7eda8b780 (LWP 10802)):
#0 0x00007fc7ec336b9d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007fc7ecb545a4 in bmicrosleep (sec=sec@entry=60, usec=usec@entry=0) at bsys.c:171
0000002 0x0000000000442b9b in wait_for_next_job (one_shot_job_to_run=<optimized out>) at scheduler.c:124
0000003 0x000000000040f367 in main (argc=<optimized out>, argv=<optimized out>) at dird.c:393

Thread 5 (Thread 0x7fc7e2ffd700 (LWP 10994)):
#0 0x00007fc7ec336b9d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007fc7ecb545a4 in bmicrosleep (sec=sec@entry=2, usec=usec@entry=0) at bsys.c:171
0000002 0x000000000042eba7 in jobq_server (arg=arg@entry=0x6a46c0 <job_queue>) at jobq.c:649
0000003 0x00007fc7ecb64955 in lmgr_thread_launcher (x=0x205a088) at lockmgr.c:926
0000004 0x00007fc7ec32f182 in start_thread (arg=0x7fc7e2ffd700) at pthread_create.c:312
0000005 0x00007fc7ebd5847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 4 (Thread 0x7fc7e37fe700 (LWP 10808)):
#0 0x00007fc7ec336ee9 in __libc_waitpid (pid=10995, stat_loc=0x7fc7e37fc5fc, options=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
0000001 0x00007fc7ecb73fa4 in signal_handler (sig=7) at signal.c:240
0000002 <signal handler called>
0000003 0x00007fc7ecb4a215 in bnet_host2ipaddrs (host=host@entry=0xaaaaaaaaaaaaaaaa <error: Cannot access memory at address 0xaaaaaaaaaaaaaaaa>, family=family@entry=0, errstr=errstr@entry=0x7fc7e37fcf78) at bnet.c:414
0000004 0x00007fc7ecb534b7 in BSOCK_TCP::open (this=0x7fc7d000a208, jcr=0x7fc7d0001148, name=0x47269a "Storage daemon", host=0xaaaaaaaaaaaaaaaa <error: Cannot access memory at address 0xaaaaaaaaaaaaaaaa>, service=<optimized out>, port=-1431655766, heart_beat=-6148914691236517206, fatal=0x7fc7e37fdb4c) at bsock_tcp.c:182
0000005 0x00007fc7ecb52a40 in BSOCK_TCP::connect (this=this@entry=0x7fc7d000a208, jcr=jcr@entry=0x7fc7d0001148, retry_interval=retry_interval@entry=2, max_retry_time=max_retry_time@entry=1, heart_beat=heart_beat@entry=-6148914691236517206, name=0x47269a "Storage daemon", host=host@entry=0xaaaaaaaaaaaaaaaa <error: Cannot access memory at address 0xaaaaaaaaaaaaaaaa>, service=service@entry=0x0, port=port@entry=-1431655766, verbose=verbose@entry=false) at bsock_tcp.c:115
0000006 0x000000000044038a in connect_to_storage_daemon (jcr=0x7fc7d0001148, retry_interval=2, max_retry_time=1, verbose=verbose@entry=false) at sd_cmds.c:113
0000007 0x00000000004404b8 in connect_to_storage_daemon (jcr=jcr@entry=0x7fc7d0001148, retry_interval=retry_interval@entry=2, max_retry_time=max_retry_time@entry=1, verbose=verbose@entry=false) at sd_cmds.c:132
0000008 0x00000000004432db in statistics_thread_runner (arg=arg@entry=0x0) at stats.c:228
0000009 0x00007fc7ecb64955 in lmgr_thread_launcher (x=0x20aa0a8) at lockmgr.c:926
0000010 0x00007fc7ec32f182 in start_thread (arg=0x7fc7e37fe700) at pthread_create.c:312
0000011 0x00007fc7ebd5847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 3 (Thread 0x7fc7e3fff700 (LWP 10807)):
#0 pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
0000001 0x00007fc7ecb64dbc in bthread_cond_timedwait_p (cond=cond@entry=0x7fc7ecd99720 <_ZL5timer>, m=m@entry=0x7fc7ecd99760 <_ZL11timer_mutex>, abstime=abstime@entry=0x7fc7e3ffede0, file=file@entry=0x7fc7ecb88302 "watchdog.c", line=line@entry=313) at lockmgr.c:811
0000002 0x00007fc7ecb7d4a9 in watchdog_thread (arg=arg@entry=0x0) at watchdog.c:313
0000003 0x00007fc7ecb64955 in lmgr_thread_launcher (x=0x20926d8) at lockmgr.c:926
0000004 0x00007fc7ec32f182 in start_thread (arg=0x7fc7e3fff700) at pthread_create.c:312
0000005 0x00007fc7ebd5847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 2 (Thread 0x7fc7e8ee5700 (LWP 10806)):
#0 0x00007fc7ebd4b12d in poll () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007fc7ecb49283 in bnet_thread_server_tcp (addr_list=addr_list@entry=0x205a8c8, max_clients=<optimized out>, sockfds=<optimized out>, client_wq=client_wq@entry=0x6a4980 <socket_workq>, nokeepalive=<optimized out>, handle_client_request=handle_client_request@entry=0x43d340 <handle_connection_request(void*)>) at bnet_server_tcp.c:298
0000002 0x000000000043d59f in connect_thread (arg=arg@entry=0x205a8c8) at socket_server.c:100
0000003 0x00007fc7ecb64955 in lmgr_thread_launcher (x=0x20a9ff8) at lockmgr.c:926
0000004 0x00007fc7ec32f182 in start_thread (arg=0x7fc7e8ee5700) at pthread_create.c:312
0000005 0x00007fc7ebd5847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 1 (Thread 0x7fc7e9e25700 (LWP 10804)):
#0 0x00007fc7ec336b9d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
0000001 0x00007fc7ecb545a4 in bmicrosleep (sec=sec@entry=30, usec=usec@entry=0) at bsys.c:171
0000002 0x00007fc7ecb6486c in check_deadlock () at lockmgr.c:566
0000003 0x00007fc7ec32f182 in start_thread (arg=0x7fc7e9e25700) at pthread_create.c:312
0000004 0x00007fc7ebd5847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
#0 0x00007fc7ec336b9d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
81 in ../sysdeps/unix/syscall-template.S
No locals.
0000001 0x00007fc7ecb545a4 in bmicrosleep (sec=sec@entry=30, usec=usec@entry=0) at bsys.c:171
171 bsys.c: No such file or directory.
timeout = {tv_sec = 30, tv_nsec = 0}
tv = {tv_sec = 140496646714409, tv_usec = 0}
tz = {tz_minuteswest = -371042560, tz_dsttime = 32711}
status = <optimized out>
0000002 0x00007fc7ecb6486c in check_deadlock () at lockmgr.c:566
566 lockmgr.c: No such file or directory.
__cancel_buf = {__cancel_jmp_buf = {{__cancel_jmp_buf = {140496599144192, 8888629645796315974, 0, 0, 140496599144896, 140496599144192, -8875167333342899386, -8875160014926507194}, __mask_was_saved = 0}}, __pad = {0x7fc7e9e24ef0, 0x0, 0x7fc7ec32f124 <start_thread+100>, 0x7fc7e9e25700}}
__cancel_routine = 0x7fc7ecb647f0 <cln_hdl(void*)>
__not_first_call = <optimized out>
old = 0
0000003 0x00007fc7ec32f182 in start_thread (arg=0x7fc7e9e25700) at pthread_create.c:312
312 pthread_create.c: No such file or directory.
__res = <optimized out>
pd = 0x7fc7e9e25700
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140496599144192, 8888629645796315974, 0, 0, 140496599144896, 140496599144192, -8875167333368065210, -8875158953774279866}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
pagesize_m1 = <optimized out>
sp = <optimized out>
freesize = <optimized out>
__PRETTY_FUNCTION__ = "start_thread"
0000004 0x00007fc7ebd5847d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
111 ../sysdeps/unix/sysv/linux/x86_64/clone.S: No such file or directory.
No locals.
#0 0x0000000000000000 in ?? ()
No symbol table info available.
#0 0x0000000000000000 in ?? ()
No symbol table info available.
#0 0x0000000000000000 in ?? ()
No symbol table info available.
(0003799)
arogge   
2020-02-12 11:28   
Sorry for coming back to you that late. Are you still using Bareos? Can you still reproduce this with the current release?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
854 [bareos-core] director tweak have not tried 2017-09-21 10:21 2020-02-12 10:04
Reporter: hostedpower Platform: Linux  
Assigned To: stephand OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version: 16.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Problem with always incremental virtual full
Description: Hi,


All of a sudden we have issues with virtual full (for consolidate) jobs no longer working.

We have 2 pools for each customer: one for the full (consolidate) and the other for the incrementals.

We used to have the option to limit a single job to a single volume; we removed that a while ago, maybe there is a relation.

We also had to downgrade 16.2.6 to 16.2.5 because of the MySQL slowness issues that happened recently, so that's also a possibility.

We have the feeling this software is not very reliable, or at least very complex to get somewhat stable.

PS: The volume it asks for is there, but it doesn't want to use it :(

We used this config for a long time without this particular issue, except for adding

        Action On Purge = Truncate
and commenting out:
# Maximum Volume Jobs = 1 # a new file for each backup that is done
Tags:
Steps To Reproduce: Config:



Storage {
    Name = customerx-incr
    Device = customerx-incr # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

Storage {
    Name = customerx-cons
    Device = customerx-cons # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

cat /etc/bareos/pool-defaults.conf
#pool defaults used for always incremental scheme
        Pool Type = Backup
# Maximum Volume Jobs = 1 # a new file for each backup that is done
        Recycle = yes # Bareos can automatically recycle Volumes
        Auto Prune = yes # Prune expired volumes
        Volume Use Duration = 23h
        Action On Purge = Truncate
#end


Pool {
    Name = customerx-incr
    Storage = customerx-incr
    LabelFormat = "vol-incr-"
    Next Pool = customerx-cons
    @/etc/bareos/pool-defaults.conf
}

Pool {
    Name = customerx-cons
    Storage = customerx-cons
    LabelFormat = "vol-cons-"
    @/etc/bareos/pool-defaults.conf
}



# lb1.customerx.com
Job {
    Name = "backup-lb1.customerx.com"
    Client = lb1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb1.customerx.com
    Address = lb1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# lb2.customerx.com
Job {
    Name = "backup-lb2.customerx.com"
    Client = lb2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb2.customerx.com
    Address = lb2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app1.customerx.com
Job {
    Name = "backup-app1.customerx.com"
    Client = app1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app1.customerx.com
    Address = app1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app2.customerx.com
Job {
    Name = "backup-app2.customerx.com"
    Client = app2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app2.customerx.com
    Address = app2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# app3.customerx.com
Job {
    Name = "backup-app3.customerx.com"
    Client = app3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app3.customerx.com
    Address = app3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# db1.customerx.com
Job {
    Name = "backup-db1.customerx.com"
    Client = db1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db1.customerx.com
    Address = db1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# db2.customerx.com
Job {
    Name = "backup-db2.customerx.com"
    Client = db2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db2.customerx.com
    Address = db2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# db3.customerx.com
Job {
    Name = "backup-db3.customerx.com"
    Client = db3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db3.customerx.com
    Address = db3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# Backup for customerx
Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

Device {
  Name = customerx-cons
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}
Additional Information: 2017-09-21 10:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 10:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:57:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:52:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:47:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:42:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:37:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:32:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:27:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:22:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:17:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:12:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.107.bsr
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-incr" to read.
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-cons" to write.
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0378 on "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Start Virtual Backup JobId 2467, Job=backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Consolidating JobIds 2392,971
System Description
Attached Files:
Notes
(0002744)
joergs   
2017-09-21 16:08   
I see some problems in this configuration.

You should check the section http://doc.bareos.org/master/html/bareos-manual-main-reference.html#ConcurrentDiskJobs in the manual.

Each device can only read/write one volume at a time. A VirtualFull requires multiple volumes.

Basically, you need multiple devices pointing to the same storage directory, each with "Maximum Concurrent Jobs = 1", to make it work.

Your setting

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

will use interleaving instead, which could cause performance issues on restore.
(0002745)
hostedpower   
2017-09-21 16:14   
So you suggest just making the device:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Would this fix all issues?

Before, we had Maximum Volume Jobs = 1 and I think that also worked, but it seems to be discouraged...
(0002746)
joergs   
2017-09-21 16:26   
No, this will not fix the issue, but it prevents interleaving, which might cause other problems.

By pointing to the documentation I suggest that you set up multiple Devices, all pointing to the same Archive Device, and then attach them all to one Director Storage resource, like

Storage {
    Name = customerx
    Device = customerx-dev1, customerx-dev2, ... # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}
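On the bareos-sd side, the multiple-device layout described here might look like the following (the names customerx-dev1/customerx-dev2 are placeholders matching the Storage example; this is a sketch under those assumptions, not a verified configuration):

```
Device {
  Name = customerx-dev1
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1   # one volume per device at a time, no interleaving
}

Device {
  Name = customerx-dev2
  # ... identical to customerx-dev1 apart from the Name
}
```

With "Maximum Concurrent Jobs = 8" on the Director Storage, eight such Device resources would be defined.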
(0002747)
joergs   
2017-09-21 16:26   
With Maximum Concurrent Jobs = 8 you should use 8 Devices.
(0002748)
hostedpower   
2017-09-21 16:37   
Hi Joergs,

Thanks a lot, that documentation makes sense and seems to have improved since I last read it (or I missed it somehow).

Will test it like that, but it looks very promising :)
(0002754)
hostedpower   
2017-09-21 22:57   
I've read it, but it would mean I need to create multiple storage devices in this case? That's a lot of extra definitions just to back up one customer. It would be nice if you could simply declare the device object once and state that there are, say, 8 instances of it, all in just one definition. That shouldn't be too hard, I suppose?

Something like:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8 --> this automatically creates customerx-incr1 --> customerx-incr8, probably with some extra setting to allow it.
}



For now, would it be a solution to set

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # just use one at the same time
}

Since we have so many clients, it would still run multiple backups at the same time for different clients, I suppose? (Each client has its own media, device and storage.)


PS: We want to keep about 20 days of data; is the following config OK together with the above scenario?


JobDefs {
    Name = "HPJobInc"
    Type = Backup
    Level = Full
    Write Bootstrap = "/var/lib/bareos/%c.bsr"
    Accurate=yes
    Level=Full
    Messages = Standard
    Always Incremental = yes
    Always Incremental Job Retention = 20 days
    # The resulting interval between full consolidations when running daily backups and daily consolidations is Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job.
    Always Incremental Max Full Age = 35 days # should NEVER be less than Always Incremental Job Retention -> every 15 days the full backup is also consolidated (Always Incremental Max Full Age - Always Incremental Job Retention)
    Always Incremental Keep Number = 5 # Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than "Always Incremental Job Retention".
    Maximum Concurrent Jobs = 5 # Let up to 5 jobs run
}
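The interval formula quoted in the JobDefs comments above can be checked with a quick sketch (the day values are the ones from this JobDefs; nothing else is assumed):

```python
# Interval between full consolidations, as described in the JobDefs comments:
#   Always Incremental Max Full Age - Always Incremental Job Retention
always_incremental_job_retention = 20  # days, from the JobDefs above
always_incremental_max_full_age = 35   # days, from the JobDefs above
full_consolidation_interval = (
    always_incremental_max_full_age - always_incremental_job_retention
)
assert full_consolidation_interval == 15  # days, matching the inline comment
```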
(0002759)
hostedpower   
2017-09-25 09:50   
We used this now:

  Maximum Concurrent Jobs = 1 # just use one at the same time

But some jobs also have the same error:

2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 
2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)


It looks almost like multiple jobs can't coexist in one volume (well, they can, but then issues like this start to occur).

Before, probably because of "Maximum Volume Jobs = 1", we never encountered these problems.
(0002760)
joergs   
2017-09-25 11:17   
Only one access per volume is possible at a time.
So reading from and writing to the same volume at the same time is not possible.

I thought you had covered this with "Maximum Use Duration = 23h". Have you disabled it, or did you run multiple jobs during one day?
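
For reference, that directive belongs in the Pool resource; a hypothetical sketch (pool name assumed):

Pool {
  Name = hostedpower-incr # assumed pool name
  Pool Type = Backup
  Maximum Use Duration = 23h # retire the volume daily, so consolidation
                             # never has to read a volume a backup job
                             # is still writing to
}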

However, this is a bug tracker. Further questions about Always Incremental are best handled on the bareos-users mailing list or via a bareos.com support ticket.
(0002761)
hostedpower   
2017-09-25 11:37   
Yes, I was wondering why we encounter it now and never did before.

It wants to swap a consolidate-pool volume over to the incremental device (or vice versa). I don't understand why.
(0002763)
joergs   
2017-09-25 12:15   
Please attach the output of

list joblog jobid=2668
(0002764)
hostedpower   
2017-09-25 12:22   
Enter a period to cancel a command.
*list joblog jobid=2668
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Start Virtual Backup JobId 2668, Job=backup-web.hosted-power.com.2017-09-24_09.00.03_27
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Consolidating JobIds 2593,1173
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.87.bsr
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-incr" to read.
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-cons" to write.
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0344 on "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 09:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [the identical vol_mgr.c:557 warning repeats every 5 minutes, and the 'Please mount read Volume "vol-cons-0344"' prompt repeats at 10:00, 12:00 and 16:00, until the log ends at 2017-09-24 18:05:18]
 2017-09-24 18:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 18:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 19:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 20:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 21:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-25 00:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the same vol_mgr.c:557 "Need volume from other drive, but swap not possible" warning for JobId 2668 repeats every 5 minutes, from 02:25:18 through 09:10:19 ...]
 2017-09-25 09:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:18:14 bareos-sd JobId 2668: acquire.c:221 Job 2668 canceled.
 2017-09-25 09:18:14 bareos-sd JobId 2668: Elapsed time=418423:18:14, Transfer rate=0 Bytes/second
 2017-09-25 09:18:14 hostedpower-dir JobId 2668: Bareos hostedpower-dir 16.2.5 (03Mar17):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
  JobId: 2668
  Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
  Backup Level: Virtual Full
  Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
  FileSet: "linux-all-mysql" 2017-08-09 00:15:00
  Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
  Scheduled time: 24-Sep-2017 09:00:03
  Start time: 04-Sep-2017 02:15:02
  End time: 04-Sep-2017 02:42:30
  Elapsed time: 27 mins 28 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 194
  Volume Session Time: 1506027627
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status: Canceled
  Accurate: yes
  Termination: Backup Canceled

You have messages.
*
(0002765)
hostedpower   
2017-09-27 11:57   
Hi,

Now jobs seem to succeed for the moment.

They also always seem to be set to Incremental now, whereas before they were set to Full after consolidation.

Example of such job

2017-09-27 09:02:07 hostedpower-dir JobId 2892: Joblevel was set to joblevel of first consolidated job: Incremental
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2892
 Job: backup-web.hosted-power.com.2017-09-27_09.00.02_47
 Backup Level: Virtual Full
 Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
 FileSet: "linux-all-mysql" 2017-08-09 00:15:00
 Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 27-Sep-2017 09:00:02
 Start time: 07-Sep-2017 02:15:03
 End time: 07-Sep-2017 02:42:52
 Elapsed time: 27 mins 49 secs
 Priority: 10
 SD Files Written: 2,803
 SD Bytes Written: 8,487,235,164 (8.487 GB)
 Rate: 5085.2 KB/s
 Volume name(s): vol-cons-0010
 Volume Session Id: 121
 Volume Session Time: 1506368550
 Last Volume Bytes: 8,495,713,067 (8.495 GB)
 SD Errors: 0
 SD termination status: OK
 Accurate: yes
 Termination: Backup OK

 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: purged JobIds 2817,1399 as they were consolidated into Job 2892
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Jobs older than 6 months .
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Jobs found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Files.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Files found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: End auto prune.

 
2017-09-27 09:02:03 bareos-sd JobId 2892: Elapsed time=00:01:46, Transfer rate=80.06 M Bytes/second
 
2017-09-27 09:00:32 bareos-sd JobId 2892: End of Volume at file 1 on device "hostedpower-incr" (/home/hostedpower/bareos), Volume "vol-cons-0344"
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Ready to read from volume "vol-incr-0097" on device "hostedpower-incr" (/home/hostedpower/bareos).
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Forward spacing Volume "vol-incr-0097" to file:block 0:4709591.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Recycled volume "vol-cons-0010" on device "hostedpower-cons" (/home/hostedpower/bareos), all previous data lost.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Forward spacing Volume "vol-cons-0344" to file:block 0:215.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Start Virtual Backup JobId 2892, Job=backup-web.hosted-power.com.2017-09-27_09.00.02_47
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Consolidating JobIds 2817,1399
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.48.bsr
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-incr" to read.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-0344" as Used.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: There are no more Jobs associated with Volume "vol-cons-0010". Marking it purged.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: All records pruned from Volume "vol-cons-0010"; marking it "Purged"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Recycled volume "vol-cons-0010"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-cons" to write.
 
2017-09-27 09:00:14 bareos-sd JobId 2892: Ready to read from volume "vol-cons-0344" on device "hostedpower-incr" (/home/hostedpower/bareos).
(0002766)
hostedpower   
2017-09-28 12:39   
Very strange; most jobs seem to work for the moment (but for how long?).

Some show Full, others Incremental.


hostedpower-dir JobId 2956: Joblevel was set to joblevel of first consolidated job: Full
 
2017-09-28 09:05:03 hostedpower-dir JobId 2956: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2956
 Job: backup-vps53404.2017-09-28_09.00.02_20
 Backup Level: Virtual Full
 Client: "vps53404" 16.2.4 (01Jul16) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
 FileSet: "windows-all" 2017-08-08 22:15:00
 Pool: "vps53404-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "vps53404-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 28-Sep-2017 09:00:02

Before, they all always showed Full.
(0002802)
hostedpower   
2017-10-19 09:43   
Well, I downgraded to 16.2.5 and guess what: the issue was gone for a few weeks.

Now I tried 16.2.7 and the issue is back again ...

2017-10-19 09:40:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:35:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:30:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:25:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)

Coincidence? I'm starting to doubt it.
(0002830)
hostedpower   
2017-12-02 12:19   
Hi,


I'm still on 17.2.7 and have to say it's an intermittent error. Things go fine for days, and then all of a sudden one or more jobs suffer from it.

I never had this in the past until a certain version; I'm pretty sure of that.
(0002949)
hostedpower   
2018-03-26 20:22   
The problem was gone for a long time, but now it's back in full force. Any idea what the cause could be?
(0002991)
stephand   
2018-05-04 10:57   
With larger MySQL databases and Bareos 17.2, for incremental jobs with accurate=yes, it seems to help to add the following index:

CREATE INDEX jobtdate_idx ON Job (JobTDate);
ANALYZE TABLE Job;

Could you please check if that works for you?
(0002992)
hostedpower   
2018-05-04 11:16   
OK, thanks. We added the index, but it took only 0.5 seconds to create. Usually this means there wasn't an issue :)

When creating an index is slow, it usually means there is a (serious) performance gain to be had.
(0002994)
stephand   
2018-05-04 17:56   
For sure it depends on the size of the Job table; I've measured a 25% speedup with this index with 10,000 records in the Job table.

However, looking at the logs like
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
that's a different problem; it has nothing to do with the index.
As Joerg already suggested using multiple storage devices, I'd propose increasing their number. This is now documented at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
(0003047)
stephand   
2018-06-21 07:59   
Were you able to solve the problem by using multiple storages devices?
(0003053)
hostedpower   
2018-06-27 23:56   
While this might work, we use at least 50-60 devices at the moment, so it would be a lot of extra work to add extra storage devices.

Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring Bareos into 2018 ...

Anything that can be done to get this supported? We would want to sponsor this if needed...
(0003145)
stephand   
2018-10-22 15:54   
Hi,

When you say "a lot of extra work to add extra storage devices" are you talking about the Storage Daemon configuration mentioned at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
which is always a block like

Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

and only the number in Name = FileStorage1 is increased for each Device?

Are you already using configuration management tools like Ansible, Puppet etc? Then it shouldn't be too hard to get done.
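As a rough illustration (not official Bareos tooling; the path and device count are made up), emitting such a set of near-identical Device resources takes only a few lines of shell:

```shell
#!/bin/sh
# generate_devices N prints N near-identical Device resources on stdout;
# redirect the output into a file that the storage daemon config includes.
generate_devices() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    cat <<EOF
Device {
  Name = FileStorage${i}
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}
EOF
    i=$((i + 1))
  done
}

generate_devices 4
```

A configuration management tool would of course do the same with a template loop instead.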

Or what exactly do you mean by "Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring bareos into the 2018 ..."?

Let me guess: let's imagine we had MultiDevice in the SD configuration. Then the example config from http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices could be written like this:

The Bareos Director config:

Director {
  Name = bareos-dir.example.com
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "<secret>"
}
 
Storage {
  Name = File
  Address = bareos-sd.bareos.com
  Password = "<sd-secret>"
  Device = MultiFileStorage
  # number of devices = Maximum Concurrent Jobs
  Maximum Concurrent Jobs = 4
  Media Type = File
}

The Bareos Storagedaemon config:

Storage {
  Name = bareos-sd.example.com
  # any number >= 4
  Maximum Concurrent Jobs = 20
}
 
Director {
  Name = bareos-dir.example.com
  Password = "<sd-secret>"
}
 
MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}
 
Is that what you mean?

If not, please give an example of how the config should look to make things easier for you.
(0003146)
hostedpower   
2018-10-22 16:25   
We're indeed looking into Ansible to automate this, but still, something like:

MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}

would be more than fantastic!!

Just a single device supporting concurrent access in an easy fashion!

Probably we could then also set "Maximum Concurrent Jobs = 4" pretty safely?


I can imagine that if you're used to Bareos (and tapes), this maybe seems like a strange way of working.

However, if you're used to (most) other backup software that supports hard drives by original design, the way it's designed now for disks is just way too complicated :(
(0003768)
hostedpower   
2020-02-10 21:08   
Hi,


I think this feature has been implemented: https://docs.bareos.org/Configuration/StorageDaemon.html#storageresourcemultiplieddevice

Does it require an autochanger to work as discussed in this thread? Or would simply having more devices thanks to the count parameter be sufficient?

I ask since lately we see a lot of errors again as reported here :| :)
(0003774)
arogge   
2020-02-11 09:34   
The autochanger is optional, but the feature won't help you if you don't configure one.
With the autochanger you only need to configure one storage device in your Director. Otherwise you'd need to configure each of the multiplied devices in your Director separately.

This is, of course, not a physically existing autochanger; it is just an autochanger configuration in the storage daemon to group the different storage devices together.
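Based on the linked documentation, such a multiplied-device setup grouped by a purely virtual autochanger might look roughly like this. This is a sketch only: the Count directive, the generated device names, and the dummy changer settings are taken from the documentation referenced above and have not been verified against a running 19.2 installation:

```
Autochanger {
  Name = FileAutochanger
  Device = MultiFileStorage
  Changer Device = /dev/null   # no physical changer involved
  Changer Command = ""
}

Device {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Count = 4   # multiplies the device, e.g. MultiFileStorage0001 ... 0004
}
```

The Director would then reference only the autochanger, not the individual multiplied devices.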
(0003776)
hostedpower   
2020-02-11 10:02   
ok, but in our case we have more than 100 storages configured with different names.

Do we need multiple autochangers as well, or just one autochanger for all these devices? I'm afraid it's the former, so we'd have to add tons of autochangers as well, right? :)
(0003777)
arogge   
2020-02-11 10:30   
If you have more than 100 storages, I hope you're generating your configuration with something like puppet or ansible.
Why exactly do you need such a large amount of individual storages?
Usually if you're using only File-based storage a single storage (or file-autochanger) per RAID array is enough. Everything else usually just overcomplicates things in the end.
(0003798)
hostedpower   
2020-02-12 10:04   
Well, it's because we allocate storage space for each customer, so each customer pays for their own storage. Putting everything into one large storage wouldn't show us anymore who is using exactly what.

Is there a better way to "allocate" storage for individual customers while at the same time using one large storage as you suggest?

PS: Yes, we generate the config, but updating it now to include an autochanger would still be quite some work, since we generate this config only once.

Just adding a device count is easy since we use include files, so adding an autochanger now isn't really what we hoped for :)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1188 [bareos-core] webui major always 2020-02-11 21:02 2020-02-12 10:00
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: urgent OS Version: 9  
Status: new Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Cannot restore at all after upgrade to 19.2.6 (php error)
Description: Hi,


Not sure what happened, but after upgrading from 19.2.5 to 19.2.6 the restore screen no longer works at all.

We see php errors as well:

[Tue Feb 11 21:02:07.597134 2020] [proxy_fcgi:error] [pid 762:tid 140551107876608] [client 178.117.59.204:49903] AH01071: Got error 'PHP message: PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91\n', referer: https://xxxxx.hosted-power.com/restore/?jobid=134785&client=xxx.xxxx.com&restoreclient=&restorejob=&where=&fileset=&mergefilesets=0&mergejobs=0&limit=2000
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003792)
hostedpower   
2020-02-11 21:17   
PS: I just found out that it happens only in Internet Explorer; Chrome and Edge work fine.

However, the issue of slow loading persists: initial loading of the /restore URL is slow; once a client is selected after that, it finally gets faster.
(0003797)
hostedpower   
2020-02-12 10:00   
So to conclude: we can restore with Chrome, but Internet Explorer gives the weird PHP error :)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1061 [bareos-core] director major always 2019-02-19 00:23 2020-02-11 11:57
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Tremendous MySQL load
Description: Hi,


It seems the queries are very unoptimized again since upgrading to Bareos 18.2.6 :/

Tags:
Steps To Reproduce: Bareos generates tons of inefficient, non-indexed queries like:

 SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , DeltaSeq, Fhinfo, Fhnode, Job.JobTDate AS JobTDate FROM Job, File, (SELECT MAX(JobTDate) AS JobTDate, PathId, FileName FROM (SELECT JobTDate, PathId, File.Name AS FileName FROM File JOIN Job USING (JobId) WHERE File.JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206) ) AS tmp GROUP BY PathId, FileName) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206)) OR Job.JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC ;

Additional Information: Please provide proper indexes; SQL without proper indexes is very bad for performance and almost makes Bareos crash :/
System Description
Attached Files: query_draft.sql (2,558 bytes) 2019-03-04 12:42
https://bugs.bareos.org/file_download.php?file_id=356&type=bug
Notes
(0003272)
der_andrew   
2019-02-20 10:04   
+1
(0003275)
hostedpower   
2019-02-28 22:35   
Hi,


This version is very cumbersome for us. Because the queries are so heavy and long-running, the other jobs are also waiting for the lock and cannot finish until all locks are gone. We have had many issues since upgrading :(

Example where you can see the lock:

| 19777 | bareos | localhost | bareos | Query | 318 | Waiting for table metadata lock | LOCK TABLES Path write, batch write, Path as p write | 0 | 0 |
| 19779 | bareos | localhost | bareos | Sleep | 954 | | NULL | 0 | 0 |
| 19780 | bareos | localhost | bareos | Query | 143 | Waiting for table metadata lock | LOCK TABLES Path write, batch write, Path as p write | 0 | 0 |
| 19783 | bareos | localhost | bareos | Sleep | 57 | | NULL | 0 | 0 |
| 19784 | bareos | localhost | bareos | Sleep | 169 | | NULL | 0 | 0 |
| 19785 | bareos | localhost | bareos | Sleep | 952 | | NULL | 0 | 0 |
| 20090 | bareos | localhost | bareos | Sleep | 2 | | NULL | 0 | 0 |
| 20091 | bareos | localhost | bareos | Query | 324 | Creating sort index | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , DeltaSeq, Fhinfo, Fhnode, Job.JobTDate AS JobTDate FROM Job, File, (SELECT MAX(JobTDate) AS JobTDate, PathId, FileName FROM (SELECT JobTDate, PathId, File.Name AS FileName FROM File JOIN Job USING (JobId) WHERE File.JobId IN (56417,56969,53458,53635,53812,53988,54166,54344,54513,54678,54841,55013,55189,55545,55738,55923,56106,56288,56469,56651,56833) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (56417,56969,53458,53635,53812,53988,54166,54344,54513,54678,54841,55013,55189,55545,55738,55923,56106,56288,56469,56651,56833) ) AS tmp GROUP BY PathId, FileName) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (56417,56969,53458,53635,53812,53988,54166,54344,54513,54678,54841,55013,55189,55545,55738,55923,56106,56288,56469,56651,56833)) OR Job.JobId IN (56417,56969,53458,53635,53812,53988,54166,54344,54513,54678,54841,55013,55189,55545,55738,55923,56106,56288,56469,56651,56833)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 0 | 12132482 |
| 20096 | bareos | localhost | bareos | Query | 317 | Waiting for table metadata lock | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , DeltaSeq, Fhinfo, Fhnode, Job.JobTDate AS JobTDate FROM Job, File, (SELECT MAX(JobTDate) AS JobTDate, PathId, FileName FROM (SELECT JobTDate, PathId, File.Name AS FileName FROM File JOIN Job USING (JobId) WHERE File.JobId IN (56419,56971,53460,53637,53814,53990,54168,54346,54515,54680,54843,55015,55191,55547,55740,55925,56108,56290,56471,56653,56835) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (56419,56971,53460,53637,53814,53990,54168,54346,54515,54680,54843,55015,55191,55547,55740,55925,56108,56290,56471,56653,56835) ) AS tmp GROUP BY PathId, FileName) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (56419,56971,53460,53637,53814,53990,54168,54346,54515,54680,54843,55015,55191,55547,55740,55925,56108,56290,56471,56653,56835)) OR Job.JobId IN (56419,56971,53460,53637,53814,53990,54168,54346,54515,54680,54843,55015,55191,55547,55740,55925,56108,56290,56471,56653,56835)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 0 | 0 |
| 20224 | root | localhost | NULL | Query | 0 | starting | show full processlist | 0 | 0 |
+-------+--------+-----------+--------+---------+------+---------------------------------+---

When we look in the console, we see jobs that have been finished for 30 minutes, yet they cannot complete permanently since they are waiting for the lock :/
(0003276)
hostedpower   
2019-03-03 15:10   
Some jobs hang forever because of it, we have many failed jobs since upgrading :(
(0003277)
tomer   
2019-03-04 12:42   
The query in the attached file can be used as an initial draft to potentially optimize the slow query in this issue report.
Unfortunately, I have little knowledge of bareos internals, so I'll appreciate hearing the thoughts from the community / contributors.

Some notes about the changes in the query:
1. Filtering conditions from the outer query (such as FileIndex > 0) can be added to the relevant subqueries as well, to reduce the amount of data returned from them, as early as possible in the query execution process.
2. Sometimes joins can be avoided and transformed to a subquery in the SELECT clause. This can be effective in some cases.
3. Selecting data from potentially empty tables is redundant (for example, the table BaseFiles can be empty in the case I saw), so the query can be built dynamically if the application had that knowledge before running the query (for example, by checking the size of the table before running the query).

If anyone can share some more information about the goals and internals of this query, in details, it will be very helpful and we can work together on further optimizing the query.
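To illustrate note 1, here is a simplified analog of the innermost derived table with the predicate pushed down (a sketch only, using two example JobIds, not the full restore query):

```sql
-- Original shape: the FileIndex filter is applied only in the outermost query.
-- Pushed-down shape: repeat the predicate inside the derived table, so fewer
-- rows are materialized before grouping and joining. Whether this preserves
-- semantics for deleted-file markers (FileIndex <= 0) would need to be
-- verified against the catalog logic.
SELECT MAX(JobTDate) AS JobTDate, PathId, FileName
FROM (
    SELECT JobTDate, PathId, File.Name AS FileName
    FROM File
    JOIN Job USING (JobId)
    WHERE File.JobId IN (54468, 55303)
      AND File.FileIndex > 0          -- pushed down from the outer query
) AS tmp
GROUP BY PathId, FileName;
```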
(0003279)
arogge   
2019-03-13 18:07   
From which version did you upgrade?
There have been no changes to that query for a really long time.

Can you provide more information especially concerning the "non indexed" part? Do you have an EXPLAIN for that statement?
(0003293)
hostedpower   
2019-03-14 21:17   
Hello,


mysql> explain SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , DeltaSeq, Fhinfo, Fhnode, Job.JobTDate AS JobTDate FROM Job, File, (SELECT MAX(JobTDate) AS JobTDate, PathId, FileName FROM (SELECT JobTDate, PathId, File.Name AS FileName FROM File JOIN Job USING (JobId) WHERE File.JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206) ) AS tmp GROUP BY PathId, FileName) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206)) OR Job.JobId IN (54468,55303,51880,52055,52230,52405,52584,52759,52934,53111,53288,53475,53652,53829,54005,54183,54361,54530,54695,54858,55030,55206)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC ;
+----+-------------+------------+------------+--------+------------------------------------------------------------------------+-------------------------------+---------+----------------------------------------+------+----------+----------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+--------+------------------------------------------------------------------------+-------------------------------+---------+----------------------------------------+------+----------+----------------------------------------------+
| 1 | PRIMARY | <derived3> | NULL | ALL | NULL | NULL | NULL | NULL | 24 | 100.00 | Using where; Using temporary; Using filesort |
| 1 | PRIMARY | Path | NULL | eq_ref | PRIMARY | PRIMARY | 4 | T1.PathId | 1 | 100.00 | NULL |
| 1 | PRIMARY | Job | NULL | ref | PRIMARY,jobtdate_idx,job_idx_jobtdate_jobid,job_idx_jobid_jobtdate | job_idx_jobtdate_jobid | 9 | T1.JobTDate | 1 | 100.00 | Using where; Using index |
| 1 | PRIMARY | File | NULL | ref | JobId_PathId_Name,PathId_JobId_FileIndex,idxPIchk,file_idx_pathid_name | JobId_PathId_Name | 265 | bareos.Job.JobId,T1.PathId,T1.FileName | 1 | 33.33 | Using where |
| 6 | SUBQUERY | BaseFiles | NULL | index | basefiles_jobid_idx,basefiles_idx_basejobid_jobid | basefiles_idx_basejobid_jobid | 8 | NULL | 1 | 100.00 | Using where; Using index |
| 3 | DERIVED | <derived4> | NULL | ALL | NULL | NULL | NULL | NULL | 24 | 100.00 | Using temporary; Using filesort |
| 4 | DERIVED | File | NULL | range | JobId_PathId_Name | JobId_PathId_Name | 4 | NULL | 22 | 100.00 | Using index condition |
| 4 | DERIVED | Job | NULL | eq_ref | PRIMARY,job_idx_jobid_jobtdate | PRIMARY | 4 | bareos.File.JobId | 1 | 100.00 | NULL |
| 5 | UNION | BaseFiles | NULL | ALL | basefiles_jobid_idx,basefiles_idx_basejobid_jobid | NULL | NULL | NULL | 1 | 100.00 | Using where |
| 5 | UNION | Job | NULL | eq_ref | PRIMARY,job_idx_jobid_jobtdate | PRIMARY | 4 | bareos.BaseFiles.BaseJobId | 1 | 100.00 | NULL |
| 5 | UNION | File | NULL | eq_ref | PRIMARY | PRIMARY | 8 | bareos.BaseFiles.FileId | 1 | 100.00 | NULL |
+----+-------------+------------+------------+--------+------------------------------------------------------------------------+-------------------------------+---------+----------------------------------------+------+----------+----------------------------------------------+
11 rows in set, 1 warning (0.00 sec)

mysql>

The query doesn't (really) work anymore, since its result is empty now.

Maybe the problem was there for a longer time and it has gotten worse with more data.
(0003294)
arogge   
2019-03-15 09:32   
The problem is that we cannot anticipate what your data will look like:
Some clients have a lot of small files, some clients have only a few big files.
Some clients have only a few files changing in Incremental backups, some clients have many files changing in a backup.
Some clients run daily backup and a full backup every week resulting in a short backup chain (1 x Full, 6 x Incr), some clients run 4 or more backups per day, one full backup a month and a diff once a week resulting in incredibly long backup chains like 1 x Full + 1 x Diff + 32 x Incr.

Now, the execution plan EXPLAIN will throw out depends on the amount of data and the value distribution in the indexes. This is why I asked whether you could provide an execution plan: I can run EXPLAIN on my testing dataset, but the result will differ from your real-world setup. The result will even differ depending on the MySQL version and tuning parameters.

Having said that, we try to optimize database stuff as well as possible for the average case. We look into PostgreSQL a lot more than into MySQL/MariaDB. For the Bareos use-case PostgreSQL offers better performance and manageability in most cases, which is why we encourage you to use it instead of MySQL/MariaDB and why we put more effort into PostgreSQL than into MySQL/MariaDB.

Last but not least, the issue that you're seeing is common. Your database server has a limited amount of memory and usually your Bareos catalog won't fit in it completely. This is not a problem, as PostgreSQL and MySQL/MariaDB know what to put in memory and what not.
Concerning your specific query, you can be quite sure that the File table won't fit in memory completely. However, as long as the index JobId_PathId_Name fits into memory and most of the other tables are cached in memory, your queries will run fast. That index is (on your installation) 265 bytes per row in the File table, which means around 265 MB per 1 million files.
As soon as this doesn't fit into memory anymore, your query performance will drop dramatically (read: probably 10 to 100 times slower). The only way to solve this is to make sure that the File table's indexes (at least those that are used often) fit into memory (and to make sure your database server is configured to use that amount of memory, of course).

So if you claim that a query runs slowly, we cannot find out why this might be the case without the execution plan for that query on *your* database server with *your* dataset. Usually (we get quite a lot of support requests concerning this) it boils down to non-optimal tuning of the database's memory parameters or simply not enough memory.
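One way to check whether the hot File indexes fit in the InnoDB buffer pool is to compare their on-disk size with the configured pool size (a sketch for MySQL 5.7+; innodb_index_stats measures sizes in InnoDB pages, 16 KiB by default, and the catalog database is assumed to be named "bareos"):

```sql
-- Approximate index sizes for the File table, in MB
SELECT index_name,
       ROUND(stat_value * @@innodb_page_size / 1024 / 1024) AS size_mb
FROM mysql.innodb_index_stats
WHERE database_name = 'bareos'
  AND table_name = 'File'
  AND stat_name = 'size';

-- Compare with the configured buffer pool
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;
```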
(0003297)
arogge   
2019-03-25 08:50   
You can try to add an index on Job(JobTDate):

CREATE INDEX jobtdate_idx ON Job (JobTDate);
ANALYZE TABLE Job;

This may improve your experience.
(0003328)
arogge   
2019-04-11 09:07   
I'm setting this to resolved. If anyone can provide more information, don't hesitate to reopen the issue.
(0003659)
frank   
2019-12-12 09:22   
Fix committed to bareos master branch with changesetid 12371.
(0003770)
hostedpower   
2020-02-10 22:09   
I think it's happening again:


                                                                                                                                                   | 0 | 0 |
| 10693 | bareos | localhost | bareos | Query | 555 | Creating sort index | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , File.DeltaSeq AS DeltaSeq, File.Fhinfo AS Fhinfo, File.Fhnode AS Fhnode, Job.JobTDate AS JobTDate FROM Job, File, ( SELECT MAX(JobTDate) AS JobTDate, PathId, FileName, DeltaSeq, Fhinfo, Fhnode FROM ( SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM File JOIN Job USING (JobId) WHERE File.JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485) ) AS tmp GROUP BY PathId, FileName, DeltaSeq, Fhinfo, Fhnode) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485)) OR Job.JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 0 | 61968801 |
| 10703 | bareos | localhost | bareos | Sleep | 1 | | NULL

It really locks everything up while this is happening...
(0003771)
hostedpower   
2020-02-10 22:09   
Adding explain:

explain SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , File.DeltaSeq AS DeltaSeq, File.Fhinfo AS Fhinfo, File.Fhnode AS Fhnode, Job.JobTDate AS JobTDate FROM Job, File, ( SELECT MAX(JobTDate) AS JobTDate, PathId, FileName, DeltaSeq, Fhinfo, Fhnode FROM ( SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM File JOIN Job USING (JobId) WHERE File.JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485) ) AS tmp GROUP BY PathId, FileName, DeltaSeq, Fhinfo, Fhnode) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485)) OR Job.JobId IN (135971,137304,132044,132310,132572,132839,133103,133368,133631,133899,134170,134442,134716,134989,135262,135534,135807,136081,136352,136624,136898,137061,137485)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC ;
+----+-------------+------------+------------+--------+--------------------------------------------------------------------+-------------------------------+---------+----------------------------------------+----------+----------+-----------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+--------+--------------------------------------------------------------------+-------------------------------+---------+----------------------------------------+----------+----------+-----------------------------------------------------------+
| 1 | PRIMARY | Job | NULL | index | PRIMARY,jobtdate_idx,job_idx_jobtdate_jobid,job_idx_jobid_jobtdate | jobtdate_idx | 9 | NULL | 6991 | 100.00 | Using where; Using index; Using temporary; Using filesort |
| 1 | PRIMARY | <derived3> | NULL | ref | <auto_key1> | <auto_key1> | 9 | bareos.Job.JobTDate | 6053 | 100.00 | NULL |
| 1 | PRIMARY | File | NULL | ref | JobId_PathId_Name,PathId_JobId_FileIndex,idxPIchk | JobId_PathId_Name | 265 | bareos.Job.JobId,T1.PathId,T1.FileName | 1 | 33.33 | Using where |
| 1 | PRIMARY | Path | NULL | eq_ref | PRIMARY | PRIMARY | 4 | T1.PathId | 1 | 100.00 | NULL |
| 6 | SUBQUERY | BaseFiles | NULL | index | basefiles_jobid_idx,basefiles_idx_basejobid_jobid | basefiles_idx_basejobid_jobid | 8 | NULL | 1 | 100.00 | Using where; Using index |
| 3 | DERIVED | <derived4> | NULL | ALL | NULL | NULL | NULL | NULL | 32360748 | 100.00 | Using temporary; Using filesort |
| 4 | DERIVED | File | NULL | range | JobId_PathId_Name | JobId_PathId_Name | 4 | NULL | 32360746 | 100.00 | Using index condition |
| 4 | DERIVED | Job | NULL | eq_ref | PRIMARY,job_idx_jobid_jobtdate | PRIMARY | 4 | bareos.File.JobId | 1 | 100.00 | NULL |
| 5 | UNION | BaseFiles | NULL | ALL | basefiles_jobid_idx,basefiles_idx_basejobid_jobid | NULL | NULL | NULL | 1 | 100.00 | Using where |
| 5 | UNION | Job | NULL | eq_ref | PRIMARY,job_idx_jobid_jobtdate | PRIMARY | 4 | bareos.BaseFiles.BaseJobId | 1 | 100.00 | NULL |
| 5 | UNION | File | NULL | eq_ref | PRIMARY | PRIMARY | 8 | bareos.BaseFiles.FileId | 1 | 100.00 | NULL |
+----+-------------+------------+------------+--------+--------------------------------------------------------------------+-------------------------------+---------+----------------------------------------+----------+----------+-----------------------------------------------------------+
11 rows in set, 1 warning (0.01 sec)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1171 [bareos-core] General feature always 2020-02-03 10:08 2020-02-11 09:08
Reporter: hasalah Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: confirmed Product Version: 19.2.4~pre  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Backup the VM using UUID in oVirt-Plugin for Bareos
Description: Hi All;
Is there a way to back up the VM using its UUID, because the name of the VM may change?
I tried to do that, but it doesn't work; the config file is below:
vim /etc/bareos/bareos-dir.d/fileset/vm-backup.conf
FileSet {
   Name = "testvm1_fileset"

   Include {
      Options {
         signature = MD5
         Compression = LZ4
      }
      Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd-ovirt:ca=/etc/bareos/ovirt-ca.cert:server=rhv.xx.com:username=admin@internal:password=XXXXXX:uuid=184f361e-fe51-48ef-bad2-a7012babd880"
   }
}
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: BareosFdPluginOvirt.py (71,466 bytes) 2020-02-10 21:14
https://bugs.bareos.org/file_download.php?file_id=402&type=bug
Notes
(0003728)
arogge   
2020-02-04 14:51   
You're right. This is currently a limitation of the plugin.
(0003769)
hasalah   
2020-02-10 21:14   
Hello Arogge;
I fixed it by replacing "uuid=" with "id=" in the plugin, as shown below:

    def get_vm(self, context):
        search = None
        if "uuid" in self.options:
            search = "uuid=%s" % str(self.options["uuid"])
Must be:
    def get_vm(self, context):
        search = None
        if "uuid" in self.options:
            search = "id=%s" % str(self.options["uuid"])
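For illustration, a minimal stand-alone sketch of that search-string logic (the plugin context object and the oVirt SDK connection are omitted; `build_vm_search` is an invented name, not the plugin code itself):

```python
def build_vm_search(options):
    """Build the oVirt search expression used to look up the VM.

    oVirt's search syntax matches a VM by "id=<uuid>", so the value of
    the plugin's "uuid" option must be emitted with the key "id".
    """
    if "uuid" in options:
        # key "id", not "uuid" -- this is the one-word fix from above
        return "id=%s" % str(options["uuid"])
    return None  # caller falls back to other lookup methods (e.g. by name)
```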

Thanks
(0003773)
arogge   
2020-02-11 09:08   
Thanks for your PR#415 - https://github.com/bareos/bareos/pull/415

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
980 [bareos-core] director major always 2018-07-09 09:04 2020-02-10 09:28
Reporter: Int Platform: Linux  
Assigned To: joergs OS: CentOS  
Priority: normal OS Version: 7  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: release command not allowed in runscript makes admin job "ReleaseAllTapeDrives.conf" unusable
Description: In chapter "19.14 Tape Drive Cleaning" of the Bareos documentation, an admin job "ReleaseAllTapeDrives.conf" is given to release tapes, because a tape library's auto-cleaning won't work while there are tapes in the drives.
But this job fails with error:
"Can't use release command in a runscript : release: is an invalid command."

It seems that at some point running release was forbidden, but this makes the example in Chapter "19.14 Tape Drive Cleaning" useless.
Please update the documentation with how to achieve tape release with currently allowed commands (or allow release command in a runscript again).
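One possible interim workaround (an untested sketch, not from the documentation; it assumes bconsole is installed and configured on the Director host and that the storage name matches your setup) is to issue the release through a shell command RunScript instead of the forbidden console runscript command:

```
Job {
  Name = "ReleaseAllTapeDrives"
  Type = Admin
  # remaining required directives (JobDefs, Client, FileSet, ...) omitted
  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Command = "sh -c 'echo \"release storage=QuantumSL3 alldrives\" | bconsole'"
  }
}
```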
Tags:
Steps To Reproduce: Implement admin job "ReleaseAllTapeDrives.conf" as described in Bareos documentation Chapter "19.14 Tape Drive Cleaning"
Additional Information:
System Description
Attached Files:
Notes
(0003164)
Int   
2018-12-29 09:26   
Will this issue be fixed in bareos-18.2 release?
(0003179)
user1   
2019-01-16 09:32   
I have the same issue, log files says:

/var/log/bareos/bareos.log- Volume Session Id: 144
/var/log/bareos/bareos.log- Volume Session Time: 1539591089
/var/log/bareos/bareos.log- Last Volume Bytes: 8,889,862,675,324 (8.889 TB)
/var/log/bareos/bareos.log- Termination: Backup OK
/var/log/bareos/bareos.log-
/var/log/bareos/bareos.log:19-Oct 09:03 bareos-dir JobId 4037: console command: run BeforeJob "release storage=QuantumSL3 alldrives"
/var/log/bareos/bareos.log:19-Oct 09:03 bareos-dir JobId 0: Can't use release command in a runscript19-Oct 09:03 bareos-dir JobId 0: release: is an invalid command.
/var/log/bareos/bareos.log-19-Oct 09:03 bareos-dir JobId 4037: Error: BAREOS 16.2.7 (09Oct17): 19-Oct-2018 09:03:15
/var/log/bareos/bareos.log- JobId: 4037
/var/log/bareos/bareos.log- Job: ReleaseAllTapeDrives.2018-10-19_09.03.12_00
/var/log/bareos/bareos.log- Scheduled time: 19-Oct-2018 09:03:09
/var/log/bareos/bareos.log- Start time: 19-Oct-2018 09:03:15

This is *really* annoying since my tape drive tries to clean itself and of course it runs into an error because there is another tape loaded.
(0003667)
Int   
2019-12-13 13:38   
one and a half years are over since this bug was reported and assigned - will this be fixed or not ?
Some feedback would be nice.
(0003756)
kinglui   
2020-02-10 09:28   
Hi @all,

any news? I run into the same problem.

greets

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1179 [bareos-core] webui block always 2020-02-06 17:17 2020-02-06 17:40
Reporter: tkla Platform: Linux  
Assigned To: OS: Debian 10  
Priority: normal OS Version: 9  
Status: feedback Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos Webui blank after updating to Debian 10
Description: Hello guys,
I upgraded my Debian server from 9 to 10 today and installed the bareos-webui package manually afterwards. It worked before, but now I just get a blank page when I try to access it.

What I have done:
 - Cleared all cookies
 - Tried to access it from a different host
 - Installed newer versions of bareos-webui, from 16.2.6 up to 16.2.9

My Apache gives me the following error:


[Thu Feb 06 17:01:57.790741 2020] [mpm_prefork:notice] [pid 4311] AH00169: caught SIGTERM, shutting down
[Thu Feb 06 17:01:57.889458 2020] [mpm_prefork:notice] [pid 4386] AH00163: Apache/2.4.38 (Debian) OpenSSL/1.1.1d configured -- resuming normal operations
[Thu Feb 06 17:01:57.889557 2020] [core:notice] [pid 4386] AH00094: Command line: '/usr/sbin/apache2'
[Thu Feb 06 17:01:59.408675 2020] [php7:warn] [pid 4404] [client 10.245.225.6:35950] PHP Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /usr/share/bareos-webui/vendor/zendframework/zend-stdlib/src/ArrayObject.php on line 426
[Thu Feb 06 17:01:59.443207 2020] [php7:error] [pid 4404] [client 10.245.225.6:35950] PHP Fatal error: Declaration of Zend\\Session\\AbstractContainer::offsetGet($key) must be compatible with & Zend\\Stdlib\\ArrayObject::offsetGet($key) in /usr/share/bareos-webui/vendor/zendframework/zend-session/src/AbstractContainer.php on line 0


I don't think this is a configuration error; it looks more like a Zend error.

Searching the web, I found a tip to add an "&" to line 425 of AbstractContainer.php (https://github.com/zendframework/zend-session/issues/74). I tried that and then got other errors.

The installed Apache version is: 2.4.38-3+deb10u3

The Bareos packages come from the Debian repositories, except bareos-webui, which is from here: https://download.bareos.com/bareos/release/16.2/Debian_9.0/all/
I know that I'm mixing up versions here a bit, but since the Debian repos only include 16.2 I thought this would be the best way to do it.

Thanks for your help!

Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: AbstractContainer.php (17,964 bytes) 2020-02-06 17:17
https://bugs.bareos.org/file_download.php?file_id=401&type=bug
Notes
(0003738)
arogge   
2020-02-06 17:40   
Did I understand correctly that you have a subscription for download.bareos.com and you're using the packages from Debian?

You should not run packages built for Debian 9 on Debian 10. If you can reproduce the breakage with the Debian 10 package on Debian 10, we might take a look.
However, 16.2 is nearing its end-of-life and I don't think we're going to put much effort into that anymore.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1170 [bareos-core] General block always 2020-02-01 12:19 2020-02-03 15:51
Reporter: tuxmaster Platform: x86_64  
Assigned To: arogge OS: Fedora  
Priority: normal OS Version: 31  
Status: feedback Product Version: 19.2.4~rc1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Build fails on Fedora 31
Description: The build fails with:
In file included from /builddir/build/BUILD/bareos-Release-19.2.4/core/src/ndmp/ndmjob_args.c:37:
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:41:55: error: unknown type name 'size_t'
   41 | void (*FormatCopyrightWithFsfAndPlanets)(char* out, size_t len, int FsfYear);
      | ^~~~~~
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:1:1: note: 'size_t' is defined in header '<stddef.h>'; did you forget to '#include <stddef.h>'?
  +++ |+#include <stddef.h>
    1 | /*
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:42:43: error: unknown type name 'FILE'
   42 | void (*PrintCopyrightWithFsfAndPlanets)(FILE* fh, int FsfYear);
      | ^~~~
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:1:1: note: 'FILE' is defined in header '<stdio.h>'; did you forget to '#include <stdio.h>'?
  +++ |+#include <stdio.h>
    1 | /*
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:43:38: error: unknown type name 'size_t'
   43 | void (*FormatCopyright)(char* out, size_t len, int StartYear);
      | ^~~~~~
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:43:38: note: 'size_t' is defined in header '<stddef.h>'; did you forget to '#include <stddef.h>'?
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:44:26: error: unknown type name 'FILE'
   44 | void (*PrintCopyright)(FILE* fh, int StartYear);
      | ^~~~
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:44:26: note: 'FILE' is defined in header '<stdio.h>'; did you forget to '#include <stdio.h>'?
/builddir/build/BUILD/bareos-Release-19.2.4/core/src/lib/version.h:45:1: warning: no semicolon at end of struct or union
   45 | };
      | ^
make[2]: *** [core/src/ndmp/CMakeFiles/ndmjob.dir/build.make:66: core/src/ndmp/CMakeFiles/ndmjob.dir/ndmjob_args.c.o] Error 1
make[2]: Leaving directory '/builddir/build/BUILD/bareos-Release-19.2.4/my-build'
make[1]: *** [CMakeFiles/Makefile2:1979: core/src/ndmp/CMakeFiles/ndmjob.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Tags:
Steps To Reproduce:
Additional Information: I have attached the build log.
System Description
Attached Files: build.log (374,221 bytes) 2020-02-01 12:19
https://bugs.bareos.org/file_download.php?file_id=400&type=bug
Notes
(0003726)
arogge   
2020-02-03 15:51   
That's weird. We're building on a real F31 too, and we don't run into that problem.

As you're easily able to reproduce this, maybe you can provide a PR to patch this? Looks like just two includes and a semicolon are missing.
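For illustration only, a hypothetical reconstruction of the patched header, following the compiler output above (the struct name is invented for this sketch; the member declarations are taken from the error messages, and the actual Bareos header differs):

```cpp
// The member types size_t and FILE require <stddef.h> and <stdio.h>,
// so the header must include them itself rather than rely on whatever
// the including translation unit happens to pull in.
#include <stddef.h>
#include <stdio.h>

struct VersionCallbacks {  // name invented for this sketch
  void (*FormatCopyrightWithFsfAndPlanets)(char* out, size_t len, int FsfYear);
  void (*PrintCopyrightWithFsfAndPlanets)(FILE* fh, int FsfYear);
  void (*FormatCopyright)(char* out, size_t len, int StartYear);
  void (*PrintCopyright)(FILE* fh, int StartYear);
};  // <- the missing semicolon
```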

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1167 [bareos-core] file daemon minor always 2020-01-10 14:08 2020-01-10 14:08
Reporter: oleg.cherkasov Platform: x86_64  
Assigned To: OS: FreeBSD  
Priority: high OS Version: 11.3-RELEASE-p5  
Status: new Product Version: 17.2.8  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-fd daemon/fork mode does not work as expected; after an ESTIMATE or running a job, bareos-fd freezes with no log messages
Description: Hi,

I have rather strange issue with 4 FreeBSD hosts recently upgraded to 11.3 from 11.2.

When bareos-fd runs as a service/daemon (the way it is supposed to), it serves just a few ESTIMATE or RUN JOB requests and then stops responding completely. The DIR times out if I try to reach bareos-fd on those 4 FreeBSD hosts. I tried running with debugging options, but it just stops logging; it looks like it hangs somewhere waiting for an I/O event from the network or disk.

Running bareos-fd manually with -f option makes it work as normal:

/usr/local/sbin/bareos-fd -u root -g wheel -v -c /usr/local/etc/bareos/ -f

rc.conf/sysrc configuration:

bareos_fd_enable: YES

No custom options and no ports built; the whole stack is packages from pkg.

Upgrading to the latest patch level 5 does not help. Updating bareos17-client to the latest 17.2.8 does not solve the problem either.

It is worth noting that FreeBSD 12 and 12.1 run bareos-fd as a daemon with no issues.

I wonder if this is a FreeBSD 11.3-only issue. It would make sense to check the changelog for fork()/vfork() related changes, but that is just a hypothesis.

Thanks,
Oleg
Tags: fd freebsd
Steps To Reproduce: Run ESTIMATE or RUN JOB more than once against the same bareos-fd on FreeBSD 11.3.
Additional Information: bareos17-client-17.2.8 package is used.
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1164 [bareos-core] director minor always 2019-12-19 17:19 2020-01-10 13:53
Reporter: joergs Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: acknowledged Product Version: 19.2.4~rc1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Inheritance of JobDefs is handled oddly
Description: Jobs can retrieve settings from JobDefs. A JobDefs itself can also retrieve settings from JobDefs.
So, specifying a job via

1. job
2. job -> jobdef
3. job -> jobdef -> jobdef
all works as expected.

However, specifying
job -> jobdef -> jobdef -> jobdef
does not work.
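For illustration, the failing three-level chain could look like this (resource names are invented for the example):

```
JobDefs {
  Name = "BaseDefs"            # bottom of the chain
  Type = Backup
  Messages = Standard
}

JobDefs {
  Name = "NightlyDefs"
  JobDefs = "BaseDefs"         # first level of JobDefs inheritance: works
  Schedule = "Nightly"
}

JobDefs {
  Name = "NightlyTapeDefs"
  JobDefs = "NightlyDefs"      # second level of JobDefs inheritance
  Storage = "Tape"
}

Job {
  Name = "backup-client1"
  JobDefs = "NightlyTapeDefs"  # job -> jobdef -> jobdef -> jobdef: broken
  Client = "client1-fd"
  FileSet = "ClientFiles"
}
```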
Tags:
Steps To Reproduce:
Additional Information: https://github.com/joergsteffens/bareos/tree/dev/joergs/master/systemtests-jobdefs-runscripts
implements a systemtest for part of this behavior. Handling RunScripts from JobDefs is a special, even more complicated case.
Attached Files:
Notes
(0003709)
stephand   
2019-12-20 10:36   
I really doubt that this makes sense. I would even say that the possibility to use inheritance or nesting of JobDefs should be considered a bug, as the documentation already defines quite clearly:

"The JobDefs resource permits all the same directives that can appear in a Job resource. However, a JobDefs resource does not create a Job, rather it can be referenced within a Job to provide defaults for that Job. This permits you to concisely define several nearly identical Jobs, each one referencing a JobDefs resource which contains the defaults. Only the changes from the defaults need to be mentioned in each Job."
(See https://docs.bareos.org/Configuration/Director.html#jobdefs-resource)

In this respect, inheritance of JobDefs was obviously never intended. Allowing arbitrary levels of inheritance/nesting of JobDefs could lead to inscrutable configurations and problems that are hard to reproduce. The first sentence in the documentation should be clarified and instead say:

"The JobDefs resource permits all the same directives that can appear in a Job resource, except JobDefs. JobDefs can not be nested."

When really considering to allow nesting JobDefs, there must be a defined maximum nesting depth. In any case it would also require to detect bad configurations like JobDefs referencing each other or indirect circular referencing.

Also, the handling of RunScript in JobDefs is rather odd, as it deviates from the expected behaviour of being able to override parameters from a JobDefs in a Job.
(0003711)
joergs   
2019-12-20 12:34   
That is not easy to decide.

The documentation clearly states in https://docs.bareos.org/master/Configuration/Director.html#config-Dir_Job_JobDefs :
"To structure the configuration even more, Job Defs themselves can also refer to other Job Defs."

So it is a documented behavior.

Also, the code tries to handle it. However, it handles it incorrectly (only one level of inheritance inside JobDefs).

I did not care about JobDefs in the past, but I know at least 4 installations using JobDef inheritance, also one pull request and at least one other Mantis ticket refer to this behavior.

Jobs and JobDefs can contain multiple RunScripts, and RunScripts can have different parameters: RunBefore vs. RunAfter, RunOnClient vs. RunOnHost, ...
Should a RunScript directive in a Job really overwrite all RunScript directives from a JobDefs, even if they are of different types?

In my opinion, it is handled more clearly if inherited RunScripts are added, as it is now.

However, I really agree that this must be obvious for the user. Currently, the ConfigParser already stores information about whether a directive is inherited. This could be extended so that the ConfigParser also stores from which JobDefs a setting is inherited. The console command "show jobs verbose" already gives some more insight about job definitions; the verbose mode of the show command could also be extended to show where a directive is defined. Both should be relatively easy to implement. The JobLog could also state where a RunScript is defined.

If there is a decision against inherited JobDefs, I strongly opt for not changing this behavior in the next major release, but marking it as deprecated and removing it in the major release thereafter.
(0003714)
embareossed   
2019-12-22 23:13   
I note that in this discussion, multiple inheritance has not been mentioned. If we are going for featurefulness, it might be something to consider.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1166 [bareos-core] director minor always 2020-01-06 16:37 2020-01-07 16:05
Reporter: tastydr Platform: Linux  
Assigned To: pstorz OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: try entire counter range before giving up on labeling Volume
Description: Bareos is failing to label a new Volume with log entries like this:
06-Jan 09:30 bareos-dir JobId 33140: Warning: Wanted to create Volume "Merged-2541", but it already exists. Trying again.
[...]
06-Jan 09:25 bareos-dir JobId 33140: Warning: Wanted to create Volume "Merged-2639", but it already exists. Trying again.
06-Jan 09:25 bareos-dir JobId 33140: Error: Too many failures. Giving up creating Volume name.

But it did not try the entire possible counter range of 0001-9999 (Dir -> Maximum Volumes is set to 9999).
This is a File storage type with fragmented ranges of volume names; there are names available outside the range tried by Bareos.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003716)
pstorz   
2020-01-07 14:44   
Please read the following first before you create a bug report.

Thank you!

https://www.bareos.org/en/HOWTO/articles/how-to-create-a-bugreport.html

(0003717)
tastydr   
2020-01-07 15:53   
Here is where the problem is:
core/src/dird/newvol.cc from master branch
in
static bool CreateSimpleName(JobControlRecord* jcr,
                             MediaDbRecord* mr,
                             PoolDbRecord* pr)
line 149
for (int i = (int)ctx.value + 1; i < (int)ctx.value + 100; i++)

A database lookup is performed to get the maximum MediaId of an existing volume, then the for loop tries approximately 100 MediaIds after that.

In the case where MediaIds are used in non-contiguous chunks, there may be available MediaIds which are numerically less than the maximum MediaId. For this reason the for loop should start at the first possible MediaId and end at the last possible MediaId.
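A sketch of the suggested change (illustrative only, not the real Bareos code: NameExists() stands in for the catalog lookup, and kMaxVolumes for the pool's Maximum Volumes setting):

```cpp
#include <cstdio>
#include <set>
#include <string>

// Illustrative stand-in for the catalog query that checks whether a
// volume name is already taken.
static bool NameExists(const std::set<std::string>& catalog,
                       const std::string& name) {
  return catalog.count(name) != 0;
}

// Instead of probing only ~100 names beyond the highest MediaId, scan
// the whole 0001..kMaxVolumes range so that gaps left by pruned or
// deleted volumes are reused.
std::string CreateSimpleNameFullRange(const std::set<std::string>& catalog,
                                      const std::string& prefix,
                                      int kMaxVolumes) {
  char name[128];
  for (int i = 1; i <= kMaxVolumes; i++) {
    std::snprintf(name, sizeof(name), "%s-%04d", prefix.c_str(), i);
    if (!NameExists(catalog, name)) { return name; }
  }
  return "";  // every name in the range is taken
}
```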
(0003718)
pstorz   
2020-01-07 16:05   
Please create a Pull Request with your fixes including a documentation how to reproduce the problem.

Thank you

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1165 [bareos-core] director major random 2020-01-03 09:57 2020-01-07 15:09
Reporter: franku Platform:  
Assigned To: franku OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact: yes
bareos-19.2: action: fixed
bareos-18.2: impact: yes
bareos-18.2: action: fixed
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Same job name for jobs submitted at the same time
Description: Restore jobs submitted at the same time end up with the same job name, which results in one of the jobs being rejected by the storage daemon and failing.
Tags:
Steps To Reproduce: See attached files:

why.c: test-source file with the code stripped down to the essence
output: log-output of the test that shows up the problem
Additional Information: The seq variable is incremented inside of the mutex, which should be safe, but then its value is read into the JobControlRecord outside of the mutex, which is a race condition if other threads are manipulating the value at the same time.
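The pattern described above can be sketched like this (an illustrative reduction, not the attached why.c or the Bareos code; the function names are invented):

```cpp
#include <mutex>

static std::mutex seq_mutex;
static int seq = 0;

// Broken pattern: seq is incremented under the mutex, but read again
// after the mutex is released, so a concurrent increment can make two
// jobs observe the same sequence value (and thus the same job name).
int NextJobSeqRacy() {
  {
    std::lock_guard<std::mutex> lock(seq_mutex);
    seq++;
  }
  return seq;  // <- race: read outside the critical section
}

// Fixed pattern: copy the value into the caller while the mutex is
// still held, so each caller gets a distinct value.
int NextJobSeqSafe() {
  std::lock_guard<std::mutex> lock(seq_mutex);
  return ++seq;
}
```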
Attached Files: why.c (1,939 bytes) 2020-01-03 09:57
https://bugs.bareos.org/file_download.php?file_id=398&type=bug
output (1,116 bytes) 2020-01-03 09:57
https://bugs.bareos.org/file_download.php?file_id=399&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
933 [bareos-core] director major always 2018-03-22 12:44 2019-12-24 15:57
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: acknowledged Product Version: 16.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error: fd_plugins.c:1237 Unbalanced call to createFile=0 0
Description: 2018-03-22 12:27:28 example-dir JobId 15968: Error: Bareos example-dir 16.2.7 (09Oct17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 15968
 Job: RestoreFiles.2018-03-22_12.26.32_04
 Restore Client: customer.xxx.com
 Start time: 22-Mar-2018 12:26:34
 End time: 22-Mar-2018 12:27:28
 Elapsed time: 54 secs
 Files Expected: 126,634
 Files Restored: 126,601
 Bytes Restored: 4,869,318,236
 Rate: 90172.6 KB/s
 FD Errors: 1
 FD termination status: Fatal Error
 SD termination status: OK
 Termination: *** Restore Error ***

 
2018-03-22 12:27:28 example-dir JobId 15968: Begin pruning Files.
 
2018-03-22 12:27:28 example-dir JobId 15968: No Files found to prune.
 
2018-03-22 12:27:28 example-dir JobId 15968: End auto prune.

 
2018-03-22 12:27:27 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0447"
 
2018-03-22 12:27:27 bareos-sd JobId 15968: Ready to read from volume "vol-cons-0486" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:27:27 bareos-sd JobId 15968: Forward spacing Volume "vol-cons-0486" to file:block 0:218.
 
2018-03-22 12:27:27 customer.xxx.com JobId 15968: Fatal error: fd_plugins.c:1237 Unbalanced call to createFile=0 0
 
2018-03-22 12:27:26 bareos-sd JobId 15968: End of Volume at file 8 on device "example-incr" (/home/example/bareos), Volume "vol-cons-0513"
 
2018-03-22 12:27:26 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0447" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:27:26 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0447" to file:block 2:769145852.
 
2018-03-22 12:26:38 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0446"
 
2018-03-22 12:26:38 bareos-sd JobId 15968: Ready to read from volume "vol-cons-0513" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:38 bareos-sd JobId 15968: Forward spacing Volume "vol-cons-0513" to file:block 7:417218068.
 
2018-03-22 12:26:37 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0456"
 
2018-03-22 12:26:37 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0446" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:37 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0446" to file:block 2:786923100.
 
2018-03-22 12:26:36 bareos-sd JobId 15968: End of Volume at file 2 on device "example-incr" (/home/example/bareos), Volume "vol-incr-0423"
 
2018-03-22 12:26:36 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0456" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:36 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0456" to file:block 2:691729567.
 
2018-03-22 12:26:35 bareos-sd JobId 15968: Ready to read from volume "vol-incr-0423" on device "example-incr" (/home/example/bareos).
 
2018-03-22 12:26:35 bareos-sd JobId 15968: Forward spacing Volume "vol-incr-0423" to file:block 2:606078578.
 
2018-03-22 12:26:34 example-dir JobId 15968: Start Restore Job RestoreFiles.2018-03-22_12.26.32_04
 
2018-03-22 12:26:34 example-dir JobId 15968: Using Device "example-incr" to read.
Tags:
Steps To Reproduce: Restore files from an always incremental schedule, including a mysql backup

We tried separate restores afterwards and this works without errors (so first a restore of the files and then a separate restore job for the database)
Additional Information:
System Description
Attached Files:
Notes
(0002960)
joergs   
2018-04-05 12:54   
So, what plugins have you configured? This error should only occur if a plugin is used.
(0002964)
hostedpower   
2018-04-05 13:44   
Hello Joergs,

We use the MySQL plugin of course :)
(0002965)
joergs   
2018-04-05 16:47   
Well, there are still multiple MySQL plugins and different possible configurations.
(0002966)
hostedpower   
2018-04-05 16:50   
We used this install procedure:


cd /tmp
git clone https://github.com/bareos/bareos-contrib
cp -R ./bareos-contrib/fd-plugins/mysql-python/*.py /usr/lib/bareos/plugins/
rm -rf bareos-contrib/
service bareos-fd restart

It's strange that separate restores work perfectly; it's only when we combine the SQL dump file restore and regular files that we run into issues.

Please let me know which info exactly you need if you need anything else!
(0003530)
hostedpower   
2019-07-29 12:03   
Today this is still an issue..

If we restore the MySQL database and files together, it goes wrong :(



2019-07-29 11:59:57 xxxx JobId 88755: Fatal error: filed/fd_plugins.cc:1247 Unbalanced call to createFile=0 0
 
2019-07-29 11:59:57 backupxxx JobId 88755: Error: lib/bsock_tcp.cc:417 Wrote 29242 bytes to client:94.237.42.21:9103, but only 0 accepted.
 
2019-07-29 11:59:57 backupxxx JobId 88755: Fatal error: stored/read.cc:164 Error sending to File daemon. ERR=Connection reset by peer
 
2019-07-29 11:59:57 backupxxx JobId 88755: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:94.237.42.21:9103
 
2019-07-29 11:59:57 backupxxx JobId 88755: Releasing device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:57 director JobId 88755: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-3373" as Used.
 
2019-07-29 11:59:57 director JobId 88755: Error: Bareos director 18.2.6 (13Feb19):
 Build OS: Linux-4.4.92-6.18-default debian Debian GNU/Linux 9.7 (stretch)
 JobId: 88755
 Job: RestoreFiles.2019-07-29_11.09.05_18
 Restore Client: xxxxx
 Start time: 29-Jul-2019 11:58:44
 End time: 29-Jul-2019 11:59:57
 Elapsed time: 1 min 13 secs
 Files Expected: 186,501
 Files Restored: 15,767
 Bytes Restored: 5,625,032,491
 Rate: 77055.2 KB/s
 FD Errors: 1
 FD termination status: Fatal Error
 SD termination status: Fatal Error
 Bareos binary info: official Bareos subscription
 Termination: *** Restore Error ***

 
2019-07-29 11:59:57 director JobId 88755: Begin pruning Files.
 
2019-07-29 11:59:57 director JobId 88755: No Files found to prune.
 
2019-07-29 11:59:57 director JobId 88755: End auto prune.

 
2019-07-29 11:59:56 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3338"
 
2019-07-29 11:59:56 backupxxx JobId 88755: Ready to read from volume "vol-cons-3373" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:56 backupxxx JobId 88755: Forward spacing Volume "vol-cons-3373" to file:block 0:241.
 
2019-07-29 11:59:53 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3325"
 
2019-07-29 11:59:53 backupxxx JobId 88755: Ready to read from volume "vol-incr-3338" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:53 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3338" to file:block 0:241.
 
2019-07-29 11:59:51 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3314"
 
2019-07-29 11:59:51 backupxxx JobId 88755: Ready to read from volume "vol-incr-3325" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:51 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3325" to file:block 0:241.
 
2019-07-29 11:59:49 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3307"
 
2019-07-29 11:59:49 backupxxx JobId 88755: Ready to read from volume "vol-incr-3314" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:49 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3314" to file:block 0:241.
 
2019-07-29 11:59:40 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3283"
 
2019-07-29 11:59:40 backupxxx JobId 88755: Ready to read from volume "vol-incr-3295" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:40 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3295" to file:block 0:241.
 
2019-07-29 11:59:40 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3295"
 
2019-07-29 11:59:40 backupxxx JobId 88755: Ready to read from volume "vol-incr-3307" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:40 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3307" to file:block 0:241.
 
2019-07-29 11:59:30 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3276"
 
2019-07-29 11:59:30 backupxxx JobId 88755: Ready to read from volume "vol-incr-3283" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:30 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3283" to file:block 0:241.
 
2019-07-29 11:59:02 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3265"
 
2019-07-29 11:59:02 backupxxx JobId 88755: Ready to read from volume "vol-incr-3276" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:02 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3276" to file:block 0:241.
 
2019-07-29 11:59:01 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3258"
 
2019-07-29 11:59:01 backupxxx JobId 88755: Ready to read from volume "vol-incr-3265" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:01 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3265" to file:block 0:241.
 
2019-07-29 11:59:00 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3252"
 
2019-07-29 11:59:00 backupxxx JobId 88755: Ready to read from volume "vol-incr-3258" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:59:00 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3258" to file:block 0:241.
 
2019-07-29 11:58:57 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3244"
 
2019-07-29 11:58:57 backupxxx JobId 88755: Ready to read from volume "vol-incr-3252" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:57 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3252" to file:block 0:241.
 
2019-07-29 11:58:56 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3230"
 
2019-07-29 11:58:56 backupxxx JobId 88755: Ready to read from volume "vol-incr-3244" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:56 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3244" to file:block 0:241.
 
2019-07-29 11:58:54 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3224"
 
2019-07-29 11:58:54 backupxxx JobId 88755: Ready to read from volume "vol-incr-3230" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:54 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3230" to file:block 0:241.
 
2019-07-29 11:58:51 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3218"
 
2019-07-29 11:58:51 backupxxx JobId 88755: Ready to read from volume "vol-incr-3224" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:51 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3224" to file:block 0:241.
 
2019-07-29 11:58:49 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3212"
 
2019-07-29 11:58:49 backupxxx JobId 88755: Ready to read from volume "vol-incr-3218" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:49 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3218" to file:block 0:241.
 
2019-07-29 11:58:48 backupxxx JobId 88755: End of Volume at file 0 on device "serverxx-incr" (/home/serverxx/bareos), Volume "vol-incr-3364"
 
2019-07-29 11:58:48 backupxxx JobId 88755: Ready to read from volume "vol-incr-3212" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:48 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3212" to file:block 0:241.
 
2019-07-29 11:58:47 backupxxx JobId 88755: Ready to read from volume "vol-incr-3364" on device "serverxx-incr" (/home/serverxx/bareos).
 
2019-07-29 11:58:47 backupxxx JobId 88755: Forward spacing Volume "vol-incr-3364" to file:block 0:241.
 
2019-07-29 11:58:46 xxxx JobId 88755: Connected Storage daemon at backupxxx:9103, encryption: PSK-AES256-CBC-SHA
 
2019-07-29 11:58:44 director JobId 88755: Start Restore Job RestoreFiles.2019-07-29_11.09.05_18
 
2019-07-29 11:58:44 director JobId 88755: Connected Storage daemon at backupxxx:9103, encryption: ECDHE-PSK-CHACHA20-POLY1305
 
2019-07-29 11:58:44 director JobId 88755: Using Device "serverxx-incr" to read.
 
2019-07-29 11:58:44 director JobId 88755: Connected Client: xxxx at xxxx:9102, encryption: PSK-AES256-CBC-SHA
 
2019-07-29 11:58:44 director JobId 88755: Handshake: Immediate TLS
2019-07-29 11:58:44 director JobId 88755: Encryption: PSK-AES256-CBC-SHA
(0003532)
hostedpower   
2019-07-29 17:31   
To be clear:

Restoring both together, result:

Files: not complete
MySQL db: restore looks OK

Restoring files and the db in separate jobs:

Files: OK
MySQL db: OK
(0003715)
hostedpower   
2019-12-24 15:57   
Still an issue: we're not able to restore both the files AND the MySQL database at the same time without this weird error... :)

Any idea?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1155 [bareos-core] General feature always 2019-12-13 09:31 2019-12-21 14:42
Reporter: bigz Platform: Linux  
Assigned To: joergs OS: any  
Priority: high OS Version: 3  
Status: assigned Product Version: 19.2.4~pre  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action:
bareos-19.2: impact: yes
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: none
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact: yes
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Impossible to connect to a TLS with no PSK configured director
Description: I configured my director with TLS in a CentOS 7 Docker image.

I want to connect with the python-bareos pip module in order to send a command, but the Python client does not support a TLS configuration without PSK.

I implemented an enhancement in my fork (https://github.com/bigzbigz/bareos/tree/dev/bigz/master/python-support-tls-without-psk).

I plan to open a pull request on the official repo in order to fix the problem, but I need your opinion first.


Tags:
Steps To Reproduce: I work in a venv

-> % pip install sslpsk python-bareos
[...]
-> % pip list
Package Version Location
--------------- ------- --------------------------------------------
pip 19.3.1
pkg-resources 0.0.0
python-bareos 18.2.5
python-dateutil 2.8.1
setuptools 42.0.2
six 1.13.0
sslpsk 1.0.0
wheel 0.33.6

I try with TLS-PSK required:

-> % python bconsole.py -d --name bareos-dir --port 9101 --address bareos-dir -p $PASS --tls-psk-require
DEBUG bconsole.<module>: options: {'name': 'bareos-dir', 'password': 'xxxxxxxx', 'port': '9101', 'address': 'bareos-dir', 'protocolversion': 2, 'tls_psk_require': True}
DEBUG lowlevel.__init__: init
DEBUG lowlevel.__connect_plain: connected to bareos-dir:9101
DEBUG lowlevel.__connect_tls_psk: identity = R_CONSOLEbareos-dir, password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Traceback (most recent call last):
  File "bconsole.py", line 28, in <module>
    director = bareos.bsock.DirectorConsole(**bareos_args)
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/directorconsole.py", line 99, in __init__
    self.connect(address, port, dirname, ConnectionType.DIRECTOR, name, password)
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py", line 104, in connect
    return self.__connect()
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py", line 119, in __connect
    self.__connect_tls_psk()
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py", line 191, in __connect_tls_psk
    server_side=False)
  File "/home/user/.virtualenvs/bareos/lib/python3.7/site-packages/sslpsk/sslpsk.py", line 106, in wrap_socket
    _ssl_set_psk_client_callback(sock, cb)
  File "/home/user/.virtualenvs/bareos/lib/python3.7/site-packages/sslpsk/sslpsk.py", line 73, in _ssl_set_psk_client_callback
    ssl_id = _sslpsk.sslpsk_set_psk_client_callback(_sslobj(sock))
  File "/home/user/.virtualenvs/bareos/lib/python3.7/site-packages/sslpsk/sslpsk.py", line 55, in _sslobj
    return sock._sslobj._sslobj
AttributeError: '_ssl._SSLSocket' object has no attribute '_sslobj'

I try without TLS-PSK required (the default configuration):

-> % python bconsole.py -d --name bareos-dir --port 9101 --address bareos-dir -p $PASS
/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py:38: UserWarning: Connection encryption via TLS-PSK is not available, as the module sslpsk is not installed.
  warnings.warn(u'Connection encryption via TLS-PSK is not available, as the module sslpsk is not installed.')
DEBUG bconsole.<module>: options: {'name': 'bareos-dir', 'password': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'port': '9101', 'address': 'bareos-dir', 'protocolversion': 2, 'tls_psk_require': False}
DEBUG lowlevel.__init__: init
DEBUG lowlevel.__connect_plain: connected to bareos-dir:9101
DEBUG lowlevel.__connect: Encryption: None
DEBUG lowlevel.send: bytearray(b'Hello bareos-dir calling version 18.2.5')
DEBUG lowlevel.recv_bytes: expecting 4 bytes.
DEBUG lowlevel.recv: header: -4
WARNING lowlevel._handleSocketError: socket error: Conversation terminated (-4)
Received unexcepted signal: Conversation terminated (-4)
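The AttributeError in the first traceback is consistent with a known sslpsk incompatibility with Python 3.7: the private layout behind sock._sslobj changed between 3.6 and 3.7. As an illustrative sketch (not the upstream fix), an accessor that tolerates both layouts could look like this:

```python
def unwrap_sslobj(sock):
    """Return the low-level _ssl._SSLSocket for a wrapped SSL socket.

    On Python <= 3.6, sock._sslobj is an ssl.SSLObject wrapper whose own
    _sslobj attribute holds the low-level socket; on Python >= 3.7,
    sock._sslobj already is the low-level _ssl._SSLSocket. sslpsk 1.0.0
    only handles the first layout, hence the AttributeError above.
    """
    obj = sock._sslobj
    # Unwrap the 3.6-style wrapper if present, otherwise return as-is.
    return getattr(obj, "_sslobj", obj)
```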
Additional Information: Director configuration:
Director {
  Name = @@DIR_NAME@@-dir
  DIRport = 9101 # where we listen for UA connections
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  WorkingDirectory = "/var/spool/bareos"
  PidDirectory = "/var/run"
  Password = "@@DIR_PASSWORD@@" # Console password
  Messages = Daemon
  Auditing = yes
  TLS Enable = yes
  TLS Require = yes
  TLS DH File = /etc/ssl/dh1024.pem
  TLS CA Certificate File = /etc/ssl/certs/ca-bundle.crt
  TLS Key = /etc/ssl/private/client.key
  TLS Certificate = /etc/ssl/certs/client.pem
}
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: console.log (32,684 bytes) 2019-12-17 22:44
https://bugs.bareos.org/file_download.php?file_id=396&type=bug
Notes
(0003663)
arogge   
2019-12-13 10:46   
Thanks for taking a look.
I'm not sure I understand yet what happens in your environment. However, if you want to touch the code, you should probably check out the master branch and use python-bareos from there.
(0003664)
bigz   
2019-12-13 10:57   
I verified the error with the code from the master branch in the python-bareos folder.
(0003665)
joergs   
2019-12-13 11:39   
It is correct that python-bareos does not support TLS other than TLS-PSK.
My assumption has been that most new installations will use TLS-PSK. However, a patch to also support normal TLS without PSK is welcome.
I took a first look at your code. It looks good so far.
However, if I read it correctly, you allow TLS but don't verify against a custom CA? Have I missed something there, or is it your intention to plainly accept all TLS connections?

Have you seen the systemtest testing the python-bareos authentication?
See https://docs.bareos.org/master/DeveloperGuide/BuildAndTestBareos.html#building-the-test-environment

Instead of running all tests, you can also change to the build/systemtests/tests/python-bareos-test directory and run "./testrunner" from there.
This way you can verify that your change does not change existing behavior, and maybe you can add extra tests for your functionality.

With what version of Python have you tested? I experienced difficulties with the Python 3 version of sslpsk. Also, what OS/distribution did you use? At least on new Fedora (>=30) systems there are compile problems with sslpsk.

Currently, we use the tls_psk_enable and tls_psk_require parameters. You added tls_enable and tls_require. I'm not sure whether this is the best way to configure it, especially if more parameters such as a CA are required. I'll discuss this in our next developer meeting.
(0003666)
bigz   
2019-12-13 12:06   
You can use a custom CA (this is my configuration). ssl.wrap_socket automatically checks against the CAs installed in the operating system (normally in /etc/ssl/certs). It is also possible to load an extra CA chain.

I hadn't seen the systemtest; I use my travis-ci account to check the existing CI from the official repo.
I will think about whether a new test can verify my enhancement.

I use Python 3.7.3. My OS is Ubuntu 19.04 and I use the official Python package. Modules are installed in a virtualenv with pip.
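The CA handling described above can be sketched with the modern ssl.SSLContext API instead of the deprecated ssl.wrap_socket; the file path and server name in the comment are hypothetical:

```python
import ssl

def make_tls_context(extra_ca_file=None):
    """Client-side TLS context that verifies the server certificate.

    create_default_context() loads the operating system's trust store
    (e.g. /etc/ssl/certs) and enables hostname checking plus
    CERT_REQUIRED; an extra CA bundle (such as a private Bareos CA)
    can be loaded on top of it.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    if extra_ca_file is not None:
        ctx.load_verify_locations(cafile=extra_ca_file)
    return ctx

# Wrapping a connected socket would then look like (not attempted here;
# names are illustrative):
# tls_sock = make_tls_context("/etc/ssl/my-bareos-ca.pem").wrap_socket(
#     plain_sock, server_hostname="bareos-dir")
```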
(0003668)
bigz   
2019-12-14 22:01   
(Last edited: 2019-12-14 22:30)
Hello joergs,
I am having difficulties building the bareos project with cmake as you explained in your note. I think I have missing dependencies, but I can't find which one. I installed libacl1-dev and zlib1g-dev on my Ubuntu 19.04. Do you have a list of the required dependency packages?
When I use this command I get this error:
-> % cmake -Dsqlite3=yes -Dtraymonitor=yes ../bareos
[...]
-- Disabled test: system:bconsole-pam
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
Readline_INCLUDE_DIR (ADVANCED)
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console

I don't understand the error
Thanks

(0003669)
bigz   
2019-12-14 23:40   
I managed to get the project to build. I am continuing the investigation, but my previous errors are solved.
(0003670)
joergs   
2019-12-15 09:48   
Good that you got the build to pass. You can find the dependency packages in the files we use to create Debian packages: https://github.com/bareos/bareos/blob/master/core/platforms/packaging/bareos.dsc and/or https://github.com/bareos/bareos/blob/master/core/debian/control (or http://download.bareos.org/bareos/experimental/nightly/xUbuntu_18.04/bareos_19.2.4*.dsc). Make sure to have libjansson-dev installed; otherwise Bareos will build but will miss functionality required for the test.
(0003671)
bigz   
2019-12-15 14:51   
Hello,
I get a small error from ./testrunner:

-> % /bin/zsh /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/testrunner [devel|…]
creating database (sqlite3)
running /home/user/Perso/clion/bareos/cmake-build-release/systemtests/scripts/setup
 
 
=== python-bareos-test: starting at 14:46:34 ===
=
=
exit(0) is called. Set test to failure and end test.
end_test:7: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/bareos.*.traceback
end_test:8: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/bareos.*.traceback
end_test:9: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/*.bactrace
end_test:10: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/*.bactrace
 
  !!!!! python-bareos-test failed!!! 14:46:34 !!!!!
   Status: estat=998 zombie=0 backup=0 restore=0 diff=0
 
I think I don't understand the behavior of the start_test() function in functions. A trap is added at the beginning of the function, and the trap always fires at the end of start_test(); as a consequence end_test() is called and no tests are run. Is this the desired behavior?
(0003672)
joergs   
2019-12-15 19:26   
Interesting. However, this problem only occurs when using zsh. It seems you are the first who ever tried it with zsh. Normally we use bash (dash) or ksh; with these, the test runs as expected.
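A plausible cause for the zsh-only failure (an assumption, not verified against the start_test() source): zsh runs an EXIT trap that was set inside a function as soon as that function returns, while bash runs it only when the whole script exits. The difference can be demonstrated like this:

```shell
#!/bin/bash
# In bash the EXIT trap set inside f fires at script exit; in zsh it
# would fire as soon as f returns, which would explain end_test()
# running immediately after start_test().
f() {
  trap 'echo trap-fired' EXIT
  echo inside-f
}
f
echo after-f
# bash output order: inside-f, after-f, trap-fired
```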
(0003675)
bigz   
2019-12-16 20:22   
(Last edited: 2019-12-16 22:30)
The problem is solved; it was caused by my zsh interpreter. I just changed it to bash.
I now have another problem because I use the default Python 3.7 version of my Ubuntu OS. There seems to be a problem with the sslpsk module and Python 3.7 (https://github.com/drbild/sslpsk/issues/11). I will try with Python 3.6 and give you the answer.

(0003679)
bigz   
2019-12-17 22:44   
I changed my Python version to 3.6.5 in order to avoid the sslpsk problem.
Sorry, but I still get errors when I execute ./testrunner from the master branch. I uploaded the console.log file with the stdout.

Could you please take a look and tell me what you think? In my opinion the problem comes from "WARNING lowlevel._handleSocketError: socket error: [SSL: ATTEMPT_TO_REUSE_SESSION_IN_DIFFERENT_CONTEXT] attempt to reuse session in different context (_ssl.c:833)". In this situation the connection falls back to plain and the test fails.
I have no problem when I use the built bconsole with -c bconsole-admin-tls.conf or -c bconsole-admin-notls.conf; both are encrypted with TLS_CHACHA20_POLY1305_SHA256.
I tried to google ATTEMPT_TO_REUSE_SESSION_IN_DIFFERENT_CONTEXT but didn't find a useful answer.
Maybe you have an opinion on the problem?
As you said in your email, I will create a draft pull request tomorrow.
(0003705)
bigz   
2019-12-18 22:48   
(Last edited: 2019-12-18 23:10)
Hello,
I worked on it today and rebased onto the bareos master branch. I no longer have the problem. You have made commits in the last few days, but I don't understand how you solved my problem.

I made small fixes in python-bareos with a pull request in the bareos GitHub repo.

I still get errors when I execute the Python unittests. Do you manage to run the unittests? Could you send me a log file of the execution?

Thanks

(0003707)
joergs   
2019-12-19 12:55   
Hi, I accepted https://github.com/bareos/bareos/pull/382.

Have I understood you correctly that connecting to a Director console without TLS-PSK, but with TLS by certificate, does work now? I've not changed the behavior intentionally.

The systemtest also fails on my system when using Python 3. With Python 2 it works without problems. I assumed a general problem with sslpsk on Python 3, but after you said it works somehow in your environment, I assumed a local problem.
After your hint, I checked the project https://github.com/drbild/sslpsk again and saw that the example code works on Python 3. I hope to find the time to look into this in more detail soon.
(0003712)
joergs   
2019-12-20 17:22   
I'm not sure what has changed, but the example and test code from https://github.com/drbild/sslpsk no longer work on my machine.
(0003713)
bigz   
2019-12-21 14:42   
Hello,
It seems it doesn't work for him either => https://travis-ci.org/drbild/sslpsk

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
843 [bareos-core] director crash always 2017-08-10 22:39 2019-12-20 11:21
Reporter: vshkolin Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: acknowledged Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-dir crashed with Segfault when parsing config files with job type = consolidate w/o pool and storage definition
Description: Job {
  Name = "Consolidate"
  Type = "Consolidate"
  Accurate = "yes"
  Max Full Consolidations = 1
  Client = bareos.home.loc
  FileSet = "SelfTest"

# Storage = File
# Pool = Incremental
}
 
[root@bareos ai]# bareos-dir -u bareos -g bareos -t -v
BAREOS interrupted by signal 11: Segmentation violation
Kaboom! bareos-dir, bareos-dir got signal 11 - Segmentation violation. Attempting traceback.
Kaboom! exepath=/etc/bareos/bareos-dir.d/ai
Calling: /etc/bareos/bareos-dir.d/ai/btraceback /etc/bareos/bareos-dir.d/ai/bareos-dir 8216 /var/lib/bareos
execv: /etc/bareos/bareos-dir.d/ai/btraceback failed: ERR=No such file or directory
The btraceback call returned 255
Dumping: /var/lib/bareos/bareos-dir.8216.bactrace
Tags:
Steps To Reproduce: 1. Install and configure bareos-dir
2. Add consolidate job w/o pool and storage definition
3. Run bareos-dir -> Segmentation violation
4. Uncomment storage and pool definition
5. Run bareos-dir -> config is successfully parsed with diagnostic '"Messages" directive in Job "Consolidate" resource is required'
6. Comment storage and pool definition
7. Run bareos-dir -> Segmentation violation

Checked on two systems with same result.
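For reference, a version of the Job resource from the description that parses without the segfault, with Storage and Pool uncommented. The Messages value is an assumption based on the stock configuration; the diagnostic in step 5 shows the parser requires it as well:

```
Job {
  Name = "Consolidate"
  Type = "Consolidate"
  Accurate = "yes"
  Max Full Consolidations = 1
  Client = bareos.home.loc
  FileSet = "SelfTest"
  Messages = Standard   # assumed resource name; required by the parser
  Storage = File        # uncommented: avoids the segfault
  Pool = Incremental
}
```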
Additional Information:
System Description
Attached Files: bareos-dir.debug (16,760 bytes) 2019-12-12 10:47
https://bugs.bareos.org/file_download.php?file_id=395&type=bug
Notes
(0002939)
MarceloRuiz   
2018-03-09 02:56   
(Last edited: 2018-03-09 03:04)
I have the same problem...
The backtrace is:

 ==== bactrace output ====

Attempt to dump locks
Attempt to dump current JCRs. njcrs=0
 ==== End baktrace output ====

(0003273)
IvanBayan   
2019-02-27 08:56   
Today the same thing happened to me: I hit reload and Bareos crashed:
Feb 27 02:52:48 mia-backup03 bareos-dir[35840]: BAREOS interrupted by signal 11: Segmentation violation
root@mia-backup03:/var/log/bareos# apt-cache policy bareos-director
bareos-director:
  Installed: 18.2.5-139.1
  Candidate: 18.2.5-139.1
  Version table:
 *** 18.2.5-139.1 500
        500 http://download.bareos.org/bareos/release/latest/xUbuntu_16.04 Packages
        100 /var/lib/dpkg/status
     14.2.6-3 500
        500 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
(0003274)
IvanBayan   
2019-02-27 08:58   
$ sudo dmesg|tail -n1
[71272674.289931] bareos-dir[20454]: segfault at 7fec1b7fe9d0 ip 00007fec278f08d9 sp 00007fec1a7fa858 error 4 in libpthread-2.23.so[7fec278e8000+18000]
(0003449)
arogge   
2019-07-12 10:33   
Can you please reproduce this with our latest nightly-build from https://download.bareos.org/bareos/experimental/nightly/ ?
Thank you!
(0003660)
ironiq   
2019-12-12 10:24   
Hi!
The same thing happened to me today:

root@nas:~# apt-cache policy bareos-director
bareos-director:
  Installed: 18.2.5-147.2
  Candidate: 18.2.5-147.2
  Version table:
 *** 18.2.5-147.2 500
        500 http://download.bareos.org/bareos/release/latest/xUbuntu_16.04 Packages
        100 /var/lib/dpkg/status
     14.2.6-3 500
        500 http://cz.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
root@nas:~# bareos-dir -u bareos -g bareos -t -v
BAREOS interrupted by signal 11: Segmentation violation
Segmentation fault (core dumped)
root@nas:~# dmesg|tail -n1
[63085.221143] bareos-dir[19067]: segfault at 7fb75836f510 ip 00007fb75836f510 sp 00007ffc7c79b1f8 error 14 in locale-archive[7fb758ade000+2d8000]
root@nas:~#
(0003661)
ironiq   
2019-12-12 10:47   
It happens only when running "bareos-dir" with the "-t" option. Attached is console output with "-d 300" and a part with "-d 900". When running without "-t" everything seems to work.
(0003698)
arogge   
2019-12-18 15:52   
Thanks for the report.
Could you try to reproduce the issue in the nightly-build from https://download.bareos.org/bareos/experimental/nightly/ so we don't go chasing bugs that have already been fixed?

Thank you!
(0003699)
ironiq   
2019-12-18 16:06   
Hi!

Here is the result:
root@nas:~# bareos-dir -t -f
BAREOS interrupted by signal 11: Segmentation violation
Segmentation fault (core dumped)
root@nas:~# bareos-dir -?

pre-release versions are UNSUPPORTED.
Get a released version and vendor support on https://www.bareos.com
Copyright (C) 2013-2019 Bareos GmbH & Co. KG
Copyright (C) 2000-2012 Free Software Foundation Europe e.V.
Copyright (C) 2010-2017 Planets Communications B.V.

Version: 19.2.4~pre1228.1b4462ef8 (18 December 2019) Linux-3.10.0-1062.9.1.el7.x86_64 ubuntu Ubuntu 16.04 LTS

Usage: bareos-dir [options]
        -c <path> use <path> as configuration file or directory
        -d <nn> set debug level to <nn>
        -dt print timestamp in debug output
        -f run in foreground (for debugging)
        -g <group> run as group <group>
        -m print kaboom output (for debugging)
        -r <job> run <job> now
        -s no signals (for debugging)
        -t test - read configuration and exit
        -u <user> run as user <user>
        -v verbose user messages
        -xc print configuration and exit
        -xs print configuration file schema in JSON format and exit
        -? print this message.
root@nas:~#
(0003700)
arogge   
2019-12-18 16:23   
Thanks.
We'll look into it.
(0003710)
lborek   
2019-12-20 11:19   
(Last edited: 2019-12-20 11:21)
Same problem here with 18.2.5-147.2 on Ubuntu 16.04, but we don't use a consolidate job in our config.

To work around the bareos-dir startup failure on Ubuntu:

root@:/etc# cat ./systemd/system/bareos-dir.service | grep "ExecStartPre"
#ExecStartPre=/usr/sbin/bareos-dir -t -f
root@:/etc# systemctl daemon-reload
root@:/etc# service bareos-dir start


Thanks.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
909 [bareos-core] director major always 2018-02-12 09:50 2019-12-19 10:14
Reporter: rightmirem Platform: Intel  
Assigned To: OS: Debian Jessie  
Priority: normal OS Version: 8  
Status: confirmed Product Version:  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: "Reschedule on error" recognized, but not actually rescheduling the job
Description: I have been testing my backup's ability to recover from an error.

I have a job that has the following settings...

  Reschedule Interval = 1 minute
  Reschedule On Error = yes
  Reschedule Times = 5

... and I start it as a full. I then restart the Bareos director (to error out the job intentionally).

In the log, it shows that the job has been rescheduled, but the job never starts. The job should have started at 10:20, yet by 10:26 "list jobs" in bconsole reported nothing running.

=== LOG ===
    01-Feb 10:19 server-dir JobId 569: Fatal error: Network error with FD during Backup: ERR=No data available
    01-Feb 10:19 server-sd JobId 569: Fatal error: append.c:245 Network error reading from FD. ERR=No data available
    01-Feb 10:19 server-sd JobId 569: Elapsed time=00:01:24, Transfer rate=112.6 M Bytes/second
    01-Feb 10:19 server-dir JobId 569: Error: Director's comm line to SD dropped.
    01-Feb 10:19 server-dir JobId 569: Fatal error: No Job status returned from FD.
    01-Feb 10:19 server-dir JobId 569: Error: Bareos server-dir 17.2.4 (21Sep17):
      Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
      JobId: 569
      Job: backupJobName.2018-02-01_10.18.12_04
      Backup Level: Full
      Client: "server-fd" 17.2.4 (21Sep17) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
      FileSet: "backupJobName" 2018-01-29 15:00:00
      Pool: "6mo-Full" (From Job FullPool override)
      Catalog: "MyCatalog" (From Client resource)
      Storage: "Tape" (From Job resource)
      Scheduled time: 01-Feb-2018 10:18:10
      Start time: 01-Feb-2018 10:18:14
      End time: 01-Feb-2018 10:19:45
      Elapsed time: 1 min 31 secs
      Priority: 10
      FD Files Written: 0
      SD Files Written: 0
      FD Bytes Written: 0 (0 B)
      SD Bytes Written: 1,042 (1.042 KB)
      Rate: 0.0 KB/s
      Software Compression: None
      VSS: no
      Encryption: no
      Accurate: no
      Volume name(s): DL011BL7
      Volume Session Id: 1
      Volume Session Time: 1517476667
      Last Volume Bytes: 5,035,703,887,872 (5.035 TB)
      Non-fatal FD errors: 2
      SD Errors: 0
      FD termination status: Error
      SD termination status: Error
      Termination: *** Backup Error ***

    01-Feb 10:19 server-dir JobId 569: Rescheduled Job backupJobName.2018-02-01_10.18.12_04 at 01-Feb-2018 10:19 to re-run in 60 seconds (01-Feb-2018 10:20).
    01-Feb 10:19 server-dir JobId 569: Job backupJobName.2018-02-01_10.18.12_04 waiting 60 seconds for scheduled start time.


Tags:
Steps To Reproduce: I have scheduled a job with "reschedule on error"

I have both started the job manually and let the job start through the scheduler.

I have tried killing the job BOTH by killing the core Bareos process with "kill -9" AND by simply restarting bareos with the restart commands.

Regardless of the method to kill the job, the log recognizes the job ended on an error, and states it is rescheduling the job (in 60 seconds).

However, the job never actually restarts.
Additional Information: See the main issue description for the log data
System Description
Attached Files:
Notes
(0002908)
joergs   
2018-02-12 18:30   
Reschedule on error is not intended to cover Bareos Director restart.
However, it should work if you restart the fd.
(0002932)
rightmirem   
2018-02-20 15:16   
Can we reopen this? I never got a notification that it was in progress.

So, is it indicative of a problem that the log says it is rescheduling the job, but then simply doesn't?
(0002933)
rightmirem   
2018-02-20 15:23   
OK. It DID work when I killed the fd.

However, can you tell me what sorts of errors WILL trigger a reschedule (I don't see that in the manual)? We're not just concerned with file errors, but also...

- Tape drive failure.
- Accidental system restart or server power failure.
- OS crash or hang.
- Daemon crashes or hangs.
(0002945)
rightmirem   
2018-03-13 12:16   
This can be marked as resolved
(0003597)
b.braunger@syseleven.de   
2019-10-14 14:33   
How was this resolved? Is there some kind of documentation by now?
(0003600)
arogge   
2019-10-16 10:08   
I don't see what kind of documentation you expect?
Reschedule on error does not work for a director restart (and was never intended to do this).
Its purpose is to rerun a job that failed.

So what else do you need?
(0003606)
b.braunger@syseleven.de   
2019-10-18 11:24   
Well, I did not reproduce this, but the log from rightmirem says that the job terminated with an error, and therefore it should be rescheduled as far as I understand: https://docs.bareos.org/Configuration/Director.html?highlight=reschedule#config-Dir_Job_RescheduleOnError
Although I see that this feature is not intended to cover director crashes, the documentation should mention for what kinds of failure a user can expect a reschedule and what does not trigger one (like the already mentioned 'Cancel').
However, the log should never report that a job is rescheduled if that job is not going to be executed.
(0003607)
arogge   
2019-10-18 11:32   
The problem is probably that the director does not have a persistent schedule.
So when the job is rescheduled (and the reschedule log message is written) and the director is then restarted, the scheduling information is lost during the restart.
With the current design of rescheduling, this cannot be fixed.

However, we can document that limitation.
(0003608)
b.braunger@syseleven.de   
2019-10-18 11:37   
Thanks for the info! I would appreciate if the doc explains that behaviour and as far as I'm concerned this ticket can be closed then.
(0003704)
bozonius   
2019-12-18 21:21   
When the director starts, couldn't it load the information from the database to re-discover jobs that have been rescheduled?

This should be applicable to the cancel waiting jobs issue (https://bugs.bareos.org/view.php?id=1148) also.

I don't see where a complete re-write/re-design is necessary to accomplish either of these features, while adding quite a bit of value to BareOS.
(0003706)
arogge   
2019-12-19 10:14   
What you're describing requires a redesign and probably also a major rewrite of the feature. It would have to work based on the job history in the catalog instead of the way it currently works.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1162 [bareos-core] file daemon major always 2019-12-18 16:01 2019-12-18 18:44
Reporter: arogge Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 19.2.4~pre  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 19.2.4~rc1  
    Target Version: 19.2.4~rc1  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact: yes
bareos-19.2: action: will care
bareos-18.2: impact: yes
bareos-18.2: action: will care
bareos-17.2: impact: yes
bareos-17.2: action: will care
bareos-16.2: impact: yes
bareos-16.2: action: will care
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: When restoring files without directories, the permissions of the immediate parent directory are wrong
Description: If you restore files (and not directories) with a restore job and the parent directories are created by the filedaemon, then these directories will have weird permission bits set.
Tags:
Steps To Reproduce: 1. start restore browser
2. select a single file in any non-top directory
3. restore to a non-existant location
4. observe weird permission bits on the file's immediate parent directory
Additional Information: It looks like the filedaemon guesses what permission the directory should have based on the file that is being restored. This is inconsistent and the whole behaviour should probably be rewritten sometime.
Attached Files:
Notes
(0003702)
arogge   
2019-12-18 17:22   
Fix committed to bareos master branch with changesetid 12446.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1157 [bareos-core] General minor always 2019-12-13 12:55 2019-12-18 18:41
Reporter: arogge Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: confirmed Product Version: 18.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 18.2.8  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 18.2.8
Description: This ticket acts as a master ticket to collect information about releasing Bareos 18.2.8.
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1158 [bareos-core] General minor always 2019-12-13 12:55 2019-12-18 18:41
Reporter: arogge Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: confirmed Product Version: 17.2.8  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 17.2.9  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 17.2.9
Description: This ticket acts as a master ticket to collect information about releasing Bareos 17.2.9.
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1159 [bareos-core] General minor always 2019-12-13 12:56 2019-12-18 18:40
Reporter: arogge Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: new Product Version: 16.2.9  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 16.2.10  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 16.2.10
Description: This ticket acts as a master ticket to collect information about releasing Bareos 16.2.10.
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
951 [bareos-core] General major always 2018-05-22 18:35 2019-12-18 09:56
Reporter: ameijeiras Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: will care
bareos-17.2: impact: yes
bareos-17.2: action: will care
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Differential backup compute date from last incremental
Description: I configure a standard backup policy Full (Monthly), Differential (Weekly) and incremental (Daily).

If I configure it using a daily incremental schedule and control differentials and fulls with the directives "Max Diff Interval = 6 days" (to perform a differential backup weekly) and "Max Full Interval = 31 days" (to perform a full backup monthly), the reference date for differential backups is wrongly computed from the last incremental instead of from the last full backup. This breaks the (Full, Diff, Inc) policy, because the differential backups are not really differentials and only contain data since the last incremental.
If I run a differential backup manually, or schedule it explicitly, for example: "
Schedule {
 Name = Backup_test
 Run = Level=Full 1st sun at 02:00
 Run = Differential 2nd-5th sun at 02:00
 Run = Level=Incremental mon-sat at 02:00
}
" differentials backups date are compute right since last Full backup
Tags:
Steps To Reproduce: onfigure a policy backup Full, Differential and incremental using this method:

# ---------- Jobs config ---------------
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual
  Max Diff Interval = 6 days
  Max Full Interval = 31 days
}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Incremental at 02:00
}
Additional Information: example config that generate mistaken differential backups:
...
# ---------- Jobs config ---------------
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual
  Max Diff Interval = 6 days
  Max Full Interval = 31 days
}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Incremental at 02:00
}


example config that generate right differential backups:
...
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual


}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Full 1st sun at 02:00
 Run = Differential 2nd-5th sun at 02:00
 Run = Level=Incremental mon-sat at 02:00
}
...
System Description
Attached Files:
Notes
(0003147)
comfortel   
2018-10-24 14:17   
Same problem here:
https://bugs.bareos.org/view.php?id=1001
(0003168)
arogge   
2019-01-09 09:04   
I just confirmed that this happens at least in 17.2.4 on Ubuntu 16.04. We will try to fix the issue and to find out which versions are affected.
(0003169)
arogge   
2019-01-10 15:03   
Can you please try my packages with an applied fix in your environment and make sure this actually fixes the issue for you?

http://download.bareos.org/bareos/people/arogge/951/xUbuntu_16.04/amd64/
(0003227)
arogge_adm   
2019-01-30 13:12   
Fix committed to bareos dev/arogge/bareos-18.2/fix-951 branch with changesetid 10978.
(0003269)
comfortel   
2019-02-19 11:42   
When can we expect the change in the official 18.2 repo for Ubuntu?
(0003270)
arogge   
2019-02-19 11:54   
AFAICT you haven't even tried the test-packages I provided.

The bug itself is not fixed yet, because my patch is flawed (picks up changes from the latest full or differential instead of the last full).
If you need this fixed soon, you can open a support case or take a look for yourself (the branch with my candidate fix is publicly available on github).
(0003312)
comfortel   
2019-04-04 13:33   
Hello.

We tested incrementals and everything is OK.
jobfiles shows 7 and 6 changed files.

Automatically selected FileSet: FileSetBackup_ansible.organisation.pro
+--------+-------+----------+-------------+---------------------+---------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+--------+-------+----------+-------------+---------------------+---------------------------------+
| 20,280 | F | 47,522 | 130,433,828 | 2019-04-04 13:38:20 | Full_ansible.organisation.pro_0781 |
| 20,281 | I | 7 | 761,010 | 2019-04-04 13:40:50 | Incr_ansible.organisation.pro_0782 |
| 20,282 | I | 6 | 2,144 | 2019-04-04 13:55:08 | Incr_ansible.organisation.pro_0783 |
+--------+-------+----------+-------------+---------------------+---------------------------------+
(0003335)
comfortel   
2019-04-15 09:17   
Automatically selected FileSet: FileSetBackup_ansible.organisation.pro
+--------+-------+----------+-------------+---------------------+---------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+--------+-------+----------+-------------+---------------------+---------------------------------+
| 20,280 | F | 47,522 | 130,433,828 | 2019-04-04 13:38:20 | Full_ansible.organisation.pro_0781 |
| 20,291 | D | 29 | 23,345,104 | 2019-04-12 01:05:02 | Diff_ansible.organisation.pro_0789 |
| 20,292 | I | 6 | 54,832 | 2019-04-13 01:05:02 | Incr_ansible.organisation.pro_0783 |
| 20,293 | I | 0 | 0 | 2019-04-14 01:05:03 | Incr_ansible.organisation.pro_0784 |
| 20,294 | I | 0 | 0 | 2019-04-15 01:05:03 | Incr_ansible.organisation.pro_0786 |
+--------+-------+----------+-------------+---------------------+---------------------------------+
You have selected the following JobIds: 20280,20291,20292,20293,20294
(0003336)
arogge   
2019-04-15 09:35   
Thanks for the confirmation.
I'll try to get the patch merged for the next release.
(0003337)
comfortel   
2019-04-15 14:43   
Thanks
(0003392)
comfortel   
2019-06-13 16:58   
When will this fix be added?
(0003562)
comfortel   
2019-08-30 12:23   
Hello. When will this fix be added to upstream?
(0003563)
arogge   
2019-08-30 12:27   
The fix didn't pass the review. The implemented behaviour is wrong, so it'll need work.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1160 [bareos-core] General minor always 2019-12-15 13:01 2019-12-17 18:27
Reporter: tuxmaster Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: normal OS Version: 7  
Status: assigned Product Version: 18.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: The value of the name option of a resource must be ASCII-only
Description: When the name of a resource contains non-ASCII characters, the director will not start.
Sample:
will work:
Pool {
name = foo
...
}
will fail:
Pool {
name = fooäöü
...
}
But the documentation says that config files can be UTF-8.
Tags:
Steps To Reproduce:
Additional Information: Using Unicode characters in options other than name works.
System Description
Attached Files:
Notes
(0003673)
arogge   
2019-12-16 17:25   
Where in the docs does it say that you can use anything non-ascii for a resource name?
If it is documented like that it is simply wrong and we would have to update the documentation.
(0003674)
tuxmaster   
2019-12-16 17:59   
I read this documentation:
https://docs.bareos.org/Configuration/CustomizingTheConfiguration.html
Under the section "Character Sets" (https://docs.bareos.org/Configuration/CustomizingTheConfiguration.html#character-sets)
it is written that Bareos uses UTF-8 encoding.
And in the section on the name directive there is only talk of alphanumeric characters, so for example an 'ä' is a normal character.
So there should be a hint there that only ASCII characters are allowed, because options that accept a string do work with UTF-8 characters.
(0003677)
arogge   
2019-12-17 09:45   
In the data-type section at https://docs.bareos.org/Configuration/CustomizingTheConfiguration.html#data-types there is the following description:

"""
A keyword or name consisting of alphanumeric characters, including the hyphen, underscore, and dollar characters. The first character of a name must be a letter. A name has a maximum length currently set to 127 bytes.
Please note that Bareos resource names as well as certain other names (e.g. Volume names) must contain only letters (including ISO accented letters), numbers, and a few special characters (space, underscore, …). All other characters and punctuation are invalid.
"""

We can improve the documentation for that quite a lot. However, I'm not sure it is sufficient to do it here. Do you think we should add another hint somewhere else?
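As an aside, the constraints quoted above can be expressed as a small check. This is a hypothetical helper (the name `is_valid_resource_name` and the exact ASCII character set are assumptions drawn from the quoted documentation and the observed parser behaviour, not taken from the Bareos source):

```python
import re

# Assumed rules: first character an ASCII letter, then ASCII letters,
# digits, space, underscore, dollar, or hyphen; at most 127 bytes.
NAME_RE = re.compile(r'^[A-Za-z][A-Za-z0-9 _$-]*$')

def is_valid_resource_name(name: str) -> bool:
    """Return True if `name` would pass the documented constraints."""
    return len(name.encode('utf-8')) <= 127 and bool(NAME_RE.match(name))
```

Under these assumed rules, `foo` is accepted while `fooäöü` is rejected, matching the behaviour reported in this issue.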
(0003678)
tuxmaster   
2019-12-17 18:27   
I think it would be better to state in the general section that only ASCII characters are allowed.
Or, where possible, modify the parser so that it accepts UTF-8 strings for all settings.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1039 [bareos-core] webui major always 2019-01-30 07:40 2019-12-17 08:16
Reporter: alex-dvv Platform: Linux  
Assigned To: frank OS: Debian  
Priority: high OS Version: 9  
Status: assigned Product Version: 18.2.4-rc2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Can't log in to webui
Description: Hi! Can't log in to the webui.
Error: Sorry, can not authenticate. Wrong username and/or password.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Unbenannt.png (122,585 bytes) 2019-02-01 07:20
https://bugs.bareos.org/file_download.php?file_id=345&type=bug
png

webuiproblem.png (95,893 bytes) 2019-02-12 14:02
https://bugs.bareos.org/file_download.php?file_id=350&type=bug
png

Captura de pantalla de 2019-02-13 10-56-37.png (13,807 bytes) 2019-02-13 10:57
https://bugs.bareos.org/file_download.php?file_id=351&type=bug
png
Notes
(0003228)
noone   
2019-01-31 15:44   
Hello,

The same problem on my SLES 12.3, Version 18.2.5-136.1 (SUSE Repository).

I tracked the problem down to the following log messages (debug mode 100 and gdb running with breakpoint at lib/try_tls_handshake_as_a_server.cc:69):

"""
uranus.mcservice.eu-dir (100): lib/bsock.cc:81-0 Construct BareosSocket
[New Thread 0x7fffe3fff700 (LWP 19998)]
uranus.mcservice.eu-dir (100): include/jcr.h:320-0 Construct JobControlRecord
uranus.mcservice.eu-dir (200): lib/bsock.cc:631-0 Identified from Bareos handshake: admin-R_CONSOLE recognized version: 18.2
[Switching to Thread 0x7fffe3fff700 (LWP 19998)]

Thread 16 "bareos-dir" hit Breakpoint 1, GetHandshakeMode (config=0x6f4530, bs=0x7fffec001fe8, bs@entry=0x6d63a0 <std::string::_Rep::_S_empty_rep_storage>)
    at /usr/src/debug/bareos-18.2.5/src/lib/try_tls_handshake_as_a_server.cc:69
69 Dmsg1(200, "Connection to %s will be denied due to configuration mismatch\n", client_name.c_str());
(gdb) c
Continuing.
uranus.mcservice.eu-dir (200): lib/try_tls_handshake_as_a_server.cc:69-0 Connection to admin will be denied due to configuration mismatch
uranus.mcservice.eu-dir (100): lib/bsock.cc:129-0 Destruct BareosSocket
uranus.mcservice.eu-dir (100): include/jcr.h:324-0 Destruct JobControlRecord
[Thread 0x7fffe3fff700 (LWP 19998) exited]
"""

the relevant part of the code is giving for Bareos >=18.2 only a "ConnectionHandshakeMode::PerformCleartextHandshake" if "tls_policy == kBnetTlsNone".
(0003230)
noone   
2019-01-31 16:03   
I could resolve the problem by following the suggestion in "/etc/bareos/bareos-dir.d/console/admin.conf.example" to disable TLS or use certificates. (It should only be disabled if the director is listening only on localhost, because otherwise passwords might be transferred unencrypted over the network.)

So for me it looks like a configuration problem resulting from non-adapted old configurations.
(0003232)
teka74   
2019-02-01 07:20   
Same Problem here after updating from 17.4 to 18.2.5, solved it with adding "TLS Enable = No" in the console config

But now I can't access to webui, gui is reporting:

Sorry, it seems you are not authorized to run this module. If you think this is an error, please contact your local administrator.
Please read the Bareos documentation for any additional information on how to configure the Command ACL directive of your Console/Profile resources. Following is a list of required commands which need to be in your Command ACL to run this module properly:

list
llist


I checked up my webui-admin profile, it containes the lines from 18.2.5 documentation....
(0003233)
alex-dvv   
2019-02-01 07:25   
It doesn't work anyway; here's the config:
Console {
  Name = admin
  Password = ******
  Profile = webui-admin
  TLS Enable = No
}
(0003234)
alex-dvv   
2019-02-01 07:31   
I got in!!! But the error is the same: Sorry, it seems you are not authorized to run this module. If you think this is an error, please contact your local administrator.
(0003238)
alex-dvv   
2019-02-01 12:44   
It worked! I do not really like it!
(0003240)
teka74   
2019-02-01 23:05   
alex, it is working on your server? what did you change?
(0003241)
teka74   
2019-02-01 23:59   
uhh, just looked at my system, and webui is now working without any changes! nice self-healing!!
(0003244)
murrdyn   
2019-02-04 18:27   
I have the blank page after login like teka74 did. It did self-heal eventually, but closing the webui window and trying to log back in brought the issue back.

httpd error log shows:
[Mon Feb 04 11:20:53.319812 2019] [:error] [pid 3414] [client x.x.x.x:52713] PHP Notice: Undefined variable: form in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login
[Mon Feb 04 11:20:53.319849 2019] [:error] [pid 3414] [client x.x.x.x:52713] PHP Fatal error: Call to a member function prepare() on a non-object in /usr/share/bareos-webui/module/Auth/view/auth/auth/login.phtml on line 45, referer: http://x.x.x.x/bareos-webui/auth/login

When it works correctly, those errors do not show up.
(0003245)
teka74   
2019-02-04 23:24   
agree with murrdyn, partially I can login in webui, sometimes blank window appears...
(0003254)
c0r3dump3d   
2019-02-07 16:07   
Hi, same problem in Centos 7.6.1810 fresh install bareos-dir version 18.2.5:

[Thu Feb 07 15:27:29.244019 2019] [:error] [pid 25046] [client 10.141.1.90:37769] admin, referer: http://bareosdir00.mgmt/bareos-webui/
[Thu Feb 07 15:27:29.244068 2019] [:error] [pid 25046] [client 10.141.1.90:37769] console_name: admin, referer: http://bareosdir00.mgmt/bareos-webui/
[Thu Feb 07 15:27:29.245627 2019] [:error] [pid 25046] [client 10.141.1.90:37769] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://bareosdir00.mgmt/bareos-webui/
(0003257)
IvanBayan   
2019-02-12 14:02   
I have a similar problem; if I try to log in, I get the following message:
(0003258)
c0r3dump3d   
2019-02-13 10:57   
This issue seems to be the same as issue 0001033 (https://bugs.bareos.org/view.php?id=1033), which was closed and resolved in version 18.2.4rc2-76.1.

In my CentOS 7.6 installations, with PHP version 7.2.15 and Bareos 18.2.5, the error persists ...

[Wed Feb 13 10:35:08.142474 2019] [php7:notice] [pid 9012] [client 10.141.1.90:21152] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer
in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219

I'm sure that the credential are correct.
(0003259)
xyros   
2019-02-13 19:00   
A possibly helpful observation I have made concerning this bug:

Typically, if you remain logged in and your session expires by the time you try to perform an action, you have to log back in. This is when you encounter this bug.

Following a long idle period, if you avoid performing any action, so as to avoid being notified that your session has expired, and instead click your username and properly logout from the drop-down, you can log back in successfully without triggering this bug.

In fact, I have found that if I always deliberately logout, such that I avoid triggering the session expiry notice, I can always successfully login on the next attempt.

I have not tested a scenario of closing all browser windows then trying to login. But so far it seems that deliberately logging out -- even after session expiry (but without doing anything to trigger a session expiry notification) -- avoids triggering this bug.

Hope that helps with figuring out where the bug resides.
(0003261)
c0r3dump3d   
2019-02-14 09:46   
In my case the error occurs in a fresh install and I have no previous sessions ...; I have also tested on a fresh Debian 9 install and I see the same problem!!
(0003263)
c0r3dump3d   
2019-02-15 10:32   
In a similar issue that previously happened in the Docker version of Bareos, jukito showed me that putting the option TLS Enable = No in the /etc/bareos/bareos-dir.d/console/admin.conf file:

Console {
  Name = admin
  Password = *****
  Profile = webui-admin
  TLS Enable = No
}

corrects the problem.
(0003264)
c0r3dump3d   
2019-02-15 10:35   
Sorry, I've forgotten to include the link for bareos docker same issue:
https://github.com/barcus/bareos/issues/24
(0003291)
gslongo   
2019-03-14 10:31   
Hi,

The error remains even with the "TLS Enable = No" setting here

PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://XX/bareos-webui/


[root@baloo bareos]# grep -rn webui
bareos-dir.d/console/admin.conf:2:# Restricted console used by bareos-webui
bareos-dir.d/console/admin.conf:7: Profile = "webui-admin"
bareos-dir.d/profile/webui-admin.conf:2:# bareos-webui webui-admin profile resource
bareos-dir.d/profile/webui-admin.conf:5: Name = "webui-admin"
bareos-dir.d/profile/webui-admin.conf:18:# bareos-webui default profile resource
bareos-dir.d/profile/webui-admin.conf:21: Name = webui

[root@baloo bareos]# cat bareos-dir.d/console/admin.conf
#
# Restricted console used by bareos-webui
#
Console {
  Name = admin
  Password = "********"
  Profile = "webui-admin"


  # As php does not support TLS-PSK,
  # and the director has TLS enabled by default,
  # we need to either disable TLS or setup
  # TLS with certificates.
  #
  # For testing purposes we disable it here
  TLS Enable = No
}


[root@baloo bareos]# cat bareos-dir.d/profile/webui-admin.conf
#
# bareos-webui webui-admin profile resource
#
Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

#
# bareos-webui default profile resource
#
Profile {
  Name = webui
  CommandACL = status, messages, show, version, run, rerun, cancel, .api, .bvfs_*, list, llist, use, restore, .jobs, .filesets, .clients
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}


Any idea ?

Thank you !
(0003343)
Schroeffu   
2019-04-23 15:14   
I have had the same issue after upgrading from 17.2.4 to 18.2.5, and the best result for me was editing these two files:

/etc/bareos-webui/directors.ini
tls_verify_peer = false
server_can_do_tls = true # it was false before
server_requires_tls = false
client_can_do_tls = false (with true, login in webui is not possible for me)

and in /etc/bareos/bareos-dir.d/console/admin.conf add
TLS Enable = No

Now WebUI login works (running on localhost, so ignoring TLS is OK for me), plus all backup-fd 18.2.5 clients are reachable via TLS-PSK according to the log (WebUI > Clients > Status icon > the first 2 lines in the log window say 'Handshake: Immediate TLS, Encryption: ECDHE-PSK-CHACHA20-POLY1305').

with "client_can_do_tls" i have a similar php error, this one:

Exception
File:
/usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php:542
Message:
Error in TLS handshake
Stack trace:
#0 /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php(101): Bareos\BSock\BareosBSock->connect()
0000001 /usr/share/bareos-webui/module/Auth/src/Auth/Controller/AuthController.php(93): Bareos\BSock\BareosBSock->connect_and_authenticate()
0000002 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Auth\Controller\AuthController->loginAction()
0000003 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch(Object(Zend\Mvc\MvcEvent))
0000004 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent))
0000005 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
0000006 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
0000007 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch(Object(Zend\Http\PhpEnvironment\Request), Object(Zend\Http\PhpEnvironment\Response))
0000008 [internal function]: Zend\Mvc\DispatchListener->onDispatch(Object(Zend\Mvc\MvcEvent))
0000009 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent))
0000010 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
0000011 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
0000012 /usr/share/bareos-webui/public/index.php(24): Zend\Mvc\Application->run()
0000013 {main}
(0003380)
gslongo   
2019-05-23 16:15   
Hi,

Even with the fix you suggest, we still have the same issue, though not at the same line in the code:


[Thu May 23 16:13:23.853505 2019] [:error] [pid 16836] [client 172.16.38.128:42814] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: http://baloo/bareos-webui/auth/login?req=/bareos-webui/dashboard/
(0003556)
gslongo   
2019-08-06 09:07   
Hi,

Any update on this issue ?

Thank you
(0003557)
gslongo   
2019-08-06 13:21   
Additional information when setting : setdebug level=200 trace=1 dir


bareos-dir (100): lib/bsock.cc:81-0 Construct BareosSocket
bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
bareos-dir (200): lib/bsock.cc:631-0 Identified from Bareos handshake: webui-admin-R_CONSOLE recognized version: 18.2
bareos-dir (100): lib/parse_conf.cc:1056-0 Could not find foreign tls resource: R_CONSOLE-webui-admin
bareos-dir (100): lib/parse_conf.cc:1076-0 Could not find foreign tls resource: R_CONSOLE-webui-admin
bareos-dir (200): lib/try_tls_handshake_as_a_server.cc:54-0 Could not read out cleartext configuration
bareos-dir (100): lib/bsock.cc:129-0 Destruct BareosSocket
bareos-dir (100): include/jcr.h:324-0 Destruct JobControlRecord
(0003648)
bozonius   
2019-12-07 17:08   
I know this is old, but in case someone goes looking for a solution to this in the future: I found that restarting the web server(!) cleared the bug. I was able to log in normally without any config changes.
(0003676)
krelac   
2019-12-17 08:16   
Hi,

still the same issue - upon logging in a blank page appears, and after a page refresh there is a "Sorry, it seems you are not authorized to run this module. If you think this is an error, please contact your local administrator." message. The admin user cannot access anything in the web GUI.

bareos.x86_64 18.2.5-139.1.el7
bareos-webui.noarch 18.2.5-126.1.el7

reinstallation of bareos-webui and following documentation from doc.bareos.org did not help.

View Issue Details
ID: 1161
Category: [bareos-core] director
Severity: minor
Reproducibility: always
Date Submitted: 2019-12-15 22:43
Last Update: 2019-12-15 22:43
Reporter: embareossed Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: JobId 0: Prior failed job found in catalog. Upgrading to Differential.
Description: Differential backups always produce this message. It is confusing on two counts: 1) scheduled differential backups should not generate it, since the backup is already scheduled to be differential, and 2) the message does not indicate which backup is being (falsely) "upgraded." Since I have 5 differential jobs every week (except once a month, when full backups are run), there are exactly 5 of these messages, so apparently one such message is generated for each differential job.
Tags:
Steps To Reproduce: Schedule differential jobs. One such message will be generated for each job.

NB: My backup schedule is identical for all backup jobs. I have it set to perform monthly full backups, then differentials once a week, and incrementals all other days.
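For reference, the cycle described above (monthly Full, weekly Differential, Incremental on the other days) would look roughly like this as a Bareos Schedule resource. This is a sketch following the documentation's model; the exact days, times, and resource name are assumptions:

```
Schedule {
  Name = "WeeklyCycle"
  # Full backup on the first Sunday of the month
  Run = Full 1st sun at 23:05
  # Differential on the remaining Sundays
  Run = Differential 2nd-5th sun at 23:05
  # Incremental on all other days
  Run = Incremental mon-sat at 23:05
}
```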
Additional Information: I am simply following the model offered in the documentation, so I assumed that this was a valid, workable approach.

These messages show up in separate emails from the backup emails.
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1070
Category: [bareos-core] storage daemon
Severity: major
Reproducibility: always
Date Submitted: 2019-03-27 12:31
Last Update: 2019-12-12 13:29
Reporter: guidoilbaldo Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7  
Status: assigned Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: DIR lost connection with SD - Broken pipe
Description: Hello,
I recently installed Bareos on a single machine (DIR + SD) and set up the droplet backend to send data to AWS S3.
The configuration looks fine, as bconsole reports the storage status is OK:
```
Device status:

Device "AWS_S3_Object_Storage" (AWS S3 Storage) is not open.
Backend connection is working.
Inflight chunks: 0
No pending IO flush requests.
==
====
```

However, when launching a FULL backup job, after 15 minutes it fails with the following error:
```
27-Mar 11:03 rabbit-1-fd JobId 3: Error: lib/bsock_tcp.cc:417 Wrote 9611 bytes to Storage daemon:172.17.0.60:9103, but only 0 accepted.
27-Mar 11:03 rabbit-1-fd JobId 3: Fatal error: filed/backup.cc:1033 Network send error to SD. ERR=Connection timed out
27-Mar 11:03 backup-1 JobId 3: Fatal error: Director's comm line to SD dropped.
27-Mar 11:03 backup-1 JobId 3: Error: Bareos backup-1 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 3
  Job: backup-rabbit-1.2019-03-27_10.34.06_14
  Backup Level: Full
  Client: "rabbit-1-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) ,CentOS_7,x86_64
  FileSet: "RabbitMQFileSet" 2019-03-27 09:42:15
  Pool: "Full" (From command line)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "S3_Object" (From Job resource)
  Scheduled time: 27-Mar-2019 10:34:06
  Start time: 27-Mar-2019 10:48:17
  End time: 27-Mar-2019 11:03:51
  Elapsed time: 15 mins 34 secs
  Priority: 10
  FD Files Written: 24
  SD Files Written: 0
  FD Bytes Written: 95,320 (95.32 KB)
  SD Bytes Written: 691 (691 B)
  Rate: 0.1 KB/s
  Software Compression: 24.3 % (lz4)
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 2
  Volume Session Time: 1553682684
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Backup Error ***
```

The connection between the FD on the backup client and the Bareos server looks fine; I can telnet to the DIR on port 9101 and the SD on port 9103 correctly.
Tags: aws, s3, droplet, storage
Steps To Reproduce: Install bareos stack 18.2.5 and try to perform a FULL backup job, sending data to S3
Additional Information: DIR config:

Client {
  Name = rabbit-1-fd
  Address = 172.17.0.170
  Password = ****
  Heartbeat Interval = 60
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes
}

Job {
  Name = "backup-rabbit-1"
  Client = rabbit-1-fd
  JobDefs = "RabbitMQJobDef"
}

JobDefs {
  Name = "RabbitMQJobDef"
  Type = Backup
  Level = Incremental

  Client = bareos-fd

  FileSet = "RabbitMQFileSet"
  Schedule = "RabbitMQCycle"
  Storage = S3_Object

  Messages = Standard

  Pool = Incremental

  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = Full
  Differential Backup Pool = Differential
  Incremental Backup Pool = Incremental
}

Storage {
  Name = S3_Object
  Address = 172.17.0.60
  Password = ****
  Device = AWS_S3_Object_Storage
  Media Type = S3_Object1
  Heartbeat Interval = 60
}

SD config:

Device {
  Name = "AWS_S3_Object_Storage"
  Media Type = "S3_Object1"
  Archive Device = "AWS S3 Storage"
  Device Type = droplet
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.droplet.profile,bucket=bucket,location=eu-central-1,chunksize=100M,iothreads=10,retries=0"
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 10
  Maximum File Size = 500M
  Maximum Spool Size = 15000M
}

Droplet profile (aws.droplet.profile):

use_https = true
host = s3.eu-central-1.amazonaws.com
access_key = <access_key>
secret_key = <secret_key>
pricing_dir = ""
backend = s3
aws_region = eu-central-1
aws_auth_sign_version = 4
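The Device Options line in the SD config above packs all droplet settings into one comma-separated key=value string. As an illustration of that format (plain Python; `parse_device_options` is a hypothetical helper written for this note, not part of Bareos):

```python
def parse_device_options(options):
    """Split a comma-separated key=value option string,
    as seen in the droplet Device Options line, into a dict."""
    result = {}
    for item in options.split(","):
        # partition() keeps any '=' inside the value intact
        key, _, value = item.partition("=")
        result[key.strip()] = value.strip()
    return result

opts = parse_device_options(
    "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.droplet.profile,"
    "bucket=bucket,location=eu-central-1,chunksize=100M,iothreads=10,retries=0"
)
```

With the options from this report, `opts["chunksize"]` is `"100M"` and `opts["retries"]` is `"0"`; note that `retries=0` means the backend gives up on the first failed chunk upload.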
System Description
Attached Files: backup-1.trace (27,106 bytes) 2019-03-27 16:02
https://bugs.bareos.org/file_download.php?file_id=357&type=bug
Notes
(0003301)
arogge   
2019-03-27 12:43   
Can you please enable tracing of the sd (maybe level 200) and reproduce this?
(0003302)
guidoilbaldo   
2019-03-27 15:22   
Sure, could you help me with that or point me somewhere in the docs where I can find how?
(0003303)
guidoilbaldo   
2019-03-27 15:26   
OK, I found where to look.

I launched the job again and will keep you posted; in the meantime, this is the device status:

Device status:

Device "AWS_S3_Object_Storage" (AWS S3 Storage) is mounted with:
    Volume: Full-0001
    Pool: Full
    Media type: S3_Object1
Backend connection is working.
Inflight chunks: 0
No pending IO flush requests.
Configured device capabilities:
  EOF BSR BSF FSR FSF EOM !REM RACCESS AUTOMOUNT LABEL !ANONVOLS !ALWAYSOPEN
Device state:
  OPENED !TAPE LABEL !MALLOC APPEND !READ EOT !WEOT !EOF !NEXTVOL !SHORT MOUNTED
  num_writers=1 reserves=0 block=0
Attached Jobs: 4
Device parameters:
  Archive name: AWS S3 Storage Device name: AWS_S3_Object_Storage
  File=0 block=64712
  Min block=64512 Max block=64512
    Total Bytes=64,712 Blocks=0 Bytes/block=64,712
    Positioned at File=0 Block=64,712
(0003304)
guidoilbaldo   
2019-03-27 16:02   
This time the error was "Connection timed out". I attached the trace file for the SD, and here I paste the Bareos log for completeness:

27-Mar 14:45 backup-1 JobId 6: Using Device "AWS_S3_Object_Storage" to write.
27-Mar 14:45 backup-1 JobId 6: Probing client protocol... (result will be saved until config reload)
27-Mar 14:45 backup-1 JobId 6: Connected Client: rabbit-1-fd at 172.17.0.170:9102, encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 backup-1 JobId 6: Handshake: Immediate TLS 27-Mar 14:45 backup-1 JobId 6: Encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 rabbit-1-fd JobId 6: Connected Storage daemon at 172.17.0.60:9103, encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 backup-1: info: src/droplet.c:127: dpl_init: PRNG has been seeded with enough data
27-Mar 14:45 rabbit-1-fd JobId 6: Extended attribute support is enabled
27-Mar 14:45 rabbit-1-fd JobId 6: ACL support is enabled
27-Mar 14:45 backup-1 JobId 6: Volume "Full-0001" previously written, moving to end of data.
27-Mar 14:45 backup-1 JobId 6: Ready to append to end of Volume "Full-0001" size=64712
27-Mar 15:00 rabbit-1-fd JobId 6: Error: lib/bsock_tcp.cc:417 Wrote 22084 bytes to Storage daemon:172.17.0.60:9103, but only 0 accepted.
27-Mar 15:00 rabbit-1-fd JobId 6: Fatal error: filed/backup.cc:1033 Network send error to SD. ERR=Connection timed out
27-Mar 15:00 backup-1 JobId 6: Fatal error: Director's comm line to SD dropped.
27-Mar 15:00 backup-1 JobId 6: Error: Bareos backup-1 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 6
  Job: backup-rabbit-1.2019-03-27_14.44.59_07
  Backup Level: Full
  Client: "rabbit-1-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) ,CentOS_7,x86_64
  FileSet: "RabbitMQFileSet" 2019-03-27 09:42:15
  Pool: "Full" (From command line)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "S3_Object" (From Job resource)
  Scheduled time: 27-Mar-2019 14:44:59
  Start time: 27-Mar-2019 14:45:01
  End time: 27-Mar-2019 15:00:29
  Elapsed time: 15 mins 28 secs
  Priority: 10
  FD Files Written: 27
  SD Files Written: 0
  FD Bytes Written: 164,032 (164.0 KB)
  SD Bytes Written: 1,852 (1.852 KB)
  Rate: 0.2 KB/s
  Software Compression: 27.4 % (lz4)
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 1
  Volume Session Time: 1553697872
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Backup Error ***
(0003305)
arogge   
2019-03-28 07:53   
According to the trace, your director told the sd to cancel the job:

backup-1 (110): stored/socket_server.cc:97-0 Conn: Hello Director backup-1 calling
backup-1 (110): stored/socket_server.cc:115-0 Got a DIR connection at 27-Mar-2019 15:00:29
backup-1 (100): include/jcr.h:320-0 Construct JobControlRecord
backup-1 (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <461601013.1553698829@backup-1> ssl=1
backup-1 (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <950524310.1553698829@backup-1> ssl=1
backup-1 (99): lib/cram_md5.cc:135-0 sending resp to challenge: XVkIXSZZLjpi2/+sNi/X8A
backup-1 (90): stored/dir_cmd.cc:289-0 Message channel init completed.
backup-1 (199): stored/dir_cmd.cc:300-0 <dird: cancel Job=backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (200): stored/dir_cmd.cc:318-0 Do command: cancel

Do you have any idea why this might have happened?
(0003306)
guidoilbaldo   
2019-03-29 08:56   
No, unfortunately not. I just launched the job from bareos-webui and left it to complete.
Do you think it might be some misconfiguration in DIR or SD configs I posted in the "additional information" part?
(0003308)
guidoilbaldo   
2019-04-01 17:31   
Hi,
if you have any hint as to why our configuration is not working, I would like to know, because I'm in the process of deciding whether to stick with Bareos or to find a possible alternative.
Thank you very much,
Stefano
(0003316)
guidoilbaldo   
2019-04-10 15:23   
Is it possible to get an update on this?
(0003317)
arogge   
2019-04-10 16:37   
Sorry, I have really no idea.
Maybe you can take this to the mailing list and somebody can help?

View Issue Details
ID: 1151
Category: [bareos-core] webui
Severity: feature
Reproducibility: always
Date Submitted: 2019-12-12 09:25
Last Update: 2019-12-12 09:26
Reporter: DanielB Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos webui does not show the inchanger flag for volumes
Description: The Bareos webui does not show the inchanger flag for tape volumes, although the flag is visible in bconsole.
The flag should be shown as an additional column to help with volume management on tape changers.
Tags: volume, webui
Steps To Reproduce: Log into the webgui.
Select Storage -> Volumes
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1014
Category: [bareos-core] webui
Severity: minor
Reproducibility: always
Date Submitted: 2018-09-30 15:36
Last Update: 2019-12-12 09:15
Reporter: progserega Platform: amd64  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9.4  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: web UI does not show statistics for a running job, but bconsole does
Description: The web UI does not show statistics for a running job, while bconsole does.
Tags:
Steps To Reproduce: 1. Start a job
2. Look at the web UI statistics for the currently running job: no statistics are shown, and the web UI says it has not collected enough data
3. At the same time, open bconsole and get the status of that client ("status client=client_name"): statistics (bytes/sec, files, etc.) are shown and change after each run
4. After some time, the statistics window in the webui shows: "Error fetching data."
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1139
Category: [bareos-core] director
Severity: crash
Reproducibility: always
Date Submitted: 2019-11-18 17:35
Last Update: 2019-12-10 15:40
Reporter: jason.agilitypr Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: high OS Version: 16.04  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Upgrading mysql from 5.7.27 to 5.7.28 causes director to segfault
Description: I patched my server this morning. The upgrade included two MySQL packages, after which the Bareos director would not start and segfaulted.

patch details
2019-11-18 08:37:04 upgrade mysql-common:all 5.7.27-0ubuntu0.16.04.1 -> 5.7.28-0ubuntu0.16.04.2
2019-11-18 08:37:04 upgrade libmysqlclient20:amd64 5.7.27-0ubuntu0.16.04.1 -> 5.7.28-0ubuntu0.16.04.2

bareos-dir -u bareos -g bareos -t -v -d 200
--------
bareos-dir (200): lib/runscript.cc:339-0 --> RunOnSuccess=1
bareos-dir (200): lib/runscript.cc:340-0 --> RunOnFailure=0
bareos-dir (200): lib/runscript.cc:341-0 --> FailJobOnError=0
bareos-dir (200): lib/runscript.cc:342-0 --> RunWhen=1
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (200): dird/dird_conf.cc:1295-0 Don't know how to propagate resource RunScript of configtype 60
bareos-dir (100): dird/dird.cc:392-0 backend path: /usr/lib/bareos/backends
bareos-dir (150): dird/dir_plugins.cc:302-0 Load dir plugins
bareos-dir (150): dird/dir_plugins.cc:304-0 No dir plugin dir!
bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename dbi, partly_compare = true
bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename mysql, partly_compare = false
bareos-dir (100): cats/cats_backends.cc:180-0 db_init_database: testing backend /usr/lib/bareos/backends/libbareoscats-mysql.so
bareos-dir (100): cats/cats_backends.cc:254-0 db_init_database: loaded backend /usr/lib/bareos/backends/libbareoscats-mysql.so
bareos-dir (100): cats/mysql.cc:869-0 db_init_database first time
bareos-dir (50): cats/mysql.cc:181-0 mysql_init done
bareos-dir (50): cats/mysql.cc:205-0 mysql_real_connect done
bareos-dir (50): cats/mysql.cc:207-0 db_user=bareos db_name=bareos db_password=*****************
bareos-dir (100): cats/mysql.cc:230-0 opendb ref=1 connected=1 db=216fce0
bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name sql_get_max_connections (45)
bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SHOW VARIABLES LIKE 'max_connections'
bareos-dir (100): cats/mysql.cc:252-0 closedb ref=0 connected=1 db=216fce0
bareos-dir (100): cats/mysql.cc:259-0 close db=216fce0
BAREOS interrupted by signal 11: Segmentation violation
Segmentation fault
Tags:
Steps To Reproduce: upgrade to the latest release version of the MySQL libraries.
Additional Information:
System Description
Attached Files:
Notes
(0003638)
jason.agilitypr   
2019-11-22 21:09   
This issue is reproducible every time: upgrading to 5.7.28 causes the service to stop working again.
(0003639)
jason.agilitypr   
2019-11-22 21:11   
Some action on this ticket would be welcome, as a standard Linux system update should not cause packages (bareos-dir) to segfault. Not being able to update to 5.7.28 is also leaving my server vulnerable, missing all the security fixes that are included in 5.7.28.
(0003646)
thomasDOTwtf   
2019-12-04 12:35   
I've experienced the same issue with the latest Ubuntu 16.04 patches.
Did a release upgrade to 18.04 and the issue went away.
(0003657)
teka74   
2019-12-10 15:40   
Same issue here: Ubuntu 16.04 installed security updates, and now bareos-dir can't start.

root@backup:~# systemctl status bareos-director.service
● bareos-director.service - Bareos Director Daemon service
   Loaded: loaded (/lib/systemd/system/bareos-director.service; enabled; vendor preset: enabled)
   Active: failed (Result: core-dump) since Di 2019-12-10 15:17:30 CET; 3min 23s ago
     Docs: man:bareos-dir(8)
  Process: 3421 ExecStartPre=/usr/sbin/bareos-dir -t -f (code=dumped, signal=SEGV)

Dez 10 15:17:30 backup systemd[1]: Starting Bareos Director Daemon service...
Dez 10 15:17:30 backup bareos-dir[3421]: BAREOS interrupted by signal 11: Segmentation violation
Dez 10 15:17:30 backup bareos-dir[3421]: BAREOS interrupted by signal 11: Segmentation violation
Dez 10 15:17:30 backup systemd[1]: bareos-director.service: Control process exited, code=dumped status=11
Dez 10 15:17:30 backup systemd[1]: Failed to start Bareos Director Daemon service.
Dez 10 15:17:30 backup systemd[1]: bareos-director.service: Unit entered failed state.
Dez 10 15:17:30 backup systemd[1]: bareos-director.service: Failed with result 'core-dump'.

View Issue Details
ID: 1148
Category: [bareos-core] director
Severity: minor
Reproducibility: always
Date Submitted: 2019-12-07 17:21
Last Update: 2019-12-07 17:21
Reporter: bozonius Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: How to cancel "waiting" job (status = "Created")
Description: There really needs to be a clean way to cancel jobs no matter what state they are in. I have a job that failed because it could not contact the client file daemon. I also had another job having an unrelated issue that required restarting the bareos storage daemon. (The problem there was the packet size too big; I had set the network buffer size too large.)

Since it was the only job I wanted to cancel, I tried using "cancel all status=waiting" and "cancel all status=created" (I tried the latter since the client status indicated the job was in the "Created" state; I also tried "cancel all status=Created" thinking maybe the case could matter in some contexts). None of these worked. I even tried "cancel all yes" to no avail; every one of these came back with "No jobs running."
Tags:
Steps To Reproduce: Terminate and restart the storage daemon while a job is in this "created" state. The job cannot be terminated.
Additional Information: The director and storage daemons are running on the same server. In this case, the client was on a different system.
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 937
Category: [bareos-core] file daemon
Severity: minor
Reproducibility: always
Date Submitted: 2018-04-08 02:50
Last Update: 2019-12-05 19:05
Reporter: bozonius Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: low OS Version: 14.04  
Status: new Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: estimate causes error if time on filedaemon and director are out of sync
Description: If the bareos-dir system and the bareos-fd system are not in time sync (by more than hardcoded 3 second threshold), the file daemon will generate an error message like "JobId 0: Error: getmsg.c:212 Malformed message: DIR and FD clocks differ by -35 seconds, FD automatically compensating" where the call is

Jmsg(jcr, type, 0, _("DIR and FD clocks differ by %lld seconds, FD automatically compensating.\n"), adj);

in the file daemon from about line 1565 in dir_cmd.cc. The problem is that the director code handles this message as an exception case but cannot parse it at about line 212 in bget_dirmsg() in the director code. In my configuration, it generates an email and a log entry.

In some ways, it is helpful to get these alerts in emails, but the message could be handled more consistently like others the director expects, and it would be nice if the client were identified. Otherwise, it is guesswork figuring out which client is out of time sync. For me, there are only a few clients, but a data center might have a greater challenge. Of course, I'd expect a data center to have this sort of business much better under control...

This is low priority and mainly a nuisance. It should not be too difficult to either 1) change the calling code (level_cmd in dir_cmd.cc) to issue an error message in a format that is parsable by the catchall case in the director, or 2) add a specific case handler for time sync problem in the director code (bget_dirmsg in getmsg.cc). It would be very helpful to include the client name or backup name in the message to aid in resolving such an issue.
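Suggestion 2) could, for example, emit a message that always carries the client name. A rough sketch of the intended behavior (illustrative Python, not the actual C++ from dir_cmd.cc; the exact message wording is an assumption):

```python
def clock_skew_message(client_name, adj_seconds, threshold=3):
    """Only warn when the skew exceeds the hardcoded 3-second
    threshold, and name the client so the out-of-sync host can
    be identified without guesswork."""
    if abs(adj_seconds) <= threshold:
        return None
    return ("Client %s: DIR and FD clocks differ by %d seconds, "
            "FD automatically compensating." % (client_name, adj_seconds))
```

A message in this shape would tell the operator directly which of several clients drifted, instead of leaving a "JobId 0" message with no client attribution.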
Tags:
Steps To Reproduce: In bconsole, run

echo 'estimate accurate=yes level=incremental job=backup_name' | bconsole

where client "backup_name" is a system whose time is out of sync with the director by more than 3 seconds. See resulting bareos.log for the message (assuming Daemon Messages are configured to log errors).
Additional Information:
System Description
Attached Files:
Notes
(0002975)
aron_s   
2018-04-27 13:26   
I could not reproduce this problem. In my test environment, the Director and storage daemon both run virtualized, and the filedaemon runs on an OS X system.
Time was set 30 seconds off, yet the estimation did not return any errors.
(0002979)
joergs   
2018-04-27 14:56   
A quick look at the code indicates, that this warning is only produced if level=since, not level=incremental. Are you sure it happens on your system with level=incremental?
(0002981)
bozonius   
2018-04-27 18:28   
@joergs: level can be incremental or full. I've not tried other variations.

@aron_s: maybe the code has been corrected since the release I am using (you did not mention yours)? I did not do version comparisons of source code.
(0002982)
joergs   
2018-04-27 18:47   
I checked against 16.2.7 and now also verified that the code has not changed between 16.2.4 and 16.2.7.

Setting the filed to debug level 10 (very much output!) will contain the line "level_cmd: ...". It must be since_utime in your case, maybe because of accurate.

Debug level can be changed during runtime using
setdebug client=<NAME> level=10 trace=1

Result will be written to a trace file.
(0002983)
bozonius   
2018-04-28 02:20   
@joergs: I just set one of the clients to a time about 10 mins earlier than the director's (and storage's, since they are on the same system). I re-ran my experiment, just as before, and I get the same result as I reported originally.

I tried the setdebug as you described. The audit log indicates that the command was received; no error there. So I am assuming that debugging is set, yet the main bareos log does not show any new messages other than the usual ones.

The client I tested against is running bareos-fd 14.2.1, which is a bit older, and may explain why you don't see the same problem. And also maybe why the debugging doesn't kick in.

I have upgraded the client as much as I can using the repos, short of building the client myself. I'll see if anti-X might have newer package updates for the client.

The other clients are also at various levels of the client (filedaemon).
(0002984)
bozonius   
2018-04-28 02:26   
Would running bareos in compatibility mode help to eliminate these messages, or would that just introduce more problems?
(0002985)
joergs   
2018-04-28 12:40   
Debug is not written to the main bareos.log file, but to a local trace file. Issuing the setdebug command tells you where to find this file.

Compatibility mode should have no impact on this.

So the error did only occur with old bareos filedaemons? Sorry, but the version should be at least 16.2, better 7.2 or master (experimental, nightly) to be relevant for us. What platform are the clients running on?
(0002986)
bozonius   
2018-04-28 13:41   
@joergs: "Local?" to what? Local to the system where the setdebug was run, or local to where the filedaemon is running? Or local to where the director is running? I just ran setdebug again, and it did not indicate where the output was going.

Could not parse: at least 16.2, better 7.2 -- what does this mean?

Anti-X repo version is 14.2.1. There is no update available. However, keep in mind that other clients have seen this error, and they are running other versions of the file daemon software (depending on the distro). There's a couple of debian variants, and an arch-based system.
(0002987)
joergs   
2018-04-28 15:24   
Local: local on the system that runs the daemon. In case of "setdebug client=>NAME>" it is on the client.

at least 16.2, better 7.2 => at least 16.2, better 17.2
(0002988)
bozonius   
2018-04-28 18:56   
(Last edited: 2018-04-28 19:07)
OK, so I have setdebug as requested, found the trace file (it was not obvious to me where "current directory" might be), and yes, found since_utime= entries in the output.

But I assure you I am running these backups ONLY as full or incremental. I do not use any others (even differential). Here is an extract from the trace file:

$ grep level_cmd /shared/antix-fd.trace
antix-fd: dir_cmd.c:1228-0 level_cmd: level = accurate_incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830415 mtime_only=0 prev_job=antix-sys.2018-04-27_05.00.00_10
antix-fd: dir_cmd.c:1228-0 level_cmd: level = accurate_incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830415 mtime_only=0 prev_job=antix-sys.2018-04-27_05.00.00_10
antix-fd: dir_cmd.c:1228-0 level_cmd: level = incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830415 mtime_only=0 prev_job=antix-sys.2018-04-27_05.00.00_10
antix-fd: dir_cmd.c:1228-0 level_cmd: level = accurate_incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830438 mtime_only=0 prev_job=antix-user.2018-04-27_05.00.01_11
antix-fd: dir_cmd.c:1228-0 level_cmd: level = incremental mtime_only=0
antix-fd: dir_cmd.c:1228-0 level_cmd: level = since_utime 1524830438 mtime_only=0 prev_job=antix-user.2018-04-27_05.00.01_11

(0003009)
bozonius   
2018-05-22 00:39   
This error has surfaced again today. I have searched the logs in /var/log/bareos, /var/log/syslog, and my own logging, and cannot force an explanation out of any of them.

The strangest part is that the first of two jobs failed on just two VirtualBox VM filedaemons, while the second job ran fine on each of the two systems. These run smoothly every day, nearly forever (I'm not complaining, believe me). It's just that every so often I see this crop up, and I might not see it again for a long time.

I note that bareos tools do more audit logging in some tools than others. For instance, the audit log for today is relatively "quiet" for the jobs I run overnight, but the log gets noisy, logging every SQL query when I run BAT. I will see if I can get more "noise" from bconsole.

The director (dir 16.2.4) and storage (sd 16.2.4) daemons are still running on Ubuntu 14.04. The clients affected today were Devuan Jessie (fd 14.2.1) and Anti-X (fd 14.2.1). As I mentioned, these backups run without serious* issues over 99% of the time, which makes this an intermittent problem, and thus one that is hard to nail down the source of. Increasing the verbosity of the tools to the logs will be helpful, and I will look into this.

(* sometimes clocks are off, but this presents no serious problem; the jobs do not fail in those instances)
(0003021)
bozonius   
2018-05-30 09:07   
I have debug logging running on the director (I may add logging for other daemons later if needed). Log level is set to 200.

It would be helpful to have a listing of debug levels. I want to be sure to gather enough debugging info to pin down problems when they come up, but not so much that it will eat disk space unnecessarily.

Now I wait for the next time there is a problem and see if there is adequate log output to analyze it.
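The debug level can also be changed at runtime without restarting the daemons, via bconsole's setdebug command; a minimal sketch (the daemon selector and level here are examples, not taken from this report):

```shell
#!/bin/bash
# Build the setdebug command; trace=1 additionally writes a .trace file
# in the daemon's working directory.
cmd="setdebug level=200 trace=1 dir"

# In production this line would be piped into bconsole:
#   echo "$cmd" | bconsole
echo "$cmd"
```

The same selector syntax accepts client=<name>-fd or storage=<name>-sd to raise the level on a file or storage daemon instead.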
(0003023)
bozonius   
2018-06-03 04:25   
I have now enabled debugging on calls to bconsole for one backup which fails as described. This might be helpful, too, since the failure occurs while bconsole is running.
(0003024)
bozonius   
2018-06-04 01:31   
I have now enabled debugging on all jobs.
(0003025)
bozonius   
2018-06-04 01:34   
I just realized this wasn't the bug I was trying to address with the debug statements (though they will perhaps help). The other problem I have is with backups that fail outright in bconsole (rather than just issuing a warning) when requesting an estimate.

At any rate, maybe this additional logging will help going forward for any sort of issue that arises. The logging does not add a significant amount of disk usage at level 200.
(0003026)
aron_s   
2018-06-04 13:41   
So I reproduced this again and got the same warning, but this time with the name of the file daemon as the source of the warning, so identifying the out-of-sync client is doable. The job itself also runs perfectly fine.
I get the following output when running a job on a client with time difference > 60s:

04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Start Backup JobId 10, Job=macbook-log-backup.2018-06-04_13.35.08_12
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Created new Volume "Incremental-0002" in catalog.
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Using Device "FileStorage" to write.
04-Jun 13:32 macbook-fd JobId 10: DIR and FD clocks differ by -180 seconds, FD automatically compensating.
04-Jun 13:32 macbook-fd JobId 10: Warning: LZ4 compression support requested in fileset but not available on this platform. Disabling ...
04-Jun 13:35 ubuntu1604-VirtualBox-sd JobId 10: Labeled new Volume "Incremental-0002" on device "FileStorage" (/var/lib/bareos/storage).
04-Jun 13:35 ubuntu1604-VirtualBox-sd JobId 10: Wrote label to prelabeled Volume "Incremental-0002" on device "FileStorage" (/var/lib/bareos/storage)
04-Jun 13:35 ubuntu1604-VirtualBox-sd JobId 10: Elapsed time=00:00:01, Transfer rate=19.44 K Bytes/second
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: sql_create.c:872 Insert of attributes batch table done
04-Jun 13:35 ubuntu1604-VirtualBox-dir JobId 10: Bareos ubuntu1604-VirtualBox-dir 17.2.4 (21Sep17):
  Build OS: i686-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 10
  Job: macbook-log-backup.2018-06-04_13.35.08_12
  Backup Level: Incremental, since=2018-05-17 13:11:57
  Client: "macbook-fd" 17.2.4 (21Sep17) x86_64-apple-darwin17.4.0,osx,17.4.0
  FileSet: "MacbookLogFileset" 2018-05-17 13:00:50
  Pool: "Incremental" (From Job IncPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "ubuntu1604-VirtualBox-sd" (From Job resource)
  Scheduled time: 04-Jun-2018 13:35:06
  Start time: 04-Jun-2018 13:35:14
  End time: 04-Jun-2018 13:35:14
  Elapsed time: 0 secs
  Priority: 10
  FD Files Written: 7
  SD Files Written: 7
  FD Bytes Written: 18,515 (18.51 KB)
  SD Bytes Written: 19,448 (19.44 KB)
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s): Incremental-0002
  Volume Session Id: 1
  Volume Session Time: 1528110773
  Last Volume Bytes: 20,363 (20.36 KB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: Backup OK
(0003033)
bozonius   
2018-06-06 23:04   
(Last edited: 2018-06-06 23:13)
@aron_s: Thanks for trying this again. BTW, there are actually two problems here (maybe related, maybe not). The original issue was that I got an email message indicating times were out of sync between the client and director systems, but that message did not name the client. The actual run of the backup worked fine, just as you confirm. This problem is, ultimately, minor though annoying.

The more vexing problem is the other one. Many times, a call to bconsole for an estimate of backup size fails. I am now able to compare debug traces of normal (non-failing) job and failing job (debug level is 200). So far, my investigation reveals "bconsole (100): console.c:346-0 Got poll BNET_EOD" appears in the trace only once for a normal job, but twice for the failing job. This "once versus twice" behavior seems to be consistent in comparing failed to normal jobs.

I note it is possible the 2nd message, if it is indeed actually sent by the director, might be getting swallowed by my logging, perhaps as the result of my script closing the pipe before some final flush of messages, or something like that. I'll double-check the perl code I wrote, but I have been running this script for years now and it has always produced these intermittent results from the 'estimate' command. bconsole estimate is called from perl as a "Command Before Job".

In both scenarios, bconsole connects successfully with the director. I can see the 'estimate' request in the director's trace in both instances.

Is there a much better way to obtain the estimate? If I end up with some run-away job (meaning huge amounts of data have appeared since the last run), I don't want to end up eating gobs of backup disk space (and time!). One example of this is backing up a home directory where, say, a program has suddenly produced greatly increased amounts of data in its files. I have excluded a long list of files and directories in all backups, but every so often another one comes along that I had overlooked and that just happens to bollix the works. This call to bconsole for an estimate prior to the run is just an additional prevention.
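A guard of the kind described here can be sketched as a small shell wrapper, assuming bconsole's usual "2000 OK estimate files=... bytes=..." response line; the threshold, job name, and the embedded sample line are hypothetical:

```shell
#!/bin/bash
# Sample estimate response; in production it would come from:
#   echo "estimate job=antix-sys level=Incremental" | bconsole
sample='2000 OK estimate files=207 bytes=675,674,057'

max_bytes=$((50 * 1024 * 1024 * 1024))   # refuse to run above 50 GB

# Pull the byte count out of the response and strip the thousands separators.
bytes=$(printf '%s\n' "$sample" | sed -n 's/.*bytes=\([0-9,]*\).*/\1/p' | tr -d ,)

if (( ${bytes:-0} > max_bytes )); then
  echo "estimate too large: $bytes bytes" >&2
  exit 1
fi
echo "estimate OK: $bytes bytes"
```

Run as a "Command Before Job" with Fail Job On Error = yes, a nonzero exit here would stop the job before it writes anything.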

(0003645)
bozonius   
2019-12-03 17:24   
(Last edited: 2019-12-05 19:05)
I have (finally) gotten around to upgrading my backups. I am using bareos 18.2.5 on the host, and that comes from the bareos repo, not my distro's (Devuan). The clients are running bareos 16.2.4, 16.2.6, or 17.2.5; these come from the respective distros, but they and the director are all running on Linux.

This morning, I was greeted with messages that the file daemons were out of sync on 3 of the clients. Again, I am not certain which ones, because the messages indicate neither the client involved nor even the job id (which is always zero in these messages).

I think I'd like to change the title of this issue to something like "phantom error messages due to client clocks out of sync with director", which would be much clearer. I did not know, when I first filed this bug, what the nature of the problem was in the first place. These messages do not represent true errors as such, since the system adjusts itself anyway. But the messages are unhelpful as they stand; even if I were to investigate the time disparities, I won't get far, since I can't tell which of the clients is out of sync.

Is it possible to either eliminate these messages altogether (since the situations they represent seem to be self-healing), or to include either the job number or the client name (or both) in these messages? I'd be happy to go around fixing my time problems, but generally I do not witness significant time differences by the time I go to investigate. As far as I know, all my clients are sync'ing to one time server on my network.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
419 [bareos-core] director feature always 2015-02-01 13:48 2019-12-03 12:58
Reporter: stephand Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: acknowledged Product Version: 14.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action:
bareos-19.2: impact: yes
bareos-19.2: action:
bareos-18.2: impact: yes
bareos-18.2: action: none
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact: yes
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: admin job does not run script on client
Description: When an admin job (a job with Type = Admin) defines a RunScript with RunsOnClient = yes, the script is not executed.
Tags:
Steps To Reproduce: Define an admin job with the following parameters:

Job {
  Name = TestAdminJobOnClient
  JobDefs = DefaultJob
  Type = Admin
  Priority = 40

  RunScript {
    RunsWhen = Before
    RunsOnClient = yes
    Fail Job On Error = yes
    Command = "/usr/local/bin/bareos_test_adminjob.sh"
  }
}

The script /usr/local/bin/bareos_test_adminjob.sh:
#!/bin/bash
echo "$(date) bareos_test_adminjob started" >> /tmp/bareos_test_adminjob.log
echo "id: $(id)"
echo "whoami: $(whoami)"
echo "pwd: $(pwd)"
echo -e "output line 1\noutput line 2\noutput line 3"
echo "exiting with exit 1"
echo "$(date) bareos_test_adminjob exiting with exit 1" >> /tmp/bareos_test_adminjob.log
exit 1

Actual result when running the job:

[root@bareost01 ~]# bconsole
Connecting to Director bareost01:9101
1000 OK: bareost01-dir Version: 14.2.2 (12 December 2014)
Enter a period to cancel a command.
*run job=TestAdminJobOnClient
Using Catalog "MyCatalog"
Run Admin Job
JobName: TestAdminJobOnClient
FileSet: Full Set
Client: bareost01-fd
Storage: File
When: 2015-02-01 13:13:58
Priority: 40
OK to run? (yes/mod/no): yes
Job queued. JobId=263
*
You have messages.
*mes
01-Feb 13:14 bareost01-dir JobId 263: Start Admin JobId 263, Job=TestAdminJobOnClient.2015-02-01_13.14.00_04
01-Feb 13:14 bareost01-dir JobId 263: BAREOS 14.2.2 (12Dec14): 01-Feb-2015 13:14:02
  JobId: 263
  Job: TestAdminJobOnClient.2015-02-01_13.14.00_04
  Scheduled time: 01-Feb-2015 13:13:58
  Start time: 01-Feb-2015 13:14:02
  End time: 01-Feb-2015 13:14:02
  Termination: Admin OK

Expected result:
The output of the script should appear in the job messages, and
the nonzero exit code should have led to termination with an error.
Additional Information: Attached bareos-dir debug at level 200

Running bareos-fd with debug level 200 at the same time does not show any output when running the admin job.

Note: works as expected when running the same script within a normal backup job:
FileSet {
  Name = "EmptyDirSet"
  Include {
    Options {
      signature = MD5
    }
    File = "/data/emptydir"
  }
}

Job {
  Name = TestJobWithScriptOnClient
  JobDefs = DefaultJob
  Type = Backup
  FileSet = EmptyDirSet
  Priority = 40

  RunScript {
    RunsWhen = Before
    RunsOnClient = yes
    Fail Job On Error = yes
    Command = "/usr/local/bin/bareos_test_adminjob.sh"
  }
}
Attached Files: bareos-dir_debug_TestAdminJobOnClient.out (13,599 bytes) 2015-02-01 13:48
https://bugs.bareos.org/file_download.php?file_id=127&type=bug
Notes
(0002339)
tigerfoot   
2016-08-25 11:55   
(Last edited: 2016-08-25 11:57)
Still true with version 15.2,
but works with Run On Client = no.

(0003644)
joergs   
2019-12-03 12:58   
The behavior has not changed. However, Pull Request https://github.com/bareos/bareos/pull/359 would at least produce a hint in the Job Message.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1144 [bareos-core] director minor always 2019-11-27 13:27 2019-11-27 13:27
Reporter: ironiq Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backtrace if trying to login user with pam authentication
Description: I tried to configure PAM authentication for bconsole and bareos-webui, but following the official docs (https://docs.bareos.org/TasksAndConcepts/PAM.html) I get a huge backtrace after entering the username and password. I also followed the other docs on GitHub (https://github.com/bareos/bareos-contrib/tree/master/misc/bareos_pam_integration) about the PAM integration and tried the pamtester command successfully. The server is an up-to-date CentOS 7; the bareos packages come from the "official" bareos yum repository. SELinux is currently set to "Enforcing", but I also tried "Permissive" with the same result. I also tried simplifying the PAM config to the official docs version; the result was the same.
Tags:
Steps To Reproduce: 1. Configure the director regarding the docs
2. Run: bconsole -c /etc/bareos/bconsole-pam.conf
3. Enter username and password.
Additional Information: [root@backup /etc/bareos]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
[root@backup /etc/bareos]# yum repolist | grep bareos
bareos_bareos-18.2 bareos:bareos-18.2 (CentOS_7) 45
[root@backup /etc/bareos]# rpm -qa | grep bareos
bareos-filedaemon-18.2.5-144.1.el7.x86_64
bareos-client-18.2.5-144.1.el7.x86_64
bareos-director-18.2.5-144.1.el7.x86_64
bareos-common-18.2.5-144.1.el7.x86_64
bareos-tools-18.2.5-144.1.el7.x86_64
bareos-bconsole-18.2.5-144.1.el7.x86_64
bareos-database-common-18.2.5-144.1.el7.x86_64
bareos-database-tools-18.2.5-144.1.el7.x86_64
bareos-18.2.5-144.1.el7.x86_64
bareos-storage-18.2.5-144.1.el7.x86_64
bareos-database-postgresql-18.2.5-144.1.el7.x86_64
bareos-webui-18.2.5-131.1.el7.noarch
[root@backup /etc/bareos]# cat /etc/pam.d/bareos
auth required pam_env.so
auth sufficient pam_sss.so forward_pass
auth sufficient pam_unix.so try_first_pass
auth required pam_deny.so
[root@backup /etc/bareos]# su - bareos -s /bin/bash
Last login: Tue Nov 26 17:08:42 CET 2019 on pts/0
-bash-4.2$ pamtester bareos foobar authenticate
Password:
pamtester: successfully authenticated
-bash-4.2$ logout
[root@backup /etc/bareos]# cat bconsole-pam.conf
#
# Bareos User Agent (or Console) Configuration File
#

Director {
  Name = bareos-dir
  address = localhost
  Password = "Very Secret Password"
  Description = "Bareos Console credentials for local Director"
}

Console {
  Name = "PamConsole"
  Password = "secret"
}

[root@backup /etc/bareos]# cat bareos-dir.d/console/pam-console.conf
Console {
        Name = "PamConsole"
        Password = "secret"
        UsePamAuthentication = yes
}

[root@backup /etc/bareos]# cat bareos-dir.d/user/foobar.conf
User {
        Name = "foobar"
        Password = ""
        CommandACL = status, .status
    JobACL = *all*
}

[root@backup /etc/bareos]# bconsole -c bconsole-pam.conf
Connecting to Director localhost:9101
 Encryption: PSK-AES256-CBC-SHA
login:foobar
Password: *** Error in `bconsole': double free or corruption (fasttop): 0x0000000001826390 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x81679)[0x7f83fb78c679]
bconsole(_Z22ConsolePamAuthenticateP8_IO_FILEP12BareosSocket+0x9d)[0x40945d]
bconsole(main+0x10c2)[0x405c52]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f83fb72d505]
bconsole[0x4063f8]
======= Memory map: ========
00400000-0040d000 r-xp 00000000 fd:00 188 /usr/sbin/bconsole
0060c000-0060d000 r--p 0000c000 fd:00 188 /usr/sbin/bconsole
0060d000-0060e000 rw-p 0000d000 fd:00 188 /usr/sbin/bconsole
0060e000-0060f000 rw-p 00000000 00:00 0
017e9000-0184c000 rw-p 00000000 00:00 0 [heap]
7f83ec000000-7f83ec021000 rw-p 00000000 00:00 0
7f83ec021000-7f83f0000000 ---p 00000000 00:00 0
7f83f2948000-7f83f2954000 r-xp 00000000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2954000-7f83f2b53000 ---p 0000c000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2b53000-7f83f2b54000 r--p 0000b000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2b54000-7f83f2b55000 rw-p 0000c000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2b55000-7f83f2b5b000 rw-p 00000000 00:00 0
7f83f2b5b000-7f83f2b5c000 ---p 00000000 00:00 0
7f83f2b5c000-7f83f335c000 rw-p 00000000 00:00 0
7f83f335c000-7f83f9886000 r--p 00000000 fd:00 8065 /usr/lib/locale/locale-archive
7f83f9886000-7f83f98e6000 r-xp 00000000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f98e6000-7f83f9ae6000 ---p 00060000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f9ae6000-7f83f9ae7000 r--p 00060000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f9ae7000-7f83f9ae8000 rw-p 00061000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f9ae8000-7f83f9b0c000 r-xp 00000000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9b0c000-7f83f9d0b000 ---p 00024000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9d0b000-7f83f9d0c000 r--p 00023000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9d0c000-7f83f9d0d000 rw-p 00024000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9d0d000-7f83f9d0f000 rw-p 00000000 00:00 0
7f83f9d0f000-7f83f9d25000 r-xp 00000000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9d25000-7f83f9f24000 ---p 00016000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9f24000-7f83f9f25000 r--p 00015000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9f25000-7f83f9f26000 rw-p 00016000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9f26000-7f83f9f28000 rw-p 00000000 00:00 0
7f83f9f28000-7f83f9f2b000 r-xp 00000000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83f9f2b000-7f83fa12a000 ---p 00003000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83fa12a000-7f83fa12b000 r--p 00002000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83fa12b000-7f83fa12c000 rw-p 00003000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83fa12c000-7f83fa13a000 r-xp 00000000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa13a000-7f83fa33a000 ---p 0000e000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa33a000-7f83fa33b000 r--p 0000e000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa33b000-7f83fa33c000 rw-p 0000f000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa33c000-7f83fa340000 r-xp 00000000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa340000-7f83fa540000 ---p 00004000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa540000-7f83fa541000 r--p 00004000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa541000-7f83fa542000 rw-p 00005000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa542000-7f83fa546000 r-xp 00000000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa546000-7f83fa745000 ---p 00004000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa745000-7f83fa746000 r--p 00003000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa746000-7f83fa747000 rw-p 00004000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa747000-7f83fa778000 r-xp 00000000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa778000-7f83fa977000 ---p 00031000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa977000-7f83fa979000 r--p 00030000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa979000-7f83fa97a000 rw-p 00032000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa97a000-7f83fa97d000 r-xp 00000000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fa97d000-7f83fab7c000 ---p 00003000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fab7c000-7f83fab7d000 r--p 00002000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fab7d000-7f83fab7e000 rw-p 00003000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fab7e000-7f83fac57000 r-xp 00000000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fac57000-7f83fae56000 ---p 000d9000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fae56000-7f83fae64000 r--p 000d8000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fae64000-7f83fae67000 rw-p 000e6000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fae67000-7f83faeb1000 r-xp 00000000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83faeb1000-7f83fb0b1000 ---p 0004a000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83fb0b1000-7f83fb0b2000 r--p 0004a000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83fb0b2000-7f83fb0b4000 rw-p 0004b000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83fb0b4000-7f83fb0b6000 r-xp 00000000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb0b6000-7f83fb2b6000 ---p 00002000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb2b6000-7f83fb2b7000 r--p 00002000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb2b7000-7f83fb2b8000 rw-p 00003000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb2b8000-7f83fb2d6000 r-xp 00000000 fd:00 11901 /usr/lib64/libaudit.so.1.0.0BAREOS interrupted by signal 6: IOT trap
bconsole, bconsole got signal 6 - IOT trap. Attempting traceback.
exepath=/etc/bareos
Calling: /etc/bareos/btraceback /etc/bareos/bconsole 10627 /tmp
execv: /etc/bareos/btraceback failed: ERR=No such file or directory
[root@backup /etc/bareos]#
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1027 [bareos-core] director major always 2018-11-09 11:51 2019-11-26 12:35
Reporter: gnif Platform: Linux  
Assigned To: OS: Debian  
Priority: urgent OS Version: 9  
Status: feedback Product Version: 18.4.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: 32bit integer overflow in sql LIMIT clause
Description: Running backups on the director produces the following errors. It is clearly a 32-bit integer overflow, as the OFFSET has become negative. I believe the issue is in SetQueryRange, which doesn't support parsing or using 64-bit integers.

https://github.com/bareos/bareos/blob/1c5bf440cdc8fe949ba58357d16588474cd6ccb8/core/src/dird/ua_output.cc#L497
Tags:
Steps To Reproduce:
Additional Information: 09-Nov 21:00 bareos-dir JobId 0: Fatal error: cats/sql_list.cc:566 cats/sql_list.cc:566 query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.JobStatus = 'S' AND Job.SchedTime > '2018-11-08 21:00:21' ORDER BY StartTime LIMIT 1000 OFFSET -2018192296; failed:
You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '-2018192296' at line 1
System Description
Attached Files:
Notes
(0003623)
joergs   
2019-11-11 19:33   
When does this error occur? How have you triggered this SQL query?
(0003625)
gnif   
2019-11-11 22:57   
Simply by allowing BareOS to run backups across multiple servers for several months. There are clearly over 2,147,483,647 records in the result set.
(0003628)
joergs   
2019-11-12 15:26   
I meant: what single action triggers this error?

I found out that this SQL query is triggered by the bconsole "llist jobs" command, but only when used with the "offset" parameter.

I assume you don't execute a "llist jobs ... offset=..." manually?

Are you using the Bareos WebUI? The WebUI uses "llist jobs" to retrieve information and can also use limit and offset. Is the error triggered by the same actions there?

Or are you using something else, like a CopyJob that selects jobs via such a query?

Anyhow, this problem should only occur when your last jobid comes close to 2,147,483,647. Is this the case?
(0003629)
joergs   
2019-11-12 15:30   
Manually calling the bconsole command

llist jobs limit=1000 offset=2147483648

results in a query with wrong offset:

cats/sql_query.cc:131-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 ORDER BY StartTime LIMIT 1000 OFFSET -2147483648;
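The wrap seen in that OFFSET can be reproduced outside Bareos; a minimal shell sketch that truncates the 64-bit value to a signed 32-bit integer by hand (bash arithmetic itself is 64-bit):

```shell
#!/bin/bash
offset=2147483648                    # one past INT32_MAX, as in the repro above

# Keep the low 32 bits, then sign-extend: this is what storing the parsed
# value into a 32-bit signed integer effectively does.
wrapped=$(( offset & 0xFFFFFFFF ))
if (( wrapped > 0x7FFFFFFF )); then
  wrapped=$(( wrapped - 0x100000000 ))
fi

echo "$wrapped"   # -2147483648, matching the negative OFFSET in the query
```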

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1143 [bareos-core] director feature have not tried 2019-11-26 12:09 2019-11-26 12:13
Reporter: joergs Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: confirmed Product Version: 19.2.4~pre  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Update a Job to Accurate instead of Full backup on FileSet changes.
Description: Currently (19.2.4), changes to the File directive in a FileSet result in the next backup job being upgraded to a Full backup, see https://docs.bareos.org/master/Configuration/Director.html#fileset-resource.

It has several disadvantages:
  * The job is only upgraded on changes to the File directive. Changes to the options, which may also result in a different fileset, are not covered.
  * Even if the fileset only includes one more file (or effectively changes nothing, because the path does not exist), all files would be backed up again.
  * If your backup uses the Accurate option, an upgrade to Full is not even required.

A better solution would be to upgrade a backup to an Accurate backup instead of a Full backup.
This has the same benefit as an upgrade to a Full backup (not missing any newly included files),
but unlike a Full backup it does not back up all files again, only the changed and newly added files (and marks removed files as removed).

An Accurate job does produce more load on the network and on the client than a normal Incremental job, but much less than a Full backup.

A further improvement would be to upgrade a Job on any change to a fileset. The current behavior does not cover all relevant cases (e.g. switching OneFS from on to off may result in many more files to back up; however, these files will not be covered by a non-Accurate Incremental job).
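For comparison, Accurate mode is already available per job today; a minimal sketch of enabling it (resource names are examples, not from this report):

```
Job {
  Name = "example-backup"
  JobDefs = "DefaultJob"
  Type = Backup
  Accurate = yes
}
```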
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1142 [bareos-core] director minor always 2019-11-23 17:21 2019-11-23 17:21
Reporter: tuxmaster Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: No error message is logged/reported when the mail command doesn't exist.
Description: When the command configured for "Mail Command" or "Operator Command" does not exist, no error is logged or reported.
Tags:
Steps To Reproduce: Simply set an invalid mail command.
Additional Information: Sample configuration:
good:
Messages {
  Name = Standard
  Description = "all to the log, db and mail"
  Mail Command = "/usr/bin/bsmtp -8 -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
  Operator Command = "/usr/bin/bsmtp -8 -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: help at %j needed\" %r"
  Mail = root@localhost = all, !skipped, !saved, !audit
  Operator = root@localhost = mount
  Console = all, !skipped, !saved, !audit
  Append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  Catalog = all, !skipped, !saved, !audit
}
bad:
Messages {
  Name = Standard
  Description = "all to the log, db and mail"
  Mail Command = "/usr/sbin/bsmtp -8 -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
  Operator Command = "/usr/sbin/bsmtp -8 -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: help at %j needed\" %r"
  Mail = root@localhost = all, !skipped, !saved, !audit
  Operator = root@localhost = mount
  Console = all, !skipped, !saved, !audit
  Append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  Catalog = all, !skipped, !saved, !audit
}
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1137 [bareos-core] General text always 2019-11-12 00:22 2019-11-20 23:49
Reporter: embareossed Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Maximum Network Buffer Size must be set for network operations
Description: In attempting to copy a job from the source bareos system 1 (with its own director, storage, and file daemons) to the target bareos system 2 (also with its own director, storage, and file daemons), the job initially failed with an error which appeared in the syslog as:

bareos-sd: bsock_tcp.c:591 Packet size too big from client:192.168.xx.xx:9103

The documentation at https://docs.bareos.org/Configuration/FileDaemon.html#config-Fd_Client_MaximumNetworkBufferSize says default is 65536. However, running bareos-sd -xc reveals:

MaximumNetworkBufferSize = 0
(repeated for each storage and device resource, of course)

Setting this value to a larger number, such as the supposed default value of 65536, permits the copy job to proceed.
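Until the default is fixed, the directive can be set explicitly in the storage daemon configuration; a sketch of a Device resource (name and paths are examples):

```
Device {
  Name = FileStorage
  Archive Device = /var/lib/bareos/storage
  Media Type = File
  # explicit value; 65536 matches the default stated in the documentation
  Maximum Network Buffer Size = 65536
}
```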

Tags:
Steps To Reproduce: Set MaximumNetworkBufferSize to zero, or omit the setting entirely in the Storage and Device resources of both the source and target systems involved in the network operation (copy, migrate, or maybe a distributed configuration?).

Attempt the network operation. A simple test could be running bconsole and requesting status on the remote storage daemon.
Additional Information: The source system is running bareos 16.2, whereas the target system is running 17.2.

The documentation needs to be updated to instruct the administrator to set these values appropriately for network operations, or else the defaults in the code itself need to be set to the default values indicated in the current documentation.
System Description
Attached Files:
Notes
(0003626)
embareossed   
2019-11-12 00:26   
Incidentally, the Storage Resource and the Device Resource of the storage daemon both support the MaximumNetworkBufferSize option. However, only the Device Resource has detailed information on the option. It might be a good idea to either reference the Device Resource documentation for more on this option, or add the same information there.
(0003627)
embareossed   
2019-11-12 00:31   
The link I included in the description is actually the wrong one; it should be https://docs.bareos.org/Configuration/StorageDaemon.html#config-Sd_Device_MaximumNetworkBufferSize. However, the information there is the same as for Storage and Device resources.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1135 [bareos-core] director feature N/A 2019-11-11 02:38 2019-11-20 23:49
Reporter: embareossed Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Means to limit the total size of a backup
Description: If a backup suddenly grows due to an explosion in file growth in one or more of the directories in a fileset it can use up a lot of resources including time and storage. It may be the case that the additional data is due to a bug or some other circumstance that needs to be corrected at some point, or any other reason that some data might not be needed or wanted for long-term storage in a backup.

An example of this could be when a developer is testing their work in the same directory as their source files, perhaps creating many, or large, log files or other output during normal work. There are other scenarios. Developers and administrators should always take care to avoid accumulation of files, but sometimes such files are required for some time or may be overlooked. Whatever the case, there is always a chance that a backup could grow by large amounts.

Currently, I have a before-run script that calls estimate and decides whether to run the backup based on the result. It would be more efficient if bareos had a built-in feature to limit the size of a backup, either by its total projected size or perhaps by the total number of media volumes it is expected to use. My script has worked well for years, but it is a clumsy approach better served by built-in facilities in bareos. I hope you will consider adding this as a feature in the future.
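The before-run gate described above can be sketched as follows. The byte-parsing is demonstrated on a captured sample line; in a real RunBeforeJob script the input would come from piping "estimate" into bconsole, and the job name and byte cap are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a RunBeforeJob gate that refuses to run oversized backups.
# parse_estimate extracts the byte count from bconsole "estimate" output,
# which ends with a line like: 2000 OK estimate files=135 bytes=2,378,826
parse_estimate() {
    sed -n 's/.*bytes=\([0-9,]*\).*/\1/p' | tr -d ,
}

# Demo on a captured line; in a real script this would be:
#   echo "estimate job=MyJob level=Incremental" | bconsole | parse_estimate
sample="2000 OK estimate files=123,456 bytes=9,876,543,210"
bytes=$(echo "$sample" | parse_estimate)

MAX_BYTES=50000000000   # illustrative 50 GB cap
if [ "$bytes" -gt "$MAX_BYTES" ]; then
    echo "estimate $bytes exceeds cap $MAX_BYTES" >&2
    exit 1   # a non-zero exit from RunBeforeJob makes the job fail
fi
echo "estimate $bytes within cap"
```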
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1132 [bareos-core] documentation minor always 2019-11-06 11:12 2019-11-20 23:48
Reporter: embareossed Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: How to make Counter Resources increment
Description: BareOS Counter Resources do not automatically increment themselves; this must be done by nudging the corresponding counter variable appropriately. In addition, one must know the exact syntax to use.

Merely creating a tape label with, e.g.:

Label Format = "DIFF-$DiffSeq"

does NOT increment the value of "DiffSeq," which will cause backups to error and send annoying intervention emails. The error indicates that the director could not find an appendable volume.

There does not appear to be documentation on how to actually use counter variables corresponding to Counter Resources. I searched high and low for hours and only discovered the solution by examining a google hit on Bacula, not BareOS. Apparently, Bacula has some sophisticated facilities for generating tape labels from counter variables; I have not tried them all. However, the Bacula documentation on this topic was enough (along with my own experience and knowledge of *nix-type systems) to help me figure out how to do something similar in BareOS. I changed the directive to:

Label Format = "DIFF-${DiffSeq+}"

Note that the plus sign (the '+') MUST be INSIDE the curly braces. I have not tried padding and other features the Bacula documentation discusses as I feel I have spent enough time on this already. I use "faux" padding by starting my sequences in the Counter Resource at 1000; this is satisfactory for my own use. Others might try experimenting with some other features, if they are implemented in Bareos also.

BTW, Counter Variables really do work in BareOS. The secret is knowing exactly how to use them.

The Bacula doc on this topic is at: https://www.bacula.org/9.0.x-manuals/en/misc/Variable_Expansion.html

I think an update to the BareOS documentation on the use, very specifically, of counter variables would be appropriate.
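A config sketch tying the pieces together (resource names and the minimum are illustrative; the '+' must sit inside the braces as described above):

```
Counter {
  Name = DiffSeq
  Minimum = 1000        # "faux" padding: start the sequence at 1000
  Catalog = MyCatalog   # persist the counter value in the catalog
}

Pool {
  Name = Differential
  Pool Type = Backup
  # ${DiffSeq+} increments the counter on use; plain $DiffSeq does not
  Label Format = "DIFF-${DiffSeq+}"
}
```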

Tags:
Steps To Reproduce: Create a Counter Resource and reference it with a simple shell-like substitution like $CounterVariable.
Additional Information:
System Description
Attached Files:
Notes
(0003614)
embareossed   
2019-11-06 11:20   
Incidentally, if this is fixed in post-17.2 releases, do not immediately close this issue. The webui in 18.2 has some serious defects. Short of using a mix of 17.2 serverware and 18.2 webui clientware (which may be a bit of a configuration hassle on some platforms depending on packaging system), using BareOS 18.2 at this time is not possible. Production shops, and many users preferring stability over cutting edges, could well decide to avoid nightlies.

Please leave this open so others can see they are not losing their minds. Thanks.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
144 [bareos-core] director feature have not tried 2013-04-07 12:35 2019-11-20 23:46
Reporter: pstorz Platform: Linux  
Assigned To: OS: any  
Priority: normal OS Version: 3  
Status: new Product Version: 12.4.2  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 13.2.0  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Extend cancel command to be able to cancel all running jobs
Description: Especially when testing, it would be a very nice feature to be able to
cancel multiple jobs at once.

For the first step, it would be nice to be able to cancel all jobs.

Also useful would be the ability to cancel jobs in a certain state, e.g.

cancel jobs state=waiting
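Until such a command exists, a workaround can be sketched in the shell. The id-extraction is demonstrated on captured "llist jobs" output; the field layout is an assumption and may differ between versions:

```shell
#!/bin/sh
# extract_jobids pulls numeric job ids out of bconsole "llist jobs" output.
extract_jobids() {
    awk -F': ' 'tolower($1) ~ /jobid/ { gsub(/[^0-9]/, "", $2); print $2 }'
}

# Demo on captured output; in practice the input would come from:
#   echo "llist jobs jobstatus=R" | bconsole
sample="   jobid: 238
   name: S3Data1
   jobid: 239
   name: S3Data2"
ids=$(echo "$sample" | extract_jobids)
echo "$ids"
# each id could then be cancelled with: echo "cancel jobid=$id yes" | bconsole
```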
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0000597)
mvwieringen adm   
2013-08-13 03:12   
Fix committed to bareos master branch with changesetid 696.
(0001419)
mvwieringen   
2015-03-25 16:51   
Fix committed to bareos2015 bareos-13.2 branch with changesetid 4252.
(0001572)
joergs   
2015-03-25 19:19   
Due to the reimport of the Github repository to bugs.bareos.org, the status of some tickets have been changed. These tickets will be closed again.
Sorry for the noise.
(0003579)
bozonius   
2019-09-21 13:25   
Sorry, but I think this bug /might/ be back. I am running 16.2.4, and I am seeing jobs with a status of "waiting" and I cannot cancel them.

This is on Ubuntu Linux 14.04 (yes, I know that is old, and I am moving on soon...), and the bareos packages come from the repository (not hand built).
(0003637)
bozonius   
2019-11-20 23:34   
I am now running Bareos 18.2.5 on Devuan and the problem persists. I have a 'waiting' job that won't go away. I've tried using bconsole and the job is still in waiting state.

The only 'solution' I have found is to manually remove it in the database, but I prefer not to perform such invasive actions since they could damage the integrity of Bareos overall.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
886 [bareos-core] director minor always 2017-12-27 09:14 2019-11-20 07:49
Reporter: bozonius Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 14.04  
Status: acknowledged Product Version: 16.2.4  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: "Run Before Job" in JobDefs is run even though a backup job specifies a different "Run Before Job"
Description: My JobDefs file specifies a script that is to be run before a backup job. This is valid for nearly all of my jobs. However, one backup job should not run that script; instead, I want to run a different script before running that backup. So in that one specific backup job definition, I override the default from JobDefs with a different script. But the JobDefs script still gets run anyway.
Tags:
Steps To Reproduce: Create a JobDefs with a "Run Before Job" set to run "script1."

Create one or more jobs (e.g., maybe "job-general1," "job-general2," "job-general3") for the general case, not setting "Run Before Job" in those, allowing the setting to default to the one specified in JobDefs (i.e., "script1").

Create one job for a specific case ("job-special"), setting "Run Before Job" to "script2."

Run any of the general case jobs ("job-general1," etc.) and "script1" is correctly run, since no override is specified for any of those jobs.

Run "job-special" and BOTH script1 AND script2 are run before the job.
Additional Information: From the documentation, section 9.3, "Only the changes from the defaults need to be mentioned in each Job." (http://doc.bareos.org/master/html/bareos-manual-main-reference.html#QQ2-1-216) I infer that the description of Bareos's defaulting mechanism fits the industry-standard definition of the term "default."

Please note that this was also an issue in Bacula, and one of the (many) reasons I chose to move on to Bareos. I was told by Bacula support that I did not understand the meaning of "default," which actually, I really am sure I do. I've worked with software for over 30 years, and the documented semantics (as per section 9.3) seem to comport with general agreement about the meaning of default mechanisms. To this day, I do not think I've ever seen a single exception to the generally agreed-upon meaning of "default." Even overloading mechanisms in OO languages all appear to comport to this intention and implementation.

I hope this will not result in the same stonewalling of this issue and that this bug, while minor for the most part but potential for disaster for those unawares in other more complicated contexts, can and will be fixed in some upcoming release. Thanks.
System Description
Attached Files:
Notes
(0002845)
joergs   
2017-12-27 12:59   
The "Run Before Job" directive can be specified multiple times in a resource to run multiple scripts, so the behavior you describe is expected. You can extend a JobDefs with additional run scripts.

To achieve the behavior you requested, you can use the fact that JobDefs can be used recursively.

JobDefs {
  Name = "defaultjob-without-script"
  ...
}

JobDefs {
  Name = "defaultjob-with-script"
  JobDefs = "defaultjob-without-script"
  Run Before Job = "echo 'test'"
}

Jobs can then refer either to "defaultjob-with-script" or "defaultjob-without-script".
(0002848)
bozonius   
2017-12-27 21:30   
How is this reflected in the documentation? I was kind of wondering if one could specify multiple JobDefs (default configurations), which would be handy. However, the documentation, as I read it, does not seem to encourage the reader to consider multiple JobDefs, as you have illustrated.

Thank you for this information, and I will DEFINITELY make use of this facility -- actually, it addresses precisely what I wanted! However, it might be a good idea to add something to the JobDefs documentation to illustrate exactly what you have shown here. Thanks.

I wonder, even had a bareos admin read every last word of the documentation, if this approach to handling job defaults would be obvious to them. This is not a criticism of the current docs; as docs go, this is one of the more complete I've seen. It's just that sometimes it takes more explanation of various options than those one might glean by reading the material quite literally as I have. I also don't believe that one will necessarily read every last bit of the documentation in the first place (though you might be within your right to claim every user really should have in order to leverage the full advantage of this extensive system). Users may be tasked with correcting or adding an urgent matter and not have time to read everything, rather heading straight to the portion of the docs seeming to be most relevant for the task at hand.

In the opening paragraph of JobDefs might be a good place to add this information. It is the place where it would be most likely NOT to be missed. Again, thanks for the info.
(0002853)
joergs   
2018-01-02 16:39   
Already documented at http://doc.bareos.org/master/html/bareos-manual-main-reference.html#directiveDirJobJob%20Defs :

To structure the configuration even more, Job Defs themselves can also refer to other Job Defs.

If you know a better way to describe this: also patches to the documentation are always welcome.
(0002856)
bozonius   
2018-01-03 03:35   
From the doc:

Any value that you explicitly define in the current Job resource, will override any defaults specified in the Job Defs resource.

That isn't quite exactly correct. What really happens is something a bit more akin to subclassing I think, albeit these "classes" are linear; there is no "multiple inheritance," so to speak (though THAT might even be useful?).

BareOS JobDefs values are not always replaced; they may be replaced, or appended instead. Appending in this manner is not typical of the way we normally speak of defaults in most of the software kingdom (at least in my own experience).

Not sure how to improve the docs, but just thought I'd put this out there: This notion of single parent subclassing. Perhaps this could lead us toward a better description.
(0003636)
bozonius   
2019-11-20 07:49   
I tried the link in your note https://bugs.bareos.org/view.php?id=886#c2853, but it takes me to the start of the doc, not the specific section, which is what I think your link is supposed to do. Seems like a lot of links (and anchors) in the doc are broken this way. References to the bareos doc from websites (like linuxquestions) are now broken; I realize there might not be much that can be done about external references outside of bareos.org.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1141 [bareos-core] General block always 2019-11-19 22:23 2019-11-19 22:23
Reporter: harryruhr Platform: amd64  
Assigned To: OS: OpenBSD  
Priority: high OS Version: 6.6  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Can't build on OpenBSD / against libressl
Description: The build aborts with the following error:

/usr/ports/pobj/bareos-18.2.6/bareos-Release-18.2.6/core/src/lib/crypto_openssl.cc:425:24: error: use of undeclared identifier 'M_ASN1_OCTET_STRING_dup'; did you mean 'ASN1_OCTET_STRING_dup'?
      newpair->keyid = M_ASN1_OCTET_STRING_dup(keypair->keyid);
                       ^~~~~~~~~~~~~~~~~~~~~~~
                       ASN1_OCTET_STRING_dup
/usr/include/openssl/asn1.h:692:20: note: 'ASN1_OCTET_STRING_dup' declared here
ASN1_OCTET_STRING *ASN1_OCTET_STRING_dup(const ASN1_OCTET_STRING *a);
                   ^
Tags:
Steps To Reproduce: - Get the OpenBSD port from https://github.com/jasperla/openbsd-wip/tree/master/sysutils/bareos and try to build it.
Additional Information: This error seems to occur in general, when building against libressl (instead of openssl), see also https://bugs.gentoo.org/692370
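As an interim local fix (not an upstream patch), the legacy macro can be replaced with the plain function name before building. LibreSSL removed the M_ASN1_* compatibility macros, and the compiler's own suggestion above indicates ASN1_OCTET_STRING_dup is the drop-in replacement. The substitution is demonstrated on a temporary file here; against the real tree it would target core/src/lib/crypto_openssl.cc:

```shell
#!/bin/sh
# Replace the removed legacy macro with the plain function name.
tmp=$(mktemp)
echo 'newpair->keyid = M_ASN1_OCTET_STRING_dup(keypair->keyid);' > "$tmp"

# In the real tree: sed -i '...' core/src/lib/crypto_openssl.cc
sed -i 's/M_ASN1_OCTET_STRING_dup/ASN1_OCTET_STRING_dup/' "$tmp"

fixed=$(cat "$tmp")
echo "$fixed"
rm -f "$tmp"
```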
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1140 [bareos-core] webui minor always 2019-11-19 15:40 2019-11-19 15:40
Reporter: koef Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restore feature always fails from webui (cats/bvfs.cc:927-0 Can't execute q)
Description: Hello.

The restore feature doesn't create a restore job from the webui, but it works fine from bconsole.
Please ask for additional info if it's needed.

You can see debug output with level 200 and mysql query log below.

Thanks.
Tags: director, webui
Steps To Reproduce: Merge all client filesets - No
Merge all related jobs to last full backup of selected backup job - No
Additional Information: bareos-dir debug trace:
19-Nov-2019 15:29:15.191147 bareos-dir (100): lib/bsock.cc:81-0 Construct BareosSocket
19-Nov-2019 15:29:15.191410 bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
19-Nov-2019 15:29:15.191460 bareos-dir (200): lib/bsock.cc:631-0 Identified from Bareos handshake: admin-R_CONSOLE recognized version: 18.2
19-Nov-2019 15:29:15.191491 bareos-dir (110): dird/socket_server.cc:109-0 Conn: Hello admin calling version 18.2.5
19-Nov-2019 15:29:15.191506 bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
19-Nov-2019 15:29:15.191528 bareos-dir (100): dird/storage.cc:157-0 write_storage_list=File
19-Nov-2019 15:29:15.191547 bareos-dir (100): dird/storage.cc:166-0 write_storage=File where=Job resource
19-Nov-2019 15:29:15.191559 bareos-dir (100): dird/job.cc:1519-0 JobId=0 created Job=-Console-.2019-11-19_15.29.15_07
19-Nov-2019 15:29:15.191776 bareos-dir (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <1114491002.1574173755@bareos-dir> ssl=0
19-Nov-2019 15:29:15.192019 bareos-dir (50): lib/cram_md5.cc:88-0 Authenticate OK Gd1+i91cs2Tf7pZiQJs+ew
19-Nov-2019 15:29:15.192200 bareos-dir (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <9503288492.1574173755@php-bsock> ssl=0
19-Nov-2019 15:29:15.192239 bareos-dir (99): lib/cram_md5.cc:135-0 sending resp to challenge: 1y/il6/RE9/FU8dciG/X6A
19-Nov-2019 15:29:15.273737 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline llist backups client="someclient.domain.com" fileset="any" order=desc
19-Nov-2019 15:29:15.273867 bareos-dir (100): dird/ua_db.cc:155-0 UA Open database
19-Nov-2019 15:29:15.273903 bareos-dir (100): cats/sql_pooling.cc:61-0 DbSqlGetNonPooledConnection allocating 1 new non pooled database connection to database bareos, backend type mysql
19-Nov-2019 15:29:15.273929 bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename dbi, partly_compare = true
19-Nov-2019 15:29:15.273943 bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename mysql, partly_compare = false
19-Nov-2019 15:29:15.273959 bareos-dir (100): cats/mysql.cc:869-0 db_init_database first time
19-Nov-2019 15:29:15.273990 bareos-dir (50): cats/mysql.cc:181-0 mysql_init done
19-Nov-2019 15:29:15.274839 bareos-dir (50): cats/mysql.cc:205-0 mysql_real_connect done
19-Nov-2019 15:29:15.274873 bareos-dir (50): cats/mysql.cc:207-0 db_user=someuser db_name=bareos db_password=somepass
19-Nov-2019 15:29:15.275378 bareos-dir (100): cats/mysql.cc:230-0 opendb ref=1 connected=1 db=7effb000ab20
19-Nov-2019 15:29:15.275854 bareos-dir (150): dird/ua_db.cc:188-0 DB bareos opened
19-Nov-2019 15:29:15.275887 bareos-dir (20): dird/ua_output.cc:579-0 list: llist backups client="someclient.domain.com" fileset="any" order=desc
19-Nov-2019 15:29:15.275937 bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name list_jobs_long (6)
19-Nov-2019 15:29:15.276015 bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC;
19-Nov-2019 15:29:15.276067 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC;
19-Nov-2019 15:29:15.354800 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline llist clients current
19-Nov-2019 15:29:15.354928 bareos-dir (20): dird/ua_output.cc:579-0 list: llist clients current
19-Nov-2019 15:29:15.354968 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client ORDER BY ClientId
19-Nov-2019 15:29:15.355739 bareos-dir (200): dird/ua_output.cc:1626-0 filterit: Filter on resource_type 1002 value bareos-fd, suppress output
19-Nov-2019 15:29:15.355779 bareos-dir (200): dird/ua_output.cc:1626-0 filterit: Filter on resource_type 1002 value bareos-dir-node, suppress output
19-Nov-2019 15:29:15.610801 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_update jobid=142
19-Nov-2019 15:29:15.616201 bareos-dir (100): lib/htable.cc:77-0 malloc buf=7effb006a718 size=9830400 rem=9830376
19-Nov-2019 15:29:15.616266 bareos-dir (100): lib/htable.cc:220-0 Allocated big buffer of 9830400 bytes
19-Nov-2019 15:29:15.616634 bareos-dir (10): cats/bvfs.cc:359-0 Updating cache for 142
19-Nov-2019 15:29:15.616656 bareos-dir (10): cats/bvfs.cc:190-0 UpdatePathHierarchyCache()
19-Nov-2019 15:29:15.616694 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT 1 FROM Job WHERE JobId = 142 AND HasCache=1
19-Nov-2019 15:29:15.617365 bareos-dir (10): cats/bvfs.cc:202-0 Already computed 142
19-Nov-2019 15:29:15.617405 bareos-dir (100): lib/htable.cc:90-0 free malloc buf=7effb006a718
Pool Maxsize Maxused Inuse
NoPool 256 86 0
NAME 1318 16 4
FNAME 2304 75 65
MSG 2634 31 17
EMSG 2299 10 4
BareosSocket 31080 4 2
RECORD 128 0 0

19-Nov-2019 15:29:15.619407 bareos-dir (100): lib/htable.cc:601-0 Done destroy.
19-Nov-2019 15:29:15.620312 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_restore jobid=142 fileid=6914 dirid= path=b2000928016
19-Nov-2019 15:29:15.620348 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId,FileSetId,SchedTime,RealEndTime,ReadBytes,HasBase,PurgedFiles FROM Job WHERE JobId=142
19-Nov-2019 15:29:15.620781 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.ClientId=8
19-Nov-2019 15:29:15.621038 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE btempb2000928016
19-Nov-2019 15:29:15.621252 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE b2000928016
19-Nov-2019 15:29:15.621419 bareos-dir (15): cats/bvfs.cc:924-0 q=CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
19-Nov-2019 15:29:15.621434 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
19-Nov-2019 15:29:15.621634 bareos-dir (10): cats/bvfs.cc:927-0 Can't execute q
19-Nov-2019 15:29:15.621662 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE btempb2000928016
19-Nov-2019 15:29:15.661510 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline restore file=?b2000928016 client=someclient.domain.com restoreclient=someclient.domain.com restorejob="RestoreFiles" where=/tmp/bareos-restores/ replace=never yes
19-Nov-2019 15:29:15.661580 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='someclient.domain.com'
19-Nov-2019 15:29:15.661982 bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name uar_jobid_fileindex_from_table (32)
19-Nov-2019 15:29:15.662011 bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
19-Nov-2019 15:29:15.662022 bareos-dir (100): cats/sql_query.cc:140-0 called: bool BareosDb::SqlQuery(const char*, int (*)(void*, int, char**), void*) with query SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
Pool Maxsize Maxused Inuse
NoPool 256 86 0
NAME 1318 16 5
FNAME 2304 75 65
MSG 2634 31 17
EMSG 2299 10 4
BareosSocket 31080 4 2
RECORD 128 0 0

19-Nov-2019 15:29:15.702682 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_cleanup path=b2000928016
19-Nov-2019 15:29:15.702753 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE b2000928016
19-Nov-2019 15:29:15.714596 bareos-dir (100): cats/mysql.cc:252-0 closedb ref=0 connected=1 db=7effb000ab20
19-Nov-2019 15:29:15.714646 bareos-dir (100): cats/mysql.cc:259-0 close db=7effb000ab20
19-Nov-2019 15:29:15.714817 bareos-dir (200): dird/job.cc:1560-0 Start dird FreeJcr
19-Nov-2019 15:29:15.714871 bareos-dir (200): dird/job.cc:1624-0 End dird FreeJcr
19-Nov-2019 15:29:15.714888 bareos-dir (100): lib/jcr.cc:446-0 FreeCommonJcr: 7effb0007898
19-Nov-2019 15:29:15.714909 bareos-dir (100): lib/bsock.cc:129-0 Destruct BareosSocket
19-Nov-2019 15:29:15.714924 bareos-dir (100): include/jcr.h:324-0 Destruct JobControlRecord



Mysql query log:
191119 15:29:15 37 Connect bareos@localhost as anonymous on bareos
                   37 Query SELECT VersionId FROM Version
                   37 Query SET wait_timeout=691200
                   37 Query SET interactive_timeout=691200
                   37 Query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client ORDER BY ClientId
                   37 Query SELECT 1 FROM Job WHERE JobId = 142 AND HasCache=1
                   37 Query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId,FileSetId,SchedTime,RealEndTime,ReadBytes,HasBase,PurgedFiles FROM Job WHERE JobId=142
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.ClientId=8
                   37 Query DROP TABLE btempb2000928016
                   37 Query DROP TABLE b2000928016
                   37 Query CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
                   37 Query DROP TABLE btempb2000928016
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='someclient.domain.com'
                   37 Query SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
                   37 Query DROP TABLE b2000928016
                   37 Quit
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1085 [bareos-core] file daemon minor always 2019-05-21 00:55 2019-11-11 19:21
Reporter: leo Platform: x86  
Assigned To: OS: Windows  
Priority: high OS Version: 10  
Status: acknowledged Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: the command to test the configuration files never exit
Description: When I run the command "bareos-fd -t" on a Windows machine, the command remains blocked after displaying any errors and never exits.

As a result, I cannot see the return code of "bareos-fd -t".
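For context, the check the reporter is blocked on can be sketched like this (Python used as a portable stand-in for a shell wrapper; `sh -c "exit 3"` substitutes for `bareos-fd -t` so the snippet is self-contained and runnable anywhere):

```python
import subprocess

# Run the configuration test and capture its exit status explicitly.
# ["sh", "-c", "exit 3"] stands in for ["bareos-fd", "-t"] here; the point is
# that the wrapped process must actually terminate for the status to be read,
# which is exactly what this report says does not happen on Windows.
result = subprocess.run(["sh", "-c", "exit 3"])
print("exit status:", result.returncode)  # -> exit status: 3
if result.returncode != 0:
    print("configuration test failed")
```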
Tags:
Steps To Reproduce: run "bareos-fd -t" in the Bareos installation directory
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1117 [bareos-core] director minor always 2019-09-23 13:57 2019-11-11 19:00
Reporter: joergs Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: confirmed Product Version: 19.2.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: When using multiple ACLs (Console ACL + Profile ACL), all negative ACLs except the last one are ignored
Description: A Console can contain ACLs and Profiles. The Profiles can also contain ACLs.

The way the Bareos Director evaluates multiple ACLs is confusing (or just wrong).

All negative ACLs except the last one are ignored.


Tags:
Steps To Reproduce: Create following resource:

Console {
  name = test
  password = secret
  Pool ACL=!Full
  Profile = operator
}

The operator profile should already exist. If not, create it like this:
Profile {
  name = operator
  Command ACL = *all*
  Pool ACL = *all*
}

Log in as Console test. The ".pools" command will show you all pools, including "Full".
Additional Information: The function UaContext::AclAccessOk evaluates the Console ACLs first.
It stops evaluating ACLs once it gets a positive match (which is correct).
However, the function will also continue checking the next ACL if 1. no information about the resource has been found or 2. the resource was rejected. Continuing in the second case is obviously wrong.
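The described evaluation order can be modeled in a short sketch (illustrative Python, not Bareos source; all names are made up): a combinator that only stops on a positive match lets a later profile ACL override an earlier negative console ACL, while stopping on any definite verdict honors the rejection.

```python
# Each ACL list yields one of three verdicts.
ALLOW, DENY, UNKNOWN = "allow", "deny", "unknown"

def eval_acl(acl, resource):
    """Evaluate one ACL list: '!name' denies, 'name' or '*all*' allows."""
    for entry in acl:
        if entry == "!" + resource:
            return DENY
        if entry == resource or entry == "*all*":
            return ALLOW
    return UNKNOWN

def access_ok_buggy(acl_lists, resource):
    # Stops on ALLOW (correct) but keeps going after DENY (the reported bug):
    # a later profile ACL can override an earlier negative console ACL.
    for acl in acl_lists:
        if eval_acl(acl, resource) == ALLOW:
            return True
    return False

def access_ok_fixed(acl_lists, resource):
    # Stops on the first definite verdict, whether ALLOW or DENY.
    for acl in acl_lists:
        verdict = eval_acl(acl, resource)
        if verdict != UNKNOWN:
            return verdict == ALLOW
    return False

console_acl = ["!Full"]   # Console: Pool ACL = !Full
profile_acl = ["*all*"]   # Profile operator: Pool ACL = *all*
print(access_ok_buggy([console_acl, profile_acl], "Full"))  # -> True: "Full" leaks through
print(access_ok_fixed([console_acl, profile_acl], "Full"))  # -> False: !Full is honored
```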
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1113 [bareos-core] installer / packages minor always 2019-09-17 10:14 2019-11-11 18:56
Reporter: DerMannMitDemHut Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 10.1  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos repository lacks debian 10 support
Description: Debian 10 (buster) was released on 6th July 2019 but there is no bareos repository yet.
I would appreciate it if Debian 10 were supported as well.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1136 [bareos-core] director crash always 2019-11-11 08:38 2019-11-11 08:39
Reporter: pstorz Platform: Linux  
Assigned To: pstorz OS: any  
Priority: low OS Version: 3  
Status: confirmed Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Director crashes if DirAddress is configured to IP that is not configured on any interface (yet)
Description: When DirAddress is configured in the director resource but the IP address is not available on any interface, the director crashes.
Tags:
Steps To Reproduce: Set DirAddress in the director resource to an IP that does not exist on the system, e.g.

DirAddress = 1.1.1.1

and start the director.
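For reference, a minimal director resource triggering the crash could look like the following; all names, paths, and the password are illustrative, not taken from an actual setup:

```
Director {
  Name = bareos-dir
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Password = "secret"
  DirAddress = 1.1.1.1   # not present on any local interface -> crash on start
}
```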
Additional Information: This problem does not exist in current master anymore.

This bug was created for the PR 330 : https://github.com/bareos/bareos/pull/330
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0003617)
pstorz   
2019-11-11 08:39   
Problem is reproducible

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
924 [bareos-core] documentation minor always 2018-03-03 15:34 2019-11-07 18:26
Reporter: tuxmaster Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 17.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Absolute Job Timeout option for the director settings is not described
Description: Neither the default setting nor the meaning of this option is documented.

Only this little text is present:
Absolute Job Timeout = <positive-integer>
Version >= 14.2.0
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003616)
b.braunger@syseleven.de   
2019-11-07 18:26   
Explained here: https://bareos-users.narkive.com/Mtv7pNGw/absolute-job-timeout

> Absolute Job Timeout was added to 14.2 to replace the Bacula hard-coded
> limit of 6 days for a Job, which some people ran into when doing long-running
> jobs. It's now by default set to 0, which means don't care about how long a Job runs.
> But if you feel the need to automatically kill jobs that run longer, then
> this is known to work well in Bacula, as all jobs longer than 6 days
> got nicely killed there. Given you can properly cancel Jobs I see no real reason
> to set it; that is also why the new default is 0 and not the old arbitrary 6 days.
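Based on that explanation, a hypothetical director resource fragment might look like this. Note that the value's unit is exactly what this report says is undocumented; the example assumes seconds (518400 = 6 days) and is purely illustrative:

```
Director {
  Name = bareos-dir
  # 0 (the default) means no limit; a positive value kills jobs that run
  # longer than this. The unit is undocumented - assumed to be seconds here.
  Absolute Job Timeout = 518400
}
```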

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1114 [bareos-core] file daemon minor always 2019-09-20 13:24 2019-11-07 07:45
Reporter: unki Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: acknowledged Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restoring backups with data-encryption + Percona xtrabackup Plugin file with "Bareos - filed/crypto.cc:303 Decryption error"
Description: I have started encountering an issue when restoring MariaDB backups that were made using the Percona xtrabackup plugin for the file daemon together with data encryption.

The file-daemon config contains the following lines to enable data-encryption:

  PKI Signatures = yes
  PKI Encryption = yes
  PKI Master Key = "/etc/bareos/bareos.example.com_crt.pem" # ONLY the Public Key
  PKI Keypair = "/etc/bareos/backup4.pem" # Public and Private Keys

The problem only occurs in the combination of a restore + xtrabackup-plugin + a job chain that involves Full + Incremental backups.
Restoring MariaDB from a Full backup that has been made via the plugin works without any issue.
Restoring data of the same node that was _NOT_ backed up with the xtrabackup-plugin but with ordinary file backups does not show any issue, regardless of whether a Full or an Incremental backup is restored.

When this problem occurs, the restore terminates right after having successfully restored the Full backup, when it starts handling the involved Incremental backups. The log then contains:

 2019-09-18 14:09:42 backup4-sd JobId 38061: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
 2019-09-18 14:09:42 backup4-sd JobId 38061: Releasing device "FullPool1" (/srv/bareos/pool).
 2019-09-18 14:09:42 backup4-sd JobId 38061: Media Type change. New read device "IncPool8" (/srv/bareos/pool) chosen.
 2019-09-18 14:09:42 backup4-sd JobId 38061: Ready to read from volume "Incremental-Sql-1736" on device "IncPool8" (/srv/bareos/pool).
 2019-09-18 14:09:42 backup4-sd JobId 38061: Forward spacing Volume "Incremental-Sql-1736" to file:block 0:212.
 2019-09-18 14:10:04 backup4-sd JobId 38061: End of Volume at file 0 on device "IncPool8" (/srv/bareos/pool), Volume "Incremental-Sql-1736"
 2019-09-18 14:10:04 backup4-sd JobId 38061: Ready to read from volume "Incremental-Sql-4829" on device "IncPool8" (/srv/bareos/pool).
 2019-09-18 14:10:04 backup4-sd JobId 38061: Forward spacing Volume "Incremental-Sql-4829" to file:block 0:749109650.
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: filed/crypto.cc:303 Decryption error. buf_len=30903 decrypt_len=0 on file /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: Uncompression error on file /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025. ERR=Zlib data error
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025
 2019-09-18 14:10:06 backup4-sd JobId 38061: Error: lib/bsock_tcp.cc:417 Wrote 31 bytes to client:2a01:4f8:212:369f:4100:0:1c00:0:9103, but only 0 accepted.
 2019-09-18 14:10:06 backup4-sd JobId 38061: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
 2019-09-18 14:10:06 backup4-sd JobId 38061: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:2a01:4f8:212:369f:4100:0:1c00:0:9103
 2019-09-18 14:10:06 backup4-sd JobId 38061: Releasing device "IncPool8" (/srv/bareos/pool).

One more thing to note: bareos-fd actually crashes when this happens:

Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: filed/crypto.cc:303 Decryption error. buf_len=28277 decrypt_len=0 on file /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835
Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: Uncompression error on file /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835. ERR=Zlib data error
Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835
Sep 17 12:37:02 backup4 bareos-fd[1046]: BAREOS interrupted by signal 11: Segmentation violation
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Main process exited, code=exited, status=1/FAILURE
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Unit entered failed state.
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Failed with result 'exit-code'.
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Service hold-off time over, scheduling restart.
Sep 17 12:37:02 backup4 systemd[1]: Stopped Bareos File Daemon service.


In my search I have only found this bug with related error messages:
https://bugs.bareos.org/view.php?id=192
But that one has obviously been solved.

It all points to the xtrabackup-plugin - but I don't really understand why a plugin would have an effect on the data encryption that is done by the file daemon. Aren't those different layers of Bareos that are involved here?
Tags:
Steps To Reproduce: Only on my setup at the moment.
Additional Information:
System Description
Attached Files:
Notes
(0003577)
unki   
2019-09-20 13:37   
One more note: as seen in the log lines, data compression with GZIP is enabled on that fileset

FileSet {
  Name = "DbClientFileSetSql"
  Ignore File Set Changes = yes
  Include {
     Options {
       compression=GZIP
       signature = MD5
     }
     File = /etc/mysql
     Plugin = "python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/bareos/bareos.my.cnf:extradumpoptions=--galera-info --tmpdir=/data/percona_tmp --ftwrl-wait-timeout=300 --check-privileges --no-backup-locks --no-lock --parallel=2 --use-memory=2GB"
  }
}

I will try tonight to perform the backups without data compression enabled, just to see if it makes any difference.
(0003578)
unki   
2019-09-21 12:11   
(Last edited: 2019-09-21 12:13)
Disabling compression makes no difference here. I have also tried disabling the Percona xtrabackup plugin while restoring (by commenting out 'Plugin Directory = ' and adding 'Plugin Names = ""').

The decryption error remains the same. Somehow the plugin seems to screw up the backups so that they later fail when restoring them.

 2019-09-21 11:48:07 backup4-sd JobId 38585: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-5549"
 2019-09-21 11:48:07 backup4-sd JobId 38585: Ready to read from volume "Full-Sql-5550" on device "FullPool1" (/srv/bareos/pool).
 2019-09-21 11:48:07 backup4-sd JobId 38585: Forward spacing Volume "Full-Sql-5550" to file:block 0:215.
 2019-09-21 11:56:21 backup4-sd JobId 38585: End of Volume at file 9 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-5550"
 2019-09-21 11:56:21 backup4-sd JobId 38585: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
 2019-09-21 11:56:21 backup4-sd JobId 38585: Releasing device "FullPool1" (/srv/bareos/pool).
 2019-09-21 11:56:21 backup4-sd JobId 38585: Media Type change. New read device "IncPool1" (/srv/bareos/pool) chosen.
 2019-09-21 11:56:21 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2164" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:21 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2164" to file:block 0:212.
 2019-09-21 11:56:32 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2164"
 2019-09-21 11:56:32 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2157" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:32 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2157" to file:block 0:621305456.
 2019-09-21 11:56:37 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2157"
 2019-09-21 11:56:37 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-3950" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:37 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-3950" to file:block 0:219.
 2019-09-21 11:56:55 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-3950"
 2019-09-21 11:56:55 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2167" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:55 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2167" to file:block 0:212.
 2019-09-21 11:56:56 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2167"
 2019-09-21 11:56:56 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2169" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:56 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2169" to file:block 0:212.
 2019-09-21 11:57:09 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2169"
 2019-09-21 11:57:09 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2171" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:303 Decryption error. buf_len=65512 decrypt_len=0 on file /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038576
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038576
 2019-09-21 11:57:09 backup4-fd JobId 38585: Fatal error: TLS read/write failure.: ERR=error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038568
 2019-09-21 11:57:09 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2171" to file:block 0:212.
 2019-09-21 11:57:12 backup4-sd JobId 38585: Error: lib/bsock_tcp.cc:417 Wrote 32 bytes to client:2a01:4f8:212:369f:4100:0:1c00:0:9103, but only 0 accepted.
 2019-09-21 11:57:12 backup4-sd JobId 38585: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
 2019-09-21 11:57:12 backup4-sd JobId 38585: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:2a01:4f8:212:369f:4100:0:1c00:0:9103
 2019-09-21 11:57:12 backup4-sd JobId 38585: Releasing device "IncPool1" (/srv/bareos/pool).

(0003592)
unki   
2019-10-10 07:03   
I have now had the chance to perform the backup & restore without PKI encryption of the backup data.
The issue with the Percona xtrabackup plugin for the file daemon also occurs without encryption.

10-Oct 06:30 bareos-dir JobId 41723: Start Restore Job RestoreFiles.2019-10-10_06.29.59_27
10-Oct 06:30 bareos-dir JobId 41723: Connected Storage daemon at backup4.example.com:9103, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 bareos-dir JobId 41723: Using Device "FullPool1" to read.
10-Oct 06:30 bareos-dir JobId 41723: Connected Client: db-fd at db.example.com:9102, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 bareos-dir JobId 41723: Handshake: Immediate TLS
10-Oct 06:30 bareos-dir JobId 41723: Encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 db-fd JobId 41723: Connected Storage daemon at backup4.example.com:9103, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8731775617207 from restore object of job 41695
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8731965444667 from restore object of job 41707
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8732041077071 from restore object of job 41712
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8732111279885 from restore object of job 41717
10-Oct 06:30 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2711" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:30 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2711" to file:block 0:206.
10-Oct 06:48 backup4-sd JobId 41723: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2711"
10-Oct 06:48 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2552" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:48 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2552" to file:block 9:407702037.
10-Oct 06:53 backup4-sd JobId 41723: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2552"
10-Oct 06:53 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2795" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:53 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2795" to file:block 0:206.
10-Oct 06:54 backup4-sd JobId 41723: End of Volume at file 0 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2795"
10-Oct 06:54 backup4-sd JobId 41723: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
10-Oct 06:54 backup4-sd JobId 41723: Releasing device "FullPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Media Type change. New read device "IncPool1" (/srv/bareos/pool) chosen.
10-Oct 06:54 backup4-sd JobId 41723: Ready to read from volume "Incremental-Sql-2873" on device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Forward spacing Volume "Incremental-Sql-2873" to file:block 0:883472630.
10-Oct 06:54 backup4-sd JobId 41723: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2873"
10-Oct 06:54 backup4-sd JobId 41723: Ready to read from volume "Incremental-Sql-2875" on device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Forward spacing Volume "Incremental-Sql-2875" to file:block 0:590294726.
10-Oct 06:54 db-fd JobId 41723: Fatal error: filed/fd_plugins.cc:1178 Second call to startRestoreFile. plugin=python-fd.so cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/bareos/bareos.my.cnf:log=/var/log/bareos/xtrabackup.log:extradumpoptions=--galera-info --ftwrl-wait-timeout=300 --check-privileges --tmpdir=/data/percona_tmp --no-backup-locks --no-lock --parallel=2 --use-memory=2GB
10-Oct 06:54 backup4-sd JobId 41723: Error: lib/bsock_tcp.cc:417 Wrote 33 bytes to client:::4100:0:1c00:0:9103, but only 0 accepted.
10-Oct 06:54 backup4-sd JobId 41723: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
10-Oct 06:54 backup4-sd JobId 41723: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:::4100:0:1c00:0:9103
10-Oct 06:54 backup4-sd JobId 41723: Releasing device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 bareos-dir JobId 41723: Error: Bareos bareos-dir 18.2.6 (13Feb19):
  Build OS: Linux-4.19.0-6-amd64 debian Debian GNU/Linux 9.11 (stretch)
  JobId: 41723
  Job: RestoreFiles.2019-10-10_06.29.59_27
  Restore Client: db-fd
  Start time: 10-Oct-2019 06:30:01
  End time: 10-Oct-2019 06:54:19
  Elapsed time: 24 mins 18 secs
  Files Expected: 20
  Files Restored: 19
  Bytes Restored: 286,059,460,373
  Rate: 196199.9 KB/s
  FD Errors: 1
  FD termination status: Fatal Error
  SD termination status: Fatal Error
  Bareos binary info:
  Termination: *** Restore Error ***

10-Oct 06:54 bareos-dir JobId 41723: Begin pruning Files.
10-Oct 06:54 bareos-dir JobId 41723: No Files found to prune.
10-Oct 06:54 bareos-dir JobId 41723: End auto prune.
(0003593)
unki   
2019-10-10 07:04   
the used xtrabackup version is v2.4.15
(0003595)
unki   
2019-10-10 13:23   
Just to be sure, I have now tried downgrading xtrabackup to v2.4.12 - the same issue occurs

10-Oct 13:06 backup4-sd JobId 41769: Ready to read from volume "Incremental-Sql-2885" on device "IncPool1" (/srv/bareos/pool).
10-Oct 13:06 backup4-sd JobId 41769: Forward spacing Volume "Incremental-Sql-2885" to file:block 0:212.
10-Oct 13:06 db7-fd JobId 41769: Error: Compressed header size error. comp_len=31653, message_length=36220
10-Oct 13:06 db7-fd JobId 41769: Error: python-fd: Restore command returned non-zero value: 1, message:
10-Oct 13:06 backup4-sd JobId 41769: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2885"
10-Oct 13:06 backup4-sd JobId 41769: Ready to read from volume "Incremental-Sql-2888" on device "IncPool1" (/srv/bareos/pool).
(0003615)
unki   
2019-11-06 17:36   
(Last edited: 2019-11-07 07:45)
Strange thing: the job has stored (with the help of the percona-plugin) the following files in the backup.

$ ls
xbstream.0000046633
xbstream.0000046648
xbstream.0000046665
xbstream.0000046675
xbstream.0000046683
xbstream.0000046691
xbstream.0000046698
xbstream.0000046705
xbstream.0000046717
xbstream.0000046722
xbstream.0000046729


If I restore those xbstream.* files one by one with individual jobs into individual directories, it succeeds.
Restoring them all in one job causes the previously posted errors.
I wonder whether this has a similar cause to what was fixed back in https://bugs.bareos.org/view.php?id=192


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1130 [bareos-core] General major always 2019-11-03 18:58 2019-11-03 18:58
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Allow to build client without bconsole or python
Description: Building bconsole and python support fails on Solaris 10. It should be possible to build the client without these two components.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1129 [bareos-core] General major always 2019-11-03 18:57 2019-11-03 18:57
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: OpenSSL and readline include path missing in cmake build
Description: The actual version is 18.4.1, but that is missing in the mantis version dropdown.

The new cmake-based compilation correctly detects non-system OpenSSL and readline when run e.g. with the flags

      -DOPENSSL_ROOT_DIR=/path/to/openssl
      -DReadline_ROOT_DIR=/path/to/readline

But when then running make, it can't find the header files, because the respective include paths never get added.

As a workaround I was using the following patch:

--- core/CMakeLists.txt Fri Sep 28 10:30:36 2018
+++ core/CMakeLists.txt Sun Nov 3 14:55:00 2019
@@ -435,6 +435,10 @@
    set(HAVE_TLS "1")
 ENDIF()

+IF( "${HAVE_OPENSSL}")
+include_directories(${OPENSSL_INCLUDE_DIR})
+ENDIF()
+
 IF(NOT openssl)
    unset(HAVE_OPENSSL)
    unset(HAVE_TLS)
@@ -446,6 +450,7 @@
 set(got_readline "${READLINE_FOUND}" )
 if ("${READLINE_FOUND}")
    set(HAVE_READLINE 1)
+ include_directories(${Readline_INCLUDE_DIR})
 endif()

 if ("${PAM_FOUND}")

Tags: compile cmake headers openssl readline
Steps To Reproduce: Run cmake against a non-system OpenSSL and readline, making sure the OpenSSL and readline header files are not installed in the default system header file locations. Then run make.
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1128 [bareos-core] api major always 2019-11-03 18:52 2019-11-03 18:52
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Compilation error due to existing symbol round()
Description: The actual version is 18.4.1, but that doesn't exist in the mantis version dropdown.

File core/src/lib/bsnprintf.cc defines a function round() which leads to a compilation error on Solaris 10 Sparc, because that symbol already exists. Since the new round() is static, one can rename it, e.g. to roundit. The following patch helps:

--- core/src/lib/bsnprintf.cc Fri Sep 28 10:30:36 2018
+++ core/src/lib/bsnprintf.cc Sun Nov 3 18:07:19 2019
@@ -618,7 +618,7 @@
    return result;
 }

-static int64_t round(LDOUBLE value)
+static int64_t roundit(LDOUBLE value)
 {
    int64_t intpart;

@@ -685,7 +685,7 @@
    /* We "cheat" by converting the fractional part to integer by
     * multiplying by a factor of 10
     */
- fracpart = round((pow10(max)) * (ufvalue - intpart));
+ fracpart = roundit((pow10(max)) * (ufvalue - intpart));

    if (fracpart >= pow10(max)) {
       intpart++;
Tags: compile solaris
Steps To Reproduce: Compile on Solaris 10 Sparc.
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1127 [bareos-core] General major always 2019-11-03 18:47 2019-11-03 18:47
Reporter: rjung Platform: Solaris10  
Assigned To: OS: Solaris  
Priority: normal OS Version: 10  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: cmake function check fails due to library dependency
Description: The actual version is 18.4.1, but that is not yet available in the mantis version dropdown.

The problem is that during the cmake run on Solaris 10 Sparc, certain functions that do exist are not detected, because they need "-lrt" or "-lsocket -lnsl" during compilation:

fdatasync -lrt
nanosleep -lrt
gethostbyname_r -lnsl
getaddrinfo -lsocket -lnsl
inet_ntop -lsocket -lnsl
inet_pton -lsocket -lnsl
gai_strerror -lsocket -lnsl

As a workaround, the following patch was used:

--- cmake/BareosCheckFunctions.cmake Fri Sep 28 10:30:36 2018
+++ cmake/BareosCheckFunctions.cmake Sun Nov 3 18:06:16 2019
@@ -19,6 +19,10 @@

 INCLUDE (CheckFunctionExists)

+list(APPEND CMAKE_REQUIRED_LIBRARIES rt)
+list(APPEND CMAKE_REQUIRED_LIBRARIES socket)
+list(APPEND CMAKE_REQUIRED_LIBRARIES nsl)
+
 CHECK_FUNCTION_EXISTS(strtoll HAVE_STRTOLL)
 CHECK_FUNCTION_EXISTS(backtrace HAVE_BACKTRACE)
 CHECK_FUNCTION_EXISTS(backtrace_symbols HAVE_BACKTRACE_SYMBOLS)


But that obviously isn't the right solution, because it must be made platform-dependent. Furthermore it adds -lrt -lsocket -lnsl as dependencies to all binaries and libraries. That should only happen for those that actually use the respective symbols needing these libs.
Tags: cmake compilation
Steps To Reproduce: Run cmake on Solaris 10 and notice that all of the above function lookups fail with "not found".
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1125 [bareos-core] director major always 2019-10-27 10:07 2019-10-27 10:07
Reporter: jstoffen Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Segmentation fault after a few days of running
Description: After a couple of days of running perfectly, the process gets killed.

A .bactrace file is produced but is almost empty:

cat bareos-dir.11927.bactrace
Attempt to dump current JCRs. njcrs=0

I'll try to provide a full trace the next time.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1124 [bareos-core] file daemon crash always 2019-10-26 18:58 2019-10-26 18:58
Reporter: medorna Platform: server  
Assigned To: OS: windows  
Priority: normal OS Version: 2012  
Status: new Product Version: 19.2.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: windows bareos-fd bug
Description: There's a bug in bareos-fd that prevents bareos-dir from connecting to the client after reinstalling bareos-fd on Windows Server: the credentials snippet stays hardcoded in the registry.
Tags:
Steps To Reproduce: - Install bareos-fd on windows 2012
- Uninstall it.
- Install it again.
Additional Information: The registry needs to be cleaned to manage the credentials again.
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
778 [bareos-core] director major always 2017-02-08 21:44 2019-10-19 01:08
Reporter: dpcushing Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7  
Status: new Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Always Incremental Consolidation is consolidating and deleting backup copies from a DR pool
Description: I have configured Bareos with Always Incremental backups. I have also configured a copy schedule that runs daily and copies all of the daily backup jobs to a network drive in another location. As a simple example, let's assume all odd job IDs (1,3,5,7,9,...) are backups and all even job IDs (2,4,6,8,10,...) are DR copies of those backups. On day 8 a consolidation job runs and consolidates, then purges job 1. When job 1 is purged, job 2 gets 'promoted' from copy to backup. On day 9 a consolidation job runs and consolidates jobs 2 and 3, then purges job 2. This continues daily.

The end result is that each job gets consolidated twice, once as the backup and again the next day as the 'promoted' copy of the backup. After consolidation, the backups are purged and the DR copy is lost.
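The promote-then-reconsolidate cycle described above can be modelled in a few lines of Python (an illustrative sketch of the reported behavior, not Bareos code; the job IDs and the promotion rule are taken from the example above):

```python
# Minimal model of the reported behavior: after a backup job is
# consolidated and purged, its copy is promoted to a backup and is
# itself consolidated (and purged) on the next cycle.
jobs = {1: 'B', 2: 'C'}  # job 1 = backup, job 2 = its DR copy

def consolidate(jobs):
    # Purge the oldest backup job ...
    oldest = min(j for j, t in jobs.items() if t == 'B')
    del jobs[oldest]
    # ... whereupon Bareos promotes an existing copy of it to type 'B',
    # making the former copy eligible for the next consolidation run.
    for j, t in jobs.items():
        if t == 'C':
            jobs[j] = 'B'
            break

consolidate(jobs)        # day 8: job 1 purged, job 2 promoted to 'B'
assert jobs == {2: 'B'}
consolidate(jobs)        # day 9: the former copy is consolidated away
assert jobs == {}        # the DR copy is gone from the catalog
```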
Tags:
Steps To Reproduce: Set up daily Always Incremental backups with 'Always Incremental Job Retention = days' and 'Always Incremental Keep Number = 3'. Set up a copy job to run daily to copy all backup jobs to a separate storage pool. Set up an Always Incremental Consolidate job to run daily.
Additional Information: In the Jobs table in the database you'll see the original backup job with Type B and the copies with Type C. After a consolidation of a backup job you'll see that job purged and its associated copy job changed to Type B. On subsequent days you'll see the consolidate job operate on the 'used to be a copy' job, since it is now identified as a backup. Then the job is removed from the database. So the two issues identified are: 1) the same backup set is always consolidated twice, on consecutive days (or whatever the consolidation interval is); 2) the DR copies, which may be set up for a longer retention period, are purged by consolidation.
System Description
Attached Files:
Notes
(0002607)
dpcushing   
2017-03-14 01:30   
For anybody else that may be facing this issue, I wrote a shell script that I run in an admin job after running consolidate that changes the newly 'promoted' backups to copies. Note in the script below that the pool where the copies reside is named 'FileCopy'. Here's the content ...

#!/bin/bash
# grab the database credentials from existing configuration files
catalogFile=`find /etc/bareos/bareos-dir.d/catalog/ -type f`
dbUser=`grep dbuser $catalogFile | grep -o '".*"' | sed 's/"//g'`
dbPwd=`grep dbpassword $catalogFile | grep -o '".*"' | sed 's/"//g'`

# Make sure all DR-Copy jobs that are in the FileCopy pool are properly set in the database as Copy (C) jobs.
/usr/bin/mysql bareos -u $dbUser -p$dbPwd -se "UPDATE Job J SET J.Type = 'C' WHERE J.Type <> 'C' AND EXISTS (SELECT 1 FROM Media M, JobMedia JM WHERE JM.JobId = J.JobId AND M.MediaId = JM.MediaID AND M.MediaType = 'FileCopy');"
(0003571)
brockp   
2019-09-15 04:38   
(Last edited: 2019-09-15 04:59)
I can confirm this issue still exists and is not documented as a limitation in 18.2.5.

As far as I can tell this directly conflicts with: https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html

(0003602)
arogge   
2019-10-16 10:47   
This can be worked around as described in https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#virtual-full-jobs

"To make sure the longterm Level (Dir->Job) = VirtualFull is not taken as base for the next incrementals, the job type of the copied job is set to Type (Dir->Job) = Archive with the Run Script (Dir->Job)."

If you think the documentation needs a change to document this better, feel free to suggest something. Otherwise I'd like to close this issue.
(0003609)
dpcushing   
2019-10-19 01:08   
@arogge - The issue I have reported here is different. I do have VirtualFull backups configured for long term archiving and I do set those VirtualFull jobs to Archive via script as described in your reference link. The issue that I am describing is specific to Copy jobs. I prepare an offsite copy every night of each of the day's backups to allow me to perform an accurate recovery in the event of a complete site disaster. The problem that I've described is that when a primary AI job is consolidated and removed from the catalog, Bareos promotes the copy job to a primary job. On the next cycle that new 'primary' (used to be copy) job is consolidated again. Once that new 'primary' (used to be copy) job is consolidated it will get deleted from the catalog, making my offsite copies incomplete ;-0

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1122 [bareos-core] General major always 2019-10-18 19:32 2019-10-18 19:32
Reporter: xyros Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Consolidate queues and indefinitely orphans jobs but falsely reports status as "Consolidate OK" for last queued
Description: My Consolidate job never succeeds -- quickly terminating with "Consolidate OK" while leaving all the VirtualFull jobs it started queued and orphaned.

In the WebUI listing for the allegedly successful Consolidate run, it always lists the sequentially last (by job ID) client it queued as the successful run; however, the level is "Incremental," nothing is actually done, and the client's VirtualFull job is actually still queued up with all the other clients.

In bconsole the status is similar to this:

Running Jobs:
Console connected at 15-Oct-19 15:06
 JobId Level Name Status
======================================================================
   636 Virtual PandoraFMS.2019-10-15_14.33.02_06 is waiting on max Storage jobs
   637 Virtual MongoDB.2019-10-15_14.33.03_09 is waiting on max Storage jobs
   638 Virtual DNS-DHCP.2019-10-15_14.33.04_11 is waiting on max Storage jobs
   639 Virtual Desktop_1.2019-10-15_14.33.05_19 is waiting on max Storage jobs
   640 Virtual Desktop_2.2019-10-15_14.33.05_20 is waiting on max Storage jobs
   641 Virtual Desktop_3.2019-10-15_14.33.06_21 is waiting on max Storage jobs
====


Given the above output, for example, the WebUI would show the following:

    642 Consolidate desktop3-fd.hq Consolidate Incremental 0 0.00 B 0 Success
    641 Desktop_3 desktop3-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    640 Desktop_2 desktop2-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    639 Desktop_1 desktop1-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    638 DNS-DHCP dns-dhcp-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    637 MongoDB mongodb-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    636 PandoraFMS pandorafms-fd.hq Backup VirtualFull 0 0.00 B 0 Queued


I don't know if this has anything to do with the fact that I have multiple storage definitions, one for each VLAN the server is on, and an additional one dedicated to the storage addressable on the default IP (see bareos-dir/storage/File.conf in the attached bareos.zip file). Technically this should not matter, but I get the impression Bareos has not been designed/tested to work elegantly in an environment where the server participates in VLANs.

The reason I'm using VLANs is so that connections do not have to go through a router to reach the clients. Therefore, the full network bandwidth of each LAN segment is available to the Bareos client/server data transfer.

I've tried debugging the Consolidate backup process using "bareos-dir -d 400 >> /var/log/bareos-dir.log"; however, I get nothing that particularly identifies the issue. I have attached a truncated log file that contains activity starting with the queuing of the second-to-last job. I've cut off the log at the point where it is stuck endlessly cycling with output of:

bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN107 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
etc...

For convenience, I have attached all the most relevant excerpts of my configuration files (sanitized for privacy/security reasons).

I suspect there's a bug that is responsible for this; however, I'm unable to make heads or tails of what's going on.

Could someone please take a look?

Thanks
Tags: always incremental, consolidate
Steps To Reproduce: 1. Place Bareos on a network switch (virtual or actual) with tagged VLANS
2. Configure Bareos host to have connectivity on three or more VLANs
3. Make sure you have clients you can backup, on each of the VLANs
4. Use the attached config files as reference for setting up storages and jobs for testing.
Additional Information:
System Description
Attached Files: bareos.zip (9,113 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=391&type=bug
bareos-dir.log (41,361 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=392&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1121 [bareos-core] file daemon text always 2019-10-18 16:20 2019-10-18 16:20
Reporter: b.braunger@syseleven.de Platform: Linux  
Assigned To: OS: RHEL  
Priority: low OS Version: 7  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: documentation of setFileAttributes for plugins not correct
Description: The documentation says that setFileAttributes for FD plugins is not implemented yet. However, in a Python plugin I can write a corresponding function and it gets called.

https://docs.bareos.org/DeveloperGuide/pluginAPI.html#setfileattributes-bpcontext-ctx-struct-restore-pkt-rp

Is the documentation out of date, or am I getting something wrong?
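For context, such an entry point can be exercised outside Bareos. The following standalone sketch is illustrative only: StatPacket and RestorePacket here are stand-in classes modelled on the log output below, not the real bareos-fd API. It shows the kind of work a set_file_attributes() callback typically does with the packet's stat fields:

```python
import os
import tempfile
from dataclasses import dataclass

@dataclass
class StatPacket:            # stand-in for the statp field seen in the log
    mode: int
    atime: int
    mtime: int

@dataclass
class RestorePacket:         # stand-in for the packet the FD passes in
    ofname: str              # output file name of the restored file
    statp: StatPacket

def set_file_attributes(restorepkt):
    """Apply mode and timestamps from the packet to the restored file."""
    os.chmod(restorepkt.ofname, restorepkt.statp.mode)
    os.utime(restorepkt.ofname,
             (restorepkt.statp.atime, restorepkt.statp.mtime))

# Demonstration on a throwaway file, using values from the log excerpt
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
set_file_attributes(RestorePacket(path, StatPacket(0o644, 1571011435, 1563229608)))
assert os.stat(path).st_mode & 0o777 == 0o644
assert int(os.stat(path).st_mtime) == 1563229608
os.unlink(path)
```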
Tags:
Steps To Reproduce: * Create a restore job which calls a python plugin
* start a FD with debug output (at least 150)
* watch the logfile of the FD
Additional Information: LOG example:
test-server (150): filed/fd_plugins.cc:1308-326 PluginSetAttributes
test-server (100): filed/python-fd.cc:2740-326 python-fd: set_file_attributes() entry point in Python called with RestorePacket(stream=1, data_stream=2, type=3, file_index=43337, linkFI=0, uid=0, statp="StatPacket(dev=0, ino=0, mode=0644, nlink=0, uid=0, gid=0, rdev=0, size=-1, atime=1571011435, mtime=1563229608, ctime=1571011439, blksize=4096, blocks=1)", attrEx="", ofname="/opt/puppetlabs/puppet/share/doc/openssl/html/man3/SSL_CTX_set_session_id_context.html", olname="", where="", RegexWhere="<NULL>", replace=97, create_status=2)
test-server (150): filed/restore.cc:517-326 Got hdr: Files=43337 FilInx=43338 size=210 Stream=1, Unix attributes.
test-server (130): filed/restore.cc:534-326 Got stream: Unix attributes len=210 extract=0
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1118 [bareos-core] installer / packages minor always 2019-10-07 10:54 2019-10-16 11:22
Reporter: wolfaba Platform: Linux  
Assigned To: OS: any  
Priority: normal OS Version: 3  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Incorrect default value for smtp_host and ugly default values for job_email and dump_email in BareosSetVariableDefaults.cmake
Description: Incorrect default value for smtp_host and ugly default values for job_email and dump_email in BareosSetVariableDefaults.cmake



Dear Bareos developers,
we have found some ugly values in BareosSetVariableDefaults.cmake.
IF(NOT DEFINED job_email)
   SET(job_email "root@localhost")
ENDIF()
IF(NOT DEFINED dump_email)
   SET(dump_email "root@localhost")
ENDIF()
IF(NOT DEFINED smtp_host)
   SET(smtp_host "root@localhost")
ENDIF()

If the bareos process crashes, it uses the btraceback script to generate a backtrace and sends an email using bsmtp.
The last line in btraceback.in is
@sbindir@/bsmtp -h @smtp_host@ -f @dump_email@ -s "Bareos ${DEBUGGER} traceback of ${PNAME}" @dump_email@
The bsmtp man page says that option -h should take mailhost:port (not an email address), so "root@localhost" for smtp_host is really incorrect. Could you replace this value with simple localhost?

Then, if the email is generated with sender root@localhost, it will not be accepted by mail servers which check the sender address. Could you replace the sender address with simple "root" and let the localhost MTA generate the correct domain?

The same goes for the recipient. If you use root@localhost and the email is passed to some mail server, it can land anywhere. Please use simple "root", which delivers the email either to the local root or (if an alias is defined) to the destination address of the local alias "root".
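The distinction between the two kinds of value can be illustrated with a small validator (plain Python, purely illustrative; bsmtp itself performs no such check):

```python
def looks_like_mailhost(value):
    """True for 'host' or 'host:port' values suitable for bsmtp -h,
    False for strings that are actually email addresses."""
    if '@' in value:          # an address, not a mailhost
        return False
    host, _, port = value.partition(':')
    return bool(host) and (port == '' or port.isdigit())

assert looks_like_mailhost('localhost')           # proposed default: fine
assert looks_like_mailhost('localhost:25')
assert not looks_like_mailhost('root@localhost')  # current default: an address
```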

I have created pull request 0000297 "use simple default email values".

Thank you.

Regards,

Robert Wolf.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0003604)
wolfaba   
2019-10-16 11:22   
Fix committed to bareos master branch with changesetid 11941.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1120 [bareos-core] webui trivial always 2019-10-14 14:34 2019-10-16 10:34
Reporter: tobias_stein Platform: amd64  
Assigned To: arogge OS: Debian GNU/Linux  
Priority: low OS Version: 10  
Status: assigned Product Version: 17.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: No Favicon with PHP7.3 (undefined variable "extras" in HeadLink->createDataStylesheet())
Description: Display errors on Dashboard
* favicon.ico is not delivered
* Overview of jobs just filled with spinners

Following PHP-Error message is printed in log:
[Mon Oct 14 12:54:19.526596 2019] [proxy_fcgi:error] [pid 842:tid 139696600672000] [client redacted:40176]
AH01071: Got error 'PHP message: PHP Notice:
compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403
PHP message: PHP Notice:
compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403'
Tags: webui
Steps To Reproduce: Use bareos-webui (17.2.4-15.1) on Debian 10 "Buster"
with php-fpm (7.3+69)
with Apache/2.4.38 (Debian) with mod_ssl, mod_http2, mod_proxy_fcgi, mod_mpm_event and mod_http2(, which doesn't support mpm_prefork).

Include /etc/apache2/conf-available/bareos-webui.conf in /etc/apache2/sites-available/default-ssl.conf.

aptitude install php-fpm
a2dismod php7.0
a2enconf php7.3-fpm
a2enmod ssl
a2enmod http2
a2enmod proxy_fcgi
a2dismod mpm_prefork
a2enmod mpm_event
systemctl restart apache2.service
Additional Information: Maybe the zend-framework in use is no longer up to date in conjunction with current Debian Buster.
The function compact() as of PHP 7.3 no longer ignores uninitialized variables.
https://www.php.net/manual/en/function.compact.php

I attached a little patch that initializes the variable `$extras` to an empty string.
I don't know if it's written in a good style, but wanted to provide it. If it's trash, get rid of it.
This at least makes favicon.ico work again.
On subsequent refreshes of the dashboard the spinners will appear again.
Attached Files: initialize_extras.diff (113 bytes) 2019-10-14 14:34
https://bugs.bareos.org/file_download.php?file_id=390&type=bug
Notes
(0003599)
arogge   
2019-10-16 10:04   
Thanks for the time you invested. I understand the issue, but I'm not sure that we will fix it for 17.2 anymore.
It should help to disable display_errors in php.ini (and I think this should be the default nowadays anyway).

Having said that, the PHP documentation for display_errors https://www.php.net/manual/en/errorfunc.configuration.php#ini.display-errors reads as follows:
"This is a feature to support your development and should never be used on production systems."
(0003601)
tobias_stein   
2019-10-16 10:34   
That's absolutely no problem, I've found a way to work around.
Probably there is even no need to backport a patch to v17.2, because it's a corner-case setup (Buster + php-fpm 7.3).
I guess I assigned the bug to the wrong version; the zend-framework HeadLink hasn't changed on master in this respect.
I thought it would just be a nice user experience if at least future versions of bareos-webui
worked out of the box on Debian stable with the PHP version provided by the distribution (7.3), so
I simply reported the behavior.

I'm currently testing and studying bareos' ecosystem.
So going with "display_errors" is okay for me on this system, and in the end it helped me get around the problem.
Nevertheless thanks for the tip!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
707 [bareos-core] director tweak always 2016-10-13 21:10 2019-10-14 14:40
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: low OS Version: 8  
Status: confirmed Product Version: 16.2.4rc  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact: yes
bareos-16.2: action:
bareos-15.2: impact: no
bareos-15.2: action:
bareos-14.2: impact: no
bareos-14.2: action:
bareos-13.2: impact: no
bareos-13.2: action:
bareos-12.4: impact: no
bareos-12.4: action:
Summary: 13-Oct 21:02 bareos-dir JobId 0: Error: "fileset" directive in Job "Consolidate" resource is required, but not found.
Description: http://doc.bareos.org/master/html/bareos-manual-main-reference.html#QQ2-1-516

Job {
    Name = "Consolidate"
    Type = "Consolidate"
    Accurate = "yes"
    JobDefs = "DefaultJob"
}

Why would this require a fileset and some other directives which don't seem to be used?

Documentation: When used, it automatically trigger the consolidation of incremental jobs that need to be consolidated.

So would it be applied to any fileset?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002396)
joergs   
2016-10-14 12:34   
I recently modified the documentation in this respect; however, the DefaultJob does contain a Fileset in the default installation. Have you changed your DefaultJob?
(0002398)
hostedpower   
2016-10-14 12:48   
Well, I had another job defaults resource not containing any storage, pool, fileset or client, and I got the error.

So the fileset is not really required, but it will give an error anyway when missing. Is that the right conclusion?

Also with storage: for me it was a bit confusing that it would potentially store data in the wrong place... but I'm not an expert of course :)
(0002399)
joergs   
2016-10-14 13:06   
> So the fileset is not really required, but it will give an error anyway when missing. Is that the right conclusion?

yes. The reason is in the code. The default job type is Backup, and the required directives are set accordingly. For other job types some of these settings are ignored.
That is not pretty, but changing it might have a negative impact.
In the long run, we should change this.
(0003598)
b.braunger@syseleven.de   
2019-10-14 14:40   
Is there any news about this behaviour? This also affects restore jobs, which need the following dummy configuration:

  Client = dummy_client
  Fileset = dummy_fileset
  Storage = dummy_storage
  Pool = dummy_pool

That is a pain to configure and confuses operators.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1106 [bareos-core] director minor always 2019-08-01 17:50 2019-10-14 12:10
Reporter: b.braunger@syseleven.de Platform: Linux  
Assigned To: stephand OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: assigned Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Director's not sending Plugin Options to FileDaemon
Description: I want to use a fileset with a python command plugin for multiple jobs and pass extra key-value pairs to the plugin via the "FD Plugin Options" parameter.
As far as I understand the docs, I can add options to "FD Plugin Options" in the job resource, which will then be passed to the plugin in addition to the options set in the fileset.

Sadly, nothing but the options string from the fileset is sent to the FD plugin, no matter what I define. This extends to the plugin options set via bconsole; they are all ignored.
Tags: director, fd, plugin
Steps To Reproduce: FileSet {
  Name = "testing"
  Include {
    Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=test_plugin:from=fileset"
  }
}
Job {
  Name = backup-bareos-test
  FileSet = testing
  FD Plugin Options = "python:from=job"
}

* run job=backup-bareos-test pluginoptions="from=bconsole" yes
Additional Information: Doc: https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_FdPluginOptions

With debug over 500 one can see the "from=fileset" option being passed to the FD and parsed by the python plugin.

With TLS Enable = no and TCPdump one can see that no other Option string is sent to the FD
System Description
Attached Files:
Notes
(0003552)
b.braunger@syseleven.de   
2019-08-01 17:51   
maybe related: https://bugs.bareos.org/view.php?id=733
(0003553)
b.braunger@syseleven.de   
2019-08-01 17:54   
So I worked my way through the source code and roughly patched a version together which finally sends all FD-related plugin options to the FD:
https://github.com/bareos/bareos/pull/238
(0003554)
b.braunger@syseleven.de   
2019-08-01 18:08   
So I need a bit of support here:
1) Is there any reason why the plugin options are split into "jcr->plugin_options" and "jcr->res.job->FdPluginOptions"?
2) Does anyone object to sending all plugin options to the FD?
3) Any other suggestions on the PR? I have no experience in cpp at all.
(0003596)
b.braunger@syseleven.de   
2019-10-14 12:10   
This is fixed and merged by https://github.com/bareos/bareos/pull/238

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
807 [bareos-core] director minor always 2017-04-06 14:53 2019-09-27 16:43
Reporter: TheisM Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 7  
Status: assigned Product Version: 16.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos Director Error after Update
Description: After a Bareos update from the Univention App Center, the Bareos Director service no longer starts.
We try to start the service with "service bareos-dir start"
and receive the following message:

2017-04-06T14:08:37 error message: Did not find a plugin for ccache_ops: 2
2017-04-06T14:08:37 error message: Did not find a plugin for ccache_ops: 2

Can someone help us?
Tags:
Steps To Reproduce:
Additional Information: The Config Files are attached.
Thank you for your support.
System Description
Attached Files: bareos_config_files.rar (6,671 bytes) 2017-04-06 14:53
https://bugs.bareos.org/file_download.php?file_id=248&type=bug
Notes
(0002753)
joergs   
2017-09-21 22:50   
What version of Univention are you using?

Do you remember, from which Bareos version you upgraded?
(0002757)
TheisM   
2017-09-22 12:26   
UCS-Version is 4.1-4 errata435 (Vahr)
UMC-Version is 8.0.28-21.926.201611091130

Upgrade from Bareos release 15.2.2 (16 November 2015)

THX for Support

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
855 [bareos-core] director minor always 2017-10-02 04:23 2019-09-27 16:42
Reporter: Ryushin Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version: 17.2.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact: no
bareos-15.2: action:
bareos-14.2: impact: no
bareos-14.2: action:
bareos-13.2: impact: no
bareos-13.2: action:
bareos-12.4: impact: no
bareos-12.4: action:
Summary: Update Volume or Purge Volume Hangs on Using Catalog
Description: This just started happening after the last couple of nightly updates, pretty much at the time the new mysql schema update landed in the Debian packages.

I've been using the same scripts, which I created a couple of years ago, without a problem. Now the scripts fail to run and hang when sending the commands via echo piped to bconsole.

Example commands:
echo "update volume=vchanger_monthly_1.5tb_drives_0003_0001 ActionOnPurge=Truncate" | bconsole
echo "purge volume=vchanger_monthly_1.5tb_drives_0003_0001 action=truncate storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole
echo "update volume=vchanger_monthly_1.5tb_drives_0003_0001 volstatus=Append storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole

Example Output:
echo "update volume=vchanger_monthly_1.5tb_drives_0003_0001 ActionOnPurge=Truncate" | bconsoleConnecting to Director windwalker:9101
1000 OK: windwalker-dir Version: 17.2.3 (14 Aug 2017)
Enter a period to cancel a command.
update volume=vchanger_monthly_1.5tb_drives_0003_0001 ActionOnPurge=Truncate
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog" <---- hangs here


Running bconsole directly, and then running the commands from inside bconsole, seems to work most of the time. But after testing a few times, even inside bconsole it hangs on the command. I restart the dir and sd daemons to clear the issue (not sure if the sd daemon needs to be restarted or not).

It seems to be doing this with all of my scripts right now.
Tags:
Steps To Reproduce:
Additional Information: Script used:


bareos_purge_volumes_on_1.5tb_monthly_drive.sh:
#!/bin/bash
if [ "$1" = "" ]
then
    echo "Syntax: $0 <path where bareos volumes exist>"
    echo "I.e. $0 /mnt/vchanger/Monthly_1.5TB_01"
    exit 1
fi

echo "update slots storage=vchanger_monthly_1.5tb" | bconsole

for vol in $(ls $1/volumes/vchanger_monthly_1.5tb_drives_*)
do
    echo "update volume=$(basename $vol) ActionOnPurge=Truncate" | bconsole
    echo "purge volume=$(basename $vol) action=truncate storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole
    echo "update volume=$(basename $vol) volstatus=Append storage=vchanger_monthly_1.5tb pool=Off-site_Pool" | bconsole
done



System Description
Attached Files: windwalker-dir.trace (214,144 bytes) 2017-10-04 02:20
https://bugs.bareos.org/file_download.php?file_id=263&type=bug
windwalker-dir.trace_20171008 (10,636 bytes) 2017-10-08 15:50
https://bugs.bareos.org/file_download.php?file_id=265&type=bug
windwalker-dir.trace_20171008_2 (57,958 bytes) 2017-10-08 16:01
https://bugs.bareos.org/file_download.php?file_id=266&type=bug
Notes
(0002768)
Ryushin   
2017-10-02 04:40   
BTW, this is occurring on two different installations.
(0002770)
joergs   
2017-10-02 16:47   
I don't think that this issue is related to the catalog. However, there have been changes to the prune/purge and ActionOnPurge commands.

However, I'm not sure that your script, even when it runs, does what you expect.
In my example, with 17.2.4-rc1, volumes don't get truncated.

I would use:
<your volume loop>
  purge storage=File pool=Full volume=$VOLUME
# truncate all volumes with volstatus=Purged in pool Full
truncate volstatus=Purged pool=Full yes

Setting volstatus to append should not be required.
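The loop from the original script, rewritten with the commands suggested here, might look like this (a sketch; storage "File" and pool "Full" are the names from this note, and the emit stub stands in for piping to bconsole):

```shell
# Flow suggested in this note: purge each volume, then run a single
# truncate pass over all volumes now marked Purged in the pool.
emit() { printf '%s\n' "$1"; }   # real use: emit() { echo "$1" | bconsole; }

purge_and_truncate() {
    pool=$1 storage=$2
    shift 2
    for vol in "$@"; do
        emit "purge storage=$storage pool=$pool volume=$vol"
    done
    # one truncate for everything purged above; no volstatus=Append needed
    emit "truncate volstatus=Purged pool=$pool yes"
}

purge_and_truncate Full File vchanger_monthly_1.5tb_drives_0003_0001
```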

Back to your main problem:
no idea. Try enabling debug before the commands. Maybe the problem then becomes obvious:

setdebug director level=200
(0002772)
Ryushin   
2017-10-04 02:21   
I'll update the script with your new commands. The script used to purge and truncate the volumes just fine.

I had to wait until I could change out the backup drive to another one so I can test again. Ran the script and it hung again. I turned setdebug to 200 and I recorded a trace file. I think everything after line 49 could be deleted (I think the webui queried at that time), but I left it just in case.
(0002776)
Ryushin   
2017-10-07 14:00   
Was the attached trace file useful? Is there anything else I can do?
(0002778)
joergs   
2017-10-08 11:43   
As far as I can see, the relevant part starts at line 686.

windwalker-dir (10): ua_audit.c:143-0 : Console [default] from [192.168.9.1] cmdline update volume=vchanger_monthly_1.5tb_drives_0001_0001 ActionOnPurge=Truncate

I can't see an error there. It did what it should:
windwalker-dir (100): sql_query.c:124-0 called: bool B_DB::sql_query(const char*, int) with query UPDATE Media SET VolJobs=1,VolFiles=12,VolBlocks=832203,VolBytes=53687079247,VolMounts=1,VolErrors=0,VolWrites=832204,MaxVolBytes=53687091200,VolStatus='Append',Slot=1,InChanger=1,VolReadTime=0,VolWriteTime=583967158,LabelType=0,StorageId=3,PoolId=6,VolRetention=7344000,VolUseDuration=0,MaxVolJobs=0,MaxVolFiles=0,Enabled=1,LocationId=0,ScratchPoolId=0,RecyclePoolId=0,RecycleCount=0,Recycle=1,ActionOnPurge=1,MinBlocksize=0,MaxBlocksize=0 WHERE VolumeName='vchanger_monthly_1.5tb_drives_0001_0001'


Also the follow up commands also got through:
windwalker-dir (10): ua_audit.c:143-0 : Console [default] from [192.168.9.1] cmdline purge volume=vchanger_monthly_1.5tb_drives_0001_0001 action=truncate storage=vchanger_monthly_1.5tb pool=Off-site_Pool
...

Did you run the commands again manually? Otherwise I don't see an error here. Adding timestamps to the trace might help to figure out when each command is issued.

setdebug director level=200 timestamp=1

So, I can't reproduce the problem, and the trace file looks as if everything ran through.

Without further information, I'm afraid I can't help with this.
(0002779)
Ryushin   
2017-10-08 15:46   
Newish error, I've seen it a couple of times.

Ran: ./bareos_purge_volumes_on_1.5tb_monthly_drive.sh /mnt/vchanger/Monthly_1.5TB_01
Output:
Connecting to Director windwalker:9101
1000 OK: windwalker-dir Version: 17.2.3 (14 Aug 2017)
Enter a period to cancel a command.
update slots storage=vchanger_monthly_1.5tb
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Connecting to Storage daemon vchanger_monthly_1.5tb at windwalker:9103 ...
3306 Issuing autochanger "list" command. <---hangs here

I've attached another trace file.
(0002780)
Ryushin   
2017-10-08 15:59   
Restarting bareos-dir and bareos-sd daemons got it to get past the list command.

Output:
Connecting to Director windwalker:9101
1000 OK: windwalker-dir Version: 17.2.3 (14 Aug 2017)
Enter a period to cancel a command.
update volume=vchanger_monthly_1.5tb_drives_0001_0001 ActionOnPurge=Truncate
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog" <--- hangs here

I've attached another trace file.
(0002804)
Ryushin   
2017-10-19 14:49   
Was the latest trace file helpful? Is there anything else I can provide?
(0002849)
Ryushin   
2017-12-28 00:11   
(Last edited: 2017-12-28 00:13)
I'm thinking the issue might be that the mysql operation takes so long
to complete that the script appears to hang, since bconsole never sends back a
response. If the mysql operation completes quickly, the script
appears to work. Only a theory at this point while watching mytop.

MySQL on localhost (10.0.32) load 1.78 1.23 1.15 2/1055 5752 up 19+09:45:45 [16:08:59]
 Queries: 10.6k qps: 0 Slow: 0.0 Se/In/Up/De(%): 81471/00/00/00
 Sorts: 0 qps now: 1 Slow qps: 0.0 Threads: 4 ( 2/ 6) 00/00/00/00
 Key Efficiency: 0.1% Bps in/out: 0.2/ 25.3 Now in/out: 21.3/ 3.0k

       Id User Host/IP DB Time Cmd State Query
       -- ---- ------- -- ---- --- ----- ----------
      266 snort localhost snort 738 Sleep
   468041 bareos localhost bareos 72 Query updating DELETE FROM File WHERE JobId IN (4990)
   466542 bareos localhost bareos 9 Sleep
   462612 root localhost 0 Query init show full processlist

(0002850)
Ryushin   
2017-12-28 01:55   
(Last edited: 2017-12-28 01:57)
Actually, I don't think this is a bug after all. It's a mysql problem.
The script is just taking far, far longer to run, to the point that it seems
hung. mytop is still showing the query after what must be over an hour at this point:

MySQL on localhost (10.0.32) load 2.10 2.48 2.43 1/1106 30277 up 19+11:29:46 [17:53:00]
 Queries: 15.3k qps: 0 Slow: 2.0 Se/In/Up/De(%): 56385/00/00/00
 Sorts: 0 qps now: 1 Slow qps: 0.0 Threads: 4 ( 2/ 8) 125/00/00/00
 Key Efficiency: 0.1% Bps in/out: 0.2/ 36.7 Now in/out: 21.4/ 3.1k

       Id User Host/IP DB Time Cmd State Query
       -- ---- ------- -- ---- --- ----- ----------
   468894 bareos localhost bareos 3372 Query updating DELETE FROM File WHERE JobId IN (5034,5035,5036,5037,503
      266 snort localhost snort 1342 Sleep
   466542 bareos localhost bareos 10 Sleep
   462612 root localhost 0 Query init show full processlist


Is there some kind of mysql schema code update that fixes how long
the DELETE FROM File WHERE JobId IN <JOB> statement takes, or some kind
of optimization I can make?

The query is just taking far longer since the schema update.

(0002851)
Ryushin   
2017-12-28 13:42   
After four hours I stopped waiting for it to finish and I went to bed. It had only gotten to 47
out of 140 volumes. I spent quite a bit of time trying to improve the performance
of mysql with no noticeable improvement as I'm definitely I/O bound right now. The
File.ibd database is over 26GB in size. I run a monthly mysql optimize.
The mysql database resides on zfs raidz2 volume made up of four 300GB 15K drives.

My mysql performance parameters are as follows:
default_storage_engine=innodb
innodb_file_per_table=1
innodb_buffer_pool_size=4096M
innodb_log_file_size=128M
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
innodb_buffer_pool_instances=8
innodb_thread_concurrency=8
innodb_checksum_algorithm=none
innodb_doublewrite = 0

Not sure what else I can try and do to solve this. Before the schema update, the
longest it took was about 45 minutes.
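A first diagnostic step for the slow statement might be to check whether it can use an index on File.JobId (a sketch; run against the catalog database, and note that index layouts differ between schema versions):

```sql
-- Table and column names taken from the process list above;
-- EXPLAIN for DELETE is available in MariaDB 10.0+ / MySQL 5.6+.
EXPLAIN DELETE FROM File WHERE JobId IN (5034, 5035);
SHOW INDEX FROM File WHERE Column_name = 'JobId';
```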

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
857 [bareos-core] vmware plugin tweak always 2017-10-05 06:01 2019-09-27 16:32
Reporter: falk Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: assigned Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: After Upgrade to Centos 7.4 connection to Vsphere fails
Description: After an upgrade from CentOS 7.3 to 7.4, the connection to the vSphere server fails.
This is the case if you don't have an official certificate on your vSphere server.
Please add this hint to the documentation or to the FAQ, thanks!
Tags:
Steps To Reproduce: None of the configured VMware backup jobs run successfully.
Additional Information: The solution is easy ... in /etc/python/cert-verification.cfg you should set the verification value to disabled:

#[https]
#verify=platform_default
[https]
verify=disable

or you can follow the instructions in https://access.redhat.com/articles/2039753 if you need this only in this one python script...
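The per-script alternative from the linked Red Hat article amounts to passing an unverified SSL context to the HTTPS calls instead of disabling verification system-wide (a sketch; the SmartConnect line in the comment is an assumption about the VMware plugin's internals):

```python
import ssl

# Build a context that accepts the vSphere server's self-signed certificate.
context = ssl.create_default_context()
context.check_hostname = False          # skip hostname check...
context.verify_mode = ssl.CERT_NONE     # ...and skip chain verification

# e.g. with pyVmomi (assumed plugin internals):
# si = SmartConnect(host=vcserver, user=vcuser, pwd=vcpass, sslContext=context)
```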

System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1063 [bareos-core] General feature always 2019-02-22 23:49 2019-09-25 01:24
Reporter: cm@p-i-u.de Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Add option to set priority to schedule Resource
Description: Currently it is still not possible to schedule weekly and monthly pool backups on Fridays, because two jobs can overlap with a schedule like this
(some months have 5 Fridays, others only 4):

Schedule {
  Name = "Default"
  Run = pool=Daily mon-thu at 22:30
  Run = pool=Weekly 1st-4th fri at 22:30
  Run = pool=Monthly last fri at 22:30
}

To solve this issue it would be great to have a priority keyword added to the scheduler, similar to the one in the job resource. With this option it would be possible to define which job takes precedence if more than one job matches the scheduler criteria.

For example like this:
run Weekly, but have Monthly take precedence if both would match

Schedule {
  Name = "Default"
  Run = pool=Daily mon-thu at 22:30
  Run = pool=Weekly 1st-4th fri at 22:30 priority = 20
  Run = pool=Monthly last fri at 22:30 priority = 10
}
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003450)
arogge   
2019-07-12 10:57   
you can work around this by scheduling monthly five minutes early and disallowing duplicate jobs.
While I think you're right that we need something like "every friday, but the last", I would assume that priority in this context would override the job's priority (like pool and level do) and not declare the priority of this schedule entry.

Can you update your proposal so we don't have this kind of ambiguity?
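The workaround could look like this (a sketch based on the schedule from the description; the job name and the placement of the Allow Duplicate Jobs directive are assumptions):

```
Schedule {
  Name = "Default"
  Run = pool=Daily mon-thu at 22:30
  Run = pool=Weekly 1st-4th fri at 22:30
  Run = pool=Monthly last fri at 22:25   # five minutes before the weekly run
}

Job {
  Name = "backup-job"
  Schedule = "Default"
  Allow Duplicate Jobs = no
  ...
}
```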
(0003580)
cm@p-i-u.de   
2019-09-23 22:19   
The problem with the workaround is that one job is then marked as canceled, which confuses the end user (who, in our SME scenario, is set up to control backups via the webui).
Explaining to the user that the job marked as canceled (failed) actually isn't failed does not lead in the right direction for handling failed jobs.

Regarding priority, I meant that the schedule itself has priorities, not in the sense of how priorities are handled on jobs, but in the way that a schedule with a higher priority overrides the lower-priority ones.
For example, having 5 schedules scheduled at the same point in time, the one with the highest priority wins.
(0003581)
arogge   
2019-09-24 09:04   
I totally understand your request and the drawbacks of the automatic cancelling.

The main problem I have right now is that the configuration syntax you propose is ambiguous and misleading.
If you can come up with something that isn't misleading, we might consider putting this on the agenda.
(0003582)
cm@p-i-u.de   
2019-09-25 01:19   
Then maybe the best thing is to find another word instead of priority, because with the implementation shown above there are really a lot of things possible where more than one condition applies.

For example (a little bit contrived, but just to illustrate it clearly): you schedule a job on the 30th of the month, which may also be the last day of the month and also a Friday. Having schedules for Fridays and for the last day of the month in addition to the schedule for the 30th of the month, you could control which one takes precedence.

Maybe "precedence" is also the word to be used :-)
(0003583)
cm@p-i-u.de   
2019-09-25 01:24   
"rank" also comes to mind

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1110 [bareos-core] file daemon minor always 2019-09-05 13:07 2019-09-19 15:31
Reporter: stephand Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: new Product Version: 18.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: BareosFdPluginBaseclass:parse_plugin_definition() does not allow to override plugin options using FD Plugin Options in Job
Description: Currently, the code does not allow to override plugin options set from FileSet by using
the FD Plugin Options parameter in a Job resource.
As a user/admin, I want to be able to override the more general settings (FileSet) by more
specific settings (Job).

See
https://github.com/bareos/bareos/blob/02f72235abaa5acacc5e672bbe6af1a9253f9479/core/src/plugins/filed/BareosFdPluginBaseclass.py#L99
Tags: fd, plugin
Steps To Reproduce: Example FileSet:

FileSet {
  Name = "vm_generic_fileset"
  Include {
    Options {
      ...
    }
    Plugin = "python:module_path=/usr/lib64/bareos/plugins/vmware_plugin:module_name=bareos-fd-vmware:dc=mydc:folder=/my/vmfolder:vmname=myvm:vcserver=myvc.example.com:vcuser=bakadm@vsphere.local:vcpass=myexamplepassword"
  }
}

And example Job definition:

Job {
  Name = "vm_test02_job"
  JobDefs = "DefaultJob"
  FileSet = "vm_generic_fileset"
  FD Plugin Options = "python:folder=/dmz:vmname=fw01"
}

The above options from "FD Plugin Options" currently do not override the options declared in the FileSet;
it is only possible to add options not yet declared in the FileSet.
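The requested override semantics can be sketched as a dict merge where same-named keys from the Job's "FD Plugin Options" replace those from the FileSet (the parse_options helper and the merge are illustrative, not the plugin's actual parser, which may handle ':' inside values differently):

```python
def parse_options(option_string):
    """Turn 'python:key=val:key2=val2' into a dict, dropping the prefix."""
    return dict(
        part.split("=", 1)
        for part in option_string.split(":")
        if "=" in part
    )

fileset_opts = parse_options(
    "python:module_name=bareos-fd-vmware:folder=/my/vmfolder:vmname=myvm"
)
job_opts = parse_options("python:folder=/dmz:vmname=fw01")

# Job-level settings win on conflict; FileSet provides the defaults.
merged = {**fileset_opts, **job_opts}
```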
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
987 [bareos-core] director major random 2018-07-18 10:42 2019-09-15 16:17
Reporter: franku Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 17.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 17.2.8  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Canceling a job leads to a director crash (TT4200333)
Description: When canceling a job using bconsole the director can occasionally crash with a coredump.

It is likely that this appears as a result of a race condition where a signal is being sent to a job-thread. The job's thread_id used is a member in the JobControlRecord class whose memory could be deleted meanwhile.
Tags:
Steps To Reproduce: This issue appears very seldom. No way yet to reproduce reliably.
Additional Information: Excerpt from the coredump:

0000001 0x00007f4107389c14 in signal_handler (sig=11) at signal.c:240
0000002 <signal handler called>
0000003 __pthread_kill (threadid=139913685083904, signo=signo@entry=12) at ../sysdeps/unix/sysv/linux/pthread_kill.c:40
0000004 0x00007f4107377434 in JCR::my_thread_send_signal (this=this@entry=0x558a5e446318, sig=sig@entry=12) at jcr.c:682
0000005 0x0000558a5b0983ec in cancel_file_daemon_job (ua=ua@entry=0x7f3f0c00ed28, jcr=jcr@entry=0x558a5e446318) at fd_cmds.c:1080
System Description
Attached Files:
Notes
(0003074)
franku   
2018-07-18 15:13   
Current solution: Refactor the function that frees JobControlRecord (JCR) memory in order to lock the JCR mutex consecutively.
(0003572)
therm   
2019-09-15 16:17   
This one affects me too. We have a lot of copy and migrate jobs. Canceling them causes the director to crash about 50% of the time. If I can provide anything to get this fixed, please let me know.
Regards,
Dennis

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1109 [bareos-core] director trivial always 2019-08-27 18:20 2019-08-27 18:20
Reporter: Jacky Platform: Windows 7  
Assigned To: OS: Windows  
Priority: normal OS Version: 7  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error: No Job status returned from FD.
Description: Configured according to the steps in the official documentation, backups from a Windows 7 client fail.
The specific error message is as follows:
bareos-dir JobId 40: Error: Bareos bareos-dir 18.2.5 (30Jan19):
Build OS: Linux-4.4.92-6.18-default debian Debian GNU/Linux 9.7 (stretch)
JobId: 40
Job: BackupCatalog.2019-08-27_17.56.22_37
Backup Level: Full
Client: "zhangxianseng-fd" 18.2.5 (30Jan19) Microsoft Windows 7 Ultimate Edition Service Pack 1 (build 7601), 64-bit,Cross-compile,Win64
FileSet: "Windows All Drives" 2019-08-24 00:12:06
Pool: "Full" (From command line)
Catalog: "MyCatalog" (From Client resource)
Storage: "FileStorage1" (From Job resource)
Scheduled time: 27-8月-2019 17:56:22
Start time: 27-8月-2019 17:56:25
End time: 27-8月-2019 17:58:31
Elapsed time: 2 mins 6 secs
Priority: 11
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s):
Volume Session Id: 2
Volume Session Time: 1566899611
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 1
SD Errors: 0
FD termination status: Error
SD termination status: Waiting on FD
FD Secure Erase Cmd:
Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
Termination: *** Backup Error ***

2019-08-27 17:56:31 bareos-dir JobId 40: Fatal error: Authorization key rejected by File Daemon bareos-dir.
2019-08-27 17:56:31 bareos-dir JobId 40: Fatal error: Unable to authenticate with File daemon at "192.168.128.216:9102". Possible causes:
Passwords or names not the same or
TLS negotiation failed or
Maximum Concurrent Jobs exceeded on the FD or
FD networking messed up (restart daemon).
2019-08-27 17:56:26 bareos-dir JobId 40: Connected Storage daemon at bareos:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
2019-08-27 17:56:26 bareos-dir JobId 40: Using Device "FileStorage1" to write.
2019-08-27 17:56:26 bareos-dir JobId 40: Probing client protocol... (result will be saved until config reload)
2019-08-27 17:56:26 bareos-dir JobId 40: Fatal error: Connect failure: ERR=error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
2019-08-27 17:56:26 bareos-dir JobId 40: TLS negotiation failed (while probing client protocol)
2019-08-27 17:56:25 bareos-dir JobId 40: Start Backup JobId 40, Job=BackupCatalog.2019-08-27_17.56.22_37
2019-08-27 17:56:24 bareos-dir JobId 40: shell command: run BeforeJob "/usr/lib/bareos/scripts/make_catalog_backup.pl MyCatalog"
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
477 [bareos-core] file daemon minor have not tried 2015-06-08 13:02 2019-08-08 14:56
Reporter: RoyK Platform: Windows  
Assigned To: OS: any  
Priority: normal OS Version: 8  
Status: acknowledged Product Version: 14.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact: yes
bareos-15.2: action: will care
bareos-14.2: impact: yes
bareos-14.2: action: none
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos-fd fails with IPv6
Description: If bareos-fd is configured with IPv6 on Windows, the director just hangs when attempting to contact the fd. If the IPv6 config is removed, the director hangs 0000004:0000001 minute before attempting IPv4, and things work. If the IPv4 address is hardcoded and the IPv6 config removed on the fd, things work as normal. We use IPv6 with all Linux clients and the plan was to use it with all Windows clients as well, but this does not seem to be possible at the moment.
Tags:
Steps To Reproduce: bareos-fd.conf - tested on win2k8r2 and win2k12r2, both 64bit

Director {
  Name = urd.my.tld-dir
  Password = "supersecret"
}

FileDaemon {
  Name = somewinbox.my.tld-fd
  Maximum Concurrent Jobs = 20
  FDAddresses = {ipv6 = {port = 9102}}
}
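Until this is fixed, a possible interim configuration might be to have the fd listen explicitly on both protocols (a sketch; the dual-listener FDAddresses syntax is an assumption based on the documented address-list format, and the names come from the reproduction config above):

```
FileDaemon {
  Name = somewinbox.my.tld-fd
  Maximum Concurrent Jobs = 20
  FDAddresses = {
    ipv6 = { port = 9102 }
    ipv4 = { port = 9102 }
  }
}
```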
Additional Information:
System Description
Attached Files:
Notes
(0001871)
RoyK   
2015-10-07 19:30   
Just checked with 15.2 - same issue
(0002337)
eye69   
2016-08-20 10:27   
Just checked with freshly downloaded "winbareos-16.3.1.1471620779.cae6abe-postvista-64-bit-r203.1.exe" on Windows 10 Pro, build 10586.545

When trying to contact the file daemon through IPv6, it just goes to 25% CPU usage and doesn't respond.
(0002381)
tigerfoot   
2016-10-12 14:10   
eye69 or RoyK: would any of you be able to start bareos-fd in debug mode?
The following option should be added to the Windows service:
-d 200
(0002406)
Lee   
2016-10-24 13:11   
Hi there,

Please find trace for IPv6 listener below:



bareos-fd (100): lib/parse_conf.c:151-0 config file = C:\ProgramData\Bareos\bareos-fd.d/*/*.conf
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
bareos-fd (90): filed/filed_conf.c:445-0 Inserting Director res: bareos-dir
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
id48269-fd (100): lib/jcr.c:141-0 read_last_jobs seek to 192
id48269-fd (100): lib/jcr.c:148-0 Read num_items=0
id48269-fd (150): filed/fd_plugins.c:1664-0 plugin dir is NULL
id48269-fd (10): filed/socket_server.c:96-0 filed: listening on port 9102
id48269-fd (100): lib/bnet_server_tcp.c:170-0 Addresses host[ipv6;0.0.0.0;9102]




************************

For comparison with IPv4 trace, i've provided that also (With a successful connect from director):



bareos-fd (100): lib/parse_conf.c:151-0 config file = C:\ProgramData\Bareos\bareos-fd.d/*/*.conf
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
bareos-fd (90): filed/filed_conf.c:445-0 Inserting Director res: Ziff-dir
bareos-fd (100): lib/lex.c:356-0 glob C:\ProgramData\Bareos\bareos-fd.d/*/*.conf: 4 files
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/client/myself.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-dir.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/director/bareos-mon.conf
bareos-fd (100): lib/lex.c:250-0 open config file: C:\ProgramData\Bareos\bareos-fd.d/messages/Standard.conf
id48269-fd (100): lib/jcr.c:141-0 read_last_jobs seek to 192
id48269-fd (100): lib/jcr.c:148-0 Read num_items=0
id48269-fd (150): filed/fd_plugins.c:1664-0 plugin dir is NULL
id48269-fd (10): filed/socket_server.c:96-0 filed: listening on port 9102
id48269-fd (100): lib/bnet_server_tcp.c:170-0 Addresses host[ipv4;0.0.0.0;9102]
id48269-fd (110): filed/socket_server.c:63-0 Conn: Hello Director Ziff-dir calling

id48269-fd (110): filed/socket_server.c:69-0 Got a DIR connection at 24-Oct-2016 12:07:53
id48269-fd (120): filed/dir_cmd.c:630-0 Calling Authenticate
id48269-fd (200): lib/bsys.c:191-0 pthread_cond_timedwait sec=0 usec=100
id48269-fd (50): lib/cram-md5.c:68-0 send: auth cram-md5 <29108.1477307273@id48269-fd> ssl=0
id48269-fd (100): lib/cram-md5.c:123-0 cram-get received: auth cram-md5 <1182722634.1477307273@Ziff-dir> ssl=0
id48269-fd (99): lib/cram-md5.c:143-0 sending resp to challenge: w7k/nCcyQCwlj+R79x/A2D
id48269-fd (200): lib/bsys.c:191-0 pthread_cond_timedwait sec=0 usec=100
id48269-fd (120): filed/dir_cmd.c:632-0 OK Authenticate
id48269-fd (100): filed/dir_cmd.c:495-0 <dird: status
id48269-fd (100): filed/dir_cmd.c:506-0 Executing status command.
id48269-fd (200): lib/runscript.c:149-0 runscript: running all RUNSCRIPT object (ClientAfterJob) JobStatus=C
id48269-fd (150): filed/fd_plugins.c:327-0 No bplugin_list: generate_plugin_event ignored.
id48269-fd (100): filed/dir_cmd.c:568-0 Done with free_jcr
(0002577)
RoyK   
2017-02-20 23:01   
Seems this is still an issue in 16.2. Btw, this is not a 'minor' issue, since IPv6 should be pretty important by now.
(0002946)
kjetilho   
2018-03-15 00:05   
Still an issue in 17.2.4-8.1 on Windows 2012R2 (64-bit). Same symptoms; the last message in the trace is

foo.net-fd (100): lib/bnet_server_tcp.c:174-0 Addresses host[ipv6;0.0.0.0;9102]

and when the Director tries to connect, one CPU core starts spinning. Please let me know if I can assist in debugging this.
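A possible workaround (untested here; a sketch based on the documented FDAddresses directive, with placeholder names) is to bind the File Daemon explicitly to both protocols instead of relying on the default listener:

```
FileDaemon {
  Name = client-fd
  FD Addresses = {
    ipv4 = { addr = 0.0.0.0; port = 9102 }
    ipv6 = { addr = ::; port = 9102 }
  }
}
```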

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1080 [bareos-core] director minor have not tried 2019-04-24 12:49 2019-08-07 08:39
Reporter: unki Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Error on 'JobBytes sum select' while a backup job was running
Description: While a backup job was running - a Differential backup that was just about to finish - `bareos-dir` crashed (segmentation violation), leaving behind only these log lines:

```
24-Apr 10:17 backupsrv-sd JobId 19706: Releasing device "DiffPool4" (/srv/bareos/pool).
24-Apr 10:17 backupsrv-sd JobId 19706: Elapsed time=35:10:48, Transfer rate=516.4 K Bytes/second
24-Apr 10:17 bareos-dir JobId 19706: Insert of attributes batch table with 197406 entries start
24-Apr 10:17 bareos-dir JobId 19706: Insert of attributes batch table done
24-Apr 10:17 bareos-dir JobId 19706: Fatal error: cats/sql_get.cc:1481 cats/sql_get.cc:1481 query SELECT SUM(JobBytes) FROM Job WHERE ClientId = 119 AND JobId != 19706 AND SchedTime > TIMESTAMP '1577823450-01-00 03:38:02' failed:
ERROR: date/time field value out of range: "1577823450-01-00 03:38:02"
LINE 1: ...AND JobId != 19706 AND SchedTime > TIMESTAMP '157782345...
                                                             ^
HINT: Perhaps you need a different "datestyle" setting.

24-Apr 10:17 bareos-dir JobId 19706: Error: JobBytes sum select failed: ERR=
24-Apr 10:17 bareos-dir JobId 19706: Warning: Error getting Quota value: ERR=JobBytes sum select failed: ERR=
```

followed by

```
Apr 24 10:17:51 backup2 bareos-dir[6647]: BAREOS interrupted by signal 11: Segmentation violation
```

Somehow this UNIX timestamp made it into the datetime string.

It's Bareos v18.2.5 using a PostgreSQL database (9.6.11) on Debian Stretch (9.8).
Tags:
Steps To Reproduce:
Additional Information: It is possibly this SQL Query - defined in `core/src/cats/postgresql_queries.inc`

```
/* 0059_get_quota_jobbytes.postgresql */
"SELECT SUM(JobBytes) "
  "FROM Job "
 "WHERE ClientId = %s "
   "AND JobId != %s "
   "AND SchedTime > TIMESTAMP '%s' "
,
```
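The malformed value looks like a raw time_t formatted into the year field of the `%s` timestamp. As a hypothetical illustration of this bug class (not the actual Bareos code), a conversion helper that checks the broken-down-time call before formatting would refuse to emit garbage into the SQL string:

```c
/* Hypothetical sketch, NOT the actual Bareos code: format a SchedTime
 * for the quota query. If the gmtime_r() result were used without
 * checking for failure, or an uninitialized struct tm were formatted,
 * garbage such as "1577823450-01-00 03:38:02" could reach the query. */
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Returns 0 on success and writes "YYYY-MM-DD HH:MM:SS" into buf;
 * returns -1 if the conversion fails, so the caller can skip the query
 * instead of sending an invalid TIMESTAMP literal to PostgreSQL. */
static int format_sched_time(time_t t, char *buf, size_t len)
{
    struct tm tm;
    if (gmtime_r(&t, &tm) == NULL) {
        return -1;
    }
    if (strftime(buf, len, "%Y-%m-%d %H:%M:%S", &tm) == 0) {
        return -1;
    }
    return 0;
}
```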
System Description
Attached Files:
Notes
(0003448)
arogge   
2019-07-12 10:31   
This is one of those "I looked at the code and this cannot happen" issues.

The timestamp is calculated and then converted to a datetime string before being injected into the query, so what you observed should be impossible.
Is this somehow reproducible?
(0003547)
unki   
2019-08-01 07:14   
No, luckily this never occurred again - I guess you can feel free to close this issue :)
(0003558)
unki   
2019-08-07 08:39   
Just this morning this issue hit me again - unbelievable :)

It's still on bareos v18.2.5.

This message was logged to syslog:

Aug 07 08:20:45 backup2 bareos-dir[663]: BAREOS interrupted by signal 11: Segmentation violation

This appeared in /var/log/bareos/bareos.log:

07-Aug 08:20 backup-sd JobId 33591: Releasing device "DiffPool4" (/srv/bareos/pool).
07-Aug 08:20 backup-sd JobId 33591: Elapsed time=129:15:39, Transfer rate=1.745 M Bytes/second
07-Aug 08:20 bareos-dir JobId 33591: Insert of attributes batch table with 7017 entries start
07-Aug 08:20 bareos-dir JobId 33591: Insert of attributes batch table done
07-Aug 08:20 bareos-dir JobId 33591: Fatal error: cats/sql_get.cc:1481 cats/sql_get.cc:1481 query SELECT SUM(JobBytes) FROM Job WHERE ClientId = 119 AND JobId != 33591 AND SchedTime > TIMESTAMP '1577823450-01-00 01:40:56' failed:
ERROR: date/time field value out of range: "1577823450-01-00 01:40:56"
LINE 1: ...AND JobId != 33591 AND SchedTime > TIMESTAMP '157782345...
                                                             ^
HINT: Perhaps you need a different "datestyle" setting.

07-Aug 08:20 bareos-dir JobId 33591: Error: JobBytes sum select failed: ERR=
07-Aug 08:20 bareos-dir JobId 33591: Warning: Error getting Quota value: ERR=JobBytes sum select failed: ERR=

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1107 [bareos-core] director major always 2019-08-02 17:26 2019-08-02 18:22
Reporter: egedwillo Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: new Product Version: 17.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backup Job Stuck on Bareos 17.2
Description: Hi,

I have a problem with the backup of my database files in bareos 17.2.

The backup job gets stuck when the volume size reaches approximately 6-7 GB and displays the message "Elapsed time=00:10:37, Transfer rate=11.39 M Bytes/second"; after several hours it shows a timeout error and the backup job fails.

I have another 20 backup jobs with the same configuration and procedure. This is the only job that fails.

The size of the backup files is 18 GB in total; after compression, the volumes total approximately 11-12 GB.

Bareos sd storage config and device config:

Storage {
  Name = hanab1-carp-sd
  Maximum Concurrent Jobs = 20
  SDAddress = 10.0.1.50
}

Device {
  Name = StorageHanaCarp
  Media Type = File
  Archive Device = /hana/backup/backupbareos
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Bareos Dir Pool config and storage config:


Storage {
  Name = hanab1-carp-sd
  Address = -------------
  Password = -------------
  Device = StorageHanaCarp
  Media Type = File
}

Pool {
  Name = carpFullHanaDB
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 14 days # How long should the Full Backups be kept?
  Maximum Volume Bytes = 200G # Limit Volume size to something reasonable
  Maximum Volumes = 100 # Limit number of Volumes in Pool
  Label Format = carp # Volumes will be labeled "carp<volume-id>"
}
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos-1.JPG (71,503 bytes) 2019-08-02 17:26
https://bugs.bareos.org/file_download.php?file_id=386&type=bug
jpg

bareos-2.JPG (25,969 bytes) 2019-08-02 17:26
https://bugs.bareos.org/file_download.php?file_id=387&type=bug
jpg

bareos-3.JPG (62,018 bytes) 2019-08-02 18:22
https://bugs.bareos.org/file_download.php?file_id=388&type=bug
jpg
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1020 [bareos-core] webui major always 2018-10-10 09:37 2019-07-31 18:03
Reporter: linkstat Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: assigned Product Version: 18.2.4-rc1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Webui cannot restore to a client if it has spaces in its name
Description: All my clients have names containing spaces, like "client-fd using Catalog-XXX". Correctly handled (i.e., enclosing the name in quotation marks, or escaping the space with \), this has never been a problem... until now. Webui can even run backup jobs (previously defined in the configuration files) and has had no problems with the spaces. But when it came time to restore something, it just does not seem able to properly handle character strings that contain spaces. Apparently, it cuts the string at the first space found (as you can see in the attached image).
Tags:
Steps To Reproduce: Define a client whose name contains spaces, such as "hostname-fd Testing Client".
Try to restore a backup from Webui to that client (it does not matter whether the backup was originally made on that client or whether the newly defined client is a new destination for restoring a backup previously made on another client).
Webui will fail, saying "invalid client argument: hostname-fd". As you can see, Webui "cuts" the client's name at the first space, and since there is no client named hostname-fd, the task fails; or worse, if there happens to be a client whose name matches the portion before the first space, Webui will restore to the wrong client.
Additional Information: bconsole has no problem with clients whose names contain spaces (provided, of course, that the spaces are correctly handled by the human operator typing the commands, either by enclosing the name in quotation marks or by escaping the spaces with a backslash).
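For example, both of these bconsole forms address such a client correctly (the client name is illustrative):

```
*restore client="hostname-fd Testing Client"
*restore client=hostname-fd\ Testing\ Client
```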
System Description
Attached Files: Bareos - Can not restore when a client name has spaces in their name.jpg (139,884 bytes) 2018-10-10 09:37
https://bugs.bareos.org/file_download.php?file_id=311&type=bug
jpg
Notes
(0003546)
linkstat   
2019-07-31 18:03   
Hello!

Any news regarding this problem? (Or any idea how to patch it temporarily so that Webui can be used for the case described?)
Sometimes it is tedious to use bconsole all the time instead of Webui...

Regards!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
985 [bareos-core] api feature have not tried 2018-07-15 19:38 2019-07-31 15:57
Reporter: enikq Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: acknowledged Product Version: 17.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos Configuration in JSON format
Description: I need the bareos-dir, bareos-fd and bareos-sd configuration in JSON output.
I noticed you can set the bconsole output to JSON with `.api 2`, but when doing so, `show all` still outputs the configuration in the standard format and not in JSON.

Bacula has a CLI tool to output the config in JSON, see: https://fossies.org/linux/bacula/src/dird/bdirjson.c

Does Bareos have such an implementation hidden somewhere too? If not, can you add it?
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003544)
arogge   
2019-07-31 15:57   
Updated this to be a feature request.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
733 [bareos-core] director minor always 2016-12-02 19:01 2019-07-30 16:48
Reporter: hk298 Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: new Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Plugin Options (mssqlvdi) in Job definitions
Description: The manual states that there is a field called "FD Plugin Options" that can be used in the job configuration. I'm trying to specify options for the mssqlvdi plugin; however, when I run a restore job from the console, I always see plugin options = *none*.

The SQL authentication options (serveraddress, instance, database, user, password) are read properly from the fileset configuration that I specify as part of the job. However, options like "replace=yes" are ignored when I add them to the fileset, and they don't seem to be read from the job configuration either. You have to type them in the console using the modify function. This is slightly annoying, and it becomes a real issue when you try to restore from the webui, because in that case there is no way to specify, for example, the "replace" option.
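For reference, a fileset plugin string carrying the options named above typically looks like the following (a sketch; server, instance, database, and credential values are placeholders):

```
FileSet {
  Name = "mssqlvdi-fileset"
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "mssqlvdi:serveraddress=localhost:instance=MSSQLSERVER:database=mydb:user=bareos:password=secret"
  }
}
```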
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1102 [bareos-core] file daemon major sometimes 2019-07-20 14:13 2019-07-26 12:21
Reporter: twdragon Platform: x86_64  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 18.04 LTS  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact: