View Issue Details
ID: 1612
Category: [bareos-core] director
Severity: minor
Reproducibility: have not tried
Date Submitted: 2024-04-17 16:09
Last Update: 2024-04-18 18:42
Reporter: hostedpower
Priority: normal
Status: new
Resolution: open
Projection: none
ETA: none
Summary: Allow Higher Duplicates = Yes not working
Description: Hi,


We created a job with these properties:


JobDefs {
    Name = "FullRJob"
    Type = Backup

    Level = Full

    Client = xxx
    Write Bootstrap = "/var/lib/bareos/%c.bsr"

    Accurate = No
    Messages = Standard

    # Be careful not to start double jobs (since ran every hour by default)
    Allow Duplicate Jobs = No
    Allow Higher Duplicates = Yes

    Allow Mixed Priority = True
    Priority = 10

    Pool = R-Full

    Incremental Backup Pool = R-Incremental
    Full Backup Pool = R-Full

    Run Script {
        RunsWhen = After
        RunsOnFailure = No
        FailJobOnError = No
        RunsOnClient = No
        console = ".bvfs_update jobid=%i"
    }
}


JobDefs {
    Name = "PITRJob"
    JobDefs = "FullRJob" # Use job defaults from FullRJob

    Level = Incremental
}

Then we have a job of type PITRJob; this job uses the PostgreSQL PITR plugin from Bareos and the following schedule:

Schedule {
    Name = "pitr-cycle-standard"

    Run = Incremental hourly at 00:02 # First pick up all last changes
    Run = Full 1/3 at 23:04 # Create new full every 3 days
}

However, this Full job is always cancelled, even when it is a "Higher Duplicate".

JobId 281517 already running. Duplicate job not allowed.


What we want is to first sync all the latest changes from the database with an incremental, and then start a full job as soon as possible after that.

If we started the full first and then the incremental, we would lose up to 1 hour of incremental data, as far as I understand ...

Is there any solution for this? And why is this higher-level duplicate cancelled? It looks like a bug, given the setting "Allow Higher Duplicates = Yes".
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0005898)
bruno-at-bareos   
2024-04-18 17:00   
Hello, you may have missed our announcement that Mantis is being deprecated in favour of GitHub issues.

I'm not sure the parameter combination you're using will work as you want.
Looking at https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_AllowDuplicateJobs, I don't see any mention of "Allow Higher Duplicates".
Grepping the source code for that config parameter, apart from its declaration in the director's config code, it does not appear to be used anywhere else.

So in the end, Allow Duplicate Jobs = no wins in your case.

So it looks like we have a dead parameter that can be deprecated and removed.
(0005900)
hostedpower   
2024-04-18 18:42   
Hi Bruno, thanks a lot. Is there any way then to accomplish what we want? Because the incremental runs quite often, duplicates of the incremental must be avoided; however, a full should never be missed.
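A possible approach, sketched from the documented duplicate-control directives (not verified against this setup): with Allow Duplicate Jobs = No, the directive Cancel Lower Level Duplicates is documented to prefer the job with the highest level, e.g. cancel a waiting or running Incremental so that a Full can run, while same-level duplicates are still suppressed. In the JobDefs this could look like:

    Allow Duplicate Jobs = No
    Cancel Lower Level Duplicates = Yes  # documented to favour the Full over a lower-level Incremental
    Cancel Queued Duplicates = Yes       # same-level duplicates waiting in the queue are cancelled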
(0005901)
hostedpower   
2024-04-18 18:42   
PS: Next time I'll open on github :D

View Issue Details
ID: 1604
Category: [bareos-core] file daemon
Severity: crash
Reproducibility: random
Date Submitted: 2024-02-26 13:31
Last Update: 2024-04-11 12:12
Reporter: Int
Platform: x86
OS: Windows
OS Version: 2016
Priority: normal
Status: new
Product Version: 23.0.1
Resolution: open
Projection: none
ETA: none
Summary: Windows file daemon crashes sometimes during backup job
Description: The Windows file daemon has crashed 5 times in the last three weeks during backup jobs.

It happened on two different Windows systems:
* System IGMS00: Windows Server 2016 x64 Build 14393.6709 - 2 crashes so far
* System IGMS04: Windows 10 Professional 22H2 Build 19045.4046 - 3 crashes so far

I attached the Windows event log entries.

On system IGMS04 I enabled the trace debug output but since then the bug did reappear yet.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: IGMS00 windows fd crash.txt (1,978 bytes) 2024-02-26 13:31
https://bugs.bareos.org/file_download.php?file_id=621&type=bug
IGMS04 windows fd crash.txt (6,392 bytes) 2024-02-26 13:31
https://bugs.bareos.org/file_download.php?file_id=622&type=bug
2024-03-08 IGMS04 windows fd crash.txt (2,060 bytes) 2024-03-11 09:08
https://bugs.bareos.org/file_download.php?file_id=626&type=bug
Notes
(0005809)
Int   
2024-02-26 13:32   
Made a typo - should be "On system IGMS04 I enabled the trace debug output but since then the bug did NOT reappear yet."
(0005828)
Int   
2024-03-11 09:08   
I managed to capture a trace of the Windows file daemon crash.
But the trace file is quite large (121MB compressed) - do you have an upload location where I can place it, or should I provide a download link?
(0005830)
bruno-at-bareos   
2024-03-12 13:28   
Hello, you should have received an invitation for a shared link to upload the trace.
(0005831)
Int   
2024-03-12 14:10   
Hi, unfortunately I did not receive an invitation to a shared link.
(0005834)
bruno-at-bareos   
2024-03-13 09:52   
Thanks, the trace has been received; it may be analyzed when free time allows.
(0005836)
bruno-at-bareos   
2024-03-13 15:17   
Not sure you've seen it, but 23.0.2 was released (with a 23.0.3 pre-release for current), and it contains fixes for certain Windows crashes.
Please try to upgrade and check whether it is still failing.
(0005869)
RobertF.   
2024-03-27 09:30   
I have the same bug on several Windows clients and different Windows Server versions (2012r2, 2019).
Yesterday I updated Bareos to 23.0.2; the error still exists.
(0005870)
bruno-at-bareos   
2024-03-27 09:42   
Dear Mr. Robert Frische, as you are using the subscription channel and are a paying customer with a valid support contract, I would like to track your specific case through the dedicated portal
https://servicedesk.bareos.com

If you don't remember your password there, you can request a new one (which can and should be different from the one used on the customer/sales portal).
See you there.
(0005890)
Int   
2024-04-09 08:21   
Same here - updating Bareos to 23.0.2 did not fix the issue.
(0005891)
bruno-at-bareos   
2024-04-11 10:55   
For Intego: if you still have a bareos-fd version 22 installation, you can try this specific build.

Our developers have prepared a special version that allows them to generate better traces and crash dump reports on Windows.
We would really appreciate it if you could deploy that bareos-fd on one of your crashing hosts.

The specific Bareos binaries can be found here; they are bareos-fd version 22, like the fd you reported:
https://download.bareos.org/experimental/ssura-bareos-22-fix-windows-crash/

To work correctly, you will also have to install dbghelp.dll, which is part of the Windows SDK.
Explanation: https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/
SDK download: https://go.microsoft.com/fwlink/?linkid=2261842
The SDK is very large; the only part needed is the Debugging Tools for Windows.
Before installing, you may want to check whether dbghelp.dll is already present on your system.

Please run the job as previously, with debug level set to 200.

We would also like to have the event log associated with the crash; inside the event log you will see a file path (where the crash dump is saved), and we would like to have that file as well.
Also note that the Windows event might have bareos-fd as its source rather than Bareos, in case you try to filter.
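For reference, the debug level can be raised from bconsole before reproducing the crash, along these lines (the client name IGMS04-fd is only a placeholder for the actual client resource name):

echo "setdebug level=200 trace=1 timestamp=1 client=IGMS04-fd" | bconsole
# run the failing backup job, then reset the debug level:
echo "setdebug level=0 trace=0 timestamp=0 client=IGMS04-fd" | bconsole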
(0005892)
Int   
2024-04-11 11:11   
Hi,

> Specific Bareos binaries can be found here, they are bareos-fd version 22 like the fd you reported.
> https://download.bareos.org/experimental/ssura-bareos-22-fix-windows-crash/

I reported the issue with bareos-fd version 23.0.1 (and now 23.0.2) - not with version 22
(0005893)
bruno-at-bareos   
2024-04-11 11:16   
Yeah, but you're not entitled to support, so this is community effort only. Not much can be done then.
(0005894)
Int   
2024-04-11 11:25   
Well, it was you who asked me to help debug this issue, and I responded to your request.

> Yeah but you're not entitled to support, so only community effort. Not much can be done then,

Well, so far I assumed this was give and take: I find the bugs and spend my time reporting, debugging and testing for free.
If you want to play the "pay me" game, I can do the same. If you want me to run debug tests and work as a software tester for you, ask our sales department for a support contract.
The standard rate for my time is 160€/hour.
(0005895)
bruno-at-bareos   
2024-04-11 12:12   
We also have an experimental build (based on master) which does the same, in the following repository: https://download.bareos.org/experimental/ssura-master-add-windows-trace/

I will let others answer your remark.

View Issue Details
ID: 1598
Category: [bareos-core] director
Severity: major
Reproducibility: have not tried
Date Submitted: 2024-02-09 10:37
Last Update: 2024-03-28 12:25
Reporter: Int
Platform: Linux
OS: RHEL (and clones)
OS Version: 8
Priority: normal
Status: new
Resolution: open
Projection: none
ETA: none
Summary: Bareos 23.0.1 creates new Volume in catalog but no physical File volume in storage
Description: I ran into an issue where Bareos 23.0.1 created a new Volume in the catalog but no physical File volume in storage.

This happened during a running backup job when the Pool reached its configured "Maximum Volumes" limit. The job then got stuck because no free volume was available - message in the job log:

26 2024-02-09 06:25:40 truenas-sd JobId 35326: Job filebackup-s01-fd.2024-02-09_01.00.00_16 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "FileStorageTrueNAS0002" (/var/lib/bareos/storage)
Pool: Incremental
Media type: File

I then increased the "Maximum Volumes" in the Pool config file and issued a "reload" command in bconsole to load the changed configuration.
The job then continued and created a new Volume - message in the job log:

27 2024-02-09 09:51:01 bareos-dir JobId 35326: Created new Volume "Incremental-0920" in catalog.

But Bareos 23.0.1 created the new Volume only in the catalog, not the physical File volume in storage. This led to the following error:

28 2024-02-09 09:51:01 truenas-sd JobId 35326: Warning: stored/mount.cc:248 Open device "FileStorageTrueNAS0002" (/var/lib/bareos/storage) Volume "Incremental-0920" failed: ERR=stored/dev.cc:598 Could not open: /var/lib/bareos/storage/Incremental-0920, ERR=No such file or directory
Tags:
Steps To Reproduce: See Description
Additional Information:
System Description
Attached Files: jobid_35326.log.gz (2,096 bytes) 2024-02-13 14:46
https://bugs.bareos.org/file_download.php?file_id=605&type=bug
jobid_35413.log.gz (1,489 bytes) 2024-02-15 09:30
https://bugs.bareos.org/file_download.php?file_id=606&type=bug
bareos-sd.conf (400 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=607&type=bug
FileStorage.conf (445 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=608&type=bug
jobid_35437.log.gz (1,950 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=609&type=bug
llist-Migrate-0956.log.gz (661 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=610&type=bug
truenas-sd.trace.gz (6,235 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=611&type=bug
JailMountPoint.png (31,827 bytes) 2024-02-20 15:46
https://bugs.bareos.org/file_download.php?file_id=612&type=bug
Bareos_UI.png (34,565 bytes) 2024-03-07 12:40
https://bugs.bareos.org/file_download.php?file_id=623&type=bug
Show Pool.txt (987 bytes) 2024-03-07 12:40
https://bugs.bareos.org/file_download.php?file_id=624&type=bug
Volume parameters.txt (6,639 bytes) 2024-03-07 12:40
https://bugs.bareos.org/file_download.php?file_id=625&type=bug
Notes
(0005761)
bruno-at-bareos   
2024-02-12 13:32   
Just to confirm: you didn't run the "update pool from resource" command after your reload, right?
(0005762)
Int   
2024-02-12 14:02   
I edited the config file /etc/bareos/bareos-dir.d/pool/Incremental.conf with a text editor and increased the "Maximum Volumes".
Then I started bconsole and issued the "reload" command.
I did not do the update pool from resource command after the "reload" command in bconsole.
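For reference, a sketch of the catalog-side update that was skipped (pool name Incremental as in this setup; "update pool=<name>" is meant to copy the resource settings, including Maximum Volumes, into the catalog Pool record):

bconsole <<< "reload"
bconsole <<< "update pool=Incremental"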
(0005770)
bruno-at-bareos   
2024-02-13 14:00   
Hi, we would like to check whether this is something new. We tried to reproduce the case and the software is working as expected.
Increase pool limit, reload, and a new volume is created and then used.

We would like to see the full joblog, which you can extract with (then compress and attach here)
bconsole <<< "list joblog jobid=35326" > /var/tmp/jobid_35326.log

We have seen a similar issue in the past, but mostly it was because the /var/lib/bareos/storage mount point was not available at the time the volume was created, and your error looks really similar.
Otherwise the SD would have created that file and would have failed later, being unable to read the label.
It may be worth the effort to double-check the system's logs from the incident time.
Here the SD can't access the file (and hence the directory).
(0005771)
Int   
2024-02-13 14:46   
I attached the full joblog as requested.

I checked the system's logs of the storage server and it doesn't show any errors during that time.
Later (on Feb 13th) the storage server created new volumes successfully without any changes to config or access rights:

root@truenas-sd:~ # ls -la /var/lib/bareos/storage/
total 114812986408
drwxr-x--- 2 bareos bareos 375 Feb 13 01:38 .
drwxr-x--- 3 bareos bareos 4 Feb 6 14:45 ..
...
-rw-r----- 1 bareos bareos 39998980267 Feb 8 05:19 Incremental-0918
-rw-r----- 1 bareos bareos 39999236662 Feb 9 01:00 Incremental-0919
-rw-r----- 1 bareos bareos 26896987171 Feb 13 01:55 Incremental-0934
-rw-r----- 1 bareos bareos 39999706259 Feb 13 02:04 Incremental-0935
-rw-r----- 1 bareos bareos 32469240858 Feb 13 12:00 Incremental-0936
-rw-r----- 1 bareos bareos 37987976031 Feb 13 01:45 Incremental-0937
-rw-r----- 1 bareos bareos 39999120592 Feb 13 01:38 Incremental-0938
-rw-r----- 1 bareos bareos 35671581125 Feb 13 02:05 Incremental-0939
root@truenas-sd:~ #


I guess the special circumstance here was that the job was running and waiting for a new volume because it had run out of empty/recyclable volumes, and I increased the pool limit while the job was running and waiting.
(0005772)
bruno-at-bareos   
2024-02-13 14:51   
Thanks for the log.

Your assumption "I guess the special circumstance here was that the job was running and waiting for a new volume because it ran out of empty/recyclable volumes. And I increased pool limit while the job was running and waiting." is what we tested and tried to reproduce, without seeing any failures :-( Unfortunately it is much harder to fix something that doesn't fail, or that fails only under unknown circumstances.

I see root@truenas-sd - is the SD detached from the director?
(0005773)
Int   
2024-02-13 15:04   
Yes, bareos-dir is running in a VM with AlmaLinux release 8.9 (Midnight Oncilla)
bareos-sd is running in a jail with FreeBSD 13.2-RELEASE-p9 on TrueNAS 13.0-U6.1
(0005774)
bruno-at-bareos   
2024-02-14 09:48   
Thanks for the details, this may help to reproduce the case.

Are you able to reproduce it easily (maybe by creating a dummy pool with a low volume limit and running a small job on it to recreate the situation)?
(0005775)
Int   
2024-02-15 09:30   
I was able to reproduce it with another pool with low allowed volumes and a small job.
See attached job log.

This time at first nothing happened when I ran the "reload" command in bconsole. The job did not continue and was stuck at

2024-02-15 09:15:52 truenas-sd JobId 35413: Job migrate.2024-02-15_09.10.03_01 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "FileStorageTrueNAS0001" (/var/lib/bareos/storage)
Pool: Migrate
Media type: File

When I ran the "reload" command in bconsole a second time the job did continue and ran into the same issue.
(0005776)
bruno-at-bareos   
2024-02-15 09:39   
Thank you. I will forward this to the dev team.
(0005777)
bruno-at-bareos   
2024-02-15 14:13   
Hi, I'm still failing to reproduce the case. In the meantime:

what is the output of
bconsole <<< "llist volume=Migrate-0950"

Would you mind rerunning the test job with the following set beforehand?

bconsole <<< "setdebug level=100 trace=1 timestamp=1 storage=FileStorageTrueNAS0001"

After the job you can remove the debug level:

bconsole <<< "setdebug level=0 trace=0 timestamp=0 storage=FileStorageTrueNAS0001"

Then check on the storage in /var/lib/bareos and attach the compressed trace file found there (it can be removed afterwards).
(0005780)
Int   
2024-02-16 16:04   
Hi, I collected the information you wanted.

I could not collect the output of
bconsole <<< "llist volume=Migrate-0950"
since I deleted that test volume already. But I collected the output for the newly failed volume "Migrate-0956" instead.

I had to change
bconsole <<< "setdebug level=100 trace=1 timestamp=1 storage=FileStorageTrueNAS0001"
to
bconsole <<< "setdebug level=100 trace=1 timestamp=1 storage=File"
because "FileStorageTrueNAS0001" is the name of the device not the name of the storage.
The storage daemon is using five devices to write jobs in parallel but the issue also happens when only one job is running.
I attached the config files of the storage daemon - maybe they help to reproduce the issue.
(0005784)
bruno-at-bareos   
2024-02-20 10:04   
Thanks for the report, I will check whether our devs can detect something with this. I have not been able to reproduce it so far.
How is /var/lib/bareos/storage/ mounted?
(0005786)
Int   
2024-02-20 15:46   
I created a mount point for the bareos jail which mounts a path from TrueNAS data pool to /var/lib/bareos/storage/ inside the jail.
See screenshot attached.
(0005787)
Int   
2024-02-21 09:55   
I just realized that I forgot to mention that the Bareos storage daemon installed in the bareos jail is "bareos-server-23.0.1_1.pkg" from the official FreeBSD repository:
https://pkg.freebsd.org/FreeBSD:13:amd64/latest/All/bareos-server-23.0.1_1.pkg
(0005788)
bruno-at-bareos   
2024-02-21 09:58   
Would you mind testing whether this also happens with the official Bareos package available here?
https://download.bareos.org/current/FreeBSD_13.2/

You can use our helper script to get the repository installed: https://download.bareos.org/current/FreeBSD_13.2/add_bareos_repositories.sh
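A rough sketch of how that could look inside the jail (the pkg package name below is an assumption; check what the bareos.com repository actually provides):

fetch https://download.bareos.org/current/FreeBSD_13.2/add_bareos_repositories.sh
sh ./add_bareos_repositories.sh   # registers the bareos.com pkg repository
pkg update
pkg install bareos.com-storage    # package name assumed; verify with 'pkg search bareos'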
(0005789)
bruno-at-bareos   
2024-02-21 10:43   
OK, it seems we may be able to reproduce the case; it only appears if the SD has already emitted the first "Please use the label command" message.
(0005790)
Int   
2024-02-21 11:14   
Thanks for the feedback. I will skip the effort of switching to the Bareos repository for now.
Let me know if it becomes necessary to test with the packages from https://download.bareos.org/current/FreeBSD_13.2/
(0005826)
freedomb   
2024-03-07 12:40   
(Last edited: 2024-03-07 12:41)
Hello everybody,

I have a similar problem: today I reduced the Maximum Volumes in the pool file.
After editing the files I ran the "reload" command in bconsole and updated all volumes by running the command sequence update -> 1 -> 13.
Somehow the Bareos UI still shows the old values, and if I run update -> 2 -> 1 (choosing one of the pools) it still shows me the old value.

*version
bareos-dir Version: 23.0.2~pre92.c3dac06f1 (23 February 2024) Ubuntu 22.04.1 LTS ubuntu Ubuntu 22.04.1 LTS

Attached evidence
(0005874)
bruno-at-bareos   
2024-03-27 14:53   
@freedomb, you're hijacking a specific issue with something completely different.
You can't have 10 maximum volumes in Diff, as you already have 25 in ....
(0005888)
freedomb   
2024-03-28 12:25   
Hi Bruno,

I'm sorry if this is different from the original bug.

But is there any procedure to reduce the maximum volumes?

At this moment I don't need so many volumes.

Thank you
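A possible procedure (a sketch only, not an official recommendation): lower Maximum Volumes in the pool resource, push the change into the catalog, and then prune or delete surplus volumes until the count is below the new limit. The volume name used below is hypothetical.

# edit the pool config to lower Maximum Volumes, then:
bconsole <<< "reload"
bconsole <<< "update pool=Diff"             # copy the resource settings into the catalog Pool record
bconsole <<< "list volumes pool=Diff"       # identify purged/unneeded volumes
bconsole <<< "delete volume=Diff-0025 yes"  # repeat per surplus volume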

View Issue Details
ID: 1007
Category: [bareos-core] api
Severity: minor
Reproducibility: always
Date Submitted: 2018-09-12 15:23
Last Update: 2024-03-27 16:11
Reporter: IvanBayan
Assigned To: joergs
Platform: Linux
OS: Ubuntu
OS Version: 16.04
Priority: normal
Status: resolved
Product Version: 17.2.4
Resolution: fixed
Projection: none
ETA: none
Summary: JSON API: 'list files' returns malformed json
Description: File listing of some jobs returns malformed json:

Tags:
Steps To Reproduce: $echo -e '.api 2\nlist files jobid=799'|sudo bconsole
Connecting to Director localhost:9101
1000 OK: mia-backup03-dir Version: 17.2.4 (21 Sep 2017)
Enter a period to cancel a command.
.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}list files jobid=799
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "filenames": [
      {
        "filename": "/usr/sbin/pg_updatedicts"
      },
      {
        "filename": "/usr/sbin/"
      },
      {
        "filename": "/usr/sbin/tzconfig"
.....
      {
        "filename": "/var/lib/systemd/deb-systemd-helper-enabled/postgresql.service.dsh-also"
      },
      {},
      {},
      {},
      {},
      {},
.....
      {},
      {
        "nfo/right/america/antigua": "/usr/share/zoneinfo/right/America/Grand_Turk"
      },
      {
        "nfo/right/america/antigua": "/usr/share/zoneinfo/right/America/Cordoba"
      },
      {
        "nfo/right/america/antigua": "/usr/share/zoneinfo/right/America/Thule"
.......
      {
        "rade-scripts/specification": "/usr/share/doc/postgresql-common/changelog.gz"
      },
      {
        "rade-scripts/specification": "/usr/share/doc/postgresql-common/NEWS.Debian.gz"
      },
      {
        "rade-scripts/specification": "/usr/share/doc/postgresql-common/README.Devel"
      },
      {
        "rade-scripts/specification": "/usr/share/doc/postgresql-common/architecture.html"
      },
      {
        "rade-scripts/specification": "/usr/share/doc/postgresql-common/README.systemd"
      },
      {
        "rade-scripts/specification": "/usr/share/doc/postgresql-common/copyright"
      },
      {
        "rade-scripts/specification": "/usr/share/doc/postgresql-common/dependencies.png"
.....
Additional Information:
System Description
Attached Files: 799_list (93,437 bytes) 2018-09-13 13:57
https://bugs.bareos.org/file_download.php?file_id=307&type=bug
job54.tar.bz2 (13,114 bytes) 2018-09-19 09:02
https://bugs.bareos.org/file_download.php?file_id=308&type=bug
list_files_api2_jobid15799_shorten.log (11,205 bytes) 2024-03-20 14:20
https://bugs.bareos.org/file_download.php?file_id=627&type=bug
Notes
(0003105)
joergs   
2018-09-13 13:51   
This looks strange. Can you provide the output of the file list in API mode 0?

$echo -e 'list files jobid=799'|sudo bconsole
(0003106)
IvanBayan   
2018-09-13 13:57   
I've just attached bconsole output.
(0003118)
Gordon Klimm   
2018-09-19 09:01   
I can confirm this behaviour.
Additionally: by repeatedly executing the .api 2 call, I get (three) different malformed results in varying order.
(0003135)
joergs   
2018-10-09 17:08   
That is very strange. I've tested it on multiple systems, but could not reproduce this bug.

As one list contains >= 2300 files, I also checked a larger backup job (> 6000 files), but still no error.

I've tested newer versions of Bareos, but I've not seen relevant changes after 17.2.4.

Can you give me another hint how to reproduce the problem?

BTW: for what purpose do you need this list in JSON format?
(0003137)
IvanBayan   
2018-10-11 15:28   
At first I thought the error was caused by special characters in the path or by the path length, but after a quick check I haven't found anything suspicious. How can I help you? I can check other file listings from the same job, if it helps.

I was developing a Python module to provide an API for backup restoration; the file list was needed to select which files to restore. Now I use the .bvfs calls for the same thing.
Gordon Klimm attached a full listing with the same issue - did you check it? Maybe if we find something similar between the two cases, we will find what causes the problem.
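For reference, a sketch of the .bvfs approach mentioned above (jobid 799 as in this report; the path is only an example):

echo -e '.api 2\n.bvfs_update jobid=799\n.bvfs_lsdirs jobid=799 path=/\n.bvfs_lsfiles jobid=799 path=/usr/sbin/' | sudo bconsole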
(0003141)
Gordon Klimm   
2018-10-17 11:37   
Here is another observation: it appears as if the error occurs only after n*100 lines of output (api 0 lines).
(0003142)
IvanBayan   
2018-10-17 13:11   
Huh, you are right. But it looks like it occurs not on the n*100th line but on the n*100th file entry; JSON output below:
   908        {
   909          "filename": "/usr/share/zoneinfo/posix/America/Scoresbysund"
   910        },
   911        {
   912          "filename": "/etc/sysctl.d/"
   913        },
   914        {
   915          "filename": "/var/lib/systemd/deb-systemd-helper-enabled/postgresql.service.dsh-also"
   916        },
   917        {
   918          "/porto-novo": "/var/lib/systemd/deb-systemd-helper-enabled/ureadahead.service.dsh-also"
   919        },
   920        {


If you grep for 'filename' entries, you will find something like:
~$ echo -e '.api 2\nlist files jobid=799'|sudo bconsole|grep \"/|nl|head -n 203|tail -n 4
   200	        "filename": "/usr/share/zoneinfo/posix/America/Halifax"
   201	        "": "/usr/share/zoneinfo/posix/America/Port-au-Prince"
   202	        "": "/usr/share/zoneinfo/posix/America/Panama"
   203	        "": "/usr/share/zoneinfo/posix/America/Cancun"
~$ echo -e '.api 2\nlist files jobid=799'|sudo bconsole|grep \"/|nl|head -n 203|tail -n 4
   200	        "filename": "/usr/share/zoneinfo/posix/America/Halifax"
   201	        "filename": "/usr/share/zoneinfo/posix/America/Port-au-Prince"
   202	        "filename": "/usr/share/zoneinfo/posix/America/Panama"
   203	        "filename": "/usr/share/zoneinfo/posix/America/Cancun"
~$ echo -e '.api 2\nlist files jobid=799'|sudo bconsole|grep \"/|nl|head -n 103|tail -n 4
   100	        "filename": "/usr/share/zoneinfo/Europe/Paris"
   101	        "filename": "/usr/share/zoneinfo/Europe/Zaporozhye"
   102	        "filename": "/usr/share/zoneinfo/Europe/Warsaw"
   103	        "filename": "/usr/share/zoneinfo/Europe/Belgrade"
~$ echo -e '.api 2\nlist files jobid=799'|sudo bconsole|grep \"/|nl|head -n 103|tail -n 4
   100	        "filename": "/usr/share/zoneinfo/Europe/Paris"
   101	        "nfo/europe/podgorica": "/usr/share/zoneinfo/Europe/Zaporozhye"
   102	        "nfo/europe/podgorica": "/usr/share/zoneinfo/Europe/Warsaw"
   103	        "nfo/europe/podgorica": "/usr/share/zoneinfo/Europe/Belgrade"
(0005848)
bruno-at-bareos   
2024-03-20 14:20   
Easily reproducible with the official 23.0.2; around line 291 of the output:
      {
        "filename": "/usr/share/locale/ca/LC_MESSAGES/kipiplugins.mo"
      },
      {
        "u": "/usr/share/locale/ca/LC_MESSAGES/kjobwidgets6_qt.qm"
      },
(0005849)
bruno-at-bareos   
2024-03-20 14:21   
even with version 23.0.2
(0005868)
joergs   
2024-03-26 12:37   
This issue is now addressed in https://github.com/bareos/bareos/pull/1746
(0005885)
joergs   
2024-03-27 16:11   
Fix committed to bareos master branch with changesetid 18811.

View Issue Details
ID: 979
Category: [bareos-core] documentation
Severity: text
Reproducibility: N/A
Date Submitted: 2018-07-05 14:42
Last Update: 2024-03-25 15:11
Reporter: stephand
Assigned To: bruno-at-bareos
Priority: low
Status: feedback
Resolution: open
Projection: none
ETA: none
Target Version: 18.3.1
Summary: Clarify Documentation of Max Wait Time Parameter
Description: The documentation currently reads:

 Max Wait Time = <time>

    The time specifies the maximum allowed time that a job may block waiting for a resource (such as waiting for a tape to be mounted, or waiting for the storage or file daemons to perform their duties), counted from the when the job starts, (not necessarily the same as when the job was scheduled).

It should be clarified how the time in the waiting state is counted; the phrase "counted from when the job starts" could be misleading, because the relevant counting only starts when the job enters a waiting state.
It should also be mentioned that the counter is reset to 0 when the job is able to continue before reaching Max Wait Time, so it in fact refers to the maximum continuous time the job spends in a waiting state.
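For illustration, a minimal Job resource using the directive (names and values are arbitrary); with the semantics described above, the job would only be cancelled if it waits for a resource for more than 30 minutes at a stretch, regardless of its total runtime:

Job {
    Name = "example-backup"       # hypothetical job
    JobDefs = "DefaultJob"
    Max Wait Time = 30 minutes    # maximum continuous time allowed in a waiting state
}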
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0005130)
bruno-at-bareos   
2023-07-04 15:23   
Isn't that the purpose of the diagram below the configuration?
Or should we still rephrase the explanation?
(0005306)
bruno-at-bareos   
2023-08-02 14:48   
Just modified the documentation regarding the reset when the job continues.
(0005867)
bruno-at-bareos   
2024-03-25 15:11   
Beware: the same parameter also exists for Storage, where it has a hard limit of 5 days.

View Issue Details
ID: 1603
Category: [bareos-core] director
Severity: major
Reproducibility: always
Date Submitted: 2024-02-26 13:16
Last Update: 2024-03-20 15:01
Reporter: Int
Assigned To: bruno-at-bareos
Platform: Linux
OS: RHEL (and clones)
OS Version: 8
Priority: normal
Status: resolved
Product Version: 23.0.1
Resolution: fixed
Projection: none
ETA: none
Summary: Labelling of fresh LTO-9 tapes fails with timeout error
Description: Since fresh LTO-9 tapes need to be calibrated by the tape drive on first load (which can take up to 2 hours - see https://www.quantum.com/globalassets/products/tape-storage-new/lto-9/lto-9-quantum-faq-092021.pdf)
the labelling command fails with

ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)
Tags:
Steps To Reproduce: run command

*label storage=Autochanger barcodes slot=11,12,13
Additional Information: full output:

*label storage=Autochanger barcodes slot=11,12,13
Connecting to Storage daemon Autochanger at 192.168.124.209:9103 ...
3306 Issuing autochanger "list" command.
The following Volumes will be labeled:
Slot Volume
==============
  11 NSL140L9
  12 NSL141L9
  13 NSL142L9
Do you want to label these Volumes? (yes|no): yes

...

Connecting to Storage daemon Autochanger at 192.168.124.209:9103 ...
Sending label command for Volume "NSL140L9" Slot 11 ...
3304 Issuing autochanger "load slot 11, drive 0" command.
3992 Bad autochanger "load slot 11, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

Label command failed for Volume NSL140L9.
Sending label command for Volume "NSL141L9" Slot 12 ...
3307 Issuing autochanger "unload slot 11, drive 0" command.
3995 Bad autochanger "unload slot 11, drive 0": ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 11...mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Aborted Command
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 29
mtx: Request Sense: Additional Sense Qualifier = 07
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no
MOVE MEDIUM from Element Address 32 to 266 Failed

Label command failed for Volume NSL141L9.
Sending label command for Volume "NSL142L9" Slot 13 ...
3991 Bad autochanger "loaded? drive 0" command: ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

3307 Issuing autochanger "unload slot 11, drive 0" command.
3995 Bad autochanger "unload slot 11, drive 0": ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 11...mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Not Ready
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 04
mtx: Request Sense: Additional Sense Qualifier = 01
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no
MOVE MEDIUM from Element Address 32 to 266 Failed

Label command failed for Volume NSL142L9.
*
System Description
Attached Files:
Notes
(0005810)
bruno-at-bareos   
2024-02-27 10:04   
Maybe adjust the mtx-changer.conf value of

# Set to amount of time in seconds to wait after a load
load_sleep=0

and it may also be necessary to hack the mtx-changer script itself to allow more time:

  while [ $i -le 300 ]; do # Wait max 300 seconds
(0005811)
Int   
2024-02-27 11:24   
I decided against changing "load_sleep" as that would affect all tape loads, whereas a longer timeout is only needed on the first load. If every tape load had a delay of 2 hours, the backup process would be very tedious.

I modified the wait_for_drive() function in the mtx-changer script instead:

wait_for_drive() {
  i=0
  while [ $i -le 8000 ]; do # Wait max 2.22 hours - LTO-9 tapes need 2 hours calibration on first load
    debug "Doing mt -f $1 status ..."
    drivestatus=$(mt -f "$1" status 2>&1)
    if echo "${drivestatus}" | grep "${ready}" >/dev/null 2>&1; then
      break
    fi
    debug "${drivestatus}"
    debug "Device $1 - not ready, retrying ..."
    sleep 100 #was 'sleep 1' - do not poll the drive so often
    i=`expr $i + 100`
  done
}


but this didn't work.
I ran into the same error:

Sending label command for Volume "NSL142L9" Slot 13 ...
3304 Issuing autochanger "load slot 13, drive 0" command.
3992 Bad autochanger "load slot 13, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

Label command failed for Volume NSL142L9.


The problem behind this is that the wait inside the wait_for_drive() function has no effect, because the call to
"mt -f /dev/nsa0 status"
does not return at all while the tape drive is calibrating the LTO-9 tape.
So even the original wait of 300 seconds would never have elapsed, since the call to "mt -f /dev/nsa0 status" never returned.

There seems to be another timeout kicking in somewhere that kills the label command.
(0005861)
bruno-at-bareos   
2024-03-20 15:00   
We will likely introduce a new parameter for this timeout.

See the proposal in https://github.com/bareos/bareos/pull/1740

You may want to test the script proposed in the PR directly.
(0005862)
bruno-at-bareos   
2024-03-20 15:01   
A new parameter, max_wait_drive=300, is proposed in the configuration file and is used by the script.
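Assuming the proposal from PR 1740 lands as described, the LTO-9 first-load calibration case could then be handled from mtx-changer.conf alone, for example:

# mtx-changer.conf (exact path depends on the platform)
# proposed parameter from PR 1740; value in seconds, raised to cover the
# up-to-2-hour calibration of fresh LTO-9 tapes on first load
max_wait_drive=7800

# keep the per-load sleep at its default so normal loads stay fast
load_sleep=0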

View Issue Details
ID: 1214
Category: [bareos-core] director
Severity: major
Reproducibility: always
Date Submitted: 2020-03-23 09:07
Last Update: 2024-03-20 14:50
Reporter: hostedpower
Assigned To: arogge
Priority: normal
Status: resolved
Product Version: 19.2.6
Resolution: fixed
Projection: none
ETA: none
Summary: Bareos hanging/failing (on heavy databases?)
Description: We are experiencing a lot of backups failing sooner or later with: Could not reserve volume vol-cons-2368 on "customerx1-incr" (/home/customerx1/bareos)

There is no reason this should fail, since there is ample space on the volume. Moreover, we see this happen really often across many backups (but not all the time and not with all backups).

We think it has to do with locks in MySQL. Probably the allocation of a new volume takes too long while it is waiting for an answer from MySQL, which makes it time out internally and leads to this failure.

It's a very unfortunate issue and it really makes the solution unreliable :(
Tags:
Steps To Reproduce:
Additional Information: You can see that the query in the "Creating sort index" state blocks all the other queries:

mysql> show full processlist;
+---------+--------+-----------+--------+---------+------+---------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+---------+--------+-----------+--------+---------+------+---------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+---------------+
| 569514 | bareos | localhost | bareos | Sleep | 2 | | NULL | 0 | 0 |
| 1231194 | bareos | localhost | bareos | Sleep | 2321 | | NULL | 0 | 0 |
| 1231945 | bareos | localhost | bareos | Sleep | 588 | | NULL | 0 | 0 |
| 1232038 | bareos | localhost | bareos | Query | 87 | Waiting for table metadata lock | LOCK TABLES Path write, batch write, Path as p write | 0 | 0 |
| 1232110 | bareos | localhost | bareos | Query | 109 | Creating sort index | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , DeltaSeq, Fhinfo, Fhnode, Job.JobTDate AS JobTDate FROM Job, File, (SELECT MAX(JobTDate) AS JobTDate, PathId, FileName FROM (SELECT JobTDate, PathId, File.Name AS FileName FROM File JOIN Job USING (JobId) WHERE File.JobId IN (7768,8346,3010,3176,3341,3506,3674,3976,4280,4584,4889,5194,5499,5805,6111,6418,6723,7030,7337,7647,7956,8227,8504) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (7768,8346,3010,3176,3341,3506,3674,3976,4280,4584,4889,5194,5499,5805,6111,6418,6723,7030,7337,7647,7956,8227,8504) ) AS tmp GROUP BY PathId, FileName) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (7768,8346,3010,3176,3341,3506,3674,3976,4280,4584,4889,5194,5499,5805,6111,6418,6723,7030,7337,7647,7956,8227,8504)) OR Job.JobId IN (7768,8346,3010,3176,3341,3506,3674,3976,4280,4584,4889,5194,5499,5805,6111,6418,6723,7030,7337,7647,7956,8227,8504)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 148990 | 18940336 |
| 1232130 | bareos | localhost | bareos | Query | 2 | Waiting for table metadata lock | LOCK TABLES Path write, batch write, Path as p write | 0 | 0 |
| 1232133 | bareos | localhost | bareos | Query | 86 | Waiting for table metadata lock | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , DeltaSeq, Fhinfo, Fhnode, Job.JobTDate AS JobTDate FROM Job, File, (SELECT MAX(JobTDate) AS JobTDate, PathId, FileName FROM (SELECT JobTDate, PathId, File.Name AS FileName FROM File JOIN Job USING (JobId) WHERE File.JobId IN (8412,3054,3220,3385,3550,3718,4020,4324,4628,4933,5238,5543,5849,6155,6462,6767,7074,7381,7691,8000,8271,8548) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (8412,3054,3220,3385,3550,3718,4020,4324,4628,4933,5238,5543,5849,6155,6462,6767,7074,7381,7691,8000,8271,8548) ) AS tmp GROUP BY PathId, FileName) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (8412,3054,3220,3385,3550,3718,4020,4324,4628,4933,5238,5543,5849,6155,6462,6767,7074,7381,7691,8000,8271,8548)) OR Job.JobId IN (8412,3054,3220,3385,3550,3718,4020,4324,4628,4933,5238,5543,5849,6155,6462,6767,7074,7381,7691,8000,8271,8548)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 0 | 0 |
| 1232145 | bareos | localhost | bareos | Query | 56 | Waiting for table metadata lock | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , DeltaSeq, Fhinfo, Fhnode, Job.JobTDate AS JobTDate FROM Job, File, (SELECT MAX(JobTDate) AS JobTDate, PathId, FileName FROM (SELECT JobTDate, PathId, File.Name AS FileName FROM File JOIN Job USING (JobId) WHERE File.JobId IN (105,8431,3070,3236,3401,3566,3734,4036,4340,4644,4949,5254,5559,5865,6171,6478,6783,7090,7397,7707,8016,8287,8564) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (105,8431,3070,3236,3401,3566,3734,4036,4340,4644,4949,5254,5559,5865,6171,6478,6783,7090,7397,7707,8016,8287,8564) ) AS tmp GROUP BY PathId, FileName) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (105,8431,3070,3236,3401,3566,3734,4036,4340,4644,4949,5254,5559,5865,6171,6478,6783,7090,7397,7707,8016,8287,8564)) OR Job.JobId IN (105,8431,3070,3236,3401,3566,3734,4036,4340,4644,4949,5254,5559,5865,6171,6478,6783,7090,7397,7707,8016,8287,8564)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 0 | 0 |
| 1232147 | bareos | localhost | bareos | Query | 51 | Waiting for table metadata lock | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , File.DeltaSeq AS DeltaSeq, File.Fhinfo AS Fhinfo, File.Fhnode AS Fhnode, Job.JobTDate AS JobTDate FROM Job, File, ( SELECT MAX(JobTDate) AS JobTDate, PathId, FileName, DeltaSeq, Fhinfo, Fhnode FROM ( SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM File JOIN Job USING (JobId) WHERE File.JobId IN (2880,3044,3210) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (2880,3044,3210) ) AS tmp GROUP BY PathId, FileName, DeltaSeq, Fhinfo, Fhnode) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (2880,3044,3210)) OR Job.JobId IN (2880,3044,3210)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 0 | 0 |
| 1232180 | root | localhost | NULL | Query | 0 | starting | show full processlist | 0 | 0 |
+---------+--------+-----------+--------+---------+------+---------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+---------------+
10 rows in set (0.00 sec)


+---------+--------+-----------+--------+---------+------+---------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+---------------+
| 569514 | bareos | localhost | bareos | Sleep | 5 | | NULL | 0 | 1 |
| 1231194 | bareos | localhost | bareos | Sleep | 3165 | | NULL | 0 | 0 |
| 1231945 | bareos | localhost | bareos | Sleep | 531 | | NULL | 0 | 0 |
| 1232133 | bareos | localhost | bareos | Query | 192 | Waiting for table metadata lock | LOCK TABLES Path write, batch write, Path as p write | 0 | 0 |
| 1232202 | bareos | localhost | bareos | Query | 542 | Waiting for table metadata lock | LOCK TABLES Path write, batch write, Path as p write | 0 | 0 |
| 1232276 | bareos | localhost | bareos | Sleep | 2 | | NULL | 0 | 0 |
| 1232305 | bareos | localhost | bareos | Query | 567 | Creating sort index | SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq , Fhinfo, Fhnode FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name, LStat , File.DeltaSeq AS DeltaSeq, File.Fhinfo AS Fhinfo, File.Fhnode AS Fhnode, Job.JobTDate AS JobTDate FROM Job, File, ( SELECT MAX(JobTDate) AS JobTDate, PathId, FileName, DeltaSeq, Fhinfo, Fhnode FROM ( SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM File JOIN Job USING (JobId) WHERE File.JobId IN (54,8436,3020,3186) UNION ALL SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) WHERE BaseFiles.JobId IN (54,8436,3020,3186) ) AS tmp GROUP BY PathId, FileName, DeltaSeq, Fhinfo, Fhnode) AS T1 WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (54,8436,3020,3186)) OR Job.JobId IN (54,8436,3020,3186)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId = File.PathId AND T1.FileName = File.Name ) AS T1 JOIN Path ON (Path.PathId = T1.PathId) WHERE FileIndex > 0 ORDER BY T1.JobTDate, FileIndex ASC | 0 | 17339328 |
| 1232308 | bareos | localhost | bareos | Sleep | 422 | | NULL | 0 | 0 |
| 1232313 | bareos | localhost | bareos | Query | 156 | Waiting for table metadata lock | LOCK TABLES Path write, batch write, Path as p write | 0 | 0 |
| 1232460 | root | localhost | NULL | Query | 0 | starting | show full processlist | 0 | 0 |
+---------+--------+-----------+--------+---------+------+---------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+---------------+
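For completeness, a sketch of how the blocking metadata lock could be pinned down on the MySQL side (assumes MySQL 5.7+/8.0 with performance_schema available; the mdl instrument may need to be enabled first):

# enable the metadata-lock instrumentation (one-time)
mysql -e "UPDATE performance_schema.setup_instruments SET ENABLED='YES', TIMED='YES' WHERE NAME='wait/lock/metadata/sql/mdl';"
# show which thread holds the metadata lock the queued sessions are waiting on
mysql -e "SELECT OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, OWNER_THREAD_ID FROM performance_schema.metadata_locks WHERE OBJECT_SCHEMA='bareos';"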



At that moment, creating a new volume seems to time out (probably in an underlying call):


2020-03-23 07:58:55 backupsrvxxx JobId 8836: stored/acquire.cc:271 Job 8836 canceled.
 
2020-03-23 07:58:55 backupsrvxxx JobId 8836: Fatal error: stored/mount.cc:972 Cannot open Dev="customerx1-incr" (/home/customerx1/bareos), Vol=vol-cons-2368
 
2020-03-23 07:58:55 backupsrvxxx JobId 8836: End of all volumes.
 
2020-03-23 07:58:55 backupsrvxxx JobId 8836: Fatal error: stored/mac.cc:752 Fatal append error on device "customerx1-cons" (/home/customerx1/bareos): ERR=stored/dev.cc:747 Could not open: /home/customerx1/bareos/vol-cons-2368, ERR=No such file or directory

 
2020-03-23 07:58:55 backupsrvxxx JobId 8836: Elapsed time=02:16:21, Transfer rate=304.1 K Bytes/second
 
2020-03-23 07:58:55 backupsrvxxx JobId 8836: Releasing device "customerx1-cons" (/home/customerx1/bareos).
 
2020-03-23 07:58:55 backupsrvxxx JobId 8836: Releasing device "customerx1-incr" (/home/customerx1/bareos).
 
2020-03-23 07:58:55 hostedpower-dir JobId 8836: Bareos hostedpower-dir 19.2.6 (11Feb20):
 Build OS: Linux-5.4.7-100.fc30.x86_64 debian Debian GNU/Linux 9.9 (stretch)
 JobId: 8836
 Job: backup-customerx1.xxx.com.2020-03-23_02.28.47_24
 Backup Level: Virtual Full
 Client: "customerx1.xxx.com" 18.2.7 (12Dec19) Linux-5.3.14-200.fc30.x86_64,debian,Debian GNU/Linux 9.9 (stretch),Debian_9.0,x86_64
 FileSet: "linux-all-mysql" 2020-02-12 22:15:00
 Pool: "customerx1-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "customerx1-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 23-Mar-2020 02:28:47
 Start time: 03-Mar-2020 01:08:21
 End time: 03-Mar-2020 01:13:43
 Elapsed time: 5 mins 22 secs
 Priority: 10
 SD Files Written: 234,942
 SD Bytes Written: 2,488,124,067 (2.488 GB)
 Rate: 7727.1 KB/s
 Volume name(s): vol-cons-2368
 Volume Session Id: 4826
 Volume Session Time: 1581609872
 Last Volume Bytes: 6,660,046,477 (6.660 GB)
 SD Errors: 1
 SD termination status: Canceled
 Accurate: yes
 Bareos binary info: official Bareos subscription
 Termination: Backup Canceled

 
2020-03-23 07:55:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:50:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:45:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:40:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:35:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:30:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:25:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:20:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:15:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:10:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:05:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 07:00:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:55:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:50:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:45:33 backupsrvxxx JobId 8836: Please mount read Volume "vol-cons-2368" for:
 Job: backup-customerx1.xxx.com.2020-03-23_02.28.47_24
 Storage: "customerx1-incr" (/home/customerx1/bareos)
 Pool: customerx1-incr
 Media type: customerx1
 
2020-03-23 06:40:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:35:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:30:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:25:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:20:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:15:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:10:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:05:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 06:00:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 05:55:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 05:50:32 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 05:45:32 backupsrvxxx JobId 8836: End of Volume at file 1 on device "customerx1-incr" (/home/customerx1/bareos), Volume "vol-cons-0042"
 
2020-03-23 05:45:32 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 05:45:32 backupsrvxxx JobId 8836: Warning: stored/acquire.cc:343 Read acquire: stored/label.cc:268 Could not reserve volume vol-cons-2368 on "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 05:45:32 backupsrvxxx JobId 8836: Please mount read Volume "vol-cons-2368" for:
 Job: backup-customerx1.xxx.com.2020-03-23_02.28.47_24
 Storage: "customerx1-incr" (/home/customerx1/bareos)
 Pool: customerx1-incr
 Media type: customerx1
 
2020-03-23 05:42:34 backupsrvxxx JobId 8836: Volume "vol-cons-2368" previously written, moving to end of data.
 
2020-03-23 05:42:34 backupsrvxxx JobId 8836: Ready to append to end of Volume "vol-cons-2368" size=4161368485
 
2020-03-23 05:42:34 backupsrvxxx JobId 8836: Forward spacing Volume "vol-cons-0042" to file:block 0:275.
 
2020-03-23 05:42:33 hostedpower-dir JobId 8836: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.2224.bsr
 
2020-03-23 05:42:33 hostedpower-dir JobId 8836: Connected Storage daemon at backupsrvxxx:9103, encryption: ECDHE-PSK-CHACHA20-POLY1305
 
2020-03-23 05:42:33 hostedpower-dir JobId 8836: Using Device "customerx1-incr" to read.
 
2020-03-23 05:42:33 hostedpower-dir JobId 8836: Using Device "customerx1-cons" to write.
 
2020-03-23 05:42:33 backupsrvxxx JobId 8836: Ready to read from volume "vol-cons-0042" on device "customerx1-incr" (/home/customerx1/bareos).
 
2020-03-23 05:42:25 hostedpower-dir JobId 8836: Start Virtual Backup JobId 8836, Job=backup-customerx1.xxx.com.2020-03-23_02.28.47_24
 
2020-03-23 05:42:25 hostedpower-dir JobId 8836: Consolidating JobIds 58,8418,3024,3190



As soon as this part happens:

2020-03-23 05:45:32 backupsrvxxx JobId 8836: Warning: stored/acquire.cc:343 Read acquire: stored/label.cc:268 Could not reserve volume vol-cons-2368 on "customerx1-incr" (/home/customerx1/bareos)
 
2020-03-23 05:45:32 backupsrvxxx JobId 8836: Please mount read Volume "vol-cons-2368" for:
 Job: backup-customerx1.xxx.com.2020-03-23_02.28.47_24
 Storage: "customerx1-incr" (/home/customerx1/bareos)
 Pool: customerx1-incr
 Media type: customerx1


It never seems to recover from it.

We have to cancel the job, and most of the time it happens again sooner or later :(
Attached Files:
Notes
(0003913)
hostedpower   
2020-03-23 21:44   
This query takes ages and ages :)

explain
SELECT Path.Path, T1.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq, Fhinfo, Fhnode
FROM (
    SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, File.Name AS Name,
           LStat, File.DeltaSeq AS DeltaSeq, File.Fhinfo AS Fhinfo, File.Fhnode AS Fhnode,
           Job.JobTDate AS JobTDate
    FROM Job, File, (
        SELECT MAX(JobTDate) AS JobTDate, PathId, FileName, DeltaSeq, Fhinfo, Fhnode
        FROM (
            SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode
            FROM File JOIN Job USING (JobId)
            WHERE File.JobId IN (8854,3351,3516,3684,3986,4290,4594,4899,5204,5509,5815,6121,6428,6733,7040,7347,7657,7966,8237,8514,8682)
            UNION ALL
            SELECT JobTDate, PathId, File.Name AS FileName, DeltaSeq, Fhinfo, Fhnode
            FROM BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId)
            WHERE BaseFiles.JobId IN (8854,3351,3516,3684,3986,4290,4594,4899,5204,5509,5815,6121,6428,6733,7040,7347,7657,7966,8237,8514,8682)
        ) AS tmp
        GROUP BY PathId, FileName, DeltaSeq, Fhinfo, Fhnode
    ) AS T1
    WHERE (Job.JobId IN (SELECT DISTINCT BaseJobId FROM BaseFiles WHERE JobId IN (8854,3351,3516,3684,3986,4290,4594,4899,5204,5509,5815,6121,6428,6733,7040,7347,7657,7966,8237,8514,8682))
           OR Job.JobId IN (8854,3351,3516,3684,3986,4290,4594,4899,5204,5509,5815,6121,6428,6733,7040,7347,7657,7966,8237,8514,8682))
      AND T1.JobTDate = Job.JobTDate
      AND Job.JobId = File.JobId
      AND T1.PathId = File.PathId
      AND T1.FileName = File.Name
) AS T1
JOIN Path ON (Path.PathId = T1.PathId)
WHERE FileIndex > 0
ORDER BY T1.JobTDate, FileIndex ASC;
+----+-------------+------------+------------+--------+---------------------+-------------+---------+----------------------------------------+-----------+----------+-----------------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+--------+---------------------+-------------+---------+----------------------------------------+-----------+----------+-----------------------------------------------------------+
| 1 | PRIMARY | Job | NULL | index | PRIMARY,JobTDate | JobTDate | 9 | NULL | 4558 | 100.00 | Using where; Using index; Using temporary; Using filesort |
| 1 | PRIMARY | <derived3> | NULL | ref | <auto_key1> | <auto_key1> | 9 | bareos.Job.JobTDate | 23494 | 100.00 | NULL |
| 1 | PRIMARY | File | NULL | ref | JobId,PathId | JobId | 265 | bareos.Job.JobId,T1.PathId,T1.FileName | 1 | 33.33 | Using where |
| 1 | PRIMARY | Path | NULL | eq_ref | PRIMARY | PRIMARY | 4 | T1.PathId | 1 | 100.00 | NULL |
| 6 | SUBQUERY | BaseFiles | NULL | ALL | basefiles_jobid_idx | NULL | NULL | NULL | 1 | 100.00 | Using where |
| 3 | DERIVED | <derived4> | NULL | ALL | NULL | NULL | NULL | NULL | 81892097 | 100.00 | Using temporary; Using filesort |
| 4 | DERIVED | File | NULL | ALL | JobId | NULL | NULL | NULL | 303320202 | 27.00 | Using where |
| 4 | DERIVED | Job | NULL | eq_ref | PRIMARY | PRIMARY | 4 | bareos.File.JobId | 1 | 100.00 | NULL |
| 5 | UNION | BaseFiles | NULL | ALL | basefiles_jobid_idx | NULL | NULL | NULL | 1 | 100.00 | Using where |
| 5 | UNION | Job | NULL | eq_ref | PRIMARY | PRIMARY | 4 | bareos.BaseFiles.BaseJobId | 1 | 100.00 | NULL |
| 5 | UNION | File | NULL | eq_ref | PRIMARY | PRIMARY | 8 | bareos.BaseFiles.FileId | 1 | 100.00 | NULL |
+----+-------------+------------+------------+--------+---------------------+-------------+---------+----------------------------------------+-----------+----------+-----------------------------------------------------------+
11 rows in set, 1 warning (0.00 sec)
(0003917)
arogge   
2020-03-26 11:01   
That's one of the reasons why PostgreSQL has always been the preferred backend and why MySQL was deprecated in 19.2.
If you can find a way to make that query run faster (schema-change or database-tuning, not a query-change), we may consider adding it. However, don't expect any in-depth changes for MySQL anymore.
(0003926)
hostedpower   
2020-04-06 14:26   
OK, we started with MySQL since that was the supported backend back then. I don't believe this is unsolvable with MySQL, but indeed the priorities have always been with Postgres. Is there any way to migrate all data to Postgres? We have about 120 GB of database data at the moment, I believe.


Because we are seeing this problem more and more lately :(

When we retry the job afterwards, it does succeed!! So it looks like something temporary is causing this.
(0003927)
hostedpower   
2020-04-06 15:42   
Found the documentation about migration here: https://docs.bareos.org/Appendix/Howtos.html
(0003928)
hostedpower   
2020-04-07 12:13   
PS: We're looking into PostgreSQL; do you support PostgreSQL 12? :)

On the other hand, it's strange that we get these reported errors intermittently. I really hope that if we go through all the trouble of converting to PostgreSQL, we don't run into the same or other issues :)
(0003955)
hostedpower   
2020-04-22 21:22   
OK, we have switched to PostgreSQL now. At first sight certain things seem slower than they were with MySQL; I hope it helps with our other problems :|
(0003957)
hostedpower   
2020-04-22 21:44   
OK, I retried some jobs, but unfortunately I see the same error messages as initially reported :(

Messages like: 2020-03-23 07:55:33 backupsrvxxx JobId 8836: Warning: stored/vol_mgr.cc:544 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-2368 from dev="customerx1-cons" (/home/customerx1/bareos) to "customerx1-incr" (/home/customerx1/bareos)
(0003958)
hostedpower   
2020-04-23 09:38   
We ran consolidations today and so far so good. I think my previous conclusions were drawn too soon; the consolidation, at least, seems more performant with PostgreSQL than it was with MySQL!

Not sure yet about the bug reported here... :)
(0003978)
hostedpower   
2020-04-30 10:50   
Unfortunately we have now hit the initially reported error with PostgreSQL as well.
(0003979)
hostedpower   
2020-04-30 10:57   
We also noticed that when we retry this failed job with Postgres, ALL jobs are consolidated, keeping just a single job!! So we LOST ALL BACKUPS (except 1 full).


I don't think this ever happened with MySQL. Example from cancelled job:

our-dir JobId 20011: Consolidating JobIds 19691,14190

Now we cancelled that job and used retry:

our-dir JobId 20080: Consolidating JobIds 12652,19691,14190,14486,14659,14927,15229,15533,15835,16137,16438,16691,16966,17235,17500,17781,18051,18338,18630,18942,19256,19571,19892

How can this happen?
(0003981)
kszmigielski   
2020-05-04 15:06   
I have exactly the same problem. The problem also occurs in version 19.2.7
(0003985)
hostedpower   
2020-05-07 10:47   
This problem seems to be getting worse at the moment; is there anything that can be done about it?
(0004030)
hostedpower   
2020-08-05 13:14   
Also, there is a big bug when retrying a hung consolidate job: IT DELETES ALL BACKUPS AND KEEPS ONLY ONE.

See my previous message (two messages back).



We still have the initially reported issue as well :(
(0005857)
bruno-at-bareos   
2024-03-20 14:50   
The first problem is fixed; the last problem is a duplicate caused by rerunning consolidated jobs without any jobids.
This will be fixed soon.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
854 [bareos-core] director tweak have not tried 2017-09-21 10:21 2024-03-20 14:47
Reporter: hostedpower Platform: Linux  
Assigned To: stephand OS: Debian  
Priority: normal OS Version: 8  
Status: resolved Product Version: 16.2.5  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Problem with always incremental virtual full
Description: Hi,


All of a sudden we have issues with the virtual full (for consolidate) no longer working.

We have 2 pools for each customer. One is for the full (consolidate) and the other for the incremental.

We used to have the option to limit a single job to a single volume; we removed that a while ago, so maybe there is a relation.

We also had to downgrade from 16.2.6 to 16.2.5 because of the MySQL slowness issues; that happened recently, so that's also a possibility.

We have the feeling this software is not very reliable, or at least very complex to get somewhat stable.

PS: The volume it asks for is there, but it doesn't want to use it :(

We used this config for a long time without this particular issue, except for adding

        Action On Purge = Truncate
and commenting out:
# Maximum Volume Jobs = 1 # a new file for each backup that is done
Tags:
Steps To Reproduce: Config:



Storage {
    Name = customerx-incr
    Device = customerx-incr # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

Storage {
    Name = customerx-cons
    Device = customerx-cons # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

cat /etc/bareos/pool-defaults.conf
#pool defaults used for always incremental scheme
        Pool Type = Backup
# Maximum Volume Jobs = 1 # a new file for each backup that is done
        Recycle = yes # Bareos can automatically recycle Volumes
        Auto Prune = yes # Prune expired volumes
        Volume Use Duration = 23h
        Action On Purge = Truncate
#end


Pool {
    Name = customerx-incr
    Storage = customerx-incr
    LabelFormat = "vol-incr-"
    Next Pool = customerx-cons
    @/etc/bareos/pool-defaults.conf
}

Pool {
    Name = customerx-cons
    Storage = customerx-cons
    LabelFormat = "vol-cons-"
    @/etc/bareos/pool-defaults.conf
}



# lb1.customerx.com
Job {
    Name = "backup-lb1.customerx.com"
    Client = lb1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb1.customerx.com
    Address = lb1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# lb2.customerx.com
Job {
    Name = "backup-lb2.customerx.com"
    Client = lb2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb2.customerx.com
    Address = lb2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app1.customerx.com
Job {
    Name = "backup-app1.customerx.com"
    Client = app1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app1.customerx.com
    Address = app1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app2.customerx.com
Job {
    Name = "backup-app2.customerx.com"
    Client = app2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app2.customerx.com
    Address = app2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# app3.customerx.com
Job {
    Name = "backup-app3.customerx.com"
    Client = app3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app3.customerx.com
    Address = app3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# db1.customerx.com
Job {
    Name = "backup-db1.customerx.com"
    Client = db1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db1.customerx.com
    Address = db1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# db2.customerx.com
Job {
    Name = "backup-db2.customerx.com"
    Client = db2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db2.customerx.com
    Address = db2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# db3.customerx.com
Job {
    Name = "backup-db3.customerx.com"
    Client = db3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db3.customerx.com
    Address = db3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# Backup for customerx
Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

Device {
  Name = customerx-cons
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}
Additional Information: 2017-09-21 10:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 10:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:57:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:52:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:47:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:42:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:37:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:32:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:27:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:22:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:17:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:12:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.107.bsr
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-incr" to read.
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-cons" to write.
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0378 on "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Start Virtual Backup JobId 2467, Job=backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Consolidating JobIds 2392,971
System Description
Attached Files:
Notes
(0002744)
joergs   
2017-09-21 16:08   
I see some problems in this configuration.

You should check section http://doc.bareos.org/master/html/bareos-manual-main-reference.html#ConcurrentDiskJobs from the manual.

Each device can only read/write one volume at a time. VirtualFull requires multiple volumes.

Basically, you need multiple devices pointing to the same "Storage Directory", each with "Maximum Concurrent Jobs = 1", to make it work.

Your setting

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

will use interleaving instead, which could cause performance issues on restore.
(0002745)
hostedpower   
2017-09-21 16:14   
So you suggest just making the device:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

This would fix all issues?

Before, we had this and I think that also worked: Maximum Volume Jobs = 1, but it seems discouraged...
(0002746)
joergs   
2017-09-21 16:26   
No, this will not fix the issue, but it prevents interleaving, which might cause other problems.

What I suggest, by pointing to the documentation, is that you set up multiple Devices, all pointing to the same Archive Device. Then attach them all to a Director Storage resource, like

Storage {
    Name = customerx
    Device = customerx-dev1, customerx-dev2, ... # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}
(0002747)
joergs   
2017-09-21 16:26   
With Maximum Concurrent Jobs = 8 you should use 8 Devices.
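
A minimal sketch of the matching storage daemon side, assuming the devices are really named customerx-dev1 ... customerx-dev8 (placeholder names) and all point to the same directory:

Device {
  Name = customerx-dev1
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # one volume per device at a time, no interleaving
}

# repeat for customerx-dev2 ... customerx-dev8, changing only the Name

With such a setup a VirtualFull can read from one device and write to another, even though both use the same Archive Device directory.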
(0002748)
hostedpower   
2017-09-21 16:37   
Hi Joergs,

Thanks a lot, that documentation makes sense and seems to have improved since I last read it (or I missed it somehow).

Will test it like that, but it looks very promising :)
(0002754)
hostedpower   
2017-09-21 22:57   
I've read it, but it would mean I need to create multiple storage devices in this case? That's a lot of extra definitions just to back up one customer. It would be nice if you could simply declare the device object and say there should be 8 of it, for example, all in just one definition. That shouldn't be too hard, I suppose?

Something like:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8 --> this automatically creates customerx-incr1 --> customerx-incr8, probably with some extra setting to allow it.
}
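
As a side note, and purely as an assumption to verify against the documentation for the Bareos version in use: newer releases offer a Count directive in the SD Device resource that multiplies a single definition into several identical devices, which is close to what is asked for here. A hedged sketch:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
  Count = 8 # assumption: the SD creates 8 copies of this device, each with a numeric suffix appended to the Name
}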



For now, would it be a solution to set

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # just use one at the same time
}

Since we have so many clients, it would still run multiple backups at the same time for different clients, I suppose? (Each client has its own media, device and storage.)


PS: We want to keep about 20 days of data; is the following config OK together with the above scenario?


JobDefs {
    Name = "HPJobInc"
    Type = Backup
    Level = Full
    Write Bootstrap = "/var/lib/bareos/%c.bsr"
    Accurate=yes
    Level=Full
    Messages = Standard
    Always Incremental = yes
    Always Incremental Job Retention = 20 days
    # The resulting interval between full consolidations when running daily backups and daily consolidations is Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job.
    Always Incremental Max Full Age = 35 days # should NEVER be less than Always Incremental Job Retention -> Every 15 days the full backup is also consolidated ( Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir )
    Always Incremental Keep Number = 5 # Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than "Always Incremental Job Retention".
    Maximum Concurrent Jobs = 5 # Let up to 5 jobs run
}
(0002759)
hostedpower   
2017-09-25 09:50   
We used this now:

  Maximum Concurrent Jobs = 1 # just use one at the same time

But some jobs also have the same error:

2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 
2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)


It looks almost like multiple jobs can't exist together in one volume (well, they can, but then issues like this start to occur).

Before, probably with "Maximum Volume Jobs = 1", we never encountered these problems.
(0002760)
joergs   
2017-09-25 11:17   
Only one access per volume is possible at a time.
So reading and writing to the same volume is not possible.

I thought you covered this with "Maximum Use Duration = 23h". Have you disabled it, or did you run multiple jobs during one day?

However, this is a bug tracker. I think further questions about Always Incrementals are best handled using the bareos-users mailing list or a bareos.com support ticket.
(0002761)
hostedpower   
2017-09-25 11:37   
Yes, I was wondering why we encountered it now and never before.

It wants to swap the consolidate pool for an incremental pool (or vice versa). I don't understand why.
(0002763)
joergs   
2017-09-25 12:15   
Please attach the output of

list joblog jobid=2668
(0002764)
hostedpower   
2017-09-25 12:22   
Enter a period to cancel a command.
*list joblog jobid=2668
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Start Virtual Backup JobId 2668, Job=backup-web.hosted-power.com.2017-09-24_09.00.03_27
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Consolidating JobIds 2593,1173
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.87.bsr
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-incr" to read.
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-cons" to write.
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0344 on "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 09:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 10:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 12:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the identical warning repeats every 5 minutes until 2017-09-24 15:55:18 ...]
 2017-09-24 16:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 16:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the identical warning repeats every 5 minutes until 2017-09-24 23:55:18 ...]
 2017-09-25 00:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-25 00:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the identical warning repeats every 5 minutes until 2017-09-25 09:05:19, where the log excerpt ends ...]
 2017-09-25 09:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:18:14 bareos-sd JobId 2668: acquire.c:221 Job 2668 canceled.
 2017-09-25 09:18:14 bareos-sd JobId 2668: Elapsed time=418423:18:14, Transfer rate=0 Bytes/second
 2017-09-25 09:18:14 hostedpower-dir JobId 2668: Bareos hostedpower-dir 16.2.5 (03Mar17):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
  JobId: 2668
  Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
  Backup Level: Virtual Full
  Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
  FileSet: "linux-all-mysql" 2017-08-09 00:15:00
  Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
  Scheduled time: 24-Sep-2017 09:00:03
  Start time: 04-Sep-2017 02:15:02
  End time: 04-Sep-2017 02:42:30
  Elapsed time: 27 mins 28 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 194
  Volume Session Time: 1506027627
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status: Canceled
  Accurate: yes
  Termination: Backup Canceled

You have messages.
*
(0002765)
hostedpower   
2017-09-27 11:57   
Hi,

Now jobs seem to succeed for the moment.

They also always seem to be set to Incremental, while before they were set to Full after a consolidation.

An example of such a job:

2017-09-27 09:02:07 hostedpower-dir JobId 2892: Joblevel was set to joblevel of first consolidated job: Incremental
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2892
 Job: backup-web.hosted-power.com.2017-09-27_09.00.02_47
 Backup Level: Virtual Full
 Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
 FileSet: "linux-all-mysql" 2017-08-09 00:15:00
 Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 27-Sep-2017 09:00:02
 Start time: 07-Sep-2017 02:15:03
 End time: 07-Sep-2017 02:42:52
 Elapsed time: 27 mins 49 secs
 Priority: 10
 SD Files Written: 2,803
 SD Bytes Written: 8,487,235,164 (8.487 GB)
 Rate: 5085.2 KB/s
 Volume name(s): vol-cons-0010
 Volume Session Id: 121
 Volume Session Time: 1506368550
 Last Volume Bytes: 8,495,713,067 (8.495 GB)
 SD Errors: 0
 SD termination status: OK
 Accurate: yes
 Termination: Backup OK

 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: purged JobIds 2817,1399 as they were consolidated into Job 2892
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Jobs older than 6 months .
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Jobs found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Files.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Files found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: End auto prune.

 
2017-09-27 09:02:03 bareos-sd JobId 2892: Elapsed time=00:01:46, Transfer rate=80.06 M Bytes/second
 
2017-09-27 09:00:32 bareos-sd JobId 2892: End of Volume at file 1 on device "hostedpower-incr" (/home/hostedpower/bareos), Volume "vol-cons-0344"
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Ready to read from volume "vol-incr-0097" on device "hostedpower-incr" (/home/hostedpower/bareos).
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Forward spacing Volume "vol-incr-0097" to file:block 0:4709591.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Recycled volume "vol-cons-0010" on device "hostedpower-cons" (/home/hostedpower/bareos), all previous data lost.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Forward spacing Volume "vol-cons-0344" to file:block 0:215.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Start Virtual Backup JobId 2892, Job=backup-web.hosted-power.com.2017-09-27_09.00.02_47
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Consolidating JobIds 2817,1399
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.48.bsr
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-incr" to read.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-0344" as Used.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: There are no more Jobs associated with Volume "vol-cons-0010". Marking it purged.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: All records pruned from Volume "vol-cons-0010"; marking it "Purged"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Recycled volume "vol-cons-0010"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-cons" to write.
 
2017-09-27 09:00:14 bareos-sd JobId 2892: Ready to read from volume "vol-cons-0344" on device "hostedpower-incr" (/home/hostedpower/bareos).
(0002766)
hostedpower   
2017-09-28 12:39   
Very strange: most jobs work for the moment, it seems (but for how long?).

Some show Full, others Incremental.


hostedpower-dir JobId 2956: Joblevel was set to joblevel of first consolidated job: Full
 
2017-09-28 09:05:03 hostedpower-dir JobId 2956: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2956
 Job: backup-vps53404.2017-09-28_09.00.02_20
 Backup Level: Virtual Full
 Client: "vps53404" 16.2.4 (01Jul16) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
 FileSet: "windows-all" 2017-08-08 22:15:00
 Pool: "vps53404-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "vps53404-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 28-Sep-2017 09:00:02

Before, they always all showed Full.
(0002802)
hostedpower   
2017-10-19 09:43   
Well, I downgraded to 16.2.5 and, guess what, the issue was gone for a few weeks.

Now I tried 16.2.7 and the issue is back again ...

2017-10-19 09:40:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:35:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:30:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:25:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)

Coincidence? I'm starting to doubt it.
(0002830)
hostedpower   
2017-12-02 12:19   
Hi,


I'm still on 17.2.7 and have to say it's an intermittent error. It goes fine for days and then, all of a sudden, one or more jobs suffer from it.

We never had it in the past until a certain version; I'm pretty sure of that.
(0002949)
hostedpower   
2018-03-26 20:22   
The problem was long time gone, but now it's back in full force. Any idea what the cause could be?
(0002991)
stephand   
2018-05-04 10:57   
With larger MySQL databases and Bareos 17.2, for incremental jobs with accurate=yes, it seems to help to add the following index:

CREATE INDEX jobtdate_idx ON Job (JobTDate);
ANALYZE TABLE Job;

Could you please check if that works for you?
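If the catalog runs on PostgreSQL instead of MySQL, the equivalent would presumably be the following sketch; the lowercase table and column names are an assumption based on the Bareos PostgreSQL schema:

CREATE INDEX jobtdate_idx ON job (jobtdate);
ANALYZE job;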
(0002992)
hostedpower   
2018-05-04 11:16   
OK, thanks, we added the index, but it took only 0.5 seconds. Usually this means there wasn't an issue :)

When creating an index is slow, it usually means there is a (serious) performance gain to be had.
(0002994)
stephand   
2018-05-04 17:56   
It certainly depends on the size of the Job table. I've measured it to be 25% faster with this index with 10,000 records in the Job table.

However, looking at the logs like
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
that's a different problem; it has nothing to do with that index.
As Joerg already suggested using multiple storage devices, I'd propose increasing their number. This is meanwhile documented at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
(0003047)
stephand   
2018-06-21 07:59   
Were you able to solve the problem by using multiple storage devices?
(0003053)
hostedpower   
2018-06-27 23:56   
While this might work, we use at least 50-60 devices at the moment, so it would be a lot of extra work to add extra storage devices.

Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring Bareos into 2018 ...

Is there anything that can be done to get this supported? We would be willing to sponsor this if needed...
(0003145)
stephand   
2018-10-22 15:54   
Hi,

When you say "a lot of extra work to add extra storage devices", are you talking about the Storage Daemon configuration mentioned at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
which is always

Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

and only the number in Name = FileStorage1 is increased for each Device?

Are you already using configuration management tools like Ansible, Puppet etc? Then it shouldn't be too hard to get done.

Or what exactly do you mean by "Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring bareos into the 2018 ..."?

Let me guess: let's imagine we had MultiDevice in the SD configuration; then the example config from http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices could be written like this:

The Bareos Director config:

Director {
  Name = bareos-dir.example.com
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "<secret>"
}
 
Storage {
  Name = File
  Address = bareos-sd.bareos.com
  Password = "<sd-secret>"
  Device = MultiFileStorage
  # number of devices = Maximum Concurrent Jobs
  Maximum Concurrent Jobs = 4
  Media Type = File
}

The Bareos Storagedaemon config:

Storage {
  Name = bareos-sd.example.com
  # any number >= 4
  Maximum Concurrent Jobs = 20
}
 
Director {
  Name = bareos-dir.example.com
  Password = "<sd-secret>"
}
 
MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}
 
Do you mean that?

Or, if not, please give an example of how the config should look to make things easier for you.
(0003146)
hostedpower   
2018-10-22 16:25   
We're indeed looking into Ansible to automate this, but still, something like:

MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}

would be more than fantastic!!

Just a single device supporting concurrent access in an easy fashion!

Presumably we could then also set "Maximum Concurrent Jobs = 4" pretty safely?


I can imagine that if you're used to Bareos (and tapes), this may seem like a strange way of working.

However, if you're used to (most) other backup software that supports hard drives by design, the way disks are handled now is just way too complicated :(
(0003768)
hostedpower   
2020-02-10 21:08   
Hi,


I think this feature has since been implemented: https://docs.bareos.org/Configuration/StorageDaemon.html#storageresourcemultiplieddevice

Does it require an autochanger for this to work, as discussed in this thread? Or would simply having more devices thanks to the Count parameter be sufficient?

I ask since lately we again see a lot of the errors reported here :| :)
(0003774)
arogge   
2020-02-11 09:34   
The autochanger is optional, but the feature won't help you much if you don't configure an autochanger.
With the autochanger you only need to configure one storage device in your Director. Otherwise you'd need to configure each of the multiplied devices in your Director separately.

This is, of course, not a physically existing autochanger; it is just an autochanger configuration in the storage daemon that groups the different storage devices together.
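A minimal sketch of what such a grouping could look like (resource names here are illustrative only; a full real-world example appears later in this thread):

Device {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Count = 4                  # multiplies this device into MultiFileStorage0001..0004
}

Autochanger {
  Name = MultiFileStorage-Autochanger
  Device = MultiFileStorage
  Changer Device = /dev/null
  Changer Command = ""
}

In the Director, only the autochanger then needs to be referenced:

Storage {
  Name = File
  Address = bareos-sd.example.com
  Password = "<sd-secret>"
  Device = MultiFileStorage-Autochanger
  Media Type = File
  Autochanger = yes
  Maximum Concurrent Jobs = 4
}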
(0003775)
hostedpower   
2020-02-11 10:00   
ok, but in our case we ha
(0003776)
hostedpower   
2020-02-11 10:02   
OK, but in our case we have more than 100 storages configured with different names.

Do we need multiple autochangers as well, or just one autochanger for all these devices? I'm afraid we'd have to add tons of autochangers as well, right? :)
(0003777)
arogge   
2020-02-11 10:30   
If you have more than 100 storages, I hope you're generating your configuration with something like Puppet or Ansible.
Why exactly do you need such a large amount of individual storages?
Usually, if you're using only file-based storage, a single storage (or file autochanger) per RAID array is enough. Everything else usually just overcomplicates things in the end.
(0003798)
hostedpower   
2020-02-12 10:04   
Well, it's because we allocate storage space for each customer, so each customer pays for their own storage. Putting everything into one large storage wouldn't show us anymore who exactly is using what.

Is there a better way to "allocate" storage for individual customers while at the same time using one large storage as you suggest?

PS: Yes, we generate the config, but updating it now to include an autochanger would still be quite some work, since we generate this config only once.

Just adding a device count is easy since we use an include file. So adding an autochanger now isn't really what we hoped for :)
(0004060)
hostedpower   
2020-12-01 12:43   
Hi,

We still have this: Need volume from other drive, but swap not possible

The strange thing is that it works 99% of the time, but then we have periods where we see this error a lot. I don't understand why it mostly works so well, only to work so badly at other times.

It's one of the primary reasons we're now looking at other backup solutions. Besides that, we have many storage servers, and Bareos currently has no way to let "x number of tasks" run on a per-storage-server basis.
(0004217)
hostedpower   
2021-08-25 11:27   
(Last edited: 2021-08-26 23:41)
OK, we finally re-architected our whole backup infrastructure, only to find this problem/bug hitting us hard again.

We use the latest bareos 20.2 version.

We now use one large folder for all backups with 10 concurrent consolidations (max). We use PostgreSQL as our database engine (so it cannot be because of MySQL). We tried to follow all best practices; I don't understand what is wrong with it :(


Storage {
        Name = AI-Incremental
        Device = AI-Incremental-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Storage {
        Name = AI-Consolidated
        Device = AI-Consolidated-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Device {
  Name = AI-Incremental
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Incremental-Autochanger"
  Device = AI-Incremental

  Changer Device = /dev/null
  Changer Command = ""
}


Device {
  Name = AI-Consolidated
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Consolidated-Autochanger"
  Device = AI-Consolidated

  Changer Device = /dev/null
  Changer Command = ""
}


I suppose the error must be easy to spot? Otherwise everyone would have this problem :(

(0004218)
hostedpower   
2021-08-25 11:32   
3838 machine.example.com-files machine.example.com Backup VirtualFull 0 0.00 B 0 Running
Messages (#, Timestamp, Message):
52 2021-08-25 11:25:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
51 2021-08-25 11:20:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
50 2021-08-25 11:15:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
49 2021-08-25 11:10:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
48 2021-08-25 11:05:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
47 2021-08-25 11:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
46 2021-08-25 10:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 45 down to 25: the same warning repeated every 5 minutes]
24 2021-08-25 09:05:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
23 2021-08-25 09:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
22 2021-08-25 08:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 21 down to 13: the same warning repeated every 5 minutes]
12 2021-08-25 08:05:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
11 2021-08-25 08:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
10 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0007" (/var/lib/bareos/storage)
9 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
8 2021-08-25 08:00:24 backup1-sd JobId 3838: Ready to append to end of Volume "vol-cons-0287" size=12609080131
7 2021-08-25 08:00:24 backup1-sd JobId 3838: Volume "vol-cons-0287" previously written, moving to end of data.
6 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Consolidated0007" to write.
5 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Incremental0007" to read.
4 2021-08-25 08:00:23 backup1-dir JobId 3838: Connected Storage daemon at backupx.xxxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-25 08:00:23 backup1-dir JobId 3838: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.379.bsr
(0004219)
hostedpower   
2021-08-25 11:48   
I see now that it tries to mount consolidated volumes on the incremental devices; you can see it in the sample above, but also below:
25-Aug 08:02 backup1-dir JobId 3860: Start Virtual Backup JobId 3860, Job=machine.example.com-files.2021-08-25_08.00.31_02
25-Aug 08:02 backup1-dir JobId 3860: Consolidating JobIds 3563,724
25-Aug 08:02 backup1-dir JobId 3860: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.394.bsr
25-Aug 08:02 backup1-dir JobId 3860: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Incremental0005" to read.
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Consolidated0005" to write.
25-Aug 08:02 backup1-sd JobId 3860: Volume "vol-cons-0292" previously written, moving to end of data.
25-Aug 08:02 backup1-sd JobId 3860: Ready to append to end of Volume "vol-cons-0292" size=26118365623
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Please mount read Volume "vol-cons-0287" for:
    Job: machine.example.com-files.2021-08-25_08.00.31_02
    Storage: "AI-Incremental0005" (/var/lib/bareos/storage)
    Pool: AI-Incremental
    Media type: AI

Might this be the cause? And what could be causing it?
(0004220)
hostedpower   
2021-08-25 11:51   
This is the first job today with the messages, but it succeeded anyway; maybe you could see what is going wrong here?

2021-08-24 15:53:54 backup1-dir JobId 3549: console command: run AfterJob ".bvfs_update JobId=3549"
30 2021-08-24 15:53:54 backup1-dir JobId 3549: End auto prune.

29 2021-08-24 15:53:54 backup1-dir JobId 3549: No Files found to prune.
28 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Files.
27 2021-08-24 15:53:54 backup1-dir JobId 3549: No Jobs found to prune.
26 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Jobs older than 6 months .
25 2021-08-24 15:53:54 backup1-dir JobId 3549: purged JobIds 3237,648 as they were consolidated into Job 3549
24 2021-08-24 15:53:54 backup1-dir JobId 3549: Bareos backup1-dir 20.0.2 (10Jun21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 3549
Job: another.xxx-files.2021-08-24_15.48.51_46
Backup Level: Virtual Full
Client: "another.xxx" 20.0.2 (10Jun21) Debian GNU/Linux 10 (buster),debian
FileSet: "linux-files" 2021-07-20 16:03:24
Pool: "AI-Consolidated" (From Job Pool's NextPool resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "AI-Consolidated" (From Storage from Pool's NextPool resource)
Scheduled time: 24-Aug-2021 15:48:51
Start time: 03-Aug-2021 23:08:50
End time: 03-Aug-2021 23:09:30
Elapsed time: 40 secs
Priority: 10
SD Files Written: 653
SD Bytes Written: 55,510,558 (55.51 MB)
Rate: 1387.8 KB/s
Volume name(s): vol-cons-0288
Volume Session Id: 2056
Volume Session Time: 1628888564
Last Volume Bytes: 55,596,662 (55.59 MB)
SD Errors: 0
SD termination status: OK
Accurate: yes
Bareos binary info: official Bareos subscription
Job triggered by: User
Termination: Backup OK

23 2021-08-24 15:53:54 backup1-dir JobId 3549: Joblevel was set to joblevel of first consolidated job: Incremental
22 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table done
21 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table with 653 entries start
20 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Incremental0008" (/var/lib/bareos/storage).
19 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Consolidated0008" (/var/lib/bareos/storage).
18 2021-08-24 15:53:54 backup1-sd JobId 3549: Elapsed time=00:00:01, Transfer rate=55.51 M Bytes/second
17 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-incr-0135" to file:block 0:2909195921.
16 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-incr-0135" on device "AI-Incremental0008" (/var/lib/bareos/storage).
15 2021-08-24 15:53:54 backup1-sd JobId 3549: End of Volume at file 0 on device "AI-Incremental0008" (/var/lib/bareos/storage), Volume "vol-cons-0284"
14 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-cons-0284" to file:block 0:307710024.
13 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-cons-0284" on device "AI-Incremental0008" (/var/lib/bareos/storage).
12 2021-08-24 15:48:54 backup1-sd JobId 3549: Please mount read Volume "vol-cons-0284" for:
Job: another.xxx-files.2021-08-24_15.48.51_46
Storage: "AI-Incremental0008" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
11 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0284 on "AI-Incremental0008" (/var/lib/bareos/storage)
10 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-cons-0284 from dev="AI-Incremental0005" (/var/lib/bareos/storage) to "AI-Incremental0008" (/var/lib/bareos/storage)
9 2021-08-24 15:48:54 backup1-sd JobId 3549: Wrote label to prelabeled Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage)
8 2021-08-24 15:48:54 backup1-sd JobId 3549: Labeled new Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage).
7 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Consolidated0008" to write.
6 2021-08-24 15:48:53 backup1-dir JobId 3549: Created new Volume "vol-cons-0288" in catalog.
5 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Incremental0008" to read.
4 2021-08-24 15:48:52 backup1-dir JobId 3549: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-24 15:48:52 backup1-dir JobId 3549: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.331.bsr
2 2021-08-24 15:48:52 backup1-dir JobId 3549: Consolidating JobIds 3237,648
1 2021-08-24 15:48:52 backup1-dir JobId 3549: Start Virtual Backup JobId 3549, Job=another.xxx-files.2021-08-24_15.48.51_46
(0004221)
hostedpower   
2021-08-25 11:54   
I just saw this "swap not possible" error also happen sometimes when the same device/storage/pool was used:

5 2021-08-24 15:54:03 backup1-sd JobId 3553: Ready to read from volume "vol-incr-0136" on device "AI-Incremental0002" (/var/lib/bareos/storage).
14 2021-08-24 15:49:03 backup1-sd JobId 3553: Please mount read Volume "vol-incr-0136" for:
Job: xxx.xxx.bxxe-files.2021-08-24_15.48.52_50
Storage: "AI-Incremental0002" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
13 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-incr-0136 on "AI-Incremental0002" (/var/lib/bareos/storage)
12 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-incr-0136 from dev="AI-Incremental0006" (/var/lib/bareos/storage) to "AI-Incremental0002" (/var/lib/bareos/storage)
11 2021-08-24 15:49:03 backup1-sd JobId 3553: End of Volume at file 0 on device "AI-Incremental0002" (/var/lib/bareos/storage), Volume "vol-cons-0285"
(0004222)
hostedpower   
2021-08-25 12:20   
PS: The Consolidate job was missing from the config posted above:

Job {
        Name = "Consolidate"
        Type = "Consolidate"
        Accurate = "yes"

        JobDefs = "DefaultJob"

        Schedule = "ConsolidateCycle"
        Max Full Consolidations = 200

        Maximum Concurrent Jobs = 1
        Prune Volumes = yes

        Priority = 11
}
(0004227)
hostedpower   
2021-08-26 13:11   
2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0282 on "AI-Incremental0004" (/var/lib/bareos/storage) <-------
10 2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0282 from dev="AI-Consolidated0010" (/var/lib/bareos/storage) to "AI-Incremental0004" (/var/lib/bareos/storage)
9 2021-08-26 11:34:12 backup1-sd JobId 4151: Wrote label to prelabeled Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage)
8 2021-08-26 11:34:12 backup1-sd JobId 4151: Labeled new Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage).
7 2021-08-26 11:34:12 backup1-dir JobId 4151: Using Device "AI-Incremental0001" to write.
6 2021-08-26 11:34:12 backup1-dir JobId 4151: Created new Volume "vol-cons-0298" in catalog.


None of the jobs even continue after this "event"...
(0004494)
hostedpower   
2022-01-31 08:33   
This still happens, even after using separate devices and labels etc.
(0005856)
bruno-at-bareos   
2024-03-20 14:47   
Not sure this is still of interest after 2 years.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1426 [bareos-core] director minor always 2022-02-07 10:37 2024-03-20 14:38
Reporter: mschiff Platform: Linux  
Assigned To: stephand OS: any  
Priority: normal OS Version: 3  
Status: acknowledged Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos send useless operator "mount" messages
Description: The default configuration has messages/Standard.conf which contains:

operator = <email> = mount

which should send an email if an operator is required for a job to continue.

But these mails will also be triggered on a busy bareos-sd with multiple virtual drives and multiple jobs running, when a job just needs to wait a bit for a volume to become available.
Every month, when our systems are doing virtual full backups at night, we get lots of mails like:

06-Feb 23:37 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0034" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File

But in the morning, all jobs have finished successfully.

So when one job is reading a volume and another job is waiting for the same volume, an email is triggered. But after waiting a couple of minutes, this "issue" solves itself.

It should be possible to set a timeout after which such messages are sent, so that they are only sent for jobs that are really hanging.

This is part of the joblog:

 2022-02-06 23:25:38 kilo-dir JobId 58793: Start Virtual Backup JobId 58793, Job=BackupIndia.2022-02-06_23.15.01_31
 2022-02-06 23:25:38 kilo-dir JobId 58793: Consolidating JobIds 58147,58164,58182,58200,58218,58236,58254,58272,58290,58308,58326,58344,58362,58380,58398,58416,58434,58452,58470,58488,58506,58524,58542,58560,58578,58596,58614,58632,58650,58668,58686,58704,58722,58740,58758,58764
 2022-02-06 23:25:40 kilo-dir JobId 58793: Bootstrap records written to /var/lib/bareos/kilo-dir.restore.16.bsr
 2022-02-06 23:25:40 kilo-dir JobId 58793: Connected Storage daemon at kilo.sys4.de:9103, encryption: TLS_AES_256_GCM_SHA384 TLSv1.3
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0001" to read.
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0002" to write.
 2022-02-06 23:26:42 kilo-sd JobId 58793: Ready to read from volume "VolFull-0165" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:42 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0165" to file:block 0:3367481982.
 2022-02-06 23:26:53 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0165"
 2022-02-06 23:26:53 kilo-sd JobId 58793: Ready to read from volume "VolFull-0168" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:53 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0168" to file:block 2:1033779909.
 2022-02-06 23:26:54 kilo-sd JobId 58793: End of Volume at file 2 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0168"
 2022-02-06 23:26:54 kilo-sd JobId 58793: Ready to read from volume "VolFull-0169" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:54 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0169" to file:block 0:64702.
 2022-02-06 23:27:03 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0169"
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/vol_mgr.cc:542 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=VolIncr-0022 from dev="MultiFileStorage0004" (/srv/backup/bareos) to "MultiFileStorage0001" (/srv/backup/bareos)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:268 Could not reserve volume VolIncr-0022 on "MultiFileStorage0001" (/srv/backup/bareos)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0022" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0022" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0022" to file:block 0:3331542115.
 2022-02-06 23:32:03 kilo-sd JobId 58793: End of Volume at file 0 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolIncr-0022"
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0023" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0023" to file:block 0:750086502.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004526)
stephand   
2022-02-24 11:46   
Thanks for reporting this issue. I have also already noticed that problem.
It will be very hard to fix this properly without a complete redesign of the whole reservation logic, which would be a huge effort.
But meanwhile we could think about a workaround to mitigate it somehow.
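One possible mitigation, purely as a sketch and not a fix for the reservation logic itself, would be to stop classing mount messages as operator mail in the Messages resource and log them to a file instead, for example:

Messages {
  Name = Standard
  ...
  # operator = <email> = mount                          # comment out the mount operator mail
  append = "/var/log/bareos/bareos.log" = all, !skipped  # keep the mount messages in the log file
  ...
}

The obvious downside is that a genuinely stuck job no longer produces an email, so this trades the noise for the risk of missing a real intervention request.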
(0005854)
bruno-at-bareos   
2024-03-20 14:38   
related to internal task 5574

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1155 [bareos-core] General feature always 2019-12-13 09:31 2024-03-20 13:54
Reporter: bigz Platform: Linux  
Assigned To: joergs OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 19.2.4~pre  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Impossible to connect to a TLS with no PSK configured director
Description: I configured my director with TLS in a CentOS 7 Docker image.

I want to connect with the python-bareos pip module in order to send a command. The Python client does not support a TLS configuration without PSK.

I think the Python client does not support this configuration. I made an enhancement in my fork on GitHub (https://github.com/bigzbigz/bareos/tree/dev/bigz/master/python-support-tls-without-psk).

I plan to open a pull request on the official repo in order to fix the problem. I'd like your opinion first.


Tags:
Steps To Reproduce: I work in a venv

-> % pip install sslpsk python-bareos
[...]
-> % pip list
Package Version Location
--------------- ------- --------------------------------------------
pip 19.3.1
pkg-resources 0.0.0
python-bareos 18.2.5
python-dateutil 2.8.1
setuptools 42.0.2
six 1.13.0
sslpsk 1.0.0
wheel 0.33.6

First I try with TLS-PSK required:

-> % python bconsole.py -d --name bareos-dir --port 9101 --address bareos-dir -p $PASS --tls-psk-require
DEBUG bconsole.<module>: options: {'name': 'bareos-dir', 'password': 'xxxxxxxx', 'port': '9101', 'address': 'bareos-dir', 'protocolversion': 2, 'tls_psk_require': True}
DEBUG lowlevel.__init__: init
DEBUG lowlevel.__connect_plain: connected to bareos-dir:9101
DEBUG lowlevel.__connect_tls_psk: identity = R_CONSOLEbareos-dir, password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Traceback (most recent call last):
  File "bconsole.py", line 28, in <module>
    director = bareos.bsock.DirectorConsole(**bareos_args)
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/directorconsole.py", line 99, in __init__
    self.connect(address, port, dirname, ConnectionType.DIRECTOR, name, password)
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py", line 104, in connect
    return self.__connect()
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py", line 119, in __connect
    self.__connect_tls_psk()
  File "/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py", line 191, in __connect_tls_psk
    server_side=False)
  File "/home/user/.virtualenvs/bareos/lib/python3.7/site-packages/sslpsk/sslpsk.py", line 106, in wrap_socket
    _ssl_set_psk_client_callback(sock, cb)
  File "/home/user/.virtualenvs/bareos/lib/python3.7/site-packages/sslpsk/sslpsk.py", line 73, in _ssl_set_psk_client_callback
    ssl_id = _sslpsk.sslpsk_set_psk_client_callback(_sslobj(sock))
  File "/home/user/.virtualenvs/bareos/lib/python3.7/site-packages/sslpsk/sslpsk.py", line 55, in _sslobj
    return sock._sslobj._sslobj
AttributeError: '_ssl._SSLSocket' object has no attribute '_sslobj'

Then I try without TLS-PSK required (default configuration):

-> % python bconsole.py -d --name bareos-dir --port 9101 --address bareos-dir -p $PASS
/home/user/Downloads/bareos/python-bareos/bareos/bsock/lowlevel.py:38: UserWarning: Connection encryption via TLS-PSK is not available, as the module sslpsk is not installed.
  warnings.warn(u'Connection encryption via TLS-PSK is not available, as the module sslpsk is not installed.')
DEBUG bconsole.<module>: options: {'name': 'bareos-dir', 'password': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'port': '9101', 'address': 'bareos-dir', 'protocolversion': 2, 'tls_psk_require': False}
DEBUG lowlevel.__init__: init
DEBUG lowlevel.__connect_plain: connected to bareos-dir:9101
DEBUG lowlevel.__connect: Encryption: None
DEBUG lowlevel.send: bytearray(b'Hello bareos-dir calling version 18.2.5')
DEBUG lowlevel.recv_bytes: expecting 4 bytes.
DEBUG lowlevel.recv: header: -4
WARNING lowlevel._handleSocketError: socket error: Conversation terminated (-4)
Received unexcepted signal: Conversation terminated (-4)
Additional Information: Director configuration:
Director {
  Name = @@DIR_NAME@@-dir
  DIRport = 9101 # where we listen for UA connections
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  WorkingDirectory = "/var/spool/bareos"
  PidDirectory = "/var/run"
  Password = "@@DIR_PASSWORD@@" # Console password
  Messages = Daemon
  Auditing = yes
  TLS Enable = yes
  TLS Require = yes
  TLS DH File = /etc/ssl/dh1024.pem
  TLS CA Certificate File = /etc/ssl/certs/ca-bundle.crt
  TLS Key = /etc/ssl/private/client.key
  TLS Certificate = /etc/ssl/certs/client.pem
}
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: console.log (32,684 bytes) 2019-12-17 22:44
https://bugs.bareos.org/file_download.php?file_id=396&type=bug
Notes
(0003663)
arogge   
2019-12-13 10:46   
Thanks for taking a look.
I'm not sure I fully understand yet what happens in your environment. However, if you want to touch the code, you should probably check out the master branch and use python-bareos from there.
(0003664)
bigz   
2019-12-13 10:57   
I verified the error with the code from the master branch in the python-bareos folder.
(0003665)
joergs   
2019-12-13 11:39   
it is correct, that python-bareos does not support TLS other then TLS-PSK.
My assumption has been, that most new installations will use TLS-PSK. However, a patch to also support normal TLS without PSK is welcome.
I took a first look at your code. It looks good so far.
However, if I read it correctly, you allow TLS, but don't verify against a custom CA? Have I missed something there or is it your intention to plainly accepting all TLS connections?

Have you seen the systemtest testing the python-bareos authentication?
See https://docs.bareos.org/master/DeveloperGuide/BuildAndTestBareos.html#building-the-test-environment

Instead of running all tests, you can also change to the build/systemtests/tests/python-bareos-test directory and run "./testrunner" from there.
This way you can verify, that your change to not change existing behavior and maybe you can add extra tests for your functionality.

With what version of Python have you tested? I experienced difficulties with the Python3 version of sslpsk. What OS/distribution did you use, as at least on new Fedora (>=30) systems there are also compile problems with sslpsk?

Currently, we use the tls_psk_enable and tls_psk_require parameters. You added tls_enable and tls_require. I'm not sure if this is the best way to configure it, especially if more parameters such as a CA are required. I'll discuss this in our next developer meeting.
(0003666)
bigz   
2019-12-13 12:06   
You can use a custom CA (this is my configuration). ssl.wrap_socket automatically checks against the CAs installed in the operating system (normally in /etc/ssl/certs). It is also possible to load an extra CA chain.

I hadn't seen the systemtest. I use my travis-ci account to check the existing CI from the official repo.
I will think about whether a new test is possible to verify my enhancement.

I use Python version 3.7.3. My OS is Ubuntu 19.04 and I use the official Python package. Modules are installed in a virtualenv with pip.
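For reference, a minimal sketch (not part of python-bareos) of the verification behavior described above: Python's ssl module checks the peer certificate against the CAs installed in the operating system by default, and an extra CA chain can be loaded on top of that. Host name and CA file path are placeholders.

import socket
import ssl

# The default context verifies the peer against the CAs installed in the
# operating system (typically /etc/ssl/certs).
context = ssl.create_default_context()
# Optionally load an additional CA chain, e.g. a private/custom CA.
context.load_verify_locations(cafile="/etc/ssl/certs/my-custom-ca.pem")

with socket.create_connection(("bareos-dir", 9101)) as sock:
    with context.wrap_socket(sock, server_hostname="bareos-dir") as tls_sock:
        print(tls_sock.version(), tls_sock.cipher())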
(0003668)
bigz   
2019-12-14 22:01   
(Last edited: 2019-12-14 22:30)
Hello joergs
I have difficulties building the bareos project with cmake as you explained in your note. I think I am missing dependencies but I can't find which ones. I installed libacl1-dev and zlib1g-dev on my Ubuntu 19.04. Do you have the list of dependency packages that are needed?
When I use this command I have this error.
-> % cmake -Dsqlite3=yes -Dtraymonitor=yes ../bareos
[...]
-- Disabled test: system:bconsole-pam
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
Readline_INCLUDE_DIR (ADVANCED)
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console
   used as include directory in directory /home/user/Perso/clion/bareos/core/src/console

I don't understand the error
Thanks

(0003669)
bigz   
2019-12-14 23:40   
I managed to build the project... I am continuing the investigation, but my previous errors are solved.
(0003670)
joergs   
2019-12-15 09:48   
Good that you got through the build process. You can find the dependency packages in the files we use to create Debian packages: https://github.com/bareos/bareos/blob/master/core/platforms/packaging/bareos.dsc and/or https://github.com/bareos/bareos/blob/master/core/debian/control (or http://download.bareos.org/bareos/experimental/nightly/xUbuntu_18.04/bareos_19.2.4*.dsc). Make sure to have libjansson-dev installed; otherwise, Bareos will build but miss functionality required for the test.
(0003671)
bigz   
2019-12-15 14:51   
Hello,
I have a small error in ./testrunner

-> % /bin/zsh /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/testrunner [devel|…]
creating database (sqlite3)
running /home/user/Perso/clion/bareos/cmake-build-release/systemtests/scripts/setup
 
 
=== python-bareos-test: starting at 14:46:34 ===
=
=
exit(0) is called. Set test to failure and end test.
end_test:7: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/bareos.*.traceback
end_test:8: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/bareos.*.traceback
end_test:9: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/*.bactrace
end_test:10: no matches found: /home/user/Perso/clion/bareos/cmake-build-release/systemtests/tests/python-bareos-test/working/*.bactrace
 
  !!!!! python-bareos-test failed!!! 14:46:34 !!!!!
   Status: estat=998 zombie=0 backup=0 restore=0 diff=0
 
I think I don't understand the behavior of the start_test() function in "functions". A trap is added at the beginning of the function and the trap always fires at the end of start_test(); as a consequence end_test() is called and no tests are run. Is this the desired behavior?
(0003672)
joergs   
2019-12-15 19:26   
Interesting. However, this problem only occurs when using zsh. It seems that you are the first who ever tried it with zsh. Normally, we use bash (dash) or ksh. With these, the test runs as expected.
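For illustration (not taken from the Bareos sources), the shell difference at play: zsh runs an EXIT trap that was set inside a function when that function returns, while bash only runs it when the whole script exits, which matches the behavior described above.

# Run this snippet once with bash and once with zsh.
start_test() {
    trap 'echo "trap fired"' EXIT
    echo "start_test done"
}
start_test
echo "still running"
# bash: start_test done / still running / trap fired
# zsh:  start_test done / trap fired / still running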
(0003675)
bigz   
2019-12-16 20:22   
(Last edited: 2019-12-16 22:30)
The problem is solved; it came from my zsh interpreter. I just changed it to bash.
I still have a problem, though, because I use the default Python 3.7 version of my Ubuntu OS. There seems to be a problem with the sslpsk module and Python 3.7 (https://github.com/drbild/sslpsk/issues/11). I will try with Python 3.6 and I'll give you the answer.

(0003679)
bigz   
2019-12-17 22:44   
I changed my Python version to 3.6.5 in order to avoid the sslpsk problem.
Sorry, but I still get errors when I execute ./testrunner from the master branch. I uploaded the console.log file with the stdout.

Could you please take a look and tell me what you think? In my opinion the problem comes from "WARNING lowlevel._handleSocketError: socket error: [SSL: ATTEMPT_TO_REUSE_SESSION_IN_DIFFERENT_CONTEXT] attempt to reuse session in different context (_ssl.c:833)". In this situation, the connection falls back to plain and the test fails.
I have no problem when I use the built bconsole -c bconsole-admin-tls.conf or bconsole -c bconsole-admin-notls.conf commands. Both are encrypted with TLS_CHACHA20_POLY1305_SHA256.
I tried to google ATTEMPT_TO_REUSE_SESSION_IN_DIFFERENT_CONTEXT but didn't find an interesting answer.
Maybe you have an opinion on the problem?
As you said in your email, I will create a draft pull request tomorrow.
(0003705)
bigz   
2019-12-18 22:48   
(Last edited: 2019-12-18 23:10)
Hello,
I worked on it today and rebased onto the bareos master branch. I do not have the problem anymore. You have made commits in the last few days, but I don't understand how you solved my problem.

I made some small fixes in python-bareos with a pull request in the bareos GitHub repo.

I still get errors when I execute the Python unit tests. Do you manage to run the unit tests? Could you send me a log file of the execution?

Thanks

(0003707)
joergs   
2019-12-19 12:55   
Hi, I accepted https://github.com/bareos/bareos/pull/382.

Have I understood you correctly that connecting to a Director console without TLS-PSK, but with TLS by certificate, does work now? I have not changed the behavior intentionally.

The systemtest also fails on my system when using Python 3. With Python 2 it works without problems. I assumed a general problem with sslpsk on Python 3, but after you said it works somehow in your environment, I assumed a local problem.
After your hint, I checked the project https://github.com/drbild/sslpsk again and saw that the example code works on Python 3. I hope to find the time to look into this in more detail soon.
(0003712)
joergs   
2019-12-20 17:22   
I'm not sure what has changed, but the example and test code from https://github.com/drbild/sslpsk no longer works on my machine.
(0003713)
bigz   
2019-12-21 14:42   
Hello,
It seems not to work for him either => https://travis-ci.org/drbild/sslpsk
(0004100)
b.braunger@syseleven.de   
2021-03-18 16:18   
I have two hosts using python-bareos with sslpsk: one Ubuntu 14.04 with Python 3.4 (works well) and one Ubuntu 20.04 with Python 3.6, which throws this error:

ssl.SSLError: [SSL: ATTEMPT_TO_REUSE_SESSION_IN_DIFFERENT_CONTEXT] attempt to reuse session in different context (_ssl.c:852)

Am I getting that right, that sslpsk version 1.0.0 seems not to work on modern systems at all?

One can circumvent this problem by setting tls_psk_require=False and letting the connection fall back to unencrypted mode.
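For reference, a rough sketch of that workaround with python-bareos; the parameter names follow the options shown in the debug output earlier in this report, and address/credentials are placeholders. Note that the data then crosses the wire unencrypted.

import bareos.bsock

console = bareos.bsock.DirectorConsole(
    address="bareos-dir",
    port=9101,
    password=bareos.bsock.Password("secret"),
    tls_psk_require=False,  # fall back to plain if TLS-PSK is unavailable
)
print(console.call("status director"))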
(0004101)
joergs   
2021-03-18 16:25   
Yes, this is correct. Another way to circumvent this problem is described at https://docs.bareos.org/master/include/autogenerated/autosummary/python-bareos/bareos.bsock.html#transport-encryption-tls-psk

Basically it says: use the latest version from the sslpsk master branch and set it to TLSv1.2.
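A rough sketch of that approach (sslpsk installed from its master branch, protocol pinned to TLSv1.2); the PSK handling here is simplified compared to what python-bareos derives from the console credentials, so treat it as an assumption-laden illustration only.

import socket
import ssl
import sslpsk  # needs the current master branch, not the 1.0.0 release from PyPI

sock = socket.create_connection(("bareos-dir", 9101))
# Pinning TLSv1.2 avoids the ATTEMPT_TO_REUSE_SESSION_IN_DIFFERENT_CONTEXT
# error seen with newer OpenSSL defaults.
tls_sock = sslpsk.wrap_socket(
    sock,
    psk=b"secret",  # placeholder pre-shared key
    ssl_version=ssl.PROTOCOL_TLSv1_2,
)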
(0004102)
bigz   
2021-03-19 22:41   
I confirm joergs' words
(0005846)
bruno-at-bareos   
2024-03-20 13:54   
PR merged. TLS-PSK is native in Python 3.13.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1599 [bareos-core] installer / packages minor always 2024-02-12 11:30 2024-03-07 20:11
Reporter: adf_patrickha Platform: Linux  
Assigned To: slederer OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: FD LDAP Plugin has broken dependencies on Debian 11+ (bareos-filedaemon-ldap-python-plugin -> python-ldap)
Description: The package of the FD Python plugin for LDAP backups (bareos-filedaemon-ldap-python-plugin) has broken dependencies, starting with Debian 11. The package depends on `python-ldap`, which is no longer available in Debian 11 and should be replaced with `python3-ldap`.
I'm not sure if the python plugin is fully python3 compatible, as the script uses the shebang `#!/usr/bin/env python` and not `#!/usr/bin/env python3`. But as the package `python-ldap` is no longer installable on Debian 11 and higher and Python 2 is EOL, it should be updated and the dependency should be changed to `python3-ldap` either way.
Tags: debian 11, fd, filedemon, ldap, plugin
Steps To Reproduce: Try to install the package `bareos-filedaemon-ldap-python-plugin` on Debian 11 or higher with: `apt install bareos-filedaemon-ldap-python-plugin`.
Additional Information:
System Description
Attached Files:
Notes
(0005764)
bruno-at-bareos   
2024-02-12 16:36   
We will take care of this in a future update
(0005778)
adf_patrickha   
2024-02-15 17:15   
Thx for looking into it! Also some additional documentation on how to actually use the plugin (FileSet, Jobs, client config, how to do restores, ...) would be nice. At the moment it's not really usable.
(0005779)
adf_patrickha   
2024-02-15 17:15   
I'm referring to this documentation: https://docs.bareos.org/TasksAndConcepts/Plugins.html#ldap-plugin
(0005827)
slederer   
2024-03-07 20:11   
Fix committed to bareos master branch with changesetid 18726.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1601 [bareos-core] webui major always 2024-02-21 15:29 2024-02-28 10:07
Reporter: khvalera Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 23.0.1  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: It is impossible to restore files from an archive via the web interface
Description: It is not possible to restore archived files via the web interface when “Merge all related jobs to last full backup of selected backup job” is set to “Yes”.
When this option is selected, files are not displayed in the “File selection”.
In bareos-audit.log I see the following commands with empty parameters:
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_get_jobids jobid=7690 all
backup-dir: Console [web-admin] from [::1] cmdline llist backups client="client_16.1" fileset="any" order=desc
backup-dir: Console [web-admin] from [::1] cmdline show fileset="linux-config-system"
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_update
backup-dir: Console [web-admin] from [::1] cmdline llist clients current
backup-dir: Console [web-admin] from [::1] cmdline show client=client_16.1
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_get_jobids jobid=7690 all
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_lsdirs jobid= path= limit=1000 offset=0
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_lsdirs jobid= path=@ limit=1000
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_lsfiles jobid= path= limit=1000 offset=0
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: Screenshot_20240221_160412.png (85,262 bytes) 2024-02-21 15:31
https://bugs.bareos.org/file_download.php?file_id=613&type=bug

Screenshot_20240221_160228.png (73,327 bytes) 2024-02-21 15:31
https://bugs.bareos.org/file_download.php?file_id=614&type=bug

20240221_163441_1.jpg (190,940 bytes) 2024-02-21 16:36
https://bugs.bareos.org/file_download.php?file_id=615&type=bug

Screenshot_20240221_180853.png (67,805 bytes) 2024-02-21 17:10
https://bugs.bareos.org/file_download.php?file_id=616&type=bug
Notes
(0005791)
khvalera   
2024-02-21 15:31   
I am attaching screenshots.
(0005792)
bruno-at-bareos   
2024-02-21 16:31   
Archive jobs are not meant to be merged with normal jobs.
If you enable merging of all related jobs, then only the active full and its incremental chain can be used.

You probably want to use the other tabs instead (restore file by version).
(0005793)
bruno-at-bareos   
2024-02-21 16:36   
Just to add: it is working in both cases for archive here, with version 23.0.1.
(0005794)
khvalera   
2024-02-21 17:09   
I probably didn't quite understand you, but as I understand it, with this parameter it should be possible to combine all incremental jobs with a full one!? This does not happen, and even if you select a full job, the list of files is empty.

(0005795)
khvalera   
2024-02-21 17:10   
I am attaching screenshots.
(0005796)
bruno-at-bareos   
2024-02-21 17:26   
> I probably didn't quite understand you, but as I understand it, with this parameter it should be possible to combine all incremental jobs with a full one!? This does not happen, and even if you select a full job, the list of files is empty.

Closing as invalid. A full plus all incrementals on top of it are proposed if the chain works, of course, but you were talking about archive.

If it is not working, please go to bconsole restore option 5 (which is the equivalent) and report success or error.
Has the bvfs update been done for that job, or did it fail because space is missing, etc.?
(0005797)
bruno-at-bareos   
2024-02-21 17:27   
No changes needed.
Please upgrade to the current version, which is 23.0.1.
(0005798)
khvalera   
2024-02-21 17:44   
(Last edited: 2024-02-21 17:44)
I'm using version 23.0.1; it was not offered in the bug tracker interface, which is why I selected 22.1.3.
Tried:
  .bvfs_clear_cache yes
  .bvfs_update
no errors.
There are no problems with restoring from the console (I just tried it, everything works).
(0005799)
bruno-at-bareos   
2024-02-22 09:36   
set correct version
(0005800)
bruno-at-bareos   
2024-02-22 09:47   
I'm a bit astonished, as version 23.0.1 is present in the first position of the selector.

I'm sorry to inform you that we can't reproduce your problem; you have to give us more information so we will be able to make a diagnosis.

On a test system, picking any incremental leads to the following:
show client=bareos-fd
.bvfs_get_jobids jobid=4051
.bvfs_lsdirs jobid=15284,15506,15184,15193,15203,15213,15223,15232,15242,15252,15262,15271,15280,15290,15303,15313,15344,15354,15364,15373,15383,15393,15433,15446,15456,15465,15475,15484,15494,15502 path= limit=1000 offset=0
.bvfs_lsfiles jobid=15284,15506,15184,15193,15203,15213,15223,15232,15242,15252,15262,15271,15280,15290,15303,15313,15344,15354,15364,15373,15383,15393,15433,15446,15456,15465,15475,15484,15494,15502 path= limit=1000 offset=0
.bvfs_lsfiles jobid=15284,15506,15184,15193,15203,15213,15223,15232,15242,15252,15262,15271,15280,15290,15303,15313,15344,15354,15364,15373,15383,15393,15433,15446,15456,15465,15475,15484,15494,15502 path=@ limit=1000 offset=0

So your first .bvfs_get_jobids jobid=4151 in bconsole should return the list of related or mandatory jobids, etc.
(0005806)
khvalera   
2024-02-22 16:06   
My command .bvfs_get_jobids jobid=7737 does not return the dependent jobs for JobId 7737.
What could be the problem?
(0005807)
bruno-at-bareos   
2024-02-22 16:25   
No idea why: this needs a real, proper investigation, which often takes time.

You may want to contact our sales department to get a quote for consulting/subscription/support; see sales(at)bareos(dot)com

Otherwise you may add information here or ask for free advice on the mailing list.
(0005819)
bruno-at-bareos   
2024-02-28 10:07   
The software works as expected when a jobid and its full chain of incrementals are present.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1591 [bareos-core] director minor random 2024-01-19 16:17 2024-02-08 17:19
Reporter: raschu Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error: stored/device.cc:222 Error getting Volume info
Description: Hi,

we manage over 70 backup jobs. In rare cases we observe the error "Error: stored/device.cc:222 Error getting Volume info" in the backup mails. The job is successful but is marked with a warning.

2 examples:

19-Jan 02:19 bareos-dir JobId 8696: There are no more Jobs associated with Volume "incr7d-1589". Marking it purged.
19-Jan 02:19 bareos-dir JobId 8696: All records pruned from Volume "incr7d-1589"; marking it "Purged"
19-Jan 02:19 bareos-dir JobId 8696: Recycled volume "incr7d-1589"
19-Jan 02:19 bareos-dir JobId 8696: Using Device "bstore02-vd01" to write.
19-Jan 02:19 client01 JobId 8696: Connected Storage daemon at storage02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:19 client01 JobId 8696: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:19 client01 JobId 8696: Extended attribute support is enabled
19-Jan 02:19 client01 JobId 8696: ACL support is enabled
19-Jan 02:19 bareos-sd JobId 8696: Recycled volume "incr7d-1589" on device "bstore02-vd01" (/bstore02), all previous data lost.
19-Jan 02:19 bareos-dir JobId 8696: Max Volume jobs=1 exceeded. Marking Volume "incr7d-1589" as Used.
19-Jan 02:19 bareos-sd JobId 8696: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "incr7d-1589" catalog status is Used, but should be Append, Purged or Recycle.
19-Jan 02:19 bareos-sd JobId 8696: Releasing device "bstore02-vd01" (/bstore02).
19-Jan 02:19 bareos-sd JobId 8696: Elapsed time=00:00:02, Transfer rate=7.527 M Bytes/second


19-Jan 01:00 bareos-dir JobId 8693: Sending Accurate information.
19-Jan 02:15 bareos-dir JobId 8693: Created new Volume "aincr30d-bsd02-2821" in catalog.
19-Jan 02:15 bareos-dir JobId 8693: Using Device "bstore02-vd01" to write.
19-Jan 02:17 client02 JobId 8693: Connected Storage daemon at storage02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:17 client02 JobId 8693: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:17 client02 JobId 8693: Extended attribute support is enabled
19-Jan 02:17 bareos-sd JobId 8693: Labeled new Volume "aincr30d-bsd02-2821" on device "bstore02-vd01" (/bstore02).
19-Jan 02:17 client02 JobId 8693: ACL support is enabled
19-Jan 02:17 bareos-sd JobId 8693: Wrote label to prelabeled Volume "aincr30d-bsd02-2821" on device "bstore02-vd01" (/bstore02)
19-Jan 02:17 bareos-dir JobId 8693: Max Volume jobs=1 exceeded. Marking Volume "aincr30d-bsd02-2821" as Used.
19-Jan 02:18 bareos-sd JobId 8693: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "aincr30d-bsd02-2821" catalog status is Used, but should be Append, Purged or Recycle.
19-Jan 02:18 bareos-sd JobId 8693: Releasing device "bstore02-vd01" (/bstore02).
19-Jan 02:18 bareos-sd JobId 8693: Elapsed time=00:01:29, Transfer rate=345.0 K Bytes/second


I can't understand the error. The volumes were either newly created or recently reset.
The director has the version "23.0.1~pre57.8e89bfe0a-40".

Thanks Ralf
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: tracefile.zip (223,303 bytes) 2024-02-06 15:39
https://bugs.bareos.org/file_download.php?file_id=600&type=bug
Notes
(0005712)
bruno-at-bareos   
2024-01-23 13:48   
Two jobs accessing the same volume at the same time ?
(0005717)
raschu   
2024-01-24 18:02   
Thanks Bruno. No, because I allow only 1 job per volume:

"""
Pool {
  Name = aincr30d-bsd02 # 30 days always incremental
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 65 days # 30 days, recommended retention buffer >2x
  Maximum Volume Bytes = 5 g # Limit Volume size to something reasonable
  Volume Use Duration = 23 hours # defines the time period that the Volume can be written
  Maximum Volume Jobs = 1 # only one job per volume, so that failed jobs can be deleted more easily
  Label Format = "aincr30d-bsd02-" # Volumes label
  Storage = bsd02
  Next Pool = aicons30d-bsd02
}
"""

Bye Ralf
(0005719)
bruno-at-bareos   
2024-01-25 08:59   
And what about the maximum concurrent jobs on the device?
(0005721)
raschu   
2024-01-25 10:01   
(Last edited: 2024-01-25 10:08)
Hi Bruno, we use some virtual devices for parallel jobs. The config is:

Storage { # define myself
  Name = bsd02
  Description = "storage for internal backup - filter: !hrz in fqdn"
  Address = storage02
  Password = "pass"
  Device = bstore02
  Device = bstore02-vd01
  Device = bstore02-vd02
  Device = bstore02-vd03
  Media Type = File02
  Maximum Concurrent Jobs = 20
}

Config from the storage daemon:

Device {
  Name = bstore02
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Device {
  Name = bstore02-vd01
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "virtual device - linked for more concurrent jobs"
}

Device {
  Name = bstore02-vd02
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "virtual device - linked for more concurrent jobs"
}

Device {
  Name = bstore02-vd03
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "virtual device - linked for more concurrent jobs"
}

I took the idea from here: https://svennd.be/concurrent-jobs-in-bareos-with-disk-storage/


Thanks Ralf
(0005722)
bruno-at-bareos   
2024-01-25 13:25   
Things look good. Would you mind digging out the complete job log for the jobs that have this issue?

By the way, if you want to optimize the configuration even more, you can take inspiration from the upcoming
https://github.com/bareos/bareos/pull/1467#issuecomment-1780956976
;-)
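For illustration only, a rough sketch of that idea (the exact layout may differ from the PR): group the existing file devices under one Autochanger resource in the storage daemon and let the director address that autochanger instead of listing every virtual device. Device and resource names are reused from the configuration above.

# bareos-sd: virtual autochanger grouping the existing file devices (sketch)
Autochanger {
  Name = bstore02-changer
  Device = bstore02, bstore02-vd01, bstore02-vd02, bstore02-vd03
  Changer Device = /dev/null
  Changer Command = ""
}

# bareos-dir: Storage resource pointing at the autochanger (sketch)
Storage {
  Name = bsd02
  Address = storage02
  Password = "pass"
  Device = bstore02-changer
  Media Type = File02
  Autochanger = yes
  Maximum Concurrent Jobs = 20
}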
(0005727)
raschu   
2024-02-05 10:35   
Hi Bruno, thanks. The new option with the autochanger is good :-)

Now here are more examples. I got the error 4 times (in different jobs) last weekend :-/

"""
05-Feb 04:01 bareos-dir JobId 9638: Start Backup JobId 9638, Job=client01.2024-02-05_04.00.01_09
05-Feb 04:01 bareos-dir JobId 9638: Connected Storage daemon at bstore02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Connected Client: client01 at client01:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Handshake: Immediate TLS
05-Feb 04:01 bareos-dir JobId 9638: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Sending Accurate information.
05-Feb 04:29 bareos-dir JobId 9638: Created new Volume "aincr180d-bsd02-3234" in catalog.
05-Feb 04:29 bareos-dir JobId 9638: Using Device "bstore02-vd03" to write.
05-Feb 04:29 client01 JobId 9638: Connected Storage daemon at bstore02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:29 client01 JobId 9638: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:29 client01 JobId 9638: Extended attribute support is enabled
05-Feb 04:29 bareos-sd JobId 9638: Labeled new Volume "aincr180d-bsd02-3234" on device "bstore02-vd03" (/bstore02).
05-Feb 04:29 client01 JobId 9638: ACL support is enabled
05-Feb 04:29 bareos-sd JobId 9638: Wrote label to prelabeled Volume "aincr180d-bsd02-3234" on device "bstore02-vd03" (/bstore02)
05-Feb 04:29 bareos-dir JobId 9638: Max Volume jobs=1 exceeded. Marking Volume "aincr180d-bsd02-3234" as Used.
05-Feb 04:29 bareos-sd JobId 9638: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "aincr180d-bsd02-3234" catalog status is Used, but should be Append, Purged or Recycle.
05-Feb 04:29 bareos-sd JobId 9638: Releasing device "bstore02-vd03" (/bstore02).
05-Feb 04:29 bareos-sd JobId 9638: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
05-Feb 04:29 bareos-dir JobId 9638: Bareos bareos-dir 23.0.2~pre6.5fbc90f69 (23Jan24):
  Build OS: Debian GNU/Linux 11 (bullseye)
  JobId: 9638
  Job: client01.2024-02-05_04.00.01_09
  Backup Level: Incremental, since=2024-02-04 04:06:17
  Client: "client01" 23.0.2~pre32.0a0e55739 (31Jan24) Debian GNU/Linux 12 (bookworm),debian
  FileSet: "client01" 2024-01-19 04:00:00
  Pool: "aincr180d-bsd02" (From Job resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "bsd02" (From Pool resource)
  Scheduled time: 05-Feb-2024 04:00:01
  Start time: 05-Feb-2024 04:01:02
  End time: 05-Feb-2024 04:29:08
  Elapsed time: 28 mins 6 secs
  Priority: 10
  Allow Mixed Priority: no
  FD Files Written: 0
  SD Files Written: 0
  FD Bytes Written: 0 (0 B)
  SD Bytes Written: 0 (0 B)
  Quota Used: 14,609,097,104 (14.60 GB)
  Burst Quota: 0 (0 B)
  Soft Quota: 26,843,545,600 (26.84 GB)
  Hard Quota: 32,212,254,720 (32.21 GB)
  Grace Expiry Date: Soft Quota not exceeded
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: yes
  Accurate: yes
  Volume name(s):
  Volume Session Id: 565
  Volume Session Time: 1704452670
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 0
  SD Errors: 1
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
  Job triggered by: Scheduler
  Termination: Backup OK -- with warnings
"""
(0005728)
bruno-at-bareos   
2024-02-05 10:50   
So to summarize: the client is using encryption and you have quotas in place?

Is there any kind of predictable path to reproduce it, so you may be able to run the daemons with a high debug level?
(0005729)
raschu   
2024-02-05 11:04   
Hi Bruno, yes we use encryption and quotas.

The clients belong to "internal customers", so it is not so easy to set a higher debug level.
But in my opinion it must be a director or catalog problem!?

It is strange, because Bareos creates a new volume and then detects its status as 'Used'.

Thanks Ralf
(0005730)
bruno-at-bareos   
2024-02-05 11:23   
You can raise the debug level for any client in bconsole, so that is not a big concern, I would say.

The question is why you are able to reproduce that error, while we don't with all the test (and real) runs here.
If we can reproduce it, we know we can fix it; the problem is how I can reproduce it ;-)
(0005731)
raschu   
2024-02-05 11:40   
Hi Bruno, okay - thanks :-)

Perfect, I set the debug level for one client which often gets the error.

*setdebug level=200 trace=1 timestamp=1 client=renamedclient
Connecting to Client renamedclient at renamedclient:9102
2000 OK setdebug=200 trace=1 hangup=0 timestamp=1 tracefile=/var/lib/bareos/renamedclient.trace

Now I wait for the next job this night.

Bye Ralf
(0005732)
bruno-at-bareos   
2024-02-05 13:52   
But you will also need to trace the SD ...
Beware that the tracefile can quickly grow large ...
(0005733)
raschu   
2024-02-05 15:48   
Thanks, the sd is now also in debug mode :-)
(0005739)
raschu   
2024-02-06 15:39   
Hi Bruno, this night we got the error again and debug was enabled :) I hope it helps to find the problem. I changed the hostnames.

I attached the trace logs. The log from the client-fd is very large and does not contain any data that will help us.

Thanks Ralf

The jobid was 9689. Here are the details:


06-Feb 04:00 bareos-dir JobId 9689: Start Backup JobId 9689, Job=client01.renamed.2024-02-06_04.00.01_09
06-Feb 04:00 bareos-dir JobId 9689: Connected Storage daemon at storageserver.renamed:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
06-Feb 04:00 bareos-dir JobId 9689: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
06-Feb 04:00 bareos-dir JobId 9689: Probing client protocol... (result will be saved until config reload)
06-Feb 04:00 bareos-dir JobId 9689: Connected Client: client01.renamed at client01.renamed:9102, encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:00 bareos-dir JobId 9689: Handshake: Immediate TLS
06-Feb 04:00 bareos-dir JobId 9689: Encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:00 bareos-dir JobId 9689: Sending Accurate information.
06-Feb 04:05 bareos-dir JobId 9689: Created new Volume "aincr30d-bsd02-3253" in catalog.
06-Feb 04:05 bareos-dir JobId 9689: Using Device "bstore02-vd03" to write.
06-Feb 04:05 client01.renamed JobId 9689: Connected Storage daemon at storageserver.renamed:9103, encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:05 client01.renamed JobId 9689: Encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:05 client01.renamed JobId 9689: Extended attribute support is enabled
06-Feb 04:05 bareos-sd JobId 9689: Labeled new Volume "aincr30d-bsd02-3253" on device "bstore02-vd03" (/bstore02).
06-Feb 04:05 client01.renamed JobId 9689: ACL support is enabled
06-Feb 04:05 bareos-sd JobId 9689: Wrote label to prelabeled Volume "aincr30d-bsd02-3253" on device "bstore02-vd03" (/bstore02)
06-Feb 04:05 bareos-dir JobId 9689: Max Volume jobs=1 exceeded. Marking Volume "aincr30d-bsd02-3253" as Used.
06-Feb 04:05 bareos-sd JobId 9689: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "aincr30d-bsd02-3253" catalog status is Used, but should be Append, Purged or Recycle.
06-Feb 04:30 bareos-sd JobId 9689: Releasing device "bstore02-vd03" (/bstore02).
06-Feb 04:30 bareos-sd JobId 9689: Elapsed time=00:24:26, Transfer rate=39.40 K Bytes/second
06-Feb 04:30 bareos-dir JobId 9689: Insert of attributes batch table with 321 entries start
06-Feb 04:30 bareos-dir JobId 9689: Insert of attributes batch table done
06-Feb 04:30 bareos-dir JobId 9689: Bareos bareos-dir 23.0.2~pre6.5fbc90f69 (23Jan24):
  Build OS: Debian GNU/Linux 11 (bullseye)
  JobId: 9689
  Job: client01.renamed.2024-02-06_04.00.01_09
  Backup Level: Incremental, since=2024-02-05 04:00:18
  Client: "client01.renamed" 23.0.2~pre32.0a0e55739 (31Jan24) Red Hat Enterprise Linux Server release 7.9 (Maipo),redhat
  FileSet: "client01.renamed" 2024-01-20 04:00:00
  Pool: "aincr30d-bsd02" (From Job resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "bsd02" (From Pool resource)
  Scheduled time: 06-Feb-2024 04:00:01
  Start time: 06-Feb-2024 04:00:16
  End time: 06-Feb-2024 04:30:13
  Elapsed time: 29 mins 57 secs
  Priority: 10
  Allow Mixed Priority: no
  FD Files Written: 321
  SD Files Written: 321
  FD Bytes Written: 57,709,242 (57.70 MB)
  SD Bytes Written: 57,764,329 (57.76 MB)
  Quota Used: 720,602,415,037 (720.6 GB)
  Burst Quota: 0 (0 B)
  Soft Quota: 1,073,741,824,000 (1.073 TB)
  Hard Quota: 1,288,490,188,800 (1.288 TB)
  Grace Expiry Date: Soft Quota not exceeded
  Rate: 32.1 KB/s
  Software Compression: 46.4 % (lz4)
  VSS: no
  Encryption: no
  Accurate: yes
  Volume name(s): aincr30d-bsd02-3253
  Volume Session Id: 583
  Volume Session Time: 1704452670
  Last Volume Bytes: 57,784,439 (57.78 MB)
  Non-fatal FD errors: 0
  SD Errors: 1
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
  Job triggered by: Scheduler
  Termination: Backup OK -- with warnings
(0005740)
bruno-at-bareos   
2024-02-06 17:41   
Thanks. Could you also add the output of:

llist volume=aincr180d-bsd02-2334
llist volume=aincr30d-bsd02-3243
llist volume=aincr30d-bsd02-3249
llist volume=incr30d-3250

If you are using any script for a secure erase command, please also give as much information about it as possible.
(0005741)
raschu   
2024-02-07 15:37   
Thanks Bruno. Here is the output:

*llist volume=aincr180d-bsd02-2334
No results to list.

*llist volume=aincr30d-bsd02-3243
          mediaid: 3,243
       volumename: aincr30d-bsd02-3243
             slot: 0
           poolid: 17
             pool: aincr30d-bsd02
        mediatype: File02
     firstwritten: 2024-02-05 22:00:03
      lastwritten: 2024-02-06 08:04:56
        labeldate: 2024-02-05 22:00:03
          voljobs: 1
         volfiles: 0
        volblocks: 1,092
        volmounts: 1
         volbytes: 1,145,006,249
        volerrors: 0
        volwrites: 1,093
 volcapacitybytes: 0
        volstatus: Used
          enabled: 1
          recycle: 1
     volretention: 5,616,000
   voluseduration: 82,800
       maxvoljobs: 1
      maxvolfiles: 0
      maxvolbytes: 5,368,709,120
        inchanger: 0
          endfile: 0
         endblock: 1,145,006,248
        labeltype: 0
        storageid: 3
         deviceid: 0
       locationid: 0
     recyclecount: 0
     initialwrite:
    scratchpoolid: 0
      scratchpool:
    recyclepoolid: 0
      recyclepool:
          comment:
          storage: bsd02

*llist volume=aincr30d-bsd02-3249
          mediaid: 3,249
       volumename: aincr30d-bsd02-3249
             slot: 0
           poolid: 17
             pool: aincr30d-bsd02
        mediatype: File02
     firstwritten: 2024-02-06 04:00:02
      lastwritten: 2024-02-06 04:03:18
        labeldate: 2024-02-06 04:00:02
          voljobs: 1
         volfiles: 1
        volblocks: 5,119
        volmounts: 1
         volbytes: 5,367,660,784
        volerrors: 0
        volwrites: 5,120
 volcapacitybytes: 0
        volstatus: Full
          enabled: 1
          recycle: 1
     volretention: 5,616,000
   voluseduration: 82,800
       maxvoljobs: 1
      maxvolfiles: 0
      maxvolbytes: 5,368,709,120
        inchanger: 0
          endfile: 1
         endblock: 1,072,693,487
        labeltype: 0
        storageid: 3
         deviceid: 0
       locationid: 0
     recyclecount: 0
     initialwrite:
    scratchpoolid: 0
      scratchpool:
    recyclepoolid: 0
      recyclepool:
          comment:
          storage: bsd02

*llist volume=incr30d-3250
          mediaid: 3,250
       volumename: incr30d-3250
             slot: 0
           poolid: 6
             pool: incr30d
        mediatype: File02
     firstwritten: 2024-02-06 04:00:03
      lastwritten: 2024-02-06 04:41:15
        labeldate: 2024-02-06 04:00:03
          voljobs: 1
         volfiles: 0
        volblocks: 447
        volmounts: 1
         volbytes: 468,619,261
        volerrors: 0
        volwrites: 448
 volcapacitybytes: 0
        volstatus: Used
          enabled: 1
          recycle: 1
     volretention: 2,592,000
   voluseduration: 82,800
       maxvoljobs: 1
      maxvolfiles: 0
      maxvolbytes: 5,368,709,120
        inchanger: 0
          endfile: 0
         endblock: 468,619,260
        labeltype: 0
        storageid: 3
         deviceid: 0
       locationid: 0
     recyclecount: 0
     initialwrite:
    scratchpoolid: 0
      scratchpool:
    recyclepoolid: 0
      recyclepool:
          comment:
          storage: bsd02


######

At the moment we use no erase script.
But if we want to free up disk space, we use this SELECT to identify deletion candidates:

psql -d bareos -h localhost -U $dbUser -c "SELECT m.VolumeName FROM Media m where m.VolStatus not in ('Append','Purged') and not exists (select 1 from JobMedia jm where jm.MediaId=m.MediaId);"

In that case we run the command "delete volume=$volName yes" and delete the volume files on the storage.

Bye Ralf
(0005745)
bruno-at-bareos   
2024-02-08 09:19   
When using SQL commands I would really advise using the sqlquery mode of bconsole:

bconsole <<< "sqlquery
SELECT m.VolumeName FROM Media m where m.VolStatus not in ('Append','Purged') and not exists (select 1 from JobMedia jm where jm.MediaId=m.MediaId);
"

So if I understand correctly, you try to "improve" the volume pruning and volume reservation.
What happens if you stop doing this? Is the error still reproducible?
(0005752)
raschu   
2024-02-08 17:19   
Thanks for the answer Bruno.

I haven't used this procedure for 3 weeks, and in my opinion it has no influence on this. Now, with the error, new volumes are created directly and these are then evaluated incorrectly.

I currently have this effect with about 3 backups every night. Other systems run without problems.

Bye Ralf

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1584 [bareos-core] file daemon minor always 2023-12-19 10:44 2024-01-16 15:11
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Incremental PostgreSQL backup will log a warning on the database server
Description: Using the new PostgreSQL plug-in of Bareos 23 logs a warning on the database server when running an incremental backup.
Seen on RHEL 7-9 with PostgreSQL 12 and 14. (Possibly also on others.)

Warning on the database server after the incremental backup:
2023-12-19T10:32:44+0100 <HOSTNAME> postgres[63014]: [8-1] < 2023-12-19 10:32:44.100 CET >WARNING: aborting backup due to backend exiting before pg_stop_backup was called
Tags: plugin, postgresql
Steps To Reproduce: Let a full backup run, followed by an incremental one.
Additional Information:
System Description
Attached Files: debug.log (65,056 bytes) 2023-12-20 08:43
https://bugs.bareos.org/file_download.php?file_id=596&type=bug
Notes
(0005648)
bruno-at-bareos   
2023-12-19 11:12   
The pg8000 connection needs to be stable between the start and the end of the job.

Maybe you have something that kills the connection; none of our tests shows this problem.
You may have to track connections/disconnections in the PostgreSQL logs.
(0005649)
mdc   
2023-12-19 11:26   
On all servers, the file daemon and the database are on the same. And the connection are done by an socket.
Here the log of the data base with enabled connection logging:
2023-12-19T11:24:15+0100 postgres[2245125]: [13-1] < 2023-12-19 11:24:15.967 CET >LOG: connection received: host=[local]
2023-12-19T11:24:15+0100 postgres[2245125]: [14-1] < 2023-12-19 11:24:15.969 CET >LOG: connection authenticated: identity="root" method=peer (/var/lib/pgsql/14/data/pg_hba.conf:83)
2023-12-19T11:24:15+0100 postgres[2245125]: [15-1] < 2023-12-19 11:24:15.969 CET >LOG: connection authorized: user=root database=postgres
2023-12-19T11:24:17+0100 postgres[2245125]: [16-1] < 2023-12-19 11:24:17.309 CET >WARNING: aborting backup due to backend exiting before pg_stop_backup was called
2023-12-19T11:24:17+0100 postgres[2245125]: [17-1] < 2023-12-19 11:24:17.309 CET >LOG: disconnection: session time: 0:00:01.342 user=root database=postgres host=[local]
(0005650)
bruno-at-bareos   
2023-12-19 15:52   
Would you mind helping us by running at least one job with setdebug level=150 on the client?
A description of how the PG cluster is set up and run would also help, so we might have a chance to reproduce this issue.

None of the systemtests in use has actually shown this error, on any platform tested.
(0005651)
mdc   
2023-12-20 08:43   
Yes, of course. Here is the debug output; I hope it will help.
But don't be surprised about the delay of my next answer, because today is my last day in the office until "next year".
(0005652)
mdc   
2023-12-20 08:44   
The client was running via:
bareos-fd -d 150 -f 2>&1 >/tmp/debug.log
for this test.
(0005659)
bruno-at-bareos   
2023-12-21 09:56   
We will need the fileset and the corresponding log of the PG cluster to have a chance to understand what's happening there.
(0005663)
bruno-at-bareos   
2024-01-03 14:45   
OK, found the information in the PostgreSQL database log.
(0005678)
bruno-at-bareos   
2024-01-09 16:13   
PR created https://github.com/bareos/bareos/pull/1655
Packages to test will appear after build & test in https://download.bareos.org/experimental/PR-1655/
(0005680)
mdc   
2024-01-10 07:12   
Hi Bruno,
I have built a new 23 version with this patch and the warnings are now gone.
Thanks
(0005681)
bruno-at-bareos   
2024-01-10 10:44   
Thanks for the report. I'm still investigating some use cases (especially incrementals without changes, which may put the job into the W (warnings) state).
I will fix that a bit later.
(0005686)
bruno-at-bareos   
2024-01-16 15:11   
Fix committed to bareos master branch with changesetid 18537.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1553 [bareos-core] storage daemon major always 2023-09-20 10:18 2023-12-13 15:31
Reporter: robertdb Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: new Product Version: 22.1.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: S3 (droplet) returns an error on Exoscale S3 service
Description: Using [Exoscale Object Storage](https://www.exoscale.com/object-storage/) ("S3 compatible"), I'm getting this error and a failing job:

```text
bareos-storagedaemon JobId 1: Warning: stored/label.cc:358 Open device "EXO_S3_1-00" (Exoscale S3 Storage) Volume "Full-0001" failed: ERR=stored/dev.cc:602 Could not open: Exoscale S3 Storage/Full-0001
```
Tags: "libdroplet", "s3"
Steps To Reproduce: Configure bareos-sd with these files:

/etc/bareos/bareos-sd.d/device/EXO_S3_1-00.conf:

```text
Device {
  Name = "EXO_S3_1-00"
  Description = "ExoScale S3 device."
  Maximum Concurrent Jobs = 1
  Media Type = "S3_Object"
  Archive Device = "Exoscale S3 Storage"
  Device Type = "droplet"
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/exoscale.profile,bucket=bareos-backups,chunksize=100M"
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = No
  AlwaysOpen = No
}
```

/etc/bareos/bareos-sd.d/device/droplet/exoscale.profile:

```text
host = "sos-ch-gva-2.exo.io:443"
use_https = true
access_key = REDACTED
secret_key = REDACTED
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4
aws_region = ch-gva-2
```

Start a backup on this Storage.
Additional Information: The packages are installed from the "subscription repository": (`cat /etc/apt/sources.list.d/bareos.list`)

```text
deb [signed-by=/etc/apt/bareos.gpg] https://download.bareos.com/bareos/release/22/Debian_12 /
```

I'm using these version: (`dpkg -l | grep bareos`)

```text
ii bareos-common 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-dbg 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - debugging symbols
ii bareos-storage 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-storage-droplet 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon droplet backend
ii bareos-storage-tape 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon tape support
ii bareos-tools 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - common tools
```

On this host: (`cat /etc/os-release`)

```text
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1582 [bareos-core] General trivial have not tried 2023-12-13 13:50 2023-12-13 13:50
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 22.1.4
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 22.1.4
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1583 [bareos-core] General trivial have not tried 2023-12-13 13:50 2023-12-13 13:50
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 24.0.0
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 24.0.0
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1565 [bareos-core] file daemon crash always 2023-11-01 00:14 2023-12-12 10:11
Reporter: jamyles Platform: Mac  
Assigned To: joergs OS: MacOS X  
Priority: high OS Version: 10  
Status: resolved Product Version: 22.1.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-fd crash on macOS 14.1 Sonoma
Description: bareos-fd crashes with "Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFString stringByStandardizingPath]: unrecognized selector sent to instance 0x600002118580'." Can reproduce on 22.1.1 subscription release and 23.0.0~pre1137. /var/run/bareos.log attached.
Tags:
Steps To Reproduce: /usr/local/bareos/sbin/bareos-fd --version

or in normal operation.
Additional Information: % sw_vers
ProductName: macOS
ProductVersion: 14.1
BuildVersion: 23B74
% otool -L /usr/local/bareos/sbin/bareos-fd
/usr/local/bareos/sbin/bareos-fd:
    /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1953.255.0)
    @rpath/libbareosfind.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    @rpath/libbareos.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
    @rpath/libbareoslmdb.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    @rpath/libbareosfastlz.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 1300.36.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1319.0.0)
%
System Description
Attached Files: bareos.log (5,283 bytes) 2023-11-01 00:14
https://bugs.bareos.org/file_download.php?file_id=570&type=bug
Notes
(0005509)
joergs   
2023-11-09 15:04   
Thank you for your report. I also read your message on bareos-users.
Unfortunately, I don't have access to a test machine with macOS 14. For building (and testing) we use GitHub Actions. We use macos-12 there. macos-13 is only available as beta, so I'm afraid it will take a long time before macos-14 becomes available.
However, the other problem described on bareos-users (Working Directory: "/usr/local/var/lib/bareos" not found. Cannot continue.) has been addressed by https://github.com/bareos/bareos/pull/1592.
Without adapting it, I wasn't able to start the bareos-fd (without copying the config files around).
Test packages are available at https://download.bareos.org/experimental/PR-1592/MacOS/
Do you mind giving it a try to see if this also influences the problem from this bug report?
I'm also not against applying your LC_MESSAGES=C to the plist file we provide, as the bareos-fd does not use language support at all. Still, finding the root cause would be much better.
Are you aware of where I could get access to a macOS 14 test machine, maybe as a cloud offering?
(0005510)
jamyles   
2023-11-09 16:30   
Thanks for the update. PR-1592 does fix the Working Directory issue in my testing.

I'm working to get access to a macOS 14 system that you can use to test, at least in the short term, and I'll email you directly about that.
(0005566)
joergs   
2023-12-04 17:30   
With the help of jamyles, we've been able to reproduce and solve the problem. https://github.com/bareos/bareos/pull/1592 is now updated accordingly and will hopefully get merged soon.
(0005601)
joergs   
2023-12-12 10:11   
Fix committed to bareos master branch with changesetid 18412.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1571 [bareos-core] General feature always 2023-11-23 12:17 2023-12-07 10:34
Reporter: hostedpower Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: low OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Differential backups for postgresql new plugin
Description: Hi,


We used the former postgresql plugin from Bareos, and never had this message during backup:

Differential backups are not supported! Only Full and Incremental


This happens during a Differential backup. We never had such messages with the old plugin, so we're wondering why we get them now with the Bareos version 23 plugin.

We also wonder: is this hard to support? Isn't a differential simply all incrementals combined into one? We checked old postgresql plugin backups and we have the feeling this actually just worked correctly.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: image.png (99,946 bytes) 2023-11-23 16:28
https://bugs.bareos.org/file_download.php?file_id=574&type=bug
png
Notes
(0005525)
bruno-at-bareos   
2023-11-23 13:35   
After checking how the old plugin behaves: to be able to do a differential, you would need to have all WAL archives present in the right order since the last full, and we can't guarantee that, especially as the plugin is not in charge of managing the archive repository.

There is no built-in differential backup in PostgreSQL. Since the plugin is based on the PITR WAL-archiving methodology, there is no real need for a differential level. A full plus n incrementals allows restoration to the desired point in time.

As it is specifically tied to the cluster, it also forbids the use of features like VirtualFull.
(0005526)
hostedpower   
2023-11-23 13:47   
So you're not simply adding incremental backups together? Because that should work up to the last full otherwise? Or am I overlooking something here? :)
(0005527)
bruno-at-bareos   
2023-11-23 14:22   
Well, I think we made enough effort to explain what's going on with the plugin in the documentation:
https://docs.bareos.org/master/TasksAndConcepts/Plugins.html#postgresql-plugin

And plugin backups are different from traditional file backups.
(0005528)
hostedpower   
2023-11-23 16:28   
Well, if it doesn't work, I have to believe you. I was just under the impression that the incremental backups contain only WAL data. I checked the old plugin, and there the differential backups run fine. If I look into those backups, the differential backups simply contain all the data from the incremental ones. It's very simple: all files seem to be present.

So I must be overlooking something, because apparently having the exact same WAL files as in the incremental backups wouldn't be sufficient? :)
(0005532)
bruno-at-bareos   
2023-11-27 10:24   
I can't state anything from the screenshot, as the content greatly depends on how the checkbox options are checked.
Maybe here I just see the result of full + incremental jobs merged :-)
(0005534)
hostedpower   
2023-11-27 13:11   
No, this was a single job (I unchecked the boxes to merge).

In any case, since it is just WAL files that are kept with an incremental, wouldn't a differential just work? Or am I missing something in the puzzle?

The old jobs (previous plugin, with differentials as well) seem to work when we test, but maybe we didn't test the right case or something.
(0005570)
bruno-at-bareos   
2023-12-05 16:10   
Just a word about this case. The old plugin blindly believed that all archived WALs since the full were present and backed those up. If one is missing, the restore will not be possible, while the backup would have status "OK".

For the new one, we decided not to let the user believe the backup was OK when the chance that it is not grows as time passes; that's why we disabled the Differential mode.
(0005571)
hostedpower   
2023-12-05 16:28   
Also for the incrementals we need to make sure we keep enough WAL files, or otherwise we would have problems? I don't see the big difference between differential and incremental :)
(0005572)
bruno-at-bareos   
2023-12-05 16:50   
Yes, this is not a completely wrong assumption ;-)
Discovered afterwards.
(0005573)
hostedpower   
2023-12-05 17:32   
Maybe it would be a good idea to allow differentials again, warn in the notes that WAL files need to be kept long enough, and in the long term some safety mechanism would be even better hahaha :)

We do a differential each day, so for us it's really nice to keep the differentials working. It's a great advantage, since we take 24 incrementals per day and it's crazy to keep all of these for a whole week.
(0005576)
bruno-at-bareos   
2023-12-07 10:33   
So here is the final statement on our side (for the moment, of course; with money or a PR everything can change :-)):
We will not publish the plugin with level D allowed.

But as you know it is quite simple to hack it. So for your own usage you may want to change the M_FATAL to something else at line 217:
https://github.com/bareos/bareos/blob/1e7d73e668609f39a7caf4422710dfb58f1c0cd1/core/src/plugins/filed/python/postgresql/bareos-fd-postgresql.py#L217

At your own risk.
(0005577)
bruno-at-bareos   
2023-12-07 10:34   
Differential support will not be enabled for this plugin.
A workaround for people who know the risk they are taking can be proposed in a PR and merged afterwards if acceptable.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1422 [bareos-core] General major always 2022-01-20 11:58 2023-11-20 11:16
Reporter: niklas.skog Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 11  
Status: confirmed Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Bareos Libcloud Plugin incompatible
Description: The goal is to back up S3 buckets using Bareos.

Situation:

Installed the Bareos 21.0.0-4 and "bareos-filedaemon-libcloud-python-plugin" on Debian 11 from "https://download.bareos.org/bareos/release/21/Debian_11"

Installed the "python3-libcloud" package on which the Plugin "bareos-filedaemon-libcloud-python-plugin" depends.

Configured the plugin according https://docs.bareos.org/TasksAndConcepts/Plugins.html

Trying to start a job that should back up the data from S3, I get the following error in the bconsole output:
---
20-Jan 08:27 bareos-dir JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Using Device "FileStorage" to write.
20-Jan 08:27 bareos-dir JobId 13: Connected Client: backup-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Handshake: Immediate TLS
20-Jan 08:27 backup-fd JobId 13: Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)
20-Jan 08:27 backup-fd JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 backup-fd JobId 13: Fatal error: TwoWayAuthenticate failed, because job was canceled.
20-Jan 08:27 backup-fd JobId 13: Fatal error: Failed to authenticate Storage daemon.
20-Jan 08:27 bareos-dir JobId 13: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage
[...]
---

and the job fails.

Thus, the main message is:

"Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)"

which is understandable, because Debian 11 brings python3.9.*:

---
root@backup:/etc/bareos/bareos-dir.d/fileset# apt-cache policy python3
python3:
  Installed: 3.9.2-3
  Candidate: 3.9.2-3
  Version table:
 *** 3.9.2-3 500
        500 http://cdn-aws.deb.debian.org/debian bullseye/main amd64 Packages
        100 /var/lib/dpkg/status
root@backup:/etc/bareos/bareos-dir.d/fileset#
---


Accordingly, the plugin is incompatible with the current Debian version.
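For context, the error itself comes from a deliberate guard in the plugin rather than an incidental crash. Roughly the shape of such a check (a sketch only, not the plugin's actual code):

```
import sys

# Sketch of a version guard of the kind that produces the reported message;
# the real check lives in BareosFdPluginLibcloud.
if sys.version_info >= (3, 8):
    raise SystemExit(
        "Need Python version < 3.8 for Bareos Libcloud (current version: %d.%d.%d)"
        % sys.version_info[:3]
    )
```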
Tags: libcloud, plugin, s3
Steps To Reproduce: * install stock debian 11
* install & configure bareos 21, "python3-libcloud" and "bareos-filedaemon-libcloud-python-plugin"
* configure the plugin according https://docs.bareos.org/TasksAndConcepts/Plugins.html
* try to run a job that is backing up an S3-bucket
* this will fail
Additional Information:
Attached Files:
Notes
(0004487)
arogge   
2022-01-27 11:49   
You cannot use Python 3.9 or newer with the Bareos libcloud plugin due to a limitation in Python 3.9.
We're looking into this, but it isn't that easy to work around that limitation.
(0005513)
troloff   
2023-11-14 15:42   
Is there any progress in this issue?

As far as I can see, the last Debian version that ships a suitable Python is now oldoldstable, with Debian 12 having been released in mid-2023. It would be nice to have a Ceph backup again, or a workaround.
(0005517)
arogge   
2023-11-16 09:47   
I understand that this is frustrating, but we would have to rewrite most of the plugin because Python decided that we can no longer use multiprocessing in a subinterpreter.
The workaround right now is to use a Bareos FD on RHEL 8, one of its clones or CentOS Stream 8. These will stay supported until 2029.
(0005518)
troloff   
2023-11-16 10:59   
Would it be possible to fund the redesign? If so, maybe you could contact me to discuss the details.
(0005519)
arogge   
2023-11-20 11:16   
We're going to do an estimate and then look for co-funding. It will probably take a few days, but sales will be happy to get in touch with you.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1309 [bareos-core] General feature always 2021-01-19 02:27 2023-10-24 09:37
Reporter: Ruth Ivimey-Cook Platform: amd64  
Assigned To: OS: Linux  
Priority: normal OS Version: Ubuntu 20.04  
Status: new Product Version: 19.2.9  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Update tape selection so the drive is not fixed
Description: Currently, the tape drive used for writing a job is determined once (fixed) and all tapes must use that drive. At least, that is my understanding.

This means a system with two of the same sort of drives cannot 'double buffer' them, so that while one is being written the other is waiting to be changed. It also means a long-running job can hog a drive even though a tape that would satisfy subsequent jobs is loaded in the other drive.

I delved into the code a year or so ago and found the drive selection is quite embedded and non-trivial to change. However, changing the way Bareos selects drives and storage daemons so that it only picks the drive once it needs a new volume would make a great deal of sense to me, and would open up new ways of deploying Bareos. I did try having two storage daemons each controlling one drive (rather than one daemon for both), but it makes no difference.

It would be even more fun if the director could expose a type of plugin that could be involved in the selection process: essentially a call from the director to the plugin saying "here's my state, what now", that returned an instruction like "use storage=X volume=Y". It would be within this plugin that user interaction happens (e.g. the current prompt "Please mount append Volume "ZZZ" or label a new one for:"), so plugins could potentially present an X11 prompt, not just a console one. A sketch of the idea follows below.
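To make the proposal concrete, a purely hypothetical sketch of such a callback (no such director plugin API exists today; all names are made up for illustration):

```
# Hypothetical selection callback: the director would describe its state and
# the plugin would answer with an instruction such as "use storage=X volume=Y".
def select_storage(state):
    # state: dict with the drives, their mounted volumes, and the volumes
    # the director considers appendable for the waiting job
    for drive in state["drives"]:
        if drive["mounted_volume"] in state["appendable_volumes"]:
            return {"use": {"storage": drive["name"], "volume": drive["mounted_volume"]}}
    # nothing usable is mounted: hand control back for operator interaction
    # (console prompt, X11 dialog, whatever the plugin implements)
    return {"prompt": "Please mount an appendable volume"}

example_state = {
    "drives": [
        {"name": "Drive-1", "mounted_volume": "FULL-0007"},
        {"name": "Drive-2", "mounted_volume": "FULL-0008"},
    ],
    "appendable_volumes": ["FULL-0008"],
}
print(select_storage(example_state))  # -> use Drive-2 / FULL-0008
```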
Tags: drive, reservation, sd
Steps To Reproduce: On a system with two tape drives (or equivalent) directly connected:

1. Start a job on one drive that exceeds the remaining length of that tape (tape 1);
2. Insert in the other drive a tape (tape 2) that is suitable for continuing the job once the first tape is full, and mount it;
3. Note that when tape 1 is full, bareos asks for a new tape (tape 2) to be inserted in drive 1, even though the tape it's asking for is already mounted in drive 2.

4. Unmount tape 2, eject both tapes, and put the new tape in drive 1.
5. Note that bareos resumes writing the job as expected.

I presume this would also be true when two tape changers were used (i.e. a tape in changer 1 fills, there is a tape in changer 2 that satisfies the need, but bareos won't use it).
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1285 [bareos-core] installer / packages major always 2020-12-10 13:00 2023-10-24 09:19
Reporter: dupondje Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: high OS Version: 8  
Status: resolved Product Version: 19.2.8  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to install oVirt Plugin
Description:   - nothing provides python-ovirt-engine-sdk4 needed by bareos-filedaemon-ovirt-python-plugin-19.2.7-2.el8.x86_64
  - nothing provides python-pycurl needed by bareos-filedaemon-ovirt-python-plugin-19.2.7-2.el8.x86_64
  - nothing provides python-lxml needed by bareos-filedaemon-ovirt-python-plugin-19.2.7-2.el8.x86_64

But those packages do not exist on CentOS 8; there it's python3- ...
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004066)
arogge_adm   
2020-12-16 09:00   
That is a known issue, sorry for that.
Could you try with Bareos 20.0.0? It should hit the download mirror later today.

In that new version we provide plugins for python2 and python3 and have removed the RPM dependencies you're mentioning.

You will now have to install the oVirt SDK manually, but you can also use pip to do it now.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1559 [bareos-core] General trivial have not tried 2023-10-23 16:24 2023-10-23 16:24
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 21.1.9
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 21.1.9
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1555 [bareos-core] webui minor always 2023-09-25 20:00 2023-10-12 16:11
Reporter: Animux Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 22.1.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-webui depends needlessly on libapache2-mod-fcgid
Description: The official debian package from the community repository has the following depends:

> Depends: apache2 | httpd, libapache2-mod-fcgid, php-fpm (>= 7.0), php-date, php-intl, php-json, php-curl

When using any HTTP server other than apache2, the dependency on libapache2-mod-fcgid is wrong and pulls in unnecessary additional dependencies (something like apache2-bin). Other official Debian packages use "Recommends" (e.g. sympa) or "Suggests" (e.g. munin or oar-restful-api) for such dependencies.

Can you downgrade the dependency on libapache2-mod-fcgid to at least recommends? "Recommends" would still install libapache2-mod-fcgid in default setups, but would allow the server administrator to skip the installation or to remove the package afterwards.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005448)
bruno-at-bareos   
2023-09-26 11:27   
Are you willing to propose a PR to fix this ?
(0005456)
bruno-at-bareos   
2023-10-11 16:13   
Will be addressed in https://github.com/bareos/bareos/pull/1573 Maybe backported
(0005472)
bruno-at-bareos   
2023-10-12 16:11   
Fix committed to bareos master branch with changesetid 18115.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1550 [bareos-core] installer / packages minor always 2023-08-31 12:06 2023-10-12 16:11
Reporter: roland Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 11  
Status: resolved Product Version: 22.1.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Debian: Build-Depends: openssl missing
Description: Building Bareos 21.x and 22.x (including today's Git HEAD) fails in a clean Debian 11 (bullseye) build environment (using cowbuilder/pbuilder) during the dh_auto_configure run with the following message:

CMake Error at systemtests/CMakeLists.txt:196 (message):
  Creation of certificates failed: 127 /build/bareos-22.1.0/cmake-build

That's because the openssl package is not installed in a clean environment and it is missing from the Build-Depends in debian/control.
Tags: build, debian 11
Steps To Reproduce: # First get the official source package and rename it:
wget https://github.com/bareos/bareos/archive/refs/tags/Release/22.1.0.tar.gz
mv 22.1.0.tar.gz bareos_22.1.0.orig.tar.gz

# Now get the same via GIT (I could also unpack the above package):
git clone https://github.com/bareos/bareos.git
cd bareos
git checkout Release/22.1.0

# Create the missing debian/changelog file:
dch --create --empty --package bareos -v 22.1.0-rr1+deb11u1
dch -a 'RoRo Build bullseye'

# Create a Debian source package (.dsc, .debian.tar.xz):
env DEB_BUILD_PROFILES="debian bullseye" debuild -us -uc -S -d

# And now finally build the Debian source package in a clean bullseye chroot:
sudo env DEB_BUILD_PROFILES="debian bullseye" cowbuilder --build --basepath /var/cache/pbuilder/base-bullseye.cow ../bareos_22.1.0-rr1+deb11u1.dsc
Additional Information: With Bareos 20.x I did not run into this issue.

Adding "openssl" to the Build-Depends in debian/control debian/control.src avoids running into the above build failure.

I'm not sure whether there are other missing build dependencies; at least the build is complaining about some PAM stuff missing, but that doesn't stop the build.

I still see several automated tests failing, but have to dig deeper there.
System Description
Attached Files:
Notes
(0005403)
bruno-at-bareos   
2023-09-11 17:07   
Would you mind to open a PR to fix the issue on github?
(0005457)
bruno-at-bareos   
2023-10-11 16:13   
Will be addressed in https://github.com/bareos/bareos/pull/1573
(0005471)
bruno-at-bareos   
2023-10-12 16:11   
Fix committed to bareos master branch with changesetid 18114.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1283 [bareos-core] General feature always 2020-12-05 13:03 2023-09-11 17:35
Reporter: rugk Platform:  
Assigned To: arogge OS:  
Priority: high OS Version:  
Status: acknowledged Product Version: 19.2.8  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Remove insecure MD5 hashing
Description: You use/provide CRAM-MD5 hashing:
https://github.com/bareos/bareos/blob/819bba62ebdadd2ac0bd773ac8d26f4f60f5d39e/python-bareos/bareos/util/password.py#L51

However, MD5 is easily brute-forceable nowadays, vulnerable to an (active) MITM attack, and has many more weaknesses:
https://en.wikipedia.org/wiki/CRAM-MD5#Weaknesses

And it has been deprecated since 2008.
https://tools.ietf.org/html/draft-ietf-sasl-crammd5-to-historic-00
Tags:
Steps To Reproduce:
Additional Information: The linked RFC recommends e.g. SCRAM as an alternative.

AFAIK you use TLS, which should mitigate this problem, but then such an additional authentication is also quite useless here.
You may consider, if appropriate for your use case and not already done, using password-stretching hashes (PBKDF2, Argon2, etc.) on the server for secure storage, or possibly some kind of public/private-key authentication scheme.
These are only ideas for the future though. For now, just remove legacy and insecure algorithms, or – at least – mark them as deprecated as you should have done in 2008! At most, they can give a false sense of security.
Attached Files:
Notes
(0005409)
arogge   
2023-09-11 17:35   
Right now Bareos has two protocol modes to operate in.
The legacy one is what we inherited from the predecessor project. It uses CRAM-MD5 on plaintext connections (even if you have TLS enabled).
The modernized protocol does immediate TLS and then authenticates using CRAM-MD5 inside that TLS-session.
While this is still obviously legacy, we chose to keep it for a few reasons:
- the legacy clients require that type of authentication
- in Bareos context it isn't worse than sending a plain password
- it is still considered safe when used via a TLS connection (which is the default nowadays)

Having said that, the document from 2008 that you're referencing is a draft and was never made a standard.

If we decide to implement another incompatible protocol change, we will definitely get rid of CRAM. We will probably not get rid of the shared secrets, so password stretching won't work.
Concerning PKI we decided against it, as PSK seems to be sufficient for our use-case.
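For reference, a minimal illustration of the CRAM-MD5 exchange being discussed (not Bareos' actual implementation): the server sends a challenge, the client answers with an HMAC-MD5 of it keyed with the shared password, so the password never crosses the wire in clear text, but anyone who records challenge and digest can brute-force the password offline.

```
import hashlib
import hmac

def cram_md5_digest(password: bytes, challenge: bytes) -> str:
    # HMAC-MD5(password, challenge) is what the client sends back
    return hmac.new(password, challenge, hashlib.md5).hexdigest()

challenge = b"<1896.697170952@backup.example.org>"
print(cram_md5_digest(b"shared-secret", challenge))
```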

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1122 [bareos-core] General major always 2019-10-18 19:32 2023-09-04 16:40
Reporter: xyros Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Consolidate queues and indefinitely orphans jobs but falsely reports status as "Consolidate OK" for last queued
Description: My Consolidate job never succeeds -- quickly terminating with "Consolidate OK" while leaving all the VirtualFull jobs it started queued and orphaned.

In the WebUI listing for the allegedly successful Consolidate run, it always lists the sequentially last (by job ID) client it queued as being the successful run; however, the level is "Incremental", nothing is actually done, and the client's VirtualFull job is actually still queued up with all the other clients.

In bconsole the status is similar to this:

Running Jobs:
Console connected at 15-Oct-19 15:06
 JobId Level Name Status
======================================================================
   636 Virtual PandoraFMS.2019-10-15_14.33.02_06 is waiting on max Storage jobs
   637 Virtual MongoDB.2019-10-15_14.33.03_09 is waiting on max Storage jobs
   638 Virtual DNS-DHCP.2019-10-15_14.33.04_11 is waiting on max Storage jobs
   639 Virtual Desktop_1.2019-10-15_14.33.05_19 is waiting on max Storage jobs
   640 Virtual Desktop_2.2019-10-15_14.33.05_20 is waiting on max Storage jobs
   641 Virtual Desktop_3.2019-10-15_14.33.06_21 is waiting on max Storage jobs
====


Given the above output, for example, the WebUI would show the following:

    642 Consolidate desktop3-fd.hq Consolidate Incremental 0 0.00 B 0 Success
    641 Desktop_3 desktop3-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    640 Desktop_2 desktop2-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    639 Desktop_1 desktop1-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    638 DNS-DHCP dns-dhcp-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    637 MongoDB mongodb-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    636 PandoraFMS pandorafms-fd.hq Backup VirtualFull 0 0.00 B 0 Queued


I don't know if this has anything to do with the fact that I have multiple storage definitions, one for each VLAN the server is on, and an additional one dedicated to the storage addressable on the default IP (see bareos-dir/storage/File.conf in the attached bareos.zip file). Technically this should not matter, but I get the impression Bareos has not been designed/tested to work elegantly in an environment where the server participates in VLANs.

The reason I'm using VLANs is so that connections do not have to go through a router to reach the clients. Therefore, the full network bandwidth of each LAN segment is available to the Bareos client/server data transfer.

I've tried debugging the Consolidate backup process using "bareos-dir -d 400 >> /var/log/bareos-dir.log"; however, I get nothing that particularly identifies the issue. I have attached a truncated log file that contains activity starting with the queuing of the second-to-last job. I've cut off the log at the point where it is stuck endlessly cycling with output of:

bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN107 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
etc...

For convenience, I have attached all the most relevant excerpts of my configuration files (sanitized for privacy/security reasons).

I suspect there's a bug that is responsible for this; however, I'm unable to make heads or tails of what's going on.

Could someone please take a look?

Thanks
Tags: always incremental, consolidate
Steps To Reproduce: 1. Place Bareos on a network switch (virtual or actual) with tagged VLANS
2. Configure Bareos host to have connectivity on three or more VLANs
3. Make sure you have clients you can backup, on each of the VLANs
4. Use the attached config files as reference for setting up storages and jobs for testing.
Additional Information:
System Description
Attached Files: bareos.zip (9,113 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=391&type=bug
bareos-dir.log (41,361 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=392&type=bug
Notes
(0004008)
xyros   
2020-06-11 10:11   
Figured it out myself. The official documentation needs a full working example, as the always incremental backup configuration is very finicky and the error messages provide insufficient guidance for resolution.
(0005367)
bruno-at-bareos   
2023-09-04 16:40   
Fixed by user adapted configuration

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1333 [bareos-core] storage daemon block have not tried 2021-03-27 16:33 2023-07-31 14:10
Reporter: noone Platform: x86_64  
Assigned To: bruno-at-bareos OS: SLES  
Priority: normal OS Version: 15.1  
Status: resolved Product Version: 19.2.9  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: mtx-changer stopped working
Description: mtx-changer stopped working after an update.
Each time a new tape was loaded by bareos I got an error message like:
"""
Connecting to Storage daemon SL3-LTO5-00 at uranus.mcservice.eu:9103 ...
3304 Issuing autochanger "load slot 6, drive 0" command.
3992 Bad autochanger "load slot 6, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

3001 Mounted Volume: ALN043L5
3001 Device "tapedrive-lto-5" (/dev/tape/by-id/scsi-300e09e60001ce29a-nst) is already mounted with Volume "ALN043L5"
"""
In reality the tape was loaded and after 5 minutes the command was killed by timeout. The tape is loaded after roughly 120 seconds and is readable by that time using other applications (like dd or tapeinfo).

I tracked it down to the `wait_for_drive()` function in the script. I modified the function to look like
```
wait_for_drive() {
  i=0
  while [ $i -le 300 ]; do # Wait max 300 seconds
    debug "tapeinfo -f $1 2>&1"
    debug `tapeinfo -f $1 2>&1`
    debug "mt -f $1 status 2>&1"
    debug `mt -f $1 status 2>&1`
    if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
      break
    fi
    debug "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}
```
An example log output is (shortened):
"""
20210327-16:20:35 tapeinfo -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst 2>&1
20210327-16:20:35 mtx: Request Sense: Long Report=yes mtx: Request Sense: Valid Residual=no mtx: Request Sense: Error Code=0 (Unknown?!) mtx: Request Sense: Sense Key=No Sense mtx: Request Sense: FileMark=no mtx: Request Sense: EOM=no mtx: Request Sense: ILI=no mtx: Request Sense: Additional Sense Code = 00 mtx: Request Sense: Additional Sense Qualifier = 00 mtx: Request Sense: BPV=no mtx: Request Sense: Error in CDB=no mtx: Request Sense: SKSV=no INQUIRY Command Failed
20210327-16:20:35 mt -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst status 2>&1
20210327-16:20:35 Unknown tape drive: file no= -1 block no= -1
20210327-16:20:35 Device /dev/tape/by-id/scsi-300e09e60001ce29a-nst - not ready, retrying...
"""
I verified via bash, using tapeinfo, that the tape drive was ready at this time.



Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004105)
noone   
2021-03-27 16:39   
For anyone facing this problem:

I found a workaround. The mt command's return value is the problem, so I am now using tapeinfo as a replacement.
At your own risk, you could try to replace the `wait_for_drive` function with:
```
wait_for_drive() {
  i=0
  while [ $i -le 300 ]; do # Wait max 300 seconds
    # Code Changed because mt has stopped working in December 2020. This is a provisional fix...
    #if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
    if tapeinfo -f $1 2>&1 | grep "Ready: yes" >/dev/null 2>&1; then
      break
    fi
    debug "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}
```

This might or might not work on different systems. For it to work, bareos-sd has to run as root on my machine.

PS:
I found out that the mt command returns
"""
uranus:~ # mt -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst status
Unknown tape drive:

   file no= -1 block no= -1
"""
so this might be the reason why it stopped working. But I am unable to find out why the output of mt has changed.
(0005279)
bruno-at-bareos   
2023-07-31 14:10   
Thanks for your tips. As it is something we can fix, we mark it as resolved.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
978 [bareos-core] director feature N/A 2018-07-04 22:59 2023-07-04 15:26
Reporter: stevec Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 16.2.7  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Define encryption to pool ; auto encrypt tapes when purged
Description:
Ability to migrate to/from encrypted volumes in a pool over time when volumes are purged/overwritten with new data automatically.

The current method of flagging a tape for encryption at label time has issues when you want to migrate pools to/from an encrypted estate over time. Also, the current methods for SCSI encryption are very 'patchwork'; it would be better to have all settings directly in the config files.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005131)
bruno-at-bareos   
2023-07-04 15:26   
Sorry to close, but this feature will not happen without community work.

As a workaround, a scratch pool dedicated to encrypted tapes can be added and configured for all pools using such media.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1533 [bareos-core] vmware plugin major always 2023-04-29 13:41 2023-06-26 15:11
Reporter: CirocN Platform: Linux  
Assigned To: stephand OS: RHEL (and clones)  
Priority: urgent OS Version: 8  
Status: resolved Product Version: 22.0.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restoring vmware vms keeps failing, can't restore the data.
Description: First I need to mention I am new to Bareos and I have been working with it for the past couple of weeks to replace our old backup solution.
I am trying to use the VMware plugin to get backups of our vmdks for disaster recovery or if we need to extract a specific file using guestfish.
I have followed the official documents regarding setting up the VMware plugin at https://docs.bareos.org/TasksAndConcepts/Plugins.html#general
The backups of the vm in our vSphere server is successful.
But when I try to restore the backups it keeps failing with the following information:

bareos87.simlab.xyz-fd JobId 3: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk


I have tried the same steps on Bareos 21 and 20, and also tried it on Redhat 9.1 and I kept getting same exact error.

Tags:
Steps To Reproduce: After setting up the vmware plugin accoding to official documents, I have ran the backups using:
run job=vm-websrv1 level=Full
Web-GUI shows the job instantly and after about 10 minutes the job's status shows success.
Right after the backup is done, when I try to restore it using the Web-GUI or console, I keep getting the same error:

19 2023-04-29 07:34:05 bareos-dir JobId 4: Error: Bareos bareos-dir 22.0.4~pre63.807bc5689 (17Apr23):
Build OS: Red Hat Enterprise Linux release 8.7 (Ootpa)
JobId: 4
Job: RestoreFiles.2023-04-29_07.33.59_43
Restore Client: bareos-fd
Start time: 29-Apr-2023 07:34:01
End time: 29-Apr-2023 07:34:05
Elapsed time: 4 secs
Files Expected: 1
Files Restored: 0
Bytes Restored: 0
Rate: 0.0 KB/s
FD Errors: 1
FD termination status: Fatal Error
SD termination status: Fatal Error
Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
Job triggered by: User
Termination: *** Restore Error ***

18 2023-04-29 07:34:05 bareos-dir JobId 4: Warning: File count mismatch: expected=1 , restored=0
17 2023-04-29 07:34:05 bareos-sd JobId 4: Releasing device "FileStorage" (/var/lib/bareos/storage).
16 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:454 Socket has errors=1 on call to client:192.168.111.136:9103
15 2023-04-29 07:34:05 bareos-sd JobId 4: Fatal error: stored/read.cc:146 Error sending to File daemon. ERR=Connection reset by peer
14 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:414 Wrote 65536 bytes to client:192.168.111.136:9103, but only 16384 accepted.
13 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'

12 2023-04-29 07:34:04 bareos-sd JobId 4: Forward spacing Volume "Full-0001" to file:block 0:627.
11 2023-04-29 07:34:04 bareos-sd JobId 4: Ready to read from volume "Full-0001" on device "FileStorage" (/var/lib/bareos/storage).
10 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
9 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
8 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
7 2023-04-29 07:34:02 bareos-dir JobId 4: Handshake: Immediate TLS
6 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Client: bareos-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
5 2023-04-29 07:34:02 bareos-dir JobId 4: Probing client protocol... (result will be saved until config reload)
4 2023-04-29 07:34:02 bareos-dir JobId 4: Using Device "FileStorage" to read.
3 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
2 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
1 2023-04-29 07:34:01 bareos-dir JobId 4: Start Restore Job RestoreFiles.2023-04-29_07.33.59_43
Additional Information:
System Description
Attached Files: Restore_Failure.png (84,322 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=554&type=bug
png

Backup_Success.png (78,727 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=555&type=bug
png

BareosFdPluginVMware.png (30,436 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=556&type=bug
png

docs.bareos.org-instruction.png (36,604 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=557&type=bug
png

BareosFdPluginVMware.patch (612 bytes) 2023-06-15 16:21
https://bugs.bareos.org/file_download.php?file_id=562&type=bug
Notes
(0004995)
bruno-at-bareos   
2023-05-03 15:37   
As all continuous tests are working perfectly, you have certainly missed something in your configuration; without the configuration nobody will be able to find the problem.

Also, if you want to show a job result, screenshots are certainly the worst way.
Please attach the text log result of bconsole <<< "list joblog jobid=2" here.
(0005003)
CirocN   
2023-05-03 23:32   
I have found out that this is the result of a mismatch between the backup path and the restore path.
The backup is created with /VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk while the restore expects /VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk.
I have noticed you are trying to strip it off in BareosFdPluginVMware.py at line 366, but it is not working.

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s/%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )

My configuration for the folder is precisely what is suggested in the official document: folder=/


The workaround I have found for now is to edit the BareosFdPluginVMware.py Python script to the following code, but it needs to be properly fixed:

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )
(0005065)
stephand   
2023-06-06 17:58   
Thanks for reporting this, confirming that it doesn't work properly when using folder=/ for backups.
The root cause is the double slash in the backup path, like in your example
/VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk

I will provide a proper fix.
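Not the actual fix from PR 1484, just a sketch of the idea: building the path from non-empty components removes the double slash that folder=/ produces, while leaving real sub-folders intact.

```
def build_backup_path(dc, folder, vmname):
    # drop empty path components so folder="/" does not yield "//"
    parts = [dc] + [p for p in folder.split("/") if p] + [vmname]
    return "/VMS/" + "/".join(parts)

print(build_backup_path("Datacenter", "/", "backup_client"))
# /VMS/Datacenter/backup_client
print(build_backup_path("Datacenter", "/lab/edge", "backup_client"))
# /VMS/Datacenter/lab/edge/backup_client
```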
(0005084)
awillem   
2023-06-15 16:21   
Hi,

here's a more elegant solution, "non-breaking" the current design in case you're using a mixed with- and without-folder VM hierarchy.

Hope this helps others.

BR
Arnaud
(0005085)
stephand   
2023-06-16 11:20   
Thanks for your proposed solution.

PR 1484 already contains a fix for this issue, which will work when using a mixed with- and without-folder VM hierarchy.
See https://github.com/bareos/bareos/pull/1484

Regards,
Stephan
(0005100)
stephand   
2023-06-26 15:11   
Fix committed to bareos master branch with changesetid 17739.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1523 [bareos-core] file daemon crash always 2023-03-13 13:09 2023-06-26 13:52
Reporter: mp Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: urgent OS Version:  
Status: resolved Product Version: 22.0.2  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: fatal error bareos-fd for full backup postgresql 11
Description: Full backup of PostgreSQL 11 crashes with the error:
Error: python3-fd-mod: Could not get stat-info for file /var/lib/postgresql/11/main/base/964374/t4_384322129: "[Errno 2] No such file or directory: '/var/lib/postgresql/11/main/base/964374/t4_384322129'"

1c-pg11-fd JobId 238: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 61, in start_backup_file
return bareos_fd_plugin_object.start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 396, in start_backup_file
return super(BareosFdPluginPostgres, self).start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", line 118, in start_backup_file
mystatp.st_mode = statp.st_mode
UnboundLocalError: local variable 'statp' referenced before assignment
Tags:
Steps To Reproduce:
Additional Information: Backup fails every time when trying to back up PostgreSQL 11. At the same time, a backup of PG 14 finishes without any problem.
Attached Files:
Notes
(0005093)
bruno-at-bareos   
2023-06-26 10:10   
(Last edited: 2023-06-26 10:11)
Is this reproducible one way or another? It looks like your table (or part of it) was dropped during the backup.

Does a VACUUM FULL work on this table? And does the file still exist?
(0005099)
bruno-at-bareos   
2023-06-26 13:52   
In issue 1520 we acknowledged that having a warning instead of an error should be the right behavior.
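The traceback above also shows the classic Python pattern behind the UnboundLocalError: statp is only assigned when os.stat() succeeds, so when the file vanishes between listing and backup the later attribute access blows up. A minimal sketch of a defensive variant (illustration only, not the plugin's code):

```
import os

def stat_or_none(fname):
    # return the stat result, or None when the file disappeared meanwhile,
    # so the caller can emit a warning instead of aborting the job
    try:
        return os.stat(fname)
    except OSError as exc:
        print("warning: could not stat %s: %s" % (fname, exc))
        return None

statp = stat_or_none("/var/lib/postgresql/11/main/base/964374/t4_384322129")
if statp is not None:
    print(statp.st_mode)
```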

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
690 [bareos-core] General crash always 2016-08-23 18:19 2023-05-09 16:58
Reporter: tigerfoot Platform: x86_64  
Assigned To: bruno-at-bareos OS: openSUSE  
Priority: low OS Version: Leap 42.1  
Status: resolved Product Version: 15.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bcopy segfault
Description: With a small volume Full001 I'm using a bsr (Catalog job) to try to copy this job to another medium (and device).

bcopy starts to work but segfaults at the end.
Tags:
Steps To Reproduce: Pick a bsr of a job in a multi-job volume, and run bcopy with a new volume on a different destination (different media).
Additional Information: gdb /usr/sbin/bcopy
GNU gdb (GDB; openSUSE Leap 42.1) 7.9.1
Copyright (C) 2015 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://bugs.opensuse.org/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/bcopy...Reading symbols from /usr/lib/debug/usr/sbin/bcopy.debug...done.
done.
(gdb) run -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Starting program: /usr/sbin/bcopy -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Missing separate debuginfos, use: zypper install glibc-debuginfo-2.19-22.1.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Detaching after fork from child process 3016.
bcopy (90): stored_conf.c:837-0 Inserting Device res: Default
bcopy (90): stored_conf.c:837-0 Inserting Director res: earth-dir
Detaching after fork from child process 3019.
bcopy (50): plugins.c:222-0 load_plugins
bcopy (50): plugins.c:302-0 Found plugin: name=autoxflate-sd.so len=16
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:299-0 Rejected plugin: want=-sd.so name=bpipe-fd.so len=11
bcopy (50): plugins.c:302-0 Found plugin: name=scsicrypto-sd.so len=16
bcopy (200): scsicrypto-sd.c:137-0 scsicrypto-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:302-0 Found plugin: name=scsitapealert-sd.so len=19
bcopy (200): scsitapealert-sd.c:99-0 scsitapealert-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (8): crypto_cache.c:55-0 Could not open crypto cache file. /var/lib/bareos/bareos-sd.9103.cryptoc ERR=No such file or directory
bcopy (100): bcopy.c:194-0 About to setup input jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:271-0 Using device: "Default" for reading.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/default
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/default dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=61fc28 size=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:171-0 add read_vol=Full-0001 JobId=0
bcopy (100): butil.c:164-0 Acquire device for read
bcopy (100): acquire.c:63-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:64-0 MediaType dcr= dev=default
bcopy (100): acquire.c:92-0 Want Vol=Full-0001 Slot=0
bcopy (100): acquire.c:106-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:174-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:193-0 dir_get_volume_info vol=Full-0001
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (100): mount.c:600-0 Must load "Default" (/var/lib/bareos/storage/default)
bcopy (100): autochanger.c:99-0 Device "Default" (/var/lib/bareos/storage/default) is not an autochanger
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (100): acquire.c:235-0 stored: open vol=Full-0001
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="Default" (/var/lib/bareos/storage/default) vol=Full-0001 mode=OPEN_READ_ONLY
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_ONLY
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_ONLY open(/var/lib/bareos/storage/default/Full-0001, 0x0, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=3 opened
bcopy (100): dev.c:580-0 preserve=0xffffde60 fd=3
bcopy (100): acquire.c:243-0 opened dev "Default" (/var/lib/bareos/storage/default) OK
bcopy (100): acquire.c:257-0 calling read-vol-label
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="Default" (/var/lib/bareos/storage/default) vol=Full-0001 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:213-0 Compare Vol names: VolName=Full-0001 hdr=Full-0001

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=Full-0001
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=Full-0001 drive="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=Full-0001 at 625718 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): dev.c:432-0 Device "Default" (/var/lib/bareos/storage/default) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): acquire.c:263-0 Got correct volume.
23-aoû 18:16 bcopy JobId 0: Ready to read from volume "Full-0001" on device "Default" (/var/lib/bareos/storage/default).
bcopy (100): acquire.c:370-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:371-0 MediaType dcr=default dev=default
bcopy (100): bcopy.c:212-0 About to setup output jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:274-0 Using device: "FileStorage" for writing.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/file/
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/file/ dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=6260d8 size=0 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:169-0 read_vol=Full-0001 JobId=0 already in list.
bcopy (120): device.c:266-0 start open_output_device()
bcopy (129): device.c:275-0 Device is file, deferring open.
bcopy (100): bcopy.c:225-0 About to acquire device for writing
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 mode=OPEN_READ_WRITE
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_WRITE
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_WRITE open(/var/lib/bareos/storage/file/catalog01, 0x2, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=4 opened
bcopy (100): dev.c:580-0 preserve=0xffffe0b0 fd=4
bcopy (100): acquire.c:400-0 acquire_append device is disk
bcopy (190): acquire.c:435-0 jid=0 Do mount_next_write_vol
bcopy (150): mount.c:71-0 Enter mount_next_volume(release=0) dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:84-0 mount_next_vol retry=0
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (200): mount.c:390-0 Before dir_find_next_appendable_volume.
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (150): mount.c:124-0 After find_next_append. Vol=catalog01 Slot=0
bcopy (100): autochanger.c:99-0 Device "FileStorage" (/var/lib/bareos/storage/file/) is not an autochanger
bcopy (150): mount.c:173-0 autoload_dev returns 0
bcopy (150): mount.c:209-0 want vol=catalog01 devvol= dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:213-0 Compare Vol names: VolName=catalog01 hdr=catalog01

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=catalog01
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=catalog01 drive="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List begin reserve_volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=catalog01 at 6258c8 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=catalog01 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:638-0 Inc walk_next use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List end new volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:438-0 Want dirVol=catalog01 dirStat=
bcopy (150): mount.c:446-0 Vol OK name=catalog01
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): mount.c:289-0 applying vol block sizes to device "FileStorage" (/var/lib/bareos/storage/file/): dcr->VolMinBlocksize set to 0, dcr->VolMaxBlocksize set to 0
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): mount.c:323-0 Device previously written, moving to end of data. Expect 0 bytes
23-aoû 18:16 bcopy JobId 0: Volume "catalog01" previously written, moving to end of data.
bcopy (100): dev.c:749-0 Enter eod
bcopy (200): dev.c:761-0 ====== Seek to 14465367
23-aoû 18:16 bcopy JobId 0: Warning: For Volume "catalog01":
The sizes do not match! Volume=14465367 Catalog=0
Correcting Catalog
bcopy (150): mount.c:341-0 update volinfo mounts=1
bcopy (150): mount.c:351-0 set APPEND, normal return from mount_next_write_volume. dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (190): acquire.c:448-0 Output pos=0:14465367
bcopy (100): acquire.c:459-0 === nwriters=1 nres=0 vcatjob=1 dev="FileStorage" (/var/lib/bareos/storage/file/)
23-aoû 18:16 bcopy JobId 0: Forward spacing Volume "Full-0001" to file:block 0:1922821345.
bcopy (100): dev.c:892-0 ===== lseek to 1922821345
bcopy (10): bcopy.c:384-0 Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1335
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2407
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3479
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4551
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5623
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6695
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7767
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8839
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9911
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10983
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12055
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=13127
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=14199
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=15271
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=16343
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=17415
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=18487
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=19559
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=20631
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=21703
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=22775
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=23847
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=24919
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=25991
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=27063
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=28135
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=29207
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=30279
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=31351
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=32423
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=33495
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=34567
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=35639
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=36711
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=37783
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=38855
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=39927
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=40999
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=42071
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=43143
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=44215
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=45287
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=46359
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=47431
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=48503
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=49575
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=50647
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=51719
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=52791
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=53863
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=54935
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=56007
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=57079
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=58151
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=59223
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=60295
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=61367
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=62439
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=63511
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=64583
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=107
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1179
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2251
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3323
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4395
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5467
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6539
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7611
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8683
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9755
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10827
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=11899
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12971
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=600 rem=200
bcopy (10): bcopy.c:384-0 End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
bcopy (200): read_record.c:243-0 End of file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
23-aoû 18:16 bcopy JobId 0: End of Volume at file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
bcopy (150): vol_mgr.c:713-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:732-0 === set not reserved vol=Full-0001 num_writers=0 dev_reserved=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:763-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:777-0 === remove volume Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List free_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:928-0 NumReadVolumes=1 CurReadVolume=1
bcopy (150): vol_mgr.c:705-0 vol_unused: no vol on "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List null vol cannot unreserve_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:947-0 End of Device reached.
23-aoû 18:16 bcopy JobId 0: End of all volumes.
bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 377 records copied.
bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
[New Thread 0x7ffff53e7700 (LWP 3015)]

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
319 lockmgr.c: No such file or directory.
Missing separate debuginfos, use: zypper install libcap2-debuginfo-2.22-16.1.x86_64 libgcc_s1-debuginfo-5.3.1+r233831-6.1.x86_64 libjansson4-debuginfo-2.7-3.2.x86_64 liblzo2-2-debuginfo-2.08-4.1.x86_64 libopenssl1_0_0-debuginfo-1.0.1i-15.1.x86_64 libstdc++6-debuginfo-5.3.1+r233831-6.1.x86_64 libwrap0-debuginfo-7.6-885.4.x86_64 libz1-debuginfo-1.2.8-6.4.x86_64
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
0000001 0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x61fc58, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
0000002 0x00007ffff7b9e72f in free_dcr (dcr=0x61fc28) at acquire.c:839
0000003 0x00007ffff7baa43f in my_free_jcr (jcr=0x622bc8) at butil.c:215
0000004 0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x622bc8) at jcr.c:641
0000005 0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 3015) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
Attached Files:
Notes
(0002413)
tigerfoot   
2016-10-27 12:07   
End of the trace with full debuginfo packages installed.

27-oct-2016 12:05:29.103637 bcopy (90): mount.c:947-0 End of Device reached.
27-oct 12:05 bcopy JobId 0: End of all volumes.
27-oct-2016 12:05:29.103642 bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
27-oct-2016 12:05:29.103650 bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 309113 records copied.
27-oct-2016 12:05:29.103656 bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
27-oct-2016 12:05:29.103703 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103707 bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
27-oct-2016 12:05:29.103713 bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file)
27-oct-2016 12:05:29.103716 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103719 bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
27-oct-2016 12:05:29.103726 bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
319 lockmgr.c: No such file or directory.

Thread 1 "bcopy" received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
0000001 0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x623008, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
0000002 0x00007ffff7b9e72f in free_dcr (dcr=0x622fd8) at acquire.c:839
0000003 0x00007ffff7baa43f in my_free_jcr (jcr=0x623558) at butil.c:215
0000004 0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x623558) at jcr.c:641
0000005 0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 6975) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.


(full stack will be attached)
(0002414)
tigerfoot   
2016-10-27 12:18   
trace can't be attached (log.xz is 24Mb)
you can have it here
https://dav.ioda.net/index.php/s/V7RPrq6M3KtbFc0/download
(0005031)
bruno-at-bareos   
2023-05-09 16:58   
Has been fixed in recent version 21.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
587 [bareos-core] director minor always 2015-12-22 16:48 2023-05-09 16:55
Reporter: joergs Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: low OS Version: 3  
Status: resolved Product Version: 15.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: joblog has "Backup Error", but jobstatus is set to successful ('T') if writing the bootstrap file fails
Description: If the director can't write the bootstrap file, the joblog says:

...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

however, the jobstatus is 'T':

+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| JobId | Name | Client | StartTime | Type | Level | JobFiles | JobBytes | JobStatus |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| 225 | BackupClient1 | ting.dass-it-fd | 2015-12-22 16:32:13 | B | I | 2 | 44 | T |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
Tags:
Steps To Reproduce: configure a job with

  Write Bootstrap = "/NONEXISTINGPATH/%c.bsr"

and run the job.

Compare status from "list joblog" with "list jobs".
Additional Information: list joblog jobid=...

will show something like:

...
 2015-12-22 16:46:12 ting.dass-it-dir JobId 226: Error: Could not open WriteBootstrap file:
/NONEXISTINGPATH/ting.dass-it-fd.bsr: ERR=No such file or directory
...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

However "list jobs" will show 'T'.
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0005029)
bruno-at-bareos   
2023-05-09 16:55   
Fixed in recent version.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
301 [bareos-core] director tweak always 2014-05-27 17:02 2023-05-09 16:41
Reporter: alexbrueckel Platform:  
Assigned To: bruno-at-bareos OS: Debian 7  
Priority: low OS Version:  
Status: resolved Product Version: 13.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Inconsistency when configuring bandwidth limitation
Description: Hi,
While configuring different jobs for a client, some with bandwidth limitation, I noticed that every configuration item can be placed in quotation marks except the desired maximum bandwidth.

It's a bit inconsistent this way so it would be great if this could be fixed.

Thank you very much
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0000889)
mvwieringen   
2014-05-31 22:04   
An example and the exact error would be handy. It's probably some missing
parsing, as all config code uses the same config parser. But without a
clear example and the exact error it's not something we can act on.
(0000899)
alexbrueckel   
2014-06-04 17:37   
Hi,

here's the example that works:
Job {
  Name = "myhost-backupjob"
  Client = "myhost.mydomain.tld"
  JobDefs = "default"
  FileSet = "myfileset"
  Maximum Bandwidth = 10Mb/s
}

Note that the bandwidth value has no quotation marks.

Here is an example that doesn't work:
Job {
  [same as above]
  Maximum Bandwidth = "10Mb/s"
}

The error message I get in this case is:
ERROR in parse_conf.c:764 Config error: expected a speed, got: 10Mb/s

Hope that helps and thanks for your work.
Alex
(0000900)
mvwieringen   
2014-06-06 15:39   
It seems that the config engine only allows strings to be quoted; numbers are
not allowed to have quotation marks. As the speed gets parsed by the same function
as a number, it currently doesn't allow you to use quotes. You can indeed argue
that it's inconsistent, but it seems to be how the original creator of the config
engine envisioned it. We might change this one day, but for now I wouldn't hold my
breath for it to happen any time soon. There are just more important things to do.
(0001086)
joergs   
2014-12-01 16:13   
I added some notes about this to the documentation.
(0005023)
bruno-at-bareos   
2023-05-09 16:41   
documentation updated.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1447 [bareos-core] file daemon tweak always 2022-04-06 14:12 2023-03-07 12:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restore of unencrypted files on an encrypted fd throws an error, but works.
Description: When restoring files from a client that stores its files unencrypted onto a client that normally only runs encrypted backups, the restore works, but an error is thrown.
Tags:
Steps To Reproduce: Sample config:
Client A:
Client {
...
PKI Signatures = Yes
PKI Encryption = Yes
PKI Cipher = aes256
PKI Master Key = ".../master.key"
PKI Keypair = ".../all.keys"
}
Client B:
Client {
...
# without the cryptor config
}

Both can do their own backup and restore to the storage. But when a restore is done with files from client B on client A, the files are restored as requested, but for every file an error is logged:
clienta JobId 72: Error: filed/crypto.cc:168 Missing cryptographic signature for /var/tmp/bareos/var/log/journal/e882cedd07af40b386b29cfa9c88466f/user-70255@bdb4fa2d506c45ba8f8163f7e4ee7dac-0000000000b6f8c1-0005d99dd2d23d5a.journal
and the whole job is marked as failed.
Additional Information: Because the restore itself works, I think the job should only be marked as "OK with warnings" and the "Missing cryptographic signature ..." message should be a warning instead of an error.
System Description
Attached Files:
Notes
(0004902)
bruno-at-bareos   
2023-03-07 12:09   
Thank you for your report. In a bug triage session, we came to the following conclusion for this case.
We completely understand the case and agree it should be handled better by the code.

The workaround is to change your configuration: with the parameter PKI Signatures = Yes you are stating that you normally care about the signature for all data, so the job gets its failing status. If you need to restore unencrypted data to that client, you should comment out that parameter for the duration of the restore.

On our side, nobody will work on that improvement, but feel free to propose a fix in a PR on github.
Thanks
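
For illustration, a minimal sketch of that workaround on client A (based on the configuration snippet above; whether PKI Encryption also needs to be commented out is an assumption, and the filedaemon must be restarted before and again after the restore):

Client {
...
  # temporarily commented out while restoring unencrypted data from client B;
  # re-enable and restart the filedaemon once the restore is done:
  # PKI Signatures = Yes
  # PKI Encryption = Yes
  PKI Cipher = aes256
  PKI Master Key = ".../master.key"
  PKI Keypair = ".../all.keys"
}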

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1512 [bareos-core] installer / packages major always 2023-02-07 04:48 2023-02-07 13:37
Reporter: MarceloRuiz Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Updating Bareos pollutes/invalidates user settings
Description: Updating Bareos recreates all the sample configuration files inside '/etc/bareos/' even if the folder exists and contains a working configuration.
Tags:
Steps To Reproduce: Install Bareos, configure it using custom filenames, update Bareos.
Additional Information: Bareos should not touch the '/etc/bareos/' folder if it already exists. An existing folder means that a user has spent a considerable amount of time configuring the program, and a simple system update can then make the whole configuration invalid so that Bareos won't even start.
If there is a need to provide a sample configuration, do it in a separate folder, like '/etc/bareos-sample-config', so it won't break a working configuration. The installer/updater could even delete that folder before the install/update and recreate it to provide an up-to-date example of the configuration for the current version without risking breaking anything.
Attached Files:
Notes
(0004874)
bruno-at-bareos   
2023-02-07 13:37   
What OS are you using?

We state in the documentation not to remove any files installed by your package manager, as they will be reinstalled if you delete them.
rpm will create rpmnew or rpmold files for you when the packaged files have been changed.
make install creates .new/.old files if existing files are already there or have been changed.

One of the best ways is to simply comment out the sample files or keep them empty, so no changes will happen on update.

It is how it is at the moment, as we haven't found a better way to make the product as easy as possible for newcomers while proposing a ready-to-use configuration.

On the expert side, you can also simply create your personal /etc/bareos-production structure and create a systemd override so the service points to that location.
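
A minimal sketch of such an override (the unit name, binary path and option are assumptions and may differ per distribution; copy the packaged ExecStart line and only add the -c option):

# /etc/systemd/system/bareos-director.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/bareos-dir -c /etc/bareos-production

# afterwards: systemctl daemon-reload && systemctl restart bareos-director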
(0004875)
bruno-at-bareos   
2023-02-07 13:37   
No changes will occur.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1498 [bareos-core] webui minor random 2022-12-15 15:00 2022-12-21 11:47
Reporter: alexanderbazhenov Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Failed to send result as json. Maybe the result message is too long?
Description: Got something like this again in version 21.0: https://bugs.bareos.org/view.php?id=719

Failed to retrieve data from Bareos director
Error message received from director:

Failed to send result as json. Maybe the result message is too long?
Tags: director, job, postgresql, ubuntu20.04, webui
Steps To Reproduce: I don't know the exact steps, but as far as I can tell it happens when more volumes or more output are involved, especially when you run a script on a client, e.g. a GitLab dump:

sudo mkdir /var/opt/gitlab/backups
sudo chown git /var/opt/gitlab/backups
sudo gitlab-rake gitlab:backup:create SKIP=artifacts

Once you get this, you will not be able to open any job details (the same message appears every time) until all backup jobs have finished.
Additional Information: I don't know whether it is a webui bug or a director bug, but there is no error in the director logs.

Additional info:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep bareos
ii bareos 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - metapackage
ii bareos-bconsole 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-director 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-storage 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common tools
ii bareos-traymonitor 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - tray monitor
ii bareos-webui 21.0.0-4 all Backup Archiving Recovery Open Sourced - webui

PostgreSQL installed with defaults:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep postgresql
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii pgdg-keyring 2018.2 all keyring for apt.postgresql.org
ii postgresql-14 14.1-2.pgdg20.04+1 amd64 The World's Most Advanced Open Source Relational Database
ii postgresql-client-14 14.1-2.pgdg20.04+1 amd64 front-end programs for PostgreSQL 14
ii postgresql-client-common 232.pgdg20.04+1 all manager for multiple PostgreSQL client versions
ii postgresql-common 232.pgdg20.04+1 all PostgreSQL database-cluster manager

root@bareos:/etc/bareos/bareos-dir.d/catalog# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

Any ideas? Or what other info I should provide.
Attached Files: joblog_jobid4230_json.log (282,928 bytes) 2022-12-15 19:15
https://bugs.bareos.org/file_download.php?file_id=539&type=bug
Notes
(0004839)
bruno-at-bareos   
2022-12-15 16:54   
To help debugging, it would be nice to have at least one of the offending joblogs, which can be extracted with bconsole.
Please do so and attach the output here (if < 2MB) or on an accessible share.

Developers may also be interested in the same output in JSON; to get it you can switch the bconsole output to ".api 2":

bconsole <<<"@output /var/tmp/joblog_jobidXXXX_json.log
.api 2
list joblog jobid=XXXX
"

where XXXX is the problematic jobid.
(0004840)
alexanderbazhenov   
2022-12-15 19:15   
Here is one of them.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1450 [bareos-core] documentation tweak always 2022-04-20 10:12 2022-11-10 16:53
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Wrong link to git hub
Description: The GH link in
https://docs.bareos.org/TasksAndConcepts/Plugins.html#python-fd-plugin
points to:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/options-plugin-sample
But correct will be:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/bareos_option_example
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004572)
bruno-at-bareos   
2022-04-20 11:44   
Thanks for your report.
We have a fix in progress for that in https://github.com/bareos/bareos/pull/1165
(0004573)
bruno-at-bareos   
2022-04-21 10:21   
PR1165 merged (master), PR1167 Bareos-21 in progress
(0004576)
bruno-at-bareos   
2022-04-21 15:16   
Fix for bareos-21 (default) documentation has been merged too.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1445 [bareos-core] bconsole minor always 2022-03-31 08:35 2022-11-10 16:52
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.1.3  
    Target Version:  
Summary: Quotes are missing at the director name on export
Description: When calling configure export client="Foo" on the console, in the exported file the quotes for the director name are missing.
Instead of:
Director {
  Name = "Bareos Director"
this will be exported:
Director {
  Name = Bareos Director

As written in the documentation, quotes must be used when the string contains a space.
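
For reference, a rough way to check this (the export path below is the usual default and may differ on your installation):

# in bconsole:
configure export client=Foo
# then inspect the generated file, e.g.
#   /etc/bareos/bareos-dir-export/client/Foo/bareos-fd.conf
# its Director block should contain:  Name = "Bareos Director"  (with quotes)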
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004562)
bruno-at-bareos   
2022-03-31 10:06   
Hello, I have just confirmed the missing quotes on export.
But even if spaces are allowed in such resource names, we really advise you to avoid them; they will hurt you in a lot of situations.
Spaces in names, for example, also don't work well with autocompletion in bconsole, etc.

It is safer to treat the resource Name like an FQDN, using only ASCII alphanumerics and .-_ as special characters.


Regards
(0004577)
bruno-at-bareos   
2022-04-25 16:49   
PR1171 in progress.
(0004590)
bruno-at-bareos   
2022-05-04 17:10   
PR-1171 merged + backport for 21 1173 merged
will appear in next 21.1.3

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1429 [bareos-core] documentation major have not tried 2022-02-14 16:29 2022-11-10 16:52
Reporter: abaguinski Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Mysql to Postgres migration howto doesn't explain how to initialise the postgres database
Description: I'm trying to figure out how to migrate the catalog from mysql to postgres but I think I'm missing something. The howto (https://docs.bareos.org/Appendix/Howtos.html#prepare-the-new-database) suggests: "Firstly, create a new PostgreSQL database as described in Prepare Bareos database" and links to this document: "https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#prepare-bareos-database", which in turn instructs to run a series of commands that would initialize the database (https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#id9):

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
su postgres -c /usr/lib/bareos/scripts/make_bareos_tables
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges

However these commands assume that I mean the currently configured Mysql catalog and fail because Mysql catalog is deprecated:

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
Creating mysql database
The MySQL database backend is deprecated. Please use PostgreSQL instead.
Creating of bareos database failed.

Does that mean I first have to "Add the new PostgreSQL database to the current Bareos Director configuration" (second sentence of the Howto section) and only then go back to the first sentence? Shouldn't the sentences be swapped then (except for "Firstly, ")? And will the create_bareos_database understand which catalog I mean when I configure two catalogs at the same time?

Tags:
Steps To Reproduce: 1. Install bareos 19 with mysql catalog
2. upgrade to bareos 20
3. try to follow the howto exactly as it is written
Additional Information:
Attached Files:
Notes
(0004527)
bruno-at-bareos   
2022-02-24 15:56   
I've been able to reproduce the problem, which is due to missing keywords in the documentation (passing the dbdriver to the scripts).

At the PostgreSQL create-database stage, could you retry using these commands:

  su - postgres /usr/lib/bareos/scripts/create_bareos_database postgresql
  su - postgres /usr/lib/bareos/scripts/make_bareos_tables postgresql
  su - postgres /usr/lib/bareos/scripts/grant_bareos_privileges postgresql

After that you should be able to use bareos-dbcopy as documented.
Please confirm that this works for you; I will then propose an update to the documentation.
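
For context, the subsequent copy step would then look roughly like this (the catalog names and the -c option are placeholders/assumptions; check the howto and bareos-dbcopy --help for the exact invocation on your version):

# run as a user that can read the director configuration and reach both databases
bareos-dbcopy -c /etc/bareos MyCatalogMysql MyCatalogPsql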
(0004528)
abaguinski   
2022-02-25 08:51   
Hi

Thanks for your reaction!

In the meantime we were able to migrate to Postgres with a slight difference in the order of steps: 1) added the new catalog resource to the director configuration, 2) created and initialized the Postgres database using these scripts. Indeed, we found that the 'postgresql' argument was necessary.

Since we have already done it in this order, I unfortunately cannot confirm whether adding the argument alone would have been enough (i.e. whether the scripts with the extra argument would work without the catalog resource).

Greetings,
Artem
(0004529)
bruno-at-bareos   
2022-02-28 09:29   
Thanks for your feedback.
Yes, the scripts would have worked without the second catalog resource when you give them the dbtype.

I will update the documentation to be more precise in that sense.
(0004530)
bruno-at-bareos   
2022-03-01 15:33   
PR#1093 and PR#1094 are currently in review.
(0004543)
bruno-at-bareos   
2022-03-21 10:56   
PR1094 for updating documentation has been merged.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1480 [bareos-core] documentation minor always 2022-08-30 12:33 2022-11-10 16:51
Reporter: crameleon Platform: Bareos 21.1.3  
Assigned To: frank OS: SUSE Linux Enterprise Server  
Priority: low OS Version: 15 SP4  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: password string length limitation
Description: Hi,

if I try to log into the web console with the following configuration snippet active:

Console {
  Name = "mygreatusername"
  Password = "SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq"
  Profile = "mygreatwebuiprofile"
  TLS Enable = No
}

The web UI prints the following message:

"Please provide a director, username and password."

If I change the password line to something more simple:

Console {
  Name = "suse-superuser"
  Password = "12345"
  Profile = "webui-superadmin"
  TLS Enable = No
}

Login works as expected.

Since the system does not seem to print any error messages about invalid passwords in its configuration, it would be nice if the allowed characters and lengths (and possibly a sample `pwgen -r <forbidden characters> <length> 1` command) were documented.

Best,
Georg
Tags:
Steps To Reproduce: 1. Configure a web UI user with a complex password such as SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq
2. Copy paste username and password into the browser
3. Try to log in
Additional Information:
Attached Files:
Notes
(0004737)
bruno-at-bareos   
2022-08-31 11:16   
Thanks for your report, the title is a bit misleading, as the problem seems to be present only with the webui.
Having a strong password like the one described works perfectly with dir<->bconsole, for example.

We are now checking where the problem really occurs.
(0004738)
bruno-at-bareos   
2022-08-31 11:17   
Long or complicated passwords are truncated during the POST operation of the login form.
Those passwords work well with bconsole, for example.
(0004739)
crameleon   
2022-08-31 11:28   
Apologies, I did not consider it to be specific to the webui. Thanks for looking into this! Maybe the POST truncation could be adjusted in my Apache web server?
(0004740)
bruno-at-bareos   
2022-08-31 11:38   
Our research so far has shown that the length is what matters: the password for a webui console must not exceed 64 characters.
Maybe you can also confirm this on your installation, so that when our devs check this the symptoms will be more precise.
(0004741)
crameleon   
2022-09-02 19:00   
Can confirm, with 64 characters it works fine!
(0004742)
crameleon   
2022-09-02 19:02   
And I can also confirm, with one more character, so 65 in total, it returns the "Please provide a director, username and password." message.
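
Until the webui filter is changed, a simple workaround is to stay within the 64-character limit, e.g. with pwgen as already suggested in the description (generates one random 64-character password; add -r <chars> to exclude specific characters):

pwgen -s 64 1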
(0004744)
frank   
2022-09-08 15:23   
(Last edited: 2022-09-08 16:33)
The form data input filter for the password input is set to validate for a password length between 1 and 64. We can simply remove the max value from the filter to avoid problems like this, or set it to a value corresponding to what is allowed in configuration files.
(0004747)
frank   
2022-09-13 18:11   
Fix committed to bareos master branch with changesetid 16581.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1489 [bareos-core] webui minor always 2022-11-02 06:23 2022-11-09 14:11
Reporter: dimmko Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 22.04  
Status: resolved Product Version: 21.1.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: broken storage pool link
Description: Hello!
Sorry for my very bad English!

I have an error when I go to see the details at
bareos-webui/pool/details/Diff

Tags:
Steps To Reproduce: 1) login in webui
2) click on jobid
3) click on "+"
4) click on pool - Full (for example).
Additional Information: Error:

An error occurred
An error occurred during execution; please try again later.
Additional information:
Exception
File:
/usr/share/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php:94
Message:
Missing argument.
Stack trace:
#0 /usr/share/bareos-webui/module/Pool/src/Pool/Controller/PoolController.php(137): Pool\Model\PoolModel->getPool()
0000001 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Pool\Controller\PoolController->detailsAction()
0000002 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch()
0000003 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
0000004 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
0000005 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger()
0000006 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch()
0000007 [internal function]: Zend\Mvc\DispatchListener->onDispatch()
0000008 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
0000009 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
0000010 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger()
0000011 /usr/share/bareos-webui/public/index.php(46): Zend\Mvc\Application->run()
0000012 {main}
Attached Files: bareos_webui_error.png (63,510 bytes) 2022-11-02 06:23
https://bugs.bareos.org/file_download.php?file_id=538&type=bug
Notes
(0004821)
bruno-at-bareos   
2022-11-03 10:18   
What is needed to try to understand the error is your pool configuration; also, maybe you can use your browser console to log the POST and GET responses and headers.
Maybe you can also take the time to check the php-fpm log (if used) and the Apache logs (access and error) when the problem occurs.

Thanks.
(0004824)
dimmko   
2022-11-07 09:01   
(Last edited: 2022-11-07 09:05)
bruno-at-bareos, thanks for your comment.

1) my pool - Diff
Pool {
  Name = Diff
  Pool Type = Backup
  RecyclePool = Diff
  Purge Oldest Volume = yes
  Recycle = no
  Recycle Oldest Volume = no
  AutoPrune = no
  Volume Retention = 21 days
  ActionOnPurge = Truncate
  Maximum Volume Jobs = 1
  Label Format = "${Client}_${Level}_${Pool}.${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}-${Minute:p/2/0/r}_${JobId}"
}

apache2 access.log
[07/Nov/2022:10:40:58 +0300] "GET /pool/details/Diff HTTP/1.1" 500 3225 "http://192.168.5.16/job/?period=1&status=Success" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"

apache error.log
[Mon Nov 07 10:50:09.844798 2022] [php:warn] [pid 1340] [client 192.168.1.13:61800] PHP Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /usr/share/bareos-webui/vendor/zendframework/zend-stdlib/src/ArrayObject.php on line 426, referer: http://192.168.5.16/job/?period=1&status=Success


In Chrome (103):
General:
Request URL: http://192.168.5.16/pool/details/Diff
Request Method: GET
Status Code: 500 Internal Server Error
Remote Address: 192.168.5.16:80
Referrer Policy: strict-origin-when-cross-origin

Response Headers:
HTTP/1.1 500 Internal Server Error
Date: Mon, 07 Nov 2022 07:59:54 GMT
Server: Apache/2.4.52 (Ubuntu)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Length: 2927
Connection: close
Content-Type: text/html; charset=UTF-8

Request Headers:
GET /pool/details/Diff HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7
Cache-Control: no-cache
Connection: keep-alive
Cookie: bareos=o87i7ftkdsf2r160k2j0g5vic2
DNT: 1
Host: 192.168.5.16
Pragma: no-cache
Referer: http://192.168.5.16/job/?period=1&status=Success
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
(0004825)
dimmko   
2022-11-07 09:18   
Enable display_error in php

[Mon Nov 07 11:17:57.573002 2022] [php:error] [pid 1545] [client 192.168.1.13:63174] PHP Fatal error: Uncaught Zend\\Session\\Exception\\InvalidArgumentException: 'session.name' is not a valid sessions-related ini setting. in /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/SessionConfig.php:90\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(266): Zend\\Session\\Config\\SessionConfig->setStorageOption()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(114): Zend\\Session\\Config\\StandardConfig->setName()\n#2 /usr/share/bareos-webui/module/Application/Module.php(154): Zend\\Session\\Config\\StandardConfig->setOptions()\n#3 [internal function]: Application\\Module->Application\\{closure}()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(939): call_user_func()\n#5 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#9 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#10 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#11 [internal function]: Application\\Module->onBootstrap()\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#14 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#15 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#16 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#17 {main}\n\nNext Zend\\ServiceManager\\Exception\\ServiceNotCreatedException: An exception was raised while creating "Zend\\Session\\SessionManager"; no instance returned in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php:946\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#2 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#3 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#4 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#5 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#6 [internal function]: 
Application\\Module->onBootstrap()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#11 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#12 {main}\n thrown in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php on line 946, referer: http://192.168.5.16/job/?period=1&status=Success
(0004826)
bruno-at-bareos   
2022-11-07 09:55   
I was able to reproduce it; what is funny is that if you go to storage -> pool tab -> pool name it will work.
We will transfer that to the developers.
(0004827)
bruno-at-bareos   
2022-11-07 09:57   
There's a subtle difference in the URL that is called:

via storage -> pool -> poolname the URL is bareos-webui/pool/details/?pool=Full
via jobid -> "+" -> details -> pool it is bareos-webui/pool/details/Full
-> which triggers the "Missing argument" error
(0004828)
dimmko   
2022-11-07 10:34   
bruno-at-bareos, your method works, thanks.
(0004830)
frank   
2022-11-08 15:11   
Fix committed to bareos master branch with changesetid 16853.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1357 [bareos-core] director crash have not tried 2021-05-18 10:53 2022-11-09 14:09
Reporter: harm Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-dir: ERROR in lib/mem_pool.cc:215 Failed ASSERT: obuf
Description: Hello folks,

when I try to make a long term copy of an always incremental backup the Bareos director crashes.

Version: 20.0.1 (02 March 2021) Ubuntu 20.04.1 LTS

Please let me know what more information you need.

Best regards
Harm
Tags:
Steps To Reproduce: Follow the instructions of https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html
Additional Information:
Attached Files:
Notes
(0004130)
harm   
2021-05-19 15:15   
The problem seems to occur when a client is selected. I don't seem to have quite grasped the concept yet, but the error should be handled?
(0004149)
arogge   
2021-06-09 17:48   
We need a meaningful backtrace to debug that. Please install a debugger and the debug packages (or tell me what system your director runs on so I can provide you with the commands) and reproduce the issue, so we can see what goes wrong.
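
For illustration, a rough sketch of how such a backtrace could be captured on the Ubuntu 20.04 director mentioned above (the debug package name is an assumption; use whichever debug/dbgsym packages your Bareos repository provides):

apt install gdb bareos-dbg
gdb -p "$(pidof bareos-dir)"
(gdb) continue
# reproduce the long term copy job; after the crash:
(gdb) thread apply all bt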

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1474 [bareos-core] storage daemon crash always 2022-07-27 16:12 2022-10-04 10:28
Reporter: jens Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.12  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-sd crashing on VirtualFull with SIGSEGV ../src/lib/serial.cc file not found
Description: When running the 'always incremental' backup scheme, the storage daemon crashes with a segmentation fault
on a VirtualFull backup triggered by consolidation.

Job error:
bareos-dir JobId 1267: Fatal error: Director's comm line to SD dropped.

GDB debug:
bareos-sd (200): stored/mac.cc:159-154 joblevel from SOS_LABEL is now F
bareos-sd (130): stored/label.cc:672-154 session_label record=ec015288
bareos-sd (150): stored/label.cc:718-154 Write sesson_label record JobId=154 FI=SOS_LABEL SessId=1 Strm=154 len=165 remainder=0
bareos-sd (150): stored/label.cc:722-154 Leave WriteSessionLabel Block=1351364161d File=0d
bareos-sd (200): stored/mac.cc:221-154 before write JobId=154 FI=1 SessId=1 Strm=UNIX-Attributes-EX len=123
Thread 4 "bareos-sd" received signal SIGSEGV, Segmentation fault.

[Switching to Thread 0x7ffff4c5b700 (LWP 2271)]
serial_uint32 (ptr=ptr@entry=0x7ffff4c5aa70, v=<optimized out>) at ../../../src/lib/serial.cc:76
76 ../../../src/lib/serial.cc: No such file or directory.


I am running daily incrementals into 'File' pool, consolidating every 4 days into 'FileCons' pool, a virtual full every 1st Monday of a month into 'LongTerm-Disk' pool
and finally a migration to tape every 2nd Monday of a month from the 'LongTerm-Disk' pool into the 'LongTerm-Tape' pool.

BareOS version: 19.2.7
BareOS director and storage daemon on the same machine.
Disk storage on CEPH mount
Tape storage on Fujitsu Eternus LT2 tape library with 1 LTO-7 drive

---------------------------------------------------------------------------------------------------
Storage Device config:

FileStorage with 10 devices, all into same 1st folder:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /storage/backup/bareos_Incremental # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

FileStorageCons with 10 devices, all into same 2nd folder

Device {
  Name = FileStorageCons
  Media Type = FileCons
  Archive Device = /storage/backup/bareos_Consolidate # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
...

FileStorageVault with 10 devices, all into same 3rd folder

Device {
  Name = FileStorageVault
  Media Type = FileVLT
  Archive Device = /storage/backup/bareos_LongTermDisk # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

Tape Device:

Device {
  Name = IBM-ULTRIUM-HH7
  Device Type = Tape
  DriveIndex = 0
  ArchiveDevice = /dev/nst0
  Media Type = IBM-LTO-7
  AutoChanger = yes
  AutomaticMount = yes
  LabelMedia = yes
  RemovableMedia = yes
  Autoselect = yes
  MaximumFileSize = 10GB
  Spool Directory = /storage/scratch
  Maximum Spool Size = 2199023255552 # maximum total spool size in bytes (2Tbyte)
}

---------------------------------------------------------------------------------------------------
Pool Config:

Pool {
  Name = AI-Incremental # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # do not automatically prune expired volumes
  Volume Retention = 72 days
  Storage = File # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 500 # maximum allowed total number of volumes in pool
  Label Format = "AI-Incremental_" # volumes will be labeled "AI-Incremental_-<volume-id>"
  Volume Use Duration = 36 days # volume will be no longer used than
  Next Pool = AI-Consolidate # next pool for consolidation
  Job Retention = 72 days
  File Retention = 36 days
}

Pool {
  Name = AI-Consolidate # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # do not automatically prune expired volumes
  Volume Retention = 366 days
  Job Retention = 180 days
  File Retention = 93 days
  Storage = FileCons # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "AI-Consolidate_" # volumes will be labeled "AI-Consolidate_-<volume-id>"
  Volume Use Duration = 2 days # volume will be no longer used than
  Next Pool = LongTerm-Disk # next pool for long term backups to disk
}

Pool {
  Name = LongTerm-Disk # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # do not automatically prune expired volumes
  Volume Retention = 732 days
  Job Retention = 732 days
  File Retention = 366 days
  Storage = FileVLT # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "LongTerm-Disk_" # volumes will be labeled "LongTerm-Disk_<volume-id>"
  Volume Use Duration = 2 days # volume will be no longer used than
  Next Pool = LongTerm-Tape # next pool for long term backups to disk
  Migration Time = 2 days # Jobs older than 2 days in this pool will be migrated to 'Next Pool'
}

Pool {
  Name = LongTerm-Tape
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 732 days # How long should the Backups be kept? (0000012)
  Job Retention = 732 days
  File Retention = 366 days
  Storage = TapeLibrary # Physical Media
  Maximum Block Size = 1048576
  Recycle Pool = Scratch
  Cleaning Prefix = "CLN"
}

---------------------------------------------------------------------------------------------------
JobDefs:

JobDefs {
  Name = AI-Incremental
  Type = Backup
  Level = Incremental
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Accurate = yes
  Allow Mixed Priority = yes
  Always Incremental = yes
  Always Incremental Job Retention = 36 days
  Always Incremental Keep Number = 14
  Always Incremental Max Full Age = 31 days
}

JobDefs {
  Name = AI-Consolidate
  Type = Consolidate
  Storage = File
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 25
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Max Full Consolidations = 1
  Prune Volumes = yes
  Accurate = yes
}

JobDefs {
  Name = LongTermDisk
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 30
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Accurate = yes
  Run Script {
    console = "update jobid=%1 jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
}

JobDefs {
  Name = "LongTermTape"
  Pool = LongTerm-Disk
  Messages = Standard
  Type = Migrate
}


---------------------------------------------------------------------------------------------------
Job Config ( per client )

Job {
  Name = "Incr-<client>"
  Description = "<client> always incremental 36d retention"
  Client = <client>
  Jobdefs = AI-Incremental
  FileSet="fileset-<client>"
  Schedule = "daily_incremental_<client>"
  # Write Bootstrap file for disaster recovery.
  Write Bootstrap = "/storage/bootstrap/%j.bsr"
  # The higher the number the lower the job priority
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "AI-Consolidate"
  Description = "consolidation of 'always incremental' jobs"
  Client = backup.mgmt.drs
  FileSet = SelfTest
  Jobdefs = AI-Consolidate
  Schedule = consolidate

  # The higher the number the lower the job priority
  Priority = 25
}

Job {
  Name = "VFull-<client>"
  Description = "<client> monthly virtual full"
  Messages = Standard
  Client = <client>
  Type = Backup
  Level = VirtualFull
  Jobdefs = LongTermDisk
  FileSet=fileset-<client>
  Pool = AI-Consolidate
  Schedule = virtual-full_<client>
  Priority = 30
  Run Script {
    Console = ".bvfs_update"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "migrate-2-tape"
  Description = "monthly migration of virtual full backups from LongTerm-Disk to LongTerm-Tape pool"
  Jobdefs = LongTermTape
  Selection Type = PoolTime
  Schedule = "migrate-2-tape"
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

---------------------------------------------------------------------------------------------------
Schedule config:

Schedule {
  Name = "daily_incremental_<client>"
  Run = daily at 02:00
}

Schedule {
  Name = "consolidate"
  Run = Incremental 3/4 at 00:00
}

Schedule {
  Name = "virtual-full_<client>"
  Run = 1st monday at 10:00
}

Schedule {
  Name = "migrate-2-tape"
  Run = 2nd monday at 8:00
}

---------------------------------------------------------------------------------------------------
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos_sd_debug.zip (3,771 bytes) 2022-07-27 16:59
https://bugs.bareos.org/file_download.php?file_id=530&type=bug
Notes
(0004688)
bruno-at-bareos   
2022-07-27 16:43   
Could you check the SD working dir (/var/lib/bareos) for any other trace, backtrace and debug files?
If you have them, please attach them (compressed if necessary).
(0004690)
jens   
2022-07-27 17:00   
debug files attached in private note
(0004697)
bruno-at-bareos   
2022-07-28 09:34   
What is the reason behind running 19.2 instead of upgrading to 21?
(0004699)
jens   
2022-07-28 13:06   
1. missing comprehensive and easy to follow step-by-step guide on how to upgrade
2. being unconfident about a flawless upgrade procedure without rendering backup data unusable
3. lack of experience and skilled personnel resulting in major effort to roll out new version
4. limited access to online repositories to update local mirrors -> very long lead time to get new versions
(0004700)
jens   
2022-07-28 13:09   
(Last edited: 2022-07-28 13:12)
For the above reasons I am a little hesitant to take on the effort of upgrading.
Currently I am considering an upgrade only if this is the only chance to get the issue resolved.
I need confirmation from your end first.
My hope is that there is just something wrong in my configuration, or that I am running an adverse setup, and that changing either one might resolve the issue?
(0004701)
bruno-at-bareos   
2022-08-01 11:59   
Hi Jens,

Thanks for the complements.

Does this crash happen each time a consolidation VF is created?
(0004702)
bruno-at-bareos   
2022-08-01 12:04   
Maybe related to a fix in 19.2.9 (available with subscription):
 - fix a memory corruption when autolabeling with increased maximum block size
https://docs.bareos.org/bareos-19.2/Appendix/ReleaseNotes.html#id12
(0004703)
jens   
2022-08-01 12:05   
Hi Bruno,

so far, yes, that is my experience.
It is always failing.
Also when repeating or manually rescheduling the failed job through the web-ui during idle hours where nothing else is running on the director.
(0004704)
jens   
2022-08-01 12:14   
The "- fix a memory corruption when autolabeling with increased maximum block size" indeed could be a lead
as I see the following in the job logs ?

Warning: For Volume "AI-Consolidate_0118": The sizes do not match!
Volume=64574484 Catalog=32964717
Correcting Catalog
(0004705)
bruno-at-bareos   
2022-08-02 13:42   
Hi Jens, a quick note about the sizes not matching: this is unrelated. Aborted or failed jobs can have this effect.

This fix was introduced with this commit https://github.com/bareos/bareos/commit/0086b852d and 19.2.9 contains the fix.
(0004800)
bruno-at-bareos   
2022-10-04 10:27   
Closing, as a fix already exists.
(0004801)
bruno-at-bareos   
2022-10-04 10:28   
Fix is present in source code and published subscription binaries.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1476 [bareos-core] file daemon major always 2022-08-03 16:01 2022-08-23 12:08
Reporter: support@ingenium.trading Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Backups for Full and Incremental are approx. 10 times bigger than what the server uses
Description: Whenever a backup job runs, it takes very long without errors and the backup size is 10 times bigger than what the server usually uses.

Earlier we had issues with connectivity, so I enabled the heartbeat in the client/myself.conf => Heartbeat Interval = 1 min
Tags:
Steps To Reproduce: Manually start the job via webui or bconsole.
Additional Information: Backup Server:
OS: Fedora 35
Bareos Version: 22.0.0~pre613.d7109f123

Client Server:
OS: Alma Linux 9 / CentOS7
Bareos Version: 22.0.0~pre613.d7109f123

Backup job:
03-Aug 09:48 bareos-dir JobId 565: No prior Full backup Job record found.
03-Aug 09:48 bareos-dir JobId 565: No prior or suitable Full backup found in catalog. Doing FULL backup.
03-Aug 09:48 bareos-dir JobId 565: Start Backup JobId 565, Job=td02.example.com.2022-08-03_09.48.28_03
03-Aug 09:48 bareos-dir JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Max configured use duration=82,800 sec. exceeded. Marking Volume "AI-Example-Consolidated-0490" as Used.
03-Aug 09:48 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0584" in catalog.
03-Aug 09:48 bareos-dir JobId 565: Using Device "FileStorage01" to write.
03-Aug 09:48 bareos-dir JobId 565: Probing client protocol... (result will be saved until config reload)
03-Aug 09:48 bareos-dir JobId 565: Connected Client: td02.example.com at td02.example.com:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Handshake: Immediate TLS
03-Aug 09:48 bareos-dir JobId 565: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Extended attribute support is enabled
03-Aug 09:48 trade02-fd JobId 565: ACL support is enabled
03-Aug 09:48 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 09:48 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 09:48 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /dev
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /run
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /sys
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
03-Aug 10:27 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0584" Bytes=107,374,159,911 Blocks=1,664,406 at 03-Aug-2022 10:27.
03-Aug 10:27 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0585" in catalog.
03-Aug 10:27 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 10:27 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0585" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 10:27.
03-Aug 11:07 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0585" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:07.
03-Aug 11:07 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0586" in catalog.
03-Aug 11:07 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:07 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0586" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:07.
03-Aug 11:46 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0586" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:46.
03-Aug 11:46 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0587" in catalog.
03-Aug 11:46 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:46 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0587" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:46.
03-Aug 12:25 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0587" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 12:25.
03-Aug 12:25 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0588" in catalog.
03-Aug 12:25 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 12:25 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0588" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 12:25.
03-Aug 12:56 bareos-sd JobId 565: Releasing device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:56 bareos-sd JobId 565: Elapsed time=03:08:04, Transfer rate=45.57 M Bytes/second
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table with 188627 entries start
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table done
03-Aug 12:56 bareos-dir JobId 565: Bareos bareos-dir 22.0.0~pre613.d7109f123 (01Aug22):
  Build OS: Fedora release 35 (Thirty Five)
  JobId: 565
  Job: td02.example.com.2022-08-03_09.48.28_03
  Backup Level: Full (upgraded from Incremental)
  Client: "td02.example.com" 22.0.0~pre553.6a41db3f7 (07Jul22) CentOS Stream release 9,redhat
  FileSet: "ExampleLinux" 2022-08-03 09:48:28
  Pool: "AI-Example-Consolidated" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Pool resource)
  Scheduled time: 03-Aug-2022 09:48:27
  Start time: 03-Aug-2022 09:48:31
  End time: 03-Aug-2022 12:56:50
  Elapsed time: 3 hours 8 mins 19 secs
  Priority: 10
  FD Files Written: 188,627
  SD Files Written: 188,627
  FD Bytes Written: 514,227,307,623 (514.2 GB)
  SD Bytes Written: 514,258,382,470 (514.2 GB)
  Rate: 45510.9 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: yes
  Volume name(s): AI-Example-Consolidated-0584|AI-Example-Consolidated-0585|AI-Example-Consolidated-0586|AI-Example-Consolidated-0587|AI-Example-Consolidated-0588
  Volume Session Id: 4
  Volume Session Time: 1659428963
  Last Volume Bytes: 85,150,808,401 (85.15 GB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: pre-release version: Get official binaries and vendor support on bareos.com
  Job triggered by: User
  Termination: Backup OK

03-Aug 12:56 bareos-dir JobId 565: shell command: run AfterJob "echo '.bvfs_update jobid=565' | bconsole"
03-Aug 12:56 bareos-dir JobId 565: AfterJob: .bvfs_update jobid=565 | bconsole

Client Alma Linux 9:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 196K 47G 1% /dev/shm
tmpfs 19G 2.3M 19G 1% /run
/dev/mapper/almalinux-root 12G 3.5G 8.6G 29% /
/dev/sda1 2.0G 237M 1.6G 13% /boot
/dev/mapper/almalinux-opt 8.0G 90M 7.9G 2% /opt
/dev/mapper/almalinux-home 12G 543M 12G 5% /home
/dev/mapper/almalinux-var 8.0G 309M 7.7G 4% /var
/dev/mapper/almalinux-opt_ExampleAd 8.0G 373M 7.7G 5% /opt/ExampleAd
/dev/mapper/almalinux-opt_ExampleEn 32G 7.5G 25G 24% /opt/ExampleEn
/dev/mapper/almalinux-var_log 20G 8.1G 12G 41% /var/log
/dev/mapper/almalinux-var_lib 12G 259M 12G 3% /var/lib
tmpfs 9.3G 0 9.3G 0% /run/user/1703000011
tmpfs 9.3G 0 9.3G 0% /run/user/1703000004



Server JobDefs:
JobDefs {
  Name = "ExampleLinux"
  Type = Backup
  Client = bareos-fd
  FileSet = "ExampleLinux"
  Storage = File
  Messages = Standard
  Schedule = "BasicBackup"
  Pool = AI-Example-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Full Backup Pool = AI-Example-Consolidated # write Full Backups into "Full" Pool (#05)
  Incremental Backup Pool = AI-Example-Incremental # write Incr Backups into "Incremental" Pool (#11)
}
System Description
Attached Files:
Notes
(0004710)
bruno-at-bareos   
2022-08-03 18:15   
With bconsole show FileSet="ExampleLinux" we will better understand what you've tried to do.

In bconsole,
estimate job=td02.example.com listing
will show you all the files that would be included.
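If the listing is large, the same echo-into-bconsole pattern used elsewhere in this report can capture it to a file non-interactively (a minimal sketch; the job name is the one from this report):

echo 'estimate job=td02.example.com listing' | bconsole > estimate.txt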
(0004731)
bruno-at-bareos   
2022-08-23 12:08   
No information given to go further

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1471 [bareos-core] installer / packages tweak N/A 2022-07-13 11:39 2022-07-28 09:27
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Shell example script for Bareos installation on Debian / Ubuntu uses deprecated "apt-key"
Description: Using the script raises the warning: "apt-key is deprecated". In order to correct this, it is suggested to change
---
# add package key
wget -q $URL/Release.key -O- | apt-key add -
---
to
+++
# add package key
wget -q $URL/Release.key -O- | gpg --dearmor -o /usr/share/keyrings/bareos.gpg
sed -i -e 's#deb #deb [signed-by=/usr/share/keyrings/bareos.gpg] #' /etc/apt/sources.list.d/bareos.list
+++
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004665)
bruno-at-bareos   
2022-07-14 10:09   
Would this be valid for any version of Debian/Ubuntu used (Debian 9 and Ubuntu 18.04)?
(0004666)
bruno-at-bareos   
2022-07-14 10:44   
We appreciate any effort made to make our software better.
This would be a nice improvement.
Testing on old systems seems OK; we are checking how much effort it takes to change the code, handle the update/upgrade process on user installations, and adjust the documentation.
(0004667)
bruno-at-bareos   
2022-07-14 11:01   
Adding a public reference on why apt-key should be changed and how:
https://askubuntu.com/questions/1286545/what-commands-exactly-should-replace-the-deprecated-apt-key/1307181#1307181

Maybe changing to Deb822 .sources files is the way to go.
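For illustration only, a Deb822-style /etc/apt/sources.list.d/bareos.sources using the dearmored keyring from the snippet above could look roughly like this sketch (the repository URL is a placeholder for whatever the install script computes, not an official path):

Types: deb
URIs: https://download.bareos.org/bareos/release/21/Debian_11
Suites: /
Signed-By: /usr/share/keyrings/bareos.gpg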
(0004669)
amodia   
2022-07-14 13:06   
I ran into this issue on the update from Bareos 20 to 21. So I can't comment on earlier versions.
My "solution" was the first that worked. Any solution that is better, more compatible and/or requires less effort is appreciated.
(0004695)
bruno-at-bareos   
2022-07-28 09:26   
Changes applied to future documentation
commit c08b56c1a
PR1203
(0004696)
bruno-at-bareos   
2022-07-28 09:27   
Follow status in PR1203 https://github.com/bareos/bareos/pull/1203

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1253 [bareos-core] webui major always 2020-06-17 09:58 2022-07-20 14:09
Reporter: tagort214 Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: acknowledged Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Can't restore files from Webui
Description: When I try to restore files from Webui it returns this error:

There was an error while loading data for this tree.

Error: ajax

Plugin: core

Reason: Could not load node

Data:

{"id":"#","xhr":{"readyState":4,"responseText":"\n\n\n \n \n \n \n\n \n \n\n\n \n \n\n\n\n\n \n\n \n\n
\n \n\n
Произошла ошибка
\n
An error occurred during execution; please try again later.
\n\n\n\n
Дополнительная информация:
\n
Zend\\Json\\Exception\\RuntimeException
\n

\n
Файл:
\n
    \n

    /usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68

    \n \n
Сообщение:
\n
    \n

    Decoding failed: Syntax error

    \n \n
Трассировки стека:
\n
    \n

    #0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\\Json\\Json::decode('', 1)\n#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\\Model\\RestoreModel->getDirectories(Object(Bareos\\BSock\\BareosBSock), '207685', '#')\n#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\\Controller\\RestoreController->getDirectories()\n#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\\Controller\\RestoreController->buildSubtree()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\\Controller\\RestoreController->filebrowserAction()\n#5 [internal function]: Zend\\Mvc\\Controller\\AbstractActionController->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\\Mvc\\Controller\\AbstractController->dispatch(Object(Zend\\Http\\PhpEnvironment\\Request), Object(Zend\\Http\\PhpEnvironment\\Response))\n#10 [internal function]: Zend\\Mvc\\DispatchListener->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#14 /usr/share/bareos-webui/public/index.php(24): Zend\\Mvc\\Application->run()\n#15 {main}

    \n \n

\n\n\n
\n\n \n \n\n\n","status":500,"statusText":"Internal Server Error"}}

Also, in the apache2 error logs I see these strings:
[:error] [pid 13597] [client 172.32.1.51:56276] PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91, referer: http://bareos.ivt.lan/bareos-webui/client/details/clientname
 [:error] [pid 14367] [client 172.32.1.51:56278] PHP Warning: unpack(): Type N: not enough input, need 4, have 0 in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 172, referer: http://bareos.ivt.lan/bareos-webui/restore/?mergefilesets=1&mergejobs=1&client=clientname&jobid=207728


Tags:
Steps To Reproduce: 1) Login to webui
2) Select job and click show files (or select client from restore tab)
Additional Information:
System Description
Attached Files: Снимок экрана_2020-06-17_10-57-42.png (37,854 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=443&type=bug
png

Снимок экрана_2020-06-17_10-57-24.png (47,279 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=444&type=bug
png
Notes
(0004242)
frank   
2021-08-31 16:14   
tagort214:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue, 142 in this example. Replace the jobid from the example below with your specific jobid(s).

*.bvfs_get_jobids jobid=142 all
1,55,142
*.bvfs_lsdirs path= jobid=1,55,142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes problems by using pathid, pathids will differ at yours.

*.bvfs_lsdirs pathid=37 jobid=1,55,142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=1,55,142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=1,55,142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we can not list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=1,55,142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=1,55,142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
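As a quick sketch (assuming jq is installed), the cleaned-up file can also be validated directly from the shell; a non-zero exit status indicates a parse error:

jq . out.txt > /dev/null && echo 'valid JSON' || echo 'invalid JSON'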
(0004681)
khvalera   
2022-07-20 14:09   
Try increasing this value in configuration.ini:
[restore]
; Restore filetree refresh timeout after n milliseconds
; Default: 120000 milliseconds
filetree_refresh_timeout=220000

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1472 [bareos-core] General tweak have not tried 2022-07-13 11:53 2022-07-19 14:55
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: No explanation on "delcandidates"
Description: After an upgrade to Bareos v21 the following message appeared in the status of bareos-director:

HINWEIS: Tabelle »delcandidates« existiert nicht, wird übersprungen
(engl.: NOTE: Table »delcandidates« does not exist, will be skipped)

Searching the Bareos website for "delcandidates" does not give any matching site!

It would be nice to give a hint to update the tables in the database by running:

su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: log_dbcheck_2022-07-18.log (35,702 bytes) 2022-07-18 15:42
https://bugs.bareos.org/file_download.php?file_id=528&type=bug
bareos_dbcheck_debian11.log.xz (30,660 bytes) 2022-07-19 14:55
https://bugs.bareos.org/file_download.php?file_id=529&type=bug
Notes
(0004664)
bruno-at-bareos   
2022-07-14 10:08   
From which version did you do the update?
It is clearly stated in the documentation to run update_table and grant on any update (especially major versions).
(0004668)
amodia   
2022-07-14 12:51   
The update was from 20 to 21.
I missed the "run update_table" statement in the documentation.
The documentation regarding "run grant" is misleading:

"Warning:
When using the PostgreSQL backend and updating to Bareos < 14.2.3, it is necessary to manually grant database permissions (grant_bareos_privileges), normally by

su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
"
Because I wondered who might want to upgrade "to Bareos < 14.2.3" when version 21 is available, I thought what was meant was "updating from Bareos < 14.2.3 to a later version". So I skipped the "run grant" for my update, and it worked.
(0004670)
bruno-at-bareos   
2022-07-14 14:06   
I don't know which documentation part you are talking about.

The updating Bareos chapter has the following for database updates:
https://docs.bareos.org/bareos-21/IntroductionAndTutorial/UpdatingBareos.html#other-platforms which talks about update & grant.

Maybe you can share a link here ?
(0004671)
amodia   
2022-07-14 14:15   
https://docs.bareos.org/TasksAndConcepts/CatalogMaintenance.html

Firstwarning just before the "Manual Configuration"
(0004672)
bruno-at-bareos   
2022-07-14 14:25   
Ha ok I understand, that's related to dbconfig.
Are you using dbconfig for your installation (for Bareos 20 and 21) ?
(0004673)
amodia   
2022-07-14 16:34   
Well ...
During the update from Bareos 16 to 20 I selected "Yes" for the dbconfig-common option. Unfortunately the database got lost.
This time (Bareos 20 to 21) I selected "No", hoping that a manual update would be more successful. So I have a backup of the database just before the update, but unfortunately I had no success with the manual update. So the "old" data is lost, and the 'bareos' database (bareos-db) gets filled with "new" data since the update.

In the meantime I am able to get some commands working from the command line, at least for user 'bareos':
- bareos-dbcheck *)
- bareos-dir -t -f -d 500

*): selecting test no. 12 "Check for orphaned storage records" crashes bareos-dbcheck with a "memory access error".

The next experiment is to
- create a new database (bareos2-db) from the backup before the update
- run update_table & grant & bareos-dbcheck on this db
- change the MyCatalog.conf accordingly (dbname = bareos2)
- test, if everything is working again

The hope is to "merge" this bareos2-db (data before the update) with the bareos-db (v. above), which collects the data since the update.
Is this possible?
(0004674)
bruno-at-bareos   
2022-07-14 17:34   
Not sure what happened for you; the upgrade process is quite well tested here, both manual and dbconfig. (Maybe the switch from MySQL to PostgreSQL?)

Did you run bareos-dbcheck or bareos in a container? (Beware: by default containers have a low memory limit, which is often not enough.)

As you have the dump, I would have simply restored it, run the manual update & grant, and logically bareos-dir -t should work with all the previous data preserved.
(To restore, of course, you first create the database.)

Then run dbcheck against it (advice: next time run dbcheck before the dump, so you save time and space by not dumping orphan records).
If it fails again we would be interested in a copy of the storage definition and the output of
bareos-dbcheck -v -dt -d1000
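A minimal sketch of that restore-and-update procedure, following the plan described in this report (the dump file name and the bareos2 database name are placeholders):

su - postgres -c 'createdb bareos2'
su - postgres -c 'psql bareos2 < /path/to/dump_before_update.sql'
# point the Catalog resource (MyCatalog.conf, dbname = bareos2) at the restored database first,
# since the update/grant scripts normally read the catalog settings from the director configuration
su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
bareos-dir -t -f -d 500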
(0004675)
amodia   
2022-07-15 09:22   
Here Bareos runs on a virtual machine (KVM, no container) with limited resources (total memory: 473MiB, swap: 703MiB, storage: 374GiB). The files are stored on an external NAS (6TB) mounted with autofs. This seemed to be enough for "normal" operations.

Appendix "Hardware sizing" has no recommendation on memory. What do you recommend?
(0004676)
bruno-at-bareos   
2022-07-18 10:14   
The Hardware Sizing chapter has quite a number of recommendations for the database (which is what the director uses); it highly depends, of course, on the number of files backed up. PostgreSQL should have 1/4 of the RAM, and/or at least enough to hold the file index. Then, if the FD also runs here with Accurate, it needs enough memory to keep track of the files saved for Accurate.
(0004677)
amodia   
2022-07-18 12:33   
Update:
bareos-dbcheck (Interactive mode) runs only with the following command:
su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ...'

Every test runs smoothly EXCEPT test no.12: "Check for orphaned storage records".
Test no. 12 fails regardless of the memory size (original: 473MiB, increased: 1.9GiB).
The failure ("Memory Access Error") occurs immediately. (No gradual filling of memory and then failure.)
The database to check is only a few days old, so there seems to be another issue than the DB size.

All tests but no. 12 run even with the low-memory setup.
Here the Director and both Daemons (Storage and File) are on the same virtual machine.
(0004678)
bruno-at-bareos   
2022-07-18 13:36   
Without the requested log, we won't be able to check what happened.
(0004679)
amodia   
2022-07-18 15:42   
Please find the log file attached of

su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ... -v -dt -d1000' 2>&1 |tee log_dbcheck_2022-07-18.log
(0004680)
bruno-at-bareos   
2022-07-19 14:55   
Unfortunately the problem you are seeing on your installation can't be reproduced on several installations here. Tested RHEL 8, Xubuntu 22.04, Debian 11.

See the full log attached.
Maybe you have some extra tools restricting the normal workflow too much (AppArmor, SELinux, whatever).
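A quick way to check for such restrictions (assuming the usual tools are installed on the system) is, for example:

getenforce      # SELinux mode: Enforcing, Permissive or Disabled
aa-status       # AppArmor: lists loaded profiles (Debian/Ubuntu)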

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1464 [bareos-core] director major always 2022-05-23 10:52 2022-07-05 14:53
Reporter: meilihao Platform: linux  
Assigned To: bruno-at-bareos OS: oracle linux  
Priority: urgent OS Version: 7.9  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: director can't connect filedaemon
Description: The director can't connect to the file daemon; it gets an SSL error.
Tags:
Steps To Reproduce: env:
- filedaemon: v21.0.0 on win10, x64
- director: v21.1.2, x64

bconsole run: `status client=xxx`, get error:
```bash
# tail -f /var/log/bareos.log
Network error during CRAM MD5 with 192.168.0.130
Unable to authenticate with File daemon at "192.168.0.130:9102"
```

filedaemon error: `TLS negotiation failed` and `error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac`
Additional Information:
Attached Files:
Notes
(0004630)
meilihao   
2022-05-31 04:12   
Has anyone encountered this?
(0004656)
bruno-at-bareos   
2022-07-05 14:53   
After restarting both the director and the client, did you still get any trouble?
I'm not able to reproduce this here with Win10 64-bit and CentOS 8 Bareos binaries from download.bareos.org.

Where does your director come from then?
- director: v21.1.2, x64
(0004657)
bruno-at-bareos   
2022-07-05 14:53   
Can't be reproduced

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
874 [bareos-core] director minor always 2017-11-07 12:12 2022-07-04 17:12
Reporter: chaos_prevails Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04 amd64  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: VirtualFull fails when read and write devices are on different storage daemons (different machines)
Description: The Virtual full backup fails with

...
2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269, Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
 2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds 254,251,252,255,256,257
 2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
 2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
 2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4 (01Jul16):
...

When changing the storage daemon for the VirtualFull Backup to the same machine as always-incremental and consolidate backups, the Virtualfull Backup works:
...
 2017-11-07 10:43:20 pavlov-dir JobId 320: Start Virtual Backup JobId 320, Job=pavlov_sys_ai_vf.2017-11-07_10.43.18_05
 2017-11-07 10:43:20 pavlov-dir JobId 320: Consolidating JobIds 317,314,315
 2017-11-07 10:43:20 pavlov-dir JobId 320: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file-consolidate" to read.
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file" to write.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to read from volume "ai_consolidate-0023" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:21 pavlov-sd JobId 320: Volume "Full-0032" previously written, moving to end of data.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to append to end of Volume "Full-0032" size=97835302
 2017-11-07 10:43:21 pavlov-sd JobId 320: Forward spacing Volume "ai_consolidate-0023" to file:block 0:7046364.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_consolidate-0023"
 2017-11-07 10:43:22 pavlov-sd JobId 320: Ready to read from volume "ai_inc-0033" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:22 pavlov-sd JobId 320: Forward spacing Volume "ai_inc-0033" to file:block 0:1148147.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_inc-0033"
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of all volumes.
 2017-11-07 10:43:22 pavlov-sd JobId 320: Elapsed time=00:00:01, Transfer rate=7.029 M Bytes/second
 2017-11-07 10:43:22 pavlov-dir JobId 320: Joblevel was set to joblevel of first consolidated job: Full
 2017-11-07 10:43:23 pavlov-dir JobId 320: Bareos pavlov-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 320
  Job: pavlov_sys_ai_vf.2017-11-07_10.43.18_05
  Backup Level: Virtual Full
  Client: "pavlov-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
  FileSet: "linux_system" 2017-10-19 16:11:21
  Pool: "Full" (From Job's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "pavlov-file" (From Storage from Job's NextPool resource)
  Scheduled time: 07-Nov-2017 10:43:18
  Start time: 07-Nov-2017 10:29:38
  End time: 07-Nov-2017 10:29:39
  Elapsed time: 1 sec
  Priority: 13
  SD Files Written: 148
  SD Bytes Written: 7,029,430 (7.029 MB)
  Rate: 7029.4 KB/s
  Volume name(s): Full-0032
  Volume Session Id: 1
  Volume Session Time: 1510047788
  Last Volume Bytes: 104,883,188 (104.8 MB)
  SD Errors: 0
  SD termination status: OK
  Accurate: yes
  Termination: Backup OK

 2017-11-07 10:43:23 pavlov-dir JobId 320: console command: run AfterJob "update jobid=320 jobtype=A"
Tags:
Steps To Reproduce: 1. create always incremental, consolidate jobs, pools, and make sure they are working. Use storage daemon A (pavlov in my example)
2. create VirtualFull Level backup with Storage attribute pointing to a device on a different storage daemon B (delaunay in my example)
3. start always incremental and consolidate job and verify that they are working as expected
4. start VirtualFull Level backup
5. fails with error message:
...
delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
...
Additional Information: A) configuration with working always incremental and consolidate jobs, but failing virtualFull level backup:

A) director pavlov (to disk storage daemon + director)
1) template for always incremental jobs
JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = pavlov-fd
  Storage = pavlov-file
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #always incremental config
  Pool = disk_ai
  Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 20 seconds # 7 days
  Always Incremental Keep Number = 2 # 7
  Always Incremental Max Full Age = 1 minutes # 14 days
}


2) template for virtual full jobs, should run on read storage pavlov and write storage delaunay:
JobDefs {
  Name = "default_ai_vf"
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Priority = 13
  Accurate = yes
 
  Storage = delaunay_HP_G2_Autochanger
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
  

  # run after Consolidate
  Run Script {
   console = "update jobid=%i jobtype=A"
   Runs When = After
   Runs On Client = No
   Runs On Failure = No
  }

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
}

3) consolidate job
Job {
  Name = ai_consolidate
  Type = Consolidate
  Accurate = yes
  Max Full Consolidations = 1
  Client = pavlov-fd #value which should be ignored by Consolidate job
  FileSet = "none" #value which should be ignored by Consolidate job
  Pool = disk_ai_consolidate #value which should be ignored by Consolidate job
  Incremental Backup Pool = disk_ai_consolidate
  Full Backup Pool = disk_ai_consolidate
# JobDefs = DefaultJob
# Level = Incremental
  Schedule = "ai_consolidate"
  # Storage = pavlov-file-consolidate #commented out for VirtualFull-Tape testing
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

4) always incremental job for client pavlov (works)
Job {
  Name = "pavlov_sys_ai"
  JobDefs = "default_ai"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
}


5) virtualfull job for pavlov (doesn't work)
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Storage = delaunay_HP_G2_Autochanger
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

6) pool always incremental
Pool {
  Name = disk_ai
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_inc-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file
  Next Pool = disk_ai_consolidate
}

7) pool always incremental consolidate
Pool {
  Name = disk_ai_consolidate
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_consolidate-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file-consolidate
  Next Pool = tape_automated
}

8) pool tape_automated (for virtualfull jobs to tape)
Pool {
  Name = tape_automated
  Pool Type = Backup
  Storage = delaunay_HP_G2_Autochanger
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
}

9) 1st storage device for disk backup (writes always incremental jobs + other normal jobs)
Storage {
  Name = pavlov-file
  Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = X
  TLS Require = X
  TLS Verify Peer = X
  TLS Allowed CN = pavlov.X
}

10) 2nd storage device for disk backup (consolidates AI jobs)
Storage {
  Name = pavlov-file-consolidate
  Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file-consolidate
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}

11) 3rd storage device for tape backup
Storage {
  Name = delaunay_HP_G2_Autochanger
  Address = "delaunay.XX"
  Password = "X"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
  Autochanger = yes
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = delaunay.X
}


B) storage daemon pavlov (to disk)
1) to disk storage daemon

Storage {
  Name = pavlov-sd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
  TLS Allowed CN = delaunay.X
}

2) to disk device (AI + others)
Device {
  Name = pavlov-file
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

3) consolidate to disk
Device {
  Name = pavlov-file-consolidate
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

C) to tape storage daemon (different server)
1) allowed director
Director {
  Name = pavlov-dir
  Password = "[md5]X"
  Description = "Director, who is permitted to contact this storage daemon."
  TLS Certificate = X
  TLS Key = /X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}


2) storage daemon config
Storage {
  Name = delaunay-sd
  Maximum Concurrent Jobs = 20
  Maximum Network Buffer Size = 32768
# Maximum Network Buffer Size = 65536

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS DH File = X
  TLS CA Certificate File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
}


3) autochanger config
Autochanger {
  Name = "HP_G2_Autochanger"
  Device = Ultrium920
  Changer Device = /dev/sg5
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

4) device config
Device {
  Name = "Ultrium920"
  Media Type = LTO
  Archive Device = /dev/st2
  Autochanger = yes
  LabelMedia = no
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Maximum Spool Size = 50G
  Spool Directory = /var/lib/bareos/spool
  Maximum Block Size = 2097152
# Maximum Block Size = 4194304
  Maximum Network Buffer Size = 32768
  Maximum File Size = 50G
}


B) changes to make VirtualFull level backup working (using device on same storage daemon as always incremental and consolidate job) in both Job and pool definitions.

1) change virtualfull job's storage
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

1) change virtualfull pool's storage
Pool {
  Name = tape_automated
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
  Label Format = "Full-" # <-- !! add this because now we write to disk, not to tape
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
}
Attached Files:
Notes
(0002815)
chaos_prevails   
2017-11-15 11:08   
Thanks to a comment on the bareos-users google-group I found out that this is a feature not implemented, not a bug.

I think it would be important to mention this in the documentation. I think VirtualFull would be a good solution for offsite-backup (e.g. in another building, another server-room). This involves another storage daemon.

I looked at different ways to export the tape drive on the offsite-backup machine to the local machine (e.g. iSCSI, ...). However, this adds extra complexity and might cause shoeshining (the connection to the offsite-backup machine has to be really fast, because spooling would happen on the local machine). In my case (~10MB/s) tape and drive would definitely suffer from shoe-shining. So currently, besides always incremental, I do another full backup to the offsite-backup machine.
(0004651)
sven.compositiv   
2022-07-04 16:48   
> Thanks to a comment on the bareos-users google-group I found out that this is a feature not implemented, not a bug.

If it is an unimplemented feature, I'd expect that no backups are chosen from other storages. We have the problem that we copy jobs from AI-Consolidated to a tape. After doing that, all VirtualFull jobs fail once backups from our tape storage have been selected.
(0004652)
bruno-at-bareos   
2022-07-04 17:02   
Could you explain a bit more (configuration example maybe?)

Having an Always Incremental rotation using one storage like File and then creating a VirtualFull Archive to another storage resource (same SD daemon) works very well, as documented.
Maybe you forgot to update your VirtualFull to be an archive job. Then yes, the next AI will use the most recent VF.
But this is also documented.
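For reference, marking the VirtualFull as an archive job is what the Run Script in the JobDefs earlier in this report already does; a minimal sketch of that resource:

Run Script {
   console = "update jobid=%i jobtype=A"
   Runs When = After
   Runs On Client = No
   Runs On Failure = No
}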
(0004655)
bruno-at-bareos   
2022-07-04 17:12   
Not implemented.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1459 [bareos-core] installer / packages major always 2022-05-09 16:37 2022-07-04 17:11
Reporter: khvalera Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Fails to build ceph plugin on Archlinux
Description: Ceph plugin cannot be built on Archlinux with ceph 15.2.14

Build report:

```
[ 73%] Building CXX object core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o
In file included from /data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.cc:33:
/data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.h:31:10: fatal error: cephfs/libcephfs.h: No such file or directory
    31 | #include <cephfs/libcephfs.h>
       | ^~~~~~~~~~~~~~~~~~~~
compilation aborted.
make[2]: *** [core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/build.make:76: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc .o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3157: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: 009-fix-timer_thread.patch (551 bytes) 2022-05-27 23:58
https://bugs.bareos.org/file_download.php?file_id=518&type=bug
Notes
(0004605)
bruno-at-bareos   
2022-05-10 13:03   
Maybe you can describe your setup a bit more: where does cephfs come from?
Maybe the result of a find for libcephfs.h can be useful.
(0004606)
khvalera   
2022-05-10 15:12   
You can fix this error by installing ceph-libs, but the build still does not succeed:

[ 97%] Building CXX object core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc: In the "bRC filedaemon::get_next_file_to_backup(PluginContext*)" function:
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:421:33: error: cannot convert "stat*" to "ceph_statx*"
  421 | &p_ctx->statp, &stmask);
      | ^~~~~~~~~~~~~
      | |
      | stat*
In file included from /data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:35:
/usr/include/cephfs/libcephfs.h:564:43: note: when initializing the 4th argument "int ceph_readdirplus_r(ceph_mount_info*, ceph_dir_result*, dirent*, ceph_statx*, unsigned int, unsigned int, Inode**)"
  564 | struct ceph_statx *stx, unsigned want, unsigned flags, struct Inode **out);
      | ~~~~~~~~~~~~~~~~~~^~~
make[2]: *** [core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/build.make:76: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs -fd.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3908: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
(0004610)
bruno-at-bareos   
2022-05-10 17:31   
When we ask for a bit more information about your setup, please make the effort to give useful information such as the compiler used, cmake output, etc.
Otherwise we can close this here, noting that it builds fine with newer cephfs versions like 15.2.15 or 16.2.7 ...
(0004628)
khvalera   
2022-05-27 23:58   
After updating the system and applying the attached patch, Bareos builds again.
(0004653)
bruno-at-bareos   
2022-07-04 17:10   
I will mark this as closed, fixed by

commit ce3339d28
Author: Andreas Rogge <andreas.rogge@bareos.com>
Date: Wed Feb 2 19:41:25 2022 +0100

    lib: fix use-after-free in timer_thread

diff --git a/core/src/lib/timer_thread.cc b/core/src/lib/timer_thread.cc
index 7ec802198..1624ddd4f 100644
--- a/core/src/lib/timer_thread.cc
+++ b/core/src/lib/timer_thread.cc
@@ -2,7 +2,7 @@
    BAREOS® - Backup Archiving REcovery Open Sourced

    Copyright (C) 2002-2011 Free Software Foundation Europe e.V.
- Copyright (C) 2019-2019 Bareos GmbH & Co. KG
+ Copyright (C) 2019-2022 Bareos GmbH & Co. KG

    This program is Free Software; you can redistribute it and/or
    modify it under the terms of version three of the GNU Affero General Public
@@ -204,6 +204,7 @@ static bool RunOneItem(TimerThread::Timer* p,
       = std::chrono::steady_clock::now();

   bool remove_from_list = false;
+ next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   if (p->is_active && last_timer_run_timepoint > p->scheduled_run_timepoint) {
     LogMessage(p);
     p->user_callback(p);
@@ -215,7 +216,6 @@ static bool RunOneItem(TimerThread::Timer* p,
       p->scheduled_run_timepoint = last_timer_run_timepoint + p->interval;
     }
   }
- next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   return remove_from_list;
 }
(0004654)
bruno-at-bareos   
2022-07-04 17:11   
Fixed with https://github.com/bareos/bareos/pull/1060

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1470 [bareos-core] webui minor always 2022-06-28 09:16 2022-06-30 13:41
Reporter: ffrants Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: low OS Version: 20.04  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Update information could not be retrieved
Description: Update information could not be retrieved and also unknown update status on clients
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: Снимок экрана 2022-06-28 в 10.11.01.png (22,345 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=524&type=bug
png

Снимок экрана 2022-06-28 в 10.15.12.png (28,921 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=525&type=bug
png

Снимок экрана 2022-06-30 в 14.04.09.png (14,330 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=526&type=bug
png

Снимок экрана 2022-06-30 в 14.06.27.png (21,387 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=527&type=bug
png
Notes
(0004648)
bruno-at-bareos   
2022-06-29 17:03   
Works here (maybe a transient certificate error); could you recheck, please?
(0004649)
ffrants   
2022-06-30 13:07   
Here's what I found out:
My IP is blocked by bareos.com (I can't open www.bareos.com). If I open the webui via VPN, it doesn't show the red exclamation mark near the version.
But the problem on the "Clients" tab persists, though not for all versions (see attachment).
(0004650)
bruno-at-bareos   
2022-06-30 13:41   
Only Russian authority will create a fix, so blacklisting will be dropped

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1460 [bareos-core] storage daemon block always 2022-05-10 17:46 2022-05-11 13:08
Reporter: alistair Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 21.10  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to Install bareos-storage-droplet
Description: Apt returns the following

The following packages have unmet dependencies:
bareos-storage-droplet : Depends: libjson-c4 (>= 0.13.1) but it is not installable

libjson-c4 seems to have been superseded by libjson-c5 in newer versions of Ubuntu.
Tags: droplet, s3;droplet;aws;storage, storage
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004615)
bruno-at-bareos   
2022-05-11 13:07   
Don't know what you are expecting here, Ubuntu 21.10 is not a supported build distribution.
As such we don't know which package you are trying to install.

The subscription channel will soon offer Ubuntu 22.04; you can contact sales if you want more information about it.
(0004616)
bruno-at-bareos   
2022-05-11 13:08   
Not a supported distribution.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1458 [bareos-core] webui major always 2022-05-09 13:32 2022-05-10 13:01
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to view the pool details.
Description: With the last update the pool page is completely broken.
When the pool name contains a space, a 404 error is returned.
On a pool without a space in the name, the error shown in the attached picture happens.
Before 21.1.3, only pools with a space in the name were broken.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Screenshot 2022-05-09 at 13-29-52 Bareos.png (152,604 bytes) 2022-05-09 13:32
https://bugs.bareos.org/file_download.php?file_id=512&type=bug
png
Notes
(0004600)
mdc   
2022-05-09 13:40   
It looks like a caching problem. Open the webui in a private session, then it will work.
A relogin or a new tab will not help.
(0004601)
bruno-at-bareos   
2022-05-09 14:30   
Did you restart the webserver (and/or php-fpm if used)? Browsers have recently had a tendency to not clean up their disk cache correctly; maybe the cached content for the webui site needs to be cleaned up manually.
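On CentOS Stream 8 (the reporter's platform), restarting both services could look like this sketch, assuming the distribution's default service names httpd and php-fpm:

systemctl restart httpd php-fpm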
(0004603)
mdc   
2022-05-10 11:43   
Yes, that was my first idea: restarting the web server and the backend PHP service.
Now, after approximately 48 hours, the correct page is loaded.
(0004604)
bruno-at-bareos   
2022-05-10 13:01   
The personal browser cache needs to be cleaned up.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1441 [bareos-core] webui minor always 2022-03-22 13:59 2022-03-29 14:13
Reporter: mdc Platform: x86_64  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to view the pool details, when the pool name contains an space char.
Description: The resulting url will be:
"https:/XXXX/pool/details/Bareos database" for example, when the pool is named "Bareos database"
An the call will failed with:

A 404 error occurred
Page not found.

The requested URL could not be matched by routing.
No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004555)
frank   
2022-03-29 14:13   
Fix committed to bareos master branch with changesetid 16093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1440 [bareos-core] director minor always 2022-03-22 13:42 2022-03-23 15:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Only 127.0.0.1 is logged in the audit log when the access comes from the webui
Description: Instead of the real IP of the user's device, only 127.0.0.1 is logged.
22-Mar 13:31 Bareos Director: Console [foo] from [127.0.0.1] cmdline list jobtotals

I think the director sees only the source IP of the webui server, and the real IP is not forwarded to the director.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004549)
bruno-at-bareos   
2022-03-23 15:08   
The audit log is used to log the remote (here local) IP of the initiator of the command.
Think about remote bconsole access, etc.
So here localhost is the right agent.

You're totally entitled to propose an enhanced version of the code by making a PR on our GitHub project.
(0004550)
bruno-at-bareos   
2022-03-23 15:09   
Won't be fixed without external code proposal

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1134 [bareos-core] vmware plugin feature always 2019-11-06 15:32 2022-03-14 15:42
Reporter: ratacorbo Platform: Linux  
Assigned To: stephand OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
Summary: Installing bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 python-pyvmomi error
Description: When trying to install bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 on CentOS 7, yum gives an error that python-pyvmomi does not exist.
Tags:
Steps To Reproduce: yum install bareos-vmware-plugin
Additional Information: Error: Package: bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 (bareos_bareos-18.2)
           Requires: python-pyvmomi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
System Description
Attached Files:
Notes
(0003856)
stephand   
2020-02-26 00:23   
The python-pyvmomi package for CentOS 7/RHEL 7 is available in EPEL.
On CentOS 7 the EPEL repo can be added by running

yum install epel-release

For RHEL 7 see https://fedoraproject.org/wiki/EPEL

Does it work when the EPEL repo was added to your system?
(0003997)
Rotnam   
2020-06-02 18:05   
I installed a fresh RedHat 8.1 to test the Bareos VMware plugin. I ran into the same issue running
yum install bareos-vmware-plugin
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides python-pyvmomi needed by bareos-vmware-plugin-19.2.7-2.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

So far I tried to install
python-pyvmomi with pip3.6 install pyvmomi -> installed successfully, no luck
Downloaded the GitHub package and did a python3.6 setup.py install, which installs version 7.0, no luck
Adding EPEL -> yum install python3-pyvmomi, which installs version 6.7.3-3, no luck with yum

Downloading the rpm (19.2.7-2) and trying manually, it did the requirement:
yum install python2
yum install bareos-filedaemon-python-plugin
yum install bareos-vadp-dumper
Did a pip2 install pyvmomi, still no luck
python2 setup.py install, installed a bunch of files under python2.7, still no luck for the rpm

At this point, I will just do a --nodeps and see if it work, hope this help resolving the package issue
(0004039)
stephand   
2020-09-16 13:10   
You are right, we have a problem here for RHEL/CentOS 8 because EPEL 8 does not provide a python2-pyvmomi package.
It's also not easily possible to build a python2-pyvmomi package for el8 due to its missing python2 package dependencies.

Currently indeed the only way is to ignore dependencies for the package installation and use pip2 install pyvmomi.
Does that work for you?

I think we should remove the dependency on python-pyvmomi and add a hint in the documentation to use pip2 install pyvmomi.

For the upcoming Bareos version 20, we are already working on Python3 plugins; this will also fix the dependency problem.
(0004040)
Rotnam   
2020-09-16 15:22   
For the test I did, it worked fine, so I assume you can do it that way with --nodeps. I ended up putting this on hold; backing up just the disks and not the VM was a bit strange. Restoring locally worked, but not directly on vCenter (can't remember which one I tried). Will revisit this solution later.
(0004536)
stephand   
2022-03-14 15:42   
Since Bareos Version >= 21.0.0 the package bareos-vmware-plugin no longer includes a dependency on a pyVmomi package, because some Linux distributions don’t provide current versions. Consequently, pyVmomi must be either installed by using pip install pyvmomi or by manually installing a distribution provided pyVmomi package.
See https://docs.bareos.org/TasksAndConcepts/Plugins.html#vmware-plugin
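For illustration, a minimal sketch of the pip-based installation mentioned above (assuming the file daemon's plugins run under Python 3 and that pip3 is available on the client):

```
# install pyVmomi for the Python interpreter used by the Bareos FD plugins
pip3 install pyvmomi
```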

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1431 [bareos-core] General major always 2022-03-08 20:37 2022-03-11 03:32
Reporter: backup1 Platform: Linux  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Newline characters stripped from configuration strings
Description:  Hi,

I'm trying to set a config value that includes a newline character (a.k.a. \n). This worked in Bareos 19.2, but the same config is not working in 21. It seems that the newlines are stripped when loading the config. I note that the docs say that strings can now be entered using a multi-line quoted format (for Bareos 20+).

The actual config setting is for NDMP files and specifying the NDMP environment MULTI_SUBTREE_NAMES.

This is what the config looks like:

FileSet {
  Name = "user_01"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA
userB
userC
userD"
    }
    File = "/vol0/user"
  }
}

The correctly formatted value will have newlines between the "userA", "userB", "userC" subdir names.

In bconsole "show filesets" has the names all concatenated together and the (NetApp) filer rejects the job saying "no directory userAuserBUserCUserD".
Tags:
Steps To Reproduce: Configure fileset with options string including newlines.

Load configuration.

Review configuration using "show filesets" and observe that newlines have been stripped.

I've also reviewed NDMP commands sent to NetApp and (with wireshark) and observe that the newlines are missing.
Additional Information: I believe the use-case for config file strings to include newlines was not considered in parser changes for multi-line quoted format. I'm no longer able to use MULTI_SUBTREE_NAMES for NDMP and have reverted to just doing full volume backups, which limits flexibility, but is working reliably.

Thanks,
Tom Rockwell
Attached Files:
Notes
(0004533)
bruno-at-bareos   
2022-03-09 11:40   
Inconsistencies between documentation / expectation / behaviour
loss of functionality between versions

The documentation https://docs.bareos.org/master/Configuration/CustomizingTheConfiguration.html?highlight=multiline#quotes shows multi-line strings in an example, which leads to the expectation that those are kept as multi-line values.

Having configured a fileset with the new multiline syntax

FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA"
             "userB"
             "userC"
             "userD"
    }
    File = "/vol0/user"
  }
}

when displayed in bconsole
*show fileset=NDMP_test
FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userAuserBuserCuserD"
    }
    File = "/vol0/user"
  }
}
(0004534)
backup1   
2022-03-11 03:32   
Hi,

Thanks for looking at this. For reference, the newlines are needed to use the MULTI_SUBTREE_NAMES functionality on NetApp. https://library.netapp.com/ecmdocs/ECMP1196992/html/GUID-DE8BF53F-706A-48CA-A6FD-ACFDC2D0FE8A.html

From the linked doc, "Multiple subtrees are specified in the string which is a newline-separated, null-terminated list of subtree names."

I looked for other use-cases to put newlines into strings in Bareos config, but didn't find any, so I realize this is a bit of a corner-case. Still, NDMP is useful for NetApp, and it would be unfortunate to lose this functionality.

Thanks again,
Tom Rockwell

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1430 [bareos-core] webui major always 2022-02-23 20:19 2022-03-03 15:11
Reporter: jason.agilitypr Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: 20.04  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Version of Jquery is old and vulnerable
Description: The version of jQuery that the bareos webui is running is old and out of date and has known security vulnerabilities (XSS attacks).

/*! jQuery v3.2.0 | (c) JS Foundation and other contributors | jquery.org/license */
v3.2.0 was released on March 16, 2017.

https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/
"The HTML parser in jQuery <=3.4.1 usually did the right thing, but there were edge cases where parsing would have unintended consequences. "

The current version of jQuery is 3.6.0.


Tags:
Steps To Reproduce: check version of jquery loaded in bareos webui via browser right click -> view source
Additional Information: The related libraries, including moment and excanvas, may also need updating.
Attached Files:
Notes
(0004531)
frank   
2022-03-03 11:11   
Fix committed to bareos master branch with changesetid 15977.
(0004532)
frank   
2022-03-03 15:11   
Fix committed to bareos bareos-19.2 branch with changesetid 15981.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1349 [bareos-core] file daemon major always 2021-05-07 18:29 2022-02-02 10:47
Reporter: oskarsr Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: urgent OS Version: 9  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Fatal error: bareos-fd on the following backup after the successful backup of postgresql database using the PostgreSQL Plugin.
Description: Fatal error: bareos-fd on the following backup after the successful backup of postgresql database using the PostgreSQL Plugin.
When the client daemon is restarted, the backup of the PostgreSQL database runs without the error, but only once. On the second attempt, the error occurs again.

it-fd JobId 118: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in
import BareosFdPluginPostgres
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception
Tags:
Steps To Reproduce: When the backup is executed right after the client daemon restart, the debug log is following:

it-fd (100): filed/fileset.cc:271-150 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-150 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-150 plugin_ctx=7f3964015250 JobId=150
it-fd (150): filed/fd_plugins.cc:229-150 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-150 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-150 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1006-150 python3-fd: Successfully loaded module with name bareos-fd-postgres
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginPostgres with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginLocalFilesBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 2
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=2
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 4
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=4
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 16
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=16
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 19
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=19
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 3
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=3
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 5
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=5


But, when the backup is started repeatedly for the same client, the log consists of following:

it-fd (100): filed/fileset.cc:271-151 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-151 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-151 plugin_ctx=7f39641d1b60 JobId=151
it-fd (150): filed/fd_plugins.cc:229-151 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-151 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-151 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1000-151 python3-fd: Failed to load module with name bareos-fd-postgres
it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

it-fd (150): filed/fd_plugins.cc:480-151 Cancel return from GeneratePluginEvent
it-fd (100): filed/fileset.cc:271-151 N
it-fd (100): filed/dir_cmd.cc:462-151 <dird: getSecureEraseCmd
Additional Information:
System Description
Attached Files:
Notes
(0004129)
oskarsr   
2021-05-12 17:33   
(Last edited: 2021-05-12 17:34)
Has anybody tried to back up a PostgreSQL DB using the bareos-fd-postgres python plugin?

(0004263)
perkons   
2021-09-13 15:38   
We are experiencing exactly the same issue on Ubuntu 18.04.
(0004297)
bruno-at-bareos   
2021-10-11 13:31   
To both of you: could you share the installed bareos packages (and confirm they're coming from bareos.org), the python3 version used,
and also the related python packages (main core + psycopg) and where they come from?
(0004298)
perkons   
2021-10-11 14:52   
We installed the bareos-filedaemon from https://download.bareos.org
The python modules are installed from the Ubuntu repositories. The reason we use both python and python3 modules is that if one is missing the backups fail. This seems pretty wrong to me, but as I understand it there is active work to migrate to python3.
We also have both of these python modules (python2 and python3) on our RHEL based hosts and have not had any problems with the PostgreSQL Plugin.

# dpkg -l | grep psycopg
ii python-psycopg2 2.8.4-1~pgdg18.04+1 amd64 Python module for PostgreSQL
ii python3-psycopg2 2.8.6-2~pgdg18.04+1 amd64 Python 3 module for PostgreSQL
# dpkg -l | grep dateutil
ii python-dateutil 2.6.1-1 all powerful extensions to the standard Python datetime module
ii python3-dateutil 2.6.1-1 all powerful extensions to the standard Python 3 datetime module
# dpkg -l | grep bareos
ii bareos-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-filedaemon 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-filedaemon-postgresql-python-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon PostgreSQL plugin
ii bareos-filedaemon-python-plugins-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin common files
ii bareos-filedaemon-python3-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin
# dpkg -s bareos-filedaemon
Package: bareos-filedaemon
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 384
Maintainer: Joerg Steffens <joerg.steffens@bareos.com>
Architecture: amd64
Source: bareos
Version: 20.0.1-3
Replaces: bacula-fd
Depends: bareos-common (= 20.0.1-3), lsb-base (>= 3.2-13), lsof, libc6 (>= 2.14), libgcc1 (>= 1:3.0), libjansson4 (>= 2.0.1), libstdc++6 (>= 5.2), zlib1g (>= 1:1.1.4)
Pre-Depends: debconf (>= 1.4.30) | debconf-2.0, adduser
Conflicts: bacula-fd
Conffiles:
 /etc/init.d/bareos-fd bcc61ad57fde8a771a5002365130c3ec
Description: Backup Archiving Recovery Open Sourced - file daemon
 Bareos is a set of programs to manage backup, recovery and verification of
 data across a network of computers of different kinds.
 .
 The file daemon has to be installed on the machine to be backed up. It is
 responsible for providing the file attributes and data when requested by
 the Director, and also for the file system-dependent part of restoration.
 .
 This package contains the Bareos File daemon.
Homepage: http://www.bareos.org/
# cat /etc/apt/sources.list.d/bareos-20.list
deb https://download.bareos.org/bareos/release/20/xUbuntu_18.04 /
#
(0004299)
bruno-at-bareos   
2021-10-11 15:48   
Thanks for your report. As you stated, the python/python3 situation is far from ideal, but PRs are progressing; the end of the tunnel is near.
Also, as you mentioned, there's no trouble on RHEL systems; I'm aware of that too.

I would have tried to use only python2 code on such a version.
I'll make a note about testing that with the future new code on Ubuntu 18... but I just can't say when.
(0004497)
bruno-at-bareos   
2022-02-02 10:46   
For the issue reported there's something that looks wrong.

File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

here it is /usr/local/lib/python3.5

And then

it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

/usr/lib/python3

So it seems you have a mixed python environment, which creates strange behaviour because the module loaded is not always the same.
Our best advice would be to clean up the global environment and make sure only one consistent version is used for bareos.

Also, python3 support has been greatly improved in Bareos 21.
Will close, as we are not able to reproduce such an environment.

BTW, the postgresql plugin is tested each time the code is updated.
(0004498)
bruno-at-bareos   
2022-02-02 10:47   
Mixed python versions used with different psycopg installations: /usr/local/lib/python3.5 and /usr/lib/python3.
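As a quick diagnostic (a minimal sketch; the two paths are the ones quoted in this report), one can check which psycopg2 module the plugin's interpreter actually imports:

```
# print the path of the psycopg2 module that python3 picks up
python3 -c 'import psycopg2; print(psycopg2.__file__)'
```

If this prints a path under /usr/local/lib (a pip-installed copy) instead of the distribution package under /usr/lib/python3, the environment is mixed as described above.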

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1418 [bareos-core] storage daemon major always 2022-01-04 14:23 2022-01-31 09:34
Reporter: Scorpionking83 Platform: Linux  
Assigned To: bruno-at-bareos OS: RHEL  
Priority: immediate OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Still autoprune and recycle not working in Bareos 19.2.7
Description: Dear developers,

I have still a problem wit autoprune and recycle tapes:
1. Everything works, but when it reaches the maximum number of volume tapes with a retention of 90 days, it cannot create any backups any more. Then I update the incremental pool:
update --> Option 2 "Pool from resource" --> Option 3 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool --> Option 1 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 2 Full
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 3 Incremental

2. I get the following error:
Volume "Incrementail-0001" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned

Max volume tapes is set to 400

But why do autoprune and recycle not work when the maximum number of volume tapes has been reached and the retention period has not yet expired?
Is it also possible to delete old tapes from file and from the database?

I need an answer soon.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004451)
Scorpionking83   
2022-01-04 14:36   
Why close this, but the issue is not resolved.
(0004452)
bruno-at-bareos   
2022-01-04 14:39   
This issue is the same as the report 001318 made by the same user.
This is clearly a duplicate case.
(0004493)
Scorpionking83   
2022-01-29 17:14   
Can someone please check my other bug report 0001318?
I am still looking for a solution.
(0004496)
bruno-at-bareos   
2022-01-31 09:34   
duplicate of 0001318

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1409 [bareos-core] director tweak always 2021-12-19 00:33 2022-01-13 14:22
Reporter: jalseos Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: low OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: DB error on restore with ExitOnFatal=true
Description: I was trying to use ExitOnFatal=true in director and noticed a persistent error when trying to initiate a restore:

bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error

The error does not happen with unset/default ExitOnFatal=false

The postgresql (11) log reveals:
STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist
STATEMENT: DROP TABLE temp1
ERROR: table "temp1" does not exist

I found the SQL statements in these files in the code:
/core/src/cats/dml/0018_uar_del_temp
/core/src/cats/dml/0019_uar_del_temp1

I am wondering if something like this might be in order: (like 0012_drop_deltabs.postgresql)
/core/src/cats/dml/0018_uar_del_temp.postgres
DROP TABLE IF EXISTS temp
Tags:
Steps To Reproduce: $ bconsole
* restore
9
bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error
Additional Information:
System Description
Attached Files:
Notes
(0004400)
bruno-at-bareos   
2021-12-21 15:58   
The behaviour is to exit in case of error when ExitOnFatal = true.

STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist

This is an error, and the product strictly obeys the parameter Exit On Fatal.

Now, with future versions, where only postgresql will be kept as the database and older postgresql versions will never be installed, the code can be reviewed to chase every DROP TABLE without an IF EXISTS.

Files to change

```
core/src/cats/dml/0018_uar_del_temp:DROP TABLE temp
core/src/cats/dml/0019_uar_del_temp1:DROP TABLE temp1
core/src/cats/mysql_queries.inc:"DROP TABLE temp "
core/src/cats/mysql_queries.inc:"DROP TABLE temp1 "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp1 "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp1 "
core/src/dird/query.sql:!DROP TABLE temp;
core/src/dird/query.sql:!DROP TABLE temp2;
```
Do you want to propose a PR for it?
(0004405)
bruno-at-bareos   
2021-12-21 16:50   
PR proposed
https://github.com/bareos/bareos/pull/1035

Once the PR has been built, there will be some testing packages available; would you like to test them?
(0004443)
jalseos   
2022-01-02 16:52   
Hi, thank you for looking into this issue! I will try to test the built package (deb preferred) if a subsequent code/package "downgrade" (ie. no Catalog DB changes, ...) to a published Community Edition release remains possible afterwards.
(0004473)
bruno-at-bareos   
2022-01-13 14:22   
Fix committed to bareos master branch with changesetid 15753.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1389 [bareos-core] installer / packages minor always 2021-09-20 12:23 2022-01-05 13:23
Reporter: colttt Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: no repository for debian 11
Description: Debian 11 (bullseye) was released on 14th august 2021 but there is no bareos repository yet.
I would appreciate if debian 11 would be supported as well.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004276)
bruno-at-bareos   
2021-09-27 13:37   
Thanks for your report.

Starting September 14th Debian 11 is available for all customers with a subscription contract.
Nightly builds are also made for Debian 11, and Debian 11 will be part of the Bareos 21 release.
(0004292)
brechsteiner   
2021-10-02 22:51   
What about the Community Repository? https://download.bareos.org/bareos/release/20/
(0004293)
bruno-at-bareos   
2021-10-04 09:30   
Sorry if it wasn't clear in my previous statement: Debian 11 will be available with the next release, Bareos 21.
(0004455)
bruno-at-bareos   
2022-01-05 13:23   
Community repository published: https://download.bareos.org/bareos/release/21/Debian_11/
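For reference, a minimal sketch of an apt source entry for that repository, assuming the same plain "deb URL /" layout used for the release 20 repositories quoted elsewhere in this tracker (check the published installation instructions for the authoritative form):

```
# /etc/apt/sources.list.d/bareos.list (sketch)
deb https://download.bareos.org/bareos/release/21/Debian_11 /
```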

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1408 [bareos-core] director minor have not tried 2021-12-18 20:32 2021-12-28 09:44
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "Backup OK" email message subject line no longer displays the job name
Description: In bareos 18, backups which concluded successfully would be followed up by an email with a subject line indicating the name of the specific job that ran. However, in bareos 20, the subject line now only indicates the name of the client for which the job ran.

This is a minor nuisance, but I found the more distinguishing subject line to be more useful. In a case where there are multiple backup jobs for a single client where one but not all jobs fail, it is not immediately obvious -- as it was in bareos 18 -- as to which job for that client failed.
Tags:
Steps To Reproduce: Run two jobs on a host which has more than 1 backup job associated with it.
The email subject lines will be identical even though they are for 2 different jobs.
Additional Information:
System Description
Attached Files:
Notes
(0004401)
bruno-at-bareos   
2021-12-21 16:05   
Maybe an example of the configuration files used would help.

From the code we can see that the line has not changed since 2016:
67ad14188a src/defaultconfigs/bareos-dir.d/messages/Standard.conf.in (Joerg Steffens 2016-08-01 14:03:06 +0200 5) mailcommand = "@bindir@/bsmtp -h @smtp_host@ -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
(0004415)
embareossed   
2021-12-24 17:58   
Here is what my configs look like:
# grep mailcommand *
Daemon.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
Standard.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"

All references to message resources are for Standard, except for the director which uses Daemon. I copied most of my config files from the old director (bareos 18) to the setup for the new director (bareos 20); I did not make any changes to messages, afair. I'll take a deeper look at this and see what I can figure out. Maybe bsmtp semantics have changed?
(0004416)
embareossed   
2021-12-24 18:12   
OK, it appears that in bareos 20, as per doc, the %c stands for the client, not the jobname (which should be %n). However, in bareos 18 and prior, this same setup seems to be generating the jobname, not the clientname. So it appears that the semantics have changed to properly implement the documented purpose of the %c macro (and perhaps others; I haven't tested those).

Changing the macro to %n works as desired.
(0004428)
bruno-at-bareos   
2021-12-28 09:44   
Adapting configuration following documentation
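For reference, a minimal sketch of the adapted directive (the mailcommand quoted above with only the %c placeholder swapped for %n, the job name):

```
mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %n %l\" %r"
```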

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1407 [bareos-core] General minor always 2021-12-18 20:26 2021-12-28 09:43
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Run before script hangs unless debug tracing enabled in script
Description: A "run before job script" which has been working since bareos 18 no longer works in bareos 20 (20.0.3 or 20.0.4). This is a simple bash shell script that performs some chores before the backup. It only sends output to stdout, not stderr (I've checked this). The script works properly in bareos 18, but causes the job to hang in bareos 20.

The script is actually being run on a remote file daemon. This may be a clue to the behavior. But again, this has been working in bareos 18.

Interestingly, I found that enabling bash tracing (-xv options) inside the script itself, to try to see what was causing the hang, actually alleviated the hang!
Tags:
Steps To Reproduce: Create a bash shell script on a remote bareos 20 client.
Create a job in a bareos 20 director on a local system that calls a "run before job script" on the remote client.
Run the job.
If this is reproducible, the job will hang when it reaches the call to the remote script.

If this is reproducible, try setting traces in the bash script.

Additional Information: I built the 20.0.3 executables from the git source code on a devuan beowulf host and distributed the packages to the bareos director server and the bareos file daemon client, both of which are also devuan beowulf.
System Description
Attached Files:
Notes
(0004403)
bruno-at-bareos   
2021-12-21 16:10   
Would you mind sharing the job definition so we can try to reproduce?
The script would be nice too, but perhaps it does something secret.
(0004404)
bruno-at-bareos   
2021-12-21 16:17   
I can't reproduce it; it works here

with this job definition

```
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Start Backup JobId 8204, Job=yoda.2021-12-21_16.14.10_06
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Using Device "admin" to write.
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Probing client protocol... (result will be saved until config reload)
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Client: yoda-fd at yoda.labaroche.ioda.net:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Handshake: Immediate TLS 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: shell command: run ClientBeforeJob "sh -c 'snapper list && snapper -c ioda list'"
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+----------------------------------+------+------------+----------+-----------------------+--------------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 1* | single | | Sun 21 Jun 2020 05:17:47 PM CEST | root | 92.00 KiB | | first root filesystem |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 4803 | single | | Fri 01 Jan 2021 12:00:23 AM CET | root | 13.97 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 10849 | single | | Wed 01 Sep 2021 12:00:02 AM CEST | root | 12.58 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 11582 | single | | Fri 01 Oct 2021 12:00:01 AM CEST | root | 7.90 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 12342 | single | | Mon 01 Nov 2021 12:00:08 AM CET | root | 8.07 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Wed 01 Dec 2021 12:00:07 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13272 | pre | | Wed 08 Dec 2021 06:23:04 PM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13273 | post | 13272 | Wed 08 Dec 2021 06:46:13 PM CET | root | 3.28 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13278 | pre | | Wed 08 Dec 2021 10:11:11 PM CET | root | 304.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13279 | post | 13278 | Wed 08 Dec 2021 10:11:26 PM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13447 | pre | | Wed 15 Dec 2021 09:57:35 PM CET | root | 48.00 KiB | number | zypp(zypper) | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13448 | post | 13447 | Wed 15 Dec 2021 09:57:42 PM CET | root | 48.00 KiB | number | | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13499 | single | | Sat 18 Dec 2021 12:00:06 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13523 | single | | Sun 19 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13547 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13557 | pre | | Mon 20 Dec 2021 09:27:21 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13559 | pre | | Mon 20 Dec 2021 10:30:43 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13560 | post | 13559 | Mon 20 Dec 2021 10:52:01 AM CET | root | 1.76 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13562 | pre | | Mon 20 Dec 2021 11:53:40 AM CET | root | 352.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13563 | post | 13562 | Mon 20 Dec 2021 11:53:56 AM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13576 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13585 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13586 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13587 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13588 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13589 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13590 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13591 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13592 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | 92.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+---------------------------------+------+----------+-------------+---------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13050 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13061 | single | | Mon 20 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13062 | single | | Mon 20 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13063 | single | | Mon 20 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13064 | single | | Mon 20 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13065 | single | | Mon 20 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13066 | single | | Mon 20 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13067 | single | | Mon 20 Dec 2021 05:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13068 | single | | Mon 20 Dec 2021 06:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13069 | single | | Mon 20 Dec 2021 07:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13070 | single | | Mon 20 Dec 2021 08:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13071 | single | | Mon 20 Dec 2021 09:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13072 | single | | Mon 20 Dec 2021 10:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13073 | single | | Mon 20 Dec 2021 11:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13074 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13075 | single | | Tue 21 Dec 2021 01:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13076 | single | | Tue 21 Dec 2021 02:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13077 | single | | Tue 21 Dec 2021 03:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13078 | single | | Tue 21 Dec 2021 04:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13079 | single | | Tue 21 Dec 2021 05:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13080 | single | | Tue 21 Dec 2021 06:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13081 | single | | Tue 21 Dec 2021 07:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13082 | single | | Tue 21 Dec 2021 08:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13083 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13084 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13086 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13087 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13088 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13089 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13090 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: Extended attribute support is enabled
 2021-12-21 16:14:36 yoda-fd JobId 8204: ACL support is enabled
 
 RunScript {
    RunsWhen = Before
    RunsOnClient = Yes
    FailJobOnError = No
    Command = "sh -c 'snapper list && snapper -c ioda list'"
  }

```
(0004414)
embareossed   
2021-12-24 17:45   
Nothing secret really. It's just a script that runs "estimate" and parses the output for the size of the backup. Then it decides (based on a value in a config file for the backup name) whether to proceed or not. This way, estimates can be used to determine whether to proceed with a backup or not. This was my workaround to my request in https://bugs.bareos.org/view.php?id=1135.

I did some upgrades recently and the problem has disappeared. So you can close this.
(0004427)
bruno-at-bareos   
2021-12-28 09:43   
The upgrade solved this.
estimate can take time and, from the bconsole point of view, can look like it is stalled or blocked; when you use the "listing" instruction you'll see the file-by-file progress output.
Closing.
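For reference, a hedged sketch of an estimate call with the listing keyword mentioned above (the job name is a placeholder):

```
*estimate job=example-backup-job level=Incremental listing
```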

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1413 [bareos-core] bconsole major always 2021-12-27 15:29 2021-12-28 09:38
Reporter: jcottin Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: high OS Version: 10  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.
Description: I configured the Always Incremental scheme using 2 different storages (FILE) as advised in the documentation.
-----------------------------------
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html?highlight=job#storages-and-pools

While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

It looks for AI-Incremental-vm-aiqi-linux-test-backup0012 in FileStorage-AI-Consolidated.
It should look for it in FileStorage-AI-Incremental.

Is there a problem with my setup?
Tags: always incremental, storage
Steps To Reproduce: Using bconsole, I target a backup before: 2021-12-27 19:00:00
I can find 3 backups (1 Full, 2 Incremental)
=======================================================
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| 24 | F | 108,199 | 13,145,763,765 | 2021-12-25 08:06:41 | AI-Consolidated-vm-aiqi-linux-test-backup-0006 |
| 27 | I | 95 | 68,530 | 2021-12-25 20:00:04 | AI-Incremental-vm-aiqi-linux-test-backup0008 |
| 32 | I | 40 | 1,322,314 | 2021-12-26 20:00:09 | AI-Incremental-vm-aiqi-linux-test-backup0012 |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
-----------------------------------
$ cd /var/lib/mysql.dumps/wordpressdb/
cwd is: /var/lib/mysql.dumps/wordpressdb/
-----------------------------------
$ dir
-rw-r--r-- 1 0 (root) 112 (bareos) 1830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%create.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 149 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%tables
-rw-r--r-- 1 0 (root) 112 (bareos) 783 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_commentmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1161 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_comments.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 869 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_links.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 235966 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_options.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_postmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 3470 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_posts.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 770 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_relationships.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 838 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_taxonomy.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 780 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_termmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 814 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_terms.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1404 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_usermeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 983 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_users.sql.gz
-----------------------------------
$ cd ..
cwd is: /var/lib/mysql.dumps/
-----------------------------------
I mark the folder:
$ mark /var/lib/mysql.dumps/wordpressdb
15 files marked.
$ done
-----------------------------------
The job will require the following
   Volume(s) Storage(s) SD Device(s)
============================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

Volumes marked with "*" are online.
18 files selected to be restored.

Using Catalog "MyCatalog"
Run Restore job
JobName: RestoreFiles
Bootstrap: /var/lib/bareos/bareos-dir.restore.2.bsr
Where: /tmp/bareos-restores
Replace: Always
FileSet: LinuxAll
Backup Client: vm-aiqi-linux-test-backup-fd
Restore Client: vm-aiqi-linux-test-backup-fd
Format: Native
Storage: FileStorage-AI-Consolidated
When: 2021-12-27 22:10:13
Catalog: MyCatalog
Priority: 10
Plugin Options: *None*
OK to run? (yes/mod/no): yes

I get these two messages.
============================================
27-Dec 22:15 bareos-sd JobId 43: Warning: stored/acquire.cc:286 Read open device "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated) Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" failed: ERR=stored/dev.cc:716 Could not open: /var/lib/bareos/storage-AI-Consolidated/AI-Incremental-vm-aiqi-linux-test-backup0012, ERR=No such file or directory

27-Dec 22:15 bareos-sd JobId 43: Please mount read Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" for:
    Job: RestoreFiles.2021-12-27_22.15.29_31
    Storage: "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated)
    Pool: Incremental-BareOS
    Media type: File
============================================

Bareos tries to find AI-Incremental-vm-aiqi-linux-test-backup0012 in the wrong storage.
Additional Information:
===========================================
Job {
  Name = vm-aiqi-linux-test-backup-job
  Client = vm-aiqi-linux-test-backup-fd

  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 30 days
  Always Incremental Keep Number = 15
  Always Incremental Max Full Age = 60 days

  Level = Incremental
  Type = Backup
  FileSet = "LinuxAll-vm-aiqi-linux-test-backup" # LinuxAll fileset (0000013)
  Schedule = "WeeklyCycleCustomers"
  Storage = FileStorage-AI-Incremental
  Messages = Standard
  Pool = AI-Incremental-vm-aiqi-linux-test-backup
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = AI-Consolidated-vm-aiqi-linux-test-backup # write Full Backups into "Full" Pool (0000005)
  Incremental Backup Pool = AI-Incremental-vm-aiqi-linux-test-backup # write Incr Backups into "Incremental" Pool (0000011)

  Enabled = yes

  RunScript {
    FailJobOnError = Yes
    RunsOnClient = Yes
    RunsWhen = Before
    Command = "sh /SCRIPTS/mysql/pre.mysql.sh"
  }

  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}
===========================================
Pool {
  Name = AI-Consolidated-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Full Backups be kept? (0000006)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Consolidated-vm-aiqi-linux-test-backup-" # Volumes will be labeled "Full-<volume-id>"
  Storage = FileStorage-AI-Consolidated
}
===========================================
Pool {
  Name = AI-Incremental-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Incremental Backups be kept? (0000012)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Incremental-vm-aiqi-linux-test-backup" # Volumes will be labeled "Incremental-<volume-id>"
  Volume Use Duration = 23h
  Storage = FileStorage-AI-Incremental
  Next Pool = AI-Consolidated-vm-aiqi-linux-test-backup
}

Both volumes are available in their respective storages.

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006
-rw-r----- 1 bareos bareos 26349467738 Dec 25 08:09 /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
-rw-r----- 1 bareos bareos 1329612 Dec 26 20:00 /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
System Description
Attached Files: Bareos-always-incremental-restore-fail.txt (7,259 bytes) 2021-12-27 15:53
https://bugs.bareos.org/file_download.php?file_id=487&type=bug
Notes
(0004421)
jcottin   
2021-12-27 15:53   
The attached TXT output might be easier to read.
(0004422)
jcottin   
2021-12-27 16:32   
Device {
  Name = FileStorage-AI-Consolidated
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Consolidated
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Device {
  Name = FileStorage-AI-Incremental
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Incremental
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Consolidated
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Incremental
  Media Type = File
}
(0004423)
jcottin   
2021-12-27 16:43   
The documentation says 2 storages.
But I had created 2 devices.

1 storage => 1 device.

I moved the data from one device (FILE: directory) to the other.
2 storages => 1 device.

Problem solved.
(0004425)
bruno-at-bareos   
2021-12-28 09:37   
Thanks for sharing. Yes, when the documentation talks about 2 storages it means the director view, not the bareos storage daemon having 2 devices.
I close the issue.
(0004426)
bruno-at-bareos   
2021-12-28 09:38   
AI needs 2 storages on the director, but one device able to read/write both Incremental and Full volumes.
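A minimal sketch of that layout (illustrative names, not the exact configuration from this report): two director Storage resources that both point to a single SD device, so restores can read Incremental and Consolidated volumes from the same place.

```
Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server
  Password = "secret"
  Device = FileStorage-AI     # single SD device shared by both director storages
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server
  Password = "secret"
  Device = FileStorage-AI     # same device name as above
  Media Type = File
}
```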

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1339 [bareos-core] webui minor always 2021-04-19 11:49 2021-12-23 08:39
Reporter: khvalera Platform:  
Assigned To: frank OS:  
Priority: normal OS Version: archlinux  
Status: assigned Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: When going to the Run jobs tab I get an error
Description: When going to the Run jobs tab (https://127.0.0.1/bareos-webui/job/run/) I get an error:
Notice: Undefined index: value in /usr/share/webapps/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php on line 152
Tags: webui
Steps To Reproduce: https://127.0.0.1/bareos-webui/job/run/
Additional Information:
Attached Files: Снимок экрана_2021-04-19_12-52-56.png (110,528 bytes) 2021-04-19 11:54
https://bugs.bareos.org/file_download.php?file_id=464&type=bug
png
Notes
(0004112)
khvalera   
2021-04-19 11:54   
I am attaching a screenshot:
(0004156)
khvalera   
2021-06-11 22:36   
You need to correct the expression: preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches);
(0004157)
khvalera   
2021-06-11 22:39   
I temporarily corrected it myself so that the error does not appear: preg_match('/\s*Pool\s*=?(?<value>.*)(?(1)\1|)/i', $result, $matches);
But most likely this is not the right solution.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1397 [bareos-core] documentation minor always 2021-11-01 16:45 2021-12-21 16:07
Reporter: Norst Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "Tapespeed and blocksizes" chapter location is wrong
Description: "Tapespeed and blocksizes" chapter is a general topic. Therefore, it must be moved away from "Autochanger Support" page/category.
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#setblocksizes
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004360)
bruno-at-bareos   
2021-11-25 10:35   
Would you like to propose a PR changing the location? It would be really appreciated.
Are you doing backups on tape with a single drive? (Most of the use cases we actually see use an autochanger, which is why the chapter is currently located there.)
(0004376)
Norst   
2021-11-30 21:01   
(Last edited: 2021-11-30 21:03)
Yes, I use a standalone tape drive, but for infrequent, long-term archiving rather than regular backups.

PR to move "Tapespeed and blocksizes" one level up, to "Tasks and Concepts": https://github.com/bareos/bareos/pull/1009

(0004383)
bruno-at-bareos   
2021-12-09 09:42   
Did you see the comment in the PR ?
(0004402)
bruno-at-bareos   
2021-12-21 16:07   
PR#1009 merged last week.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1369 [bareos-core] webui crash always 2021-07-12 11:54 2021-12-21 13:58
Reporter: jarek_herisz Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: webui tries to load a nonexistent file
Description: When the Polish language is chosen at the login stage, Webui tries to load the file:
bareos-webui/js/locale/pl_PL/LC_MESSAGES/pl_PL.po

Such a file does not exist, which results in an error:
i_gettext.js:413 iJS-gettext:'try_load_lang_po': failed. Unable to exec XMLHttpRequest for link

The remaining JavaScript is terminated and the interface becomes inoperable.
Tags: webui
Steps To Reproduce: With version 20.0.1
On the webui login page, select Polish.
Additional Information:
System Description
Attached Files: Przechwytywanie.PNG (78,772 bytes) 2021-07-19 10:36
https://bugs.bareos.org/file_download.php?file_id=472&type=bug
png
Notes
(0004182)
jarek_herisz   
2021-07-19 10:36   
System:
root@backup:~# cat /etc/debian_version
10.10
(0004206)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1324 [bareos-core] webui major always 2021-03-02 10:26 2021-12-21 13:57
Reporter: Emmanuel Garette Platform: Linux Ubuntu  
Assigned To: frank OS: 20.04  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Infinite loop when trying to log with invalid account
Description: I'm using this community version of Webui: http://download.bareos.org/bareos/release/20/xUbuntu_20.04/

When I try to log in with an invalid account, webui returns nothing and Apache seems to run in an infinite loop. The log file grows rapidly.

I think the problem is in these two lines:

          $send = fwrite($this->socket, $msg, $str_length);
         if($send === 0) {

The fwrite function returns false when an error occurs (see: https://www.php.net/manual/en/function.fwrite.php ).

If I replace 0 with false, everything is OK.

Attached is a patch to solve this issue.
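A minimal, self-contained PHP sketch of the corrected check (not the webui's BareosBSock.php itself; the host and port are assumed): fwrite() returns false on failure, so a strict comparison against 0 alone misses the error case.

<?php
// Hypothetical example: write to a socket and detect both failure modes.
$socket = @stream_socket_client('tcp://127.0.0.1:9101', $errno, $errstr, 5);
if ($socket !== false) {
    $msg = "Hello Director\n";
    $send = fwrite($socket, $msg, strlen($msg));
    if ($send === false || $send === 0) {
        // fwrite() failed or wrote nothing: stop instead of looping forever.
        fclose($socket);
    }
}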
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: webui.patch (483 bytes) 2021-03-02 10:26
https://bugs.bareos.org/file_download.php?file_id=458&type=bug
Notes
(0004163)
frank   
2021-06-28 15:22   
Fix committed to bareos master branch with changesetid 15006.
(0004165)
frank   
2021-06-29 14:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15017.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1316 [bareos-core] storage daemon major always 2021-01-30 10:01 2021-12-21 13:57
Reporter: kardel Platform:  
Assigned To: franku OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: storage daemon loses a configured device instance causing major confusion in device handling
Description: After start, "status storage=<name>" shows the device as not open or with its parameters - that is expected.

After the first backup with spooling, "status storage=<name>" shows the device as "not open or does not exist" - that is a hint
=> the configured "device_resource->dev" value is nullptr.

The follow up effects are that the reservation code is unable to match the same active device and the same volume in all cases.
When the match fails (log shows "<name> (/dev/<tapename>) and "<name> (/dev/<tapename>) " with no differences) it attempts to allocate new volumes possibly with operator intervention even though the expected volume is available in the drive.

The root cause is a temporary device created in spool.cc::295 => auto rdev(std::make_unique<SpoolDevice>());
Line 302 sets device resource rdev->device_resource = dcr->dev->device_resource;
When rdev leaves scope the Device::~Device() Dtor is called which happily sets this.device_resource->dev = nullptr in
dev.cc:1281 if (device_resource) { device_resource->dev = nullptr; } (=> potential memory leak)

At this point the configured device_resource is lost (even though it might still be known by active volume reservations).
After that the reservation code is completely confused due to new default allocations of devices (see additional info).

A fix is provided as patch against 20.0.0. It only clears this.device_resource->dev when
this.device_resource->dev references this instance.
Tags:
Steps To Reproduce: start bareos system.
observe "status storage=..."
run a spooling job
observer "status storage=..."

If you want to see the confusion it involves a more elaborate test setup with multipe jobs where a spooling job finishes before
another job for the same volume and device begins to run.
Additional Information: It might be worthwhile to check the validity of creating a device in dir_cmd.cc:932. During testing
a difference in device pointers was seen in vol_mgr.cc:916 although the device parameters were the same.
This is most likely caused by Device::this.device_resource->dev being a nullptr and the device creation
in dir_cmd.cc:932. The normal expected lifetime of a device is from reading the configuration until the
program termination. Autochanger support might change that rule though - I didn't analyze that far.
Attached Files: dev.cc.patch (568 bytes) 2021-01-30 10:01
https://bugs.bareos.org/file_download.php?file_id=455&type=bug
Notes
(0004088)
franku   
2021-02-12 12:15   
Thank you for your deep analysis and the proposed fix which solves the issue.

See github PR https://github.com/bareos/bareos/pull/724/commits for more information on the fix and systemtests (which is draft at the time of adding this note).
(0004089)
franku   
2021-02-15 11:38   
Experimental binaries with the proposed bugfix can be found here: http://download.bareos.org/bareos/experimental/CD/PR-724/
(0004091)
franku   
2021-02-24 13:22   
Fix committed to bareos master branch with changesetid 14543.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1300 [bareos-core] webui minor always 2021-01-11 16:27 2021-12-21 13:57
Reporter: fapg Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: some job status are not categorized properly
Description: in the dashboard, when we click in waiting jobs, the url is:

https://bareos-server/bareos-webui/job//?period=1&status=Waiting
but should be:
https://bareos-server/bareos-webui/job//?period=1&status=Queued

Best Regards,
Fernando Gomes
Tags:
Steps To Reproduce:
Additional Information: affects table column filter
System Description
Attached Files:
Notes
(0004168)
frank   
2021-06-29 18:45   
It's not a query parameter issue. WebUI categorizes all the different job status flags into groups. I had a look into it and some job statuses are not categorized properly, so the column filter on the table does not work as expected in those cases. A fix will follow.
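For illustration only, a hypothetical PHP sketch of such a status-to-group mapping (the real WebUI code differs; the group names and status characters below are assumptions): a job status character that is missing from the map falls through, which is exactly the situation where the column filter stops matching.

<?php
// Hypothetical mapping of Bareos job status characters to filter groups.
function jobStatusGroup(string $status): string
{
    $groups = [
        'Running'  => ['R'],
        'Queued'   => ['C', 'c', 'd', 'j', 'm', 'M', 's', 't', 'p', 'q', 'F', 'S'],
        'Success'  => ['T'],
        'Warning'  => ['W'],
        'Failed'   => ['E', 'f'],
        'Canceled' => ['A'],
    ];
    foreach ($groups as $group => $codes) {
        if (in_array($status, $codes, true)) {
            return $group;
        }
    }
    return 'Unknown'; // an uncategorized status ends up here and breaks filtering
}

echo jobStatusGroup('F'); // a "waiting" status is treated as Queued in this sketch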
(0004175)
frank   
2021-07-06 11:22   
Fix committed to bareos master branch with changesetid 15053.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1251 [bareos-core] webui tweak always 2020-06-11 09:13 2021-12-21 13:57
Reporter: juanpebalsa Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: low OS Version: 7  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error when displaying pool detail
Description: When I try to see the details of a pool under Storage -> Pools -> 15-Days (one of my pools), I get an error message because the page cannot be found.

http://xxxxxxxxx.com/bareos-webui/pool/details/15-Days:
|A 404 error occurred
|Page not found.
|The requested URL could not be matched by routing.
|
|No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Captura de pantalla 2020-06-11 a las 9.13.02.png (20,870 bytes) 2020-06-11 09:13
https://bugs.bareos.org/file_download.php?file_id=442&type=bug
png
Notes
(0004207)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15094.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1232 [bareos-core] installer / packages minor always 2020-04-21 09:26 2021-12-21 13:57
Reporter: rogern Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 8  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos logrotate errors
Description: Problem with logrotate seems to be back (previously addressed and fixed in 0000417) due to missing

su bareos bareos

in /etc/logrotate.d/bareos-dir

Logrotate gives "error: skipping "/var/log/bareos/bareos.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation."
Also the same for bareos-audit.log
Tags:
Steps To Reproduce: Two fresh installs of 19.2.7 with same error from logrotate and lacking "su bareos bareos" in /etc/logrotate.d/bareos-dir
Additional Information:
Attached Files:
Notes
(0004256)
bruno-at-bareos   
2021-09-08 13:46   
A PR is now proposed, with backports to the supported versions as well:
https://github.com/bareos/bareos/pull/918
(0004259)
bruno-at-bareos   
2021-09-09 15:07   
PR#918 has been merged; backports to 20, 19 and 18 will be made on Monday the 13th and will be available in the next minor release.
(0004260)
bruno-at-bareos   
2021-09-09 15:22   
Fix committed to bareos master branch with changesetid 15139.
(0004261)
bruno-at-bareos   
2021-09-09 16:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15141.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1205 [bareos-core] webui minor always 2020-02-28 09:42 2021-12-21 13:57
Reporter: Ryushin Platform: Linux  
Assigned To: frank OS: Devuan (Debian)  
Priority: normal OS Version: 10  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: HeadLink.php error with PHP 7.3
Description: I received this error when trying to connect to the webui:
Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403

Seems to be related to this issue:
https://github.com/zendframework/zend-view/issues/172#issue-388080603
Though the line numbers for their fix are not the same.
Tags:
Steps To Reproduce:
Additional Information: I solved the issue by replacing the HeadLink.php file with an updated version from here:
https://raw.githubusercontent.com/zendframework/zend-view/f7242f7d5ccec2b8c319634b4098595382ef651c/src/Helper/HeadLink.php
Attached Files:
Notes
(0004144)
frank   
2021-06-08 12:22   
Fix committed to bareos bareos-19.2 branch with changesetid 14922.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1191 [bareos-core] webui crash always 2020-02-12 15:40 2021-12-21 13:57
Reporter: khvalera Platform: Linux  
Assigned To: frank OS: Arch Linux  
Priority: high OS Version: x64  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: The web interface accepts any login and password
Description: You can log in to the web interface with any arbitrary username and password.
How can this be fixed?
Tags: webui
Steps To Reproduce: /etc/bareos/bareos-dir.d/console/web-admin.conf

Console {
  Name = web-admin
  Password = "123"
  Profile = "webui-admin"
}

/etc/bareos/bareos-dir.d/profile/webui-admin.conf

Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

/etc/bareos-webui/directors.ini

[bareos_dir]
enabled = "yes"
diraddress = "localhost"
dirport = 9101
;UsePamAuthentication = yes
pam_console_name = "web-admin"
pam_console_password = "123"
Additional Information:
Attached Files:
Notes
(0003936)
khvalera   
2020-04-10 00:10   
UsePamAuthentication = yes
#pam_console_name = "web-admin"
#pam_console_password = "123"
(0004289)
frank   
2021-09-29 18:22   
Fix committed to bareos master branch with changesetid 15252.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1020 [bareos-core] webui major always 2018-10-10 09:37 2021-12-21 13:56
Reporter: linkstat Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 18.2.4-rc1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Can not restore a client with spaces in its name
Description: All my clients have names with spaces in them, like "client-fd using Catalog-XXX". Correctly handled (i.e., enclosing the name in quotation marks, or escaping the space with \), this has never been a problem... until now. Webui can even perform backup jobs (previously defined in the configuration files) and has had no problems with the spaces. But when it came time to restore something... it just does not seem able to handle strings that contain spaces. Apparently it cuts the string at the first space found (as you can see in the attached image).
Tags:
Steps To Reproduce: Define a client whose name contains spaces inside, such as "hostname-fd Testing Client".
Try to restore a backup from Webui to that client (it does not matter that the backup was originally made in that client or that the newly defined client is a new destination for the restoration of a backup previously made in another client).
Webui will fail, saying "invalid client argument: hostname-fd". As you can see, Webui truncates the client's name at the first space, and since there is no hostname-fd client, the task will fail; or worse, if there were additionally a client whose name matched the part before the first space, Webui would restore to the wrong client.
Additional Information: bconsole does not present any problem when the clients contain spaces in their names (this of course, when the spaces are correctly handled by the human operator who writes the commands, either by enclosing the name with quotation marks, or escaping spaces with a backslash).
System Description
Attached Files: Bareos - Can not restore when a client name has spaces in their name.jpg (139,884 bytes) 2018-10-10 09:37
https://bugs.bareos.org/file_download.php?file_id=311&type=bug
jpg
Notes
(0003546)
linkstat   
2019-07-31 18:03   
Hello!

Any news regarding this problem? (or any ideas about how to patch it temporarily so that you can use webui for the case described)?
Sometimes it is tedious to use bconsole all the time instead of webui ...

Regards!
(0004185)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15068.
(0004188)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15079.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
971 [bareos-core] webui major always 2018-06-25 11:54 2021-12-21 13:56
Reporter: Masanetz Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 17.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error building tree for filenames with backslashes
Description: WebUI Restore fails building the tree if directory contains filenames with backslashes.

Some time ago the adobe reader plugin created a file named "C:\nppdf32Log\debuglog.txt" in the working dir.
Building the restore tree in WebUI fails with popup "Oops, something went wrong, probably too many files.".

Filename handling should be adapted for backslashes (e.g. like https://github.com/bareos/bareos-webui/commit/ee232a6f04eaf2a7c1084fee981f011ede000e8a)
Tags:
Steps To Reproduce: 1. Put an empty file with a filename with backslashes (e.g. C:\nppdf32Log\debuglog.txt) in your home directory
2. Backup
3. Try to restore any file from your home directory from this backup via WebUI
Additional Information: Attached diff of my "workaround"
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: RestoreController.php.diff (1,669 bytes) 2018-06-25 11:54
https://bugs.bareos.org/file_download.php?file_id=299&type=bug
Notes
(0004184)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15067.
(0004189)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15080.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
871 [bareos-core] webui block always 2017-11-04 16:10 2021-12-21 13:56
Reporter: tuxmaster Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.4.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: UI will not load complete
Description: After login, the website does not load completely.
Only the spinner is shown (see picture).

The PHP error log is flooded with:
PHP Notice: Undefined index: meta in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 120

The bareos director is running version 16.2.7.
Tags:
Steps To Reproduce:
Additional Information: PHP 7.1 via fpm
System Description
Attached Files: Bildschirmfoto von »2017-11-04 16-06-19«.png (50,705 bytes) 2017-11-04 16:10
https://bugs.bareos.org/file_download.php?file_id=270&type=bug
png
Notes
(0002812)
frank   
2017-11-09 15:35   
DIRD and WebUI need to have the same version currently.

WebUI 17.2.4 is not compatible with a 16.2.7 director yet, which may change in the future.
(0002813)
tuxmaster   
2017-11-09 17:36   
Thanks for the information.
But this should be noted in the release notes, or better, result in an error message about an unsupported version.
(0004169)
frank   
2021-06-30 11:49   
There is a note in the installation chapter, see https://docs.bareos.org/master/IntroductionAndTutorial/InstallingBareosWebui.html#system-requirements .
Nevertheless, I'm going to have a look at whether we can somehow improve the error handling regarding version compatibility.
(0004176)
frank   
2021-07-06 17:22   
Fix committed to bareos master branch with changesetid 15057.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
579 [bareos-core] webui block always 2015-12-06 12:41 2021-12-21 13:56
Reporter: tuxmaster Platform: x86_64  
Assigned To: frank OS: Fedora  
Priority: normal OS Version: 22  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to connect to the director from webui via ipv6
Description: The web ui and the director are running on the same system.
After entering the password, the error message "Error: , director seems to be down or blocking our request." is presented.
Tags:
Steps To Reproduce: Open the website enter the credentials and try to log in.
Additional Information: getsebool httpd_can_network_connect
httpd_can_network_connect --> on

Error from the apache log file:
[Sun Dec 06 12:37:32.658104 2015] [:error] [pid 2642] [client ABC] PHP Warning: stream_socket_client(): unable to connect to tcp://[XXX]:9101 (Unknown error) in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 521, referer: http://CDE/bareos-webui/

XXX=ip6addr of the director.

Connecting from the web server via telnet to the IPv6 address on port 9101 works.
bconsole also works.
Attached Files:
Notes
(0002323)
frank   
2016-07-15 16:07   
Note: When specifying a numerical IPv6 address (e.g. fe80::1), you must enclose the IP in square brackets—for example, tcp://[fe80::1]:80.

http://php.net/manual/en/function.stream-socket-client.php

You could try putting your IPv6 address in square brackets in your directors.ini until we provide a fix; that might already work.
(0002324)
tuxmaster   
2016-07-15 17:04   
I have tried setting it to:
diraddress = "[XXX]"
where XXX is the IPv6 address.

But the error is the same.
(0002439)
tuxmaster   
2016-11-06 12:09   
Same on Fedora 24 using php 7.0
(0004159)
pete   
2021-06-23 12:41   
(Last edited: 2021-06-23 12:55)
This is still present in version 20 of the Bareos WebUI, on all RHEL variants I tested (CentOS 8, AlmaLinux 8).

It results from a totally unnecessary "bindto" configuration in line 473 of /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php:

      $opts = array(
          'socket' => array(
              'bindto' => '0:0',
          ),
      );

This unnecessarily limits PHP socket binding to IPv4 interfaces as documented in https://www.php.net/manual/en/context.socket.php. The simplest solution is to just comment out the "bindto" line:

      $opts = array(
          'socket' => array(
              // 'bindto' => '0:0',
          ),
      );

Restart php-fpm and now IPv6 works perfectly

(0004167)
frank   
2021-06-29 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15043.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1388 [bareos-core] regression testing block always 2021-09-20 12:02 2021-11-29 09:12
Reporter: mschiff Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: /.../sd_reservation.cc: error: sleep_for is not a member of std::this_thread (maybe gcc-11 related)
Description: It seems like the tests do not build when using gcc-11:


/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc: In function ‘void WaitThenUnreserve(std::unique_ptr<TestJob>&)’:
/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc:147:21: error: ‘sleep_for’ is not a member of ‘std::this_thread’
  147 | std::this_thread::sleep_for(std::chrono::milliseconds(10));
      | ^~~~~~~~~
ninja: build stopped: subcommand failed.
Tags:
Steps To Reproduce:
Additional Information: Please see: https://bugs.gentoo.org/786789
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004288)
bruno-at-bareos   
2021-09-29 13:58   
Would you mind testing with the new 19.2.11 release, or even with the not-yet-released branch-12.2?
Building here with gcc-11 under openSUSE Tumbleweed works as expected.
(0004328)
bruno-at-bareos   
2021-11-10 10:05   
Hello, did you make any progress on this ?

As 19.2 will soon be obsolete, did you try to compile version 20?
(0004368)
mschiff   
2021-11-27 12:23   
Hi!

Sorry for the late answer. All current versions build fine with gcc-11 here:
 - 18.2.12
 - 19.2.11
 - 20.0.3

Thanks!
(0004369)
bruno-at-bareos   
2021-11-29 09:11   
Ok thanks I will close then,
(0004370)
bruno-at-bareos   
2021-11-29 09:12   
Gentoo got gcc-11 fixed.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1151 [bareos-core] webui feature always 2019-12-12 09:25 2021-11-26 13:22
Reporter: DanielB Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos webui does not show the inchanger flag for volumes
Description: The bareos webui does not show the inchanger flag for tape volumes. The flag is visible in the bconsole.
The flag should be visible as an additional column to help with volume management for tape changers.
Tags: volume, webui
Steps To Reproduce: Log into the webgui.
Select Storage -> Volumes
Additional Information:
Attached Files:
Notes
(0004367)
frank   
2021-11-26 13:22   
Fix committed to bareos master branch with changesetid 15491.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1396 [bareos-core] bconsole minor always 2021-10-31 21:36 2021-11-24 14:43
Reporter: nelson.gonzalez6 Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: How to remove all Error, Canceled and Failed jobs from the DB
Description: I have noticed when listing in bconsole that pools contain media IDs with jobs from 3 to 4 years ago, and that clients which are no longer in use or have been removed are still in the catalog. How can I remove these jobs whose volumes have VolStatus Error, Expired or Canceled?

Any suggestions are welcome.

Thanks.


+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
| MediaId | VolumeName           | VolStatus | Enabled | VolBytes   | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType   | LastWritten         | Storage |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
|     698 | Differentialvpn-0698 | Error     |       1 | 35,437,993 |        0 |      864,000 |       1 |    0 |         0 | GlusterFile | 2021-07-23 04:42:41 | Gluster |
|     900 | Differentialvpn-0900 | Error     |       1 |  3,246,132 |        0 |      864,000 |       1 |    0 |         0 | GlusterFile | 2021-08-29 14:56:06 | Gluster |
|   1,000 | Differentialvpn-1000 | Append    |       1 |    226,375 |        0 |      864,000 |       1 |    0 |         0 | GlusterFile | 2021-10-31 06:11:56 | Gluster |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004313)
bruno-at-bareos   
2021-11-02 13:21   
Did you already test dbcheck? That tool is normally the right one to clean up orphaned records stored in the DB.
(0004320)
nelson.gonzalez6   
2021-11-08 13:01   
Hi, yes, I have run dbcheck -fv and it took a long time due to the many orphaned records, but when listing volumes in bconsole the errors are still there.
(0004325)
bruno-at-bareos   
2021-11-10 10:01   
So in that case, the best thing to do is to remove them manually with the delete volume command in bconsole.
You will normally also have to remove the volume file from the filesystem manually.

This is because volumes in Error state are locked and therefore not pruned, as we can't touch them anymore.
(0004341)
bruno-at-bareos   
2021-11-16 15:42   
Besides removing those records manually in bconsole, or by scripting the delete volume command, you have to remember to delete the corresponding file if it still exists in your storage location.
(0004357)
bruno-at-bareos   
2021-11-24 14:42   
Final note before closing. The manual process is required.
(0004358)
bruno-at-bareos   
2021-11-24 14:43   
A manual process is needed, as described, to remove volumes in Error state.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1401 [bareos-core] webui minor always 2021-11-16 15:14 2021-11-18 09:50
Reporter: Armand Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: No data display on /bareos-webui/media/details/_volume_name_ for french translation
Description: When logged in to the webui in French, the volume details are not displayed due to an issue with the French translation.

The French translation can contain single quotes ('), since in French the letters a/e are sometimes elided with an apostrophe; for example, "The application" is translated to "L'application" and not "La application".
Unfortunately, the quote is also a string delimiter in JavaScript, and we cannot have a quote inside a single-quoted string. To solve this we need to put the string inside double quotes (") or escape the quote: 'this is a string with a single quote \' escaped to be valid'

In the file /usr/share/bareos-webui/module/Media/view/media/media/details.phtml we have this issue between line 315 and 445 : inside function detailFormatterVol(index, row) {...}

One solution could be:
  between lines 315 and 445,
    replace all: <?php echo $this->translate("XXXXXX"); ?>
    with: <?php echo str_replace("'","\'",$this->translate("XXXXXX")); ?>
  This replaces each single quote with an escaped single quote (see the sketch below).

PS: see the attached file, which includes this solution
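For illustration, a small self-contained PHP sketch (not the actual details.phtml template; the translated string is an assumed example) of the effect of the proposed str_replace on the generated JavaScript:

<?php
// Hypothetical example: escape apostrophes before embedding a translation
// inside a single-quoted JavaScript string literal.
$label = "Nombre d'écritures sur le volume";  // assumed French translation

// Unescaped: the apostrophe ends the JS string early and breaks the script.
echo "html.push('<th>" . $label . "</th>');\n";

// Escaped as proposed above: the generated JavaScript stays valid.
echo "html.push('<th>" . str_replace("'", "\\'", $label) . "</th>');\n";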
Tags:
Steps To Reproduce: log to webui with language = French
go to menu : STOCKAGES
go in tab : Volumes
select a volume in the list (My volumes are all DLT tapes)

we expect to see the volume information + the jobs saved on this volume
 
Additional Information:
Attached Files: details.phtml (17,122 bytes) 2021-11-16 15:14
https://bugs.bareos.org/file_download.php?file_id=484&type=bug
Notes
(0004340)
Armand   
2021-11-16 15:22   
Just saw that this is the same as issue 0001235 ;-/ sorry
(0004345)
bruno-at-bareos   
2021-11-18 09:50   
I will close this as a duplicate of 1235.
It is fixed and published in 20.0.3 (available for customers with an affordable subscription), and you can cherry-pick the commit which fixes this.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1235 [bareos-core] webui major always 2020-04-26 18:43 2021-11-18 09:50
Reporter: kabassanov Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Special characters not escaped in translations
Description: Hi,

I observed errors in the generation of some (at least one) web pages when special characters are present in translation strings.

Here is an example:
module/Media/view/media/media/details.phtml: html.push('<th><?php echo $this->translate("Volume writes"); ?></th>');
with the French translation containing ' . I customized the translations a little, so I'm not sure the original translation of this string had this character, but it is a general issue.

Thanks.
Tags:
Steps To Reproduce: Just take a translation containing an apostrophe and observe that the web page is not completely displayed. In the debug window you'll see:

      html.push('<th>Nombre d'écritures sur le volume</th>');

where the apostrophe in "Nombre d'écritures" is treated as the end of the push string.
Additional Information:
System Description
Attached Files:
Notes
(0004208)
frank   
2021-08-10 11:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15101.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1348 [bareos-core] storage daemon major have not tried 2021-05-07 08:50 2021-11-17 09:49
Reporter: RobertF. Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
Summary: Fatal error: stored/askdir.cc:295 NULL Volume name. This shouldn't happen!!!
Description: I have the following problem: we do daily backups to tape, and yesterday this error appeared for the first time.
Before the update to version 20.0.1, the tape backup was running without any errors.

The error has only appeared this once so far. Do I have to change something in the settings, or what could the cause be?


Backups are written to an LTO-6 library with 4 tape drives.
Here is the job log that shows what is happening:
Tags:
Steps To Reproduce:
Additional Information:
2021-05-04 09:02:59	bareos-dir JobId 220137: Error: Bareos bareos-dir 20.0.1 (02Mar21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 220137
Job: daily-DbBck-ELVMCDB57.2021-05-04_09.00.00_19
Backup Level: Full
Client: "elbct3" 20.0.1 (02Mar21) Debian GNU/Linux 10 (buster),debian
FileSet: "DbBck-ELVMCDB57" 2020-02-25 08:25:00
Pool: "DailyDbShare" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "elbct3-msl4048" (From Pool resource)
Scheduled time: 04-May-2021 09:00:00
Start time: 04-May-2021 09:00:02
End time: 04-May-2021 09:02:59
Elapsed time: 2 mins 57 secs
Priority: 10
FD Files Written: 7
SD Files Written: 7
FD Bytes Written: 28,688,204,350 (28.68 GB)
SD Bytes Written: 28,688,205,321 (28.68 GB)
Rate: 162080.3 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s): 1000062
Volume Session Id: 441
Volume Session Time: 1619188643
Last Volume Bytes: 2,327,573,436,416 (2.327 TB)
Non-fatal FD errors: 0
SD Errors: 1
FD termination status: OK
SD termination status: Fatal Error
Bareos binary info: official Bareos subscription
Job triggered by: Scheduler
Termination: *** Backup Error ***

2021-05-04 09:02:59	elbct3-sd JobId 220137: Elapsed time=00:02:57, Transfer rate=162.0 M Bytes/second
2021-05-04 09:02:59	elbct3-sd JobId 220137: Fatal error: stored/askdir.cc:295 NULL Volume name. This shouldn't happen!!!
2021-05-04 09:02:56	elbct3-sd JobId 220137: Releasing device "Drive3" (/dev/tape/by-id/scsi-35001438016033618-nst).
2021-05-04 09:00:00	elbct3-sd JobId 220137: Connected File Daemon at 192.168.219.133:9102, encryption: None
2021-05-04 09:00:02	elbct3-fd JobId 220137: ACL support is enabled
2021-05-04 09:00:02	elbct3-fd JobId 220137: Extended attribute support is enabled
2021-05-04 09:00:00	bareos-dir JobId 220137: FD compression disabled for this Job because AllowCompress=No in Storage resource.
2021-05-04 09:00:00	bareos-dir JobId 220137: Encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Handshake: Cleartext
2021-05-04 09:00:00	bareos-dir JobId 220137: Connected Client: elbct3 at 192.168.219.133:9102, encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Using Device "Drive3" to write.
2021-05-04 09:00:00	bareos-dir JobId 220137: Connected Storage daemon at 192.168.219.133:9103, encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Start Backup JobId 220137, Job=daily-DbBck-ELVMCDB57.2021-05-04_09.00.00_19
System Description
Attached Files:
Notes
(0004342)
bruno-at-bareos   
2021-11-17 09:48   
The root cause has been found and a PR has been merged.
https://github.com/bareos/bareos/pull/975

This will appear as a fix in the upcoming version 21 and, for our customers under subscription, in 20.0.4.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
816 [bareos-core] webui major always 2017-05-02 14:48 2021-10-07 10:22
Reporter: Kvazyman Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 8  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Incorrect display value of the item Retention/Expiration depending on the selected localization
Description: Incorrect display value of the item Retention/Expiration depending on the selected localization
Tags:
Steps To Reproduce: Login with english localization. Go to https://YOUR_SITE_BAREOS/bareos-webui/media
Select some volume and see it Retention/Expiration

Login with russian localization. Go to https://YOUR_SITE_BAREOS/bareos-webui/media
Select same volume and see it Retention/Expiration (Задержка/Окончание)

Compare the values. The values differ by 5 days.
Additional Information:
System Description
Attached Files:
Notes
(0004294)
frank   
2021-10-07 10:22   
Fix committed to bareos master branch with changesetid 15298.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1381 [bareos-core] webui major always 2021-08-26 10:50 2021-09-17 12:55
Reporter: jens Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: acknowledged Product Version: 19.2.10  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Webui File selection list shows error when trying to restore
Description: BareOS version: 19.2.7-2

When selecting a backup client with lots of ( millions ) files and folders the File selection area shows following error.

{"id":"#","xhr":{"readyState":4,"responseText":"\n\n\n \n \n \n \n\n \n \n\n\n \n \n\n\n\n\n \n\n \n\n
\n \n\n
An error occurred
\n
An error occurred during execution; please try again later.
\n\n\n
\n
Additional information:
\n
Zend\\Json\\Exception\\RuntimeException
\n
\n
File:
\n
\n
/usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68
\n
\n
Message:
\n
\n
Decoding failed: Syntax error
\n
\n
Stack trace:
\n
\n
#0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\\Json\\Json::decode('', 1)\n#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\\Model\\RestoreModel->getDirectories(Object(Bareos\\BSock\\BareosBSock), '67', '#')\n#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\\Controller\\RestoreController->getDirectories()\n#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\\Controller\\RestoreController->buildSubtree()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\\Controller\\RestoreController->filebrowserAction()\n#5 [internal function]: Zend\\Mvc\\Controller\\AbstractActionController->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\\Mvc\\Controller\\AbstractController->dispatch(Object(Zend\\Http\\PhpEnvironment\\Request), Object(Zend\\Http\\PhpEnvironment\\Response))\n#10 [internal function]: Zend\\Mvc\\DispatchListener->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#14 /usr/share/bareos-webui/public/index.php(24): Zend\\Mvc\\Application->run()\n#15 {main}
\n
\n
\n\n\n
\n\n \n \n\n\n","status":500,"statusText":"Internal Server Error"}}
Tags:
Steps To Reproduce: In restore tab select a client with a lot of files and folders ( File Server )
Additional Information: From Apache error log:

PHP Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zend
framework/zend-view/src/Helper/HeadLink.php on line 403, referer: http://xxx.xxx.xxx.xxx/bareos-webui/restore/?jobid=&client=<client>&restoreclient=&restorejob=&where=&files
et=&mergefilesets=0&mergejobs=0&limit=2000

PHP Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zend
framework/zend-view/src/Helper/HeadLink.php on line 403, referer: http://xxx.xxx.xxx.xxx/bareos-webui/storage//

System Description
Attached Files: bareos_webui.png (84,832 bytes) 2021-08-26 10:50
https://bugs.bareos.org/file_download.php?file_id=479&type=bug
png

bareos-webui-nightly.png (9,043 bytes) 2021-08-27 12:03
https://bugs.bareos.org/file_download.php?file_id=480&type=bug
png

bconsole_api2_test_query_output.txt (53,111 bytes) 2021-09-17 12:55
https://bugs.bareos.org/file_download.php?file_id=482&type=bug
Notes
(0004223)
arogge   
2021-08-26 10:59   
thanks for the report.
Could you try and reproduce this with the latest webui from the nightly-build? It can be installed on a different host or vm and will be able to talk to your 19.2 director.
Also if you can still reproduce the issue there it would really help if you could tell us how many jobs were merged here (i.e. how many incrementals are on top of your full) and how many files are in each of them. This would probably improve our chances to reproduce the problem.

Having said that, the workaround (that you probably already knew) is to restore from within bconsole.
(0004224)
jens   
2021-08-26 11:09   
Thank you for the ultra fast response.
I will try my best to give the nightly build a try, but it is going to take me some time to arrange in our environment.

Regarding your question this is the only single backup we took from that machine into a long term tape archive pool.
There are no incrementals on top.
(0004225)
arogge   
2021-08-26 11:35   
Alright! Would you still share the exact number of files in that backup job, so we can produce a test case with the same number of files?
(0004226)
jens   
2021-08-26 11:37   
FD Files Written: 155,482,903
SD Files Written: 155,482,903
FD Bytes Written: 26,776,974,356,682 (26.77 TB)
SD Bytes Written: 26,805,737,848,974 (26.80 TB)
(0004228)
jens   
2021-08-27 12:03   
(Last edited: 2021-08-27 12:20)
So I tried the latest nightly build from here: http://download.bareos.org/bareos/experimental/nightly/Debian_10/all/
Unfortunately it does not want to connect to my 19.2 director.

(0004229)
jens   
2021-08-27 12:19   
Also tried with the bareos-webui_20.0.1-3
It is able to connect to my 19.2 director but throws the exact same error as initially reported
(0004230)
arogge   
2021-08-27 12:26   
Yes, sorry. That version check was introduced not too long ago, I simply forgot. Thanks for reproducing with 20.0.1 though, that should be recent enough.
(0004241)
frank   
2021-08-31 16:00   
jens:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue. Replace the jobid from the example below with your specific jobid, e.g. the jobid of the full backup you mentioned.

*.bvfs_lsdirs path= jobid=142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes problems by using pathid, pathids will differ at yours.

*.bvfs_lsdirs pathid=37 jobid=142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we can not list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
(0004243)
jens   
2021-09-01 10:26   
Hi Frank,

thanks for your feedback.
Please note, I receive the JSON error on top level already.
Meaning I am not able to select a folder at all, yet.

I will try to follow your instructions and see how far I can get.
Will keep you posted.

Thank you once again for your support.
Much appreciated.
(0004267)
jens   
2021-09-17 12:55   
Hi Frank,

I found some time to go over your instruction and did some intense testing.


First I queried the top 2 folder levels for the jobid in question.
--------------------------------------------------------------------------------------------
*.bvfs_lsdirs path= jobid=67
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
32 0 0 A A A A A A A A A A A A A A .
31 0 0 A A A A A A A A A A A A A A /

*.bvfs_lsdirs pathid=31 jobid=67
31 0 0 A A A A A A A A A A A A A A .
32 0 0 A A A A A A A A A A A A A A ..
3037697 0 0 A A A A A A A A A A A A A A imgrep/
3037699 0 0 A A A A A A A A A A A A A A storage/


Since the issue in the webui already occurs at this level, I switched to API level 2
but couldn't find anything obvious regarding malformed output.
----------------------------------------------------------------------------------------------------------------------------
*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}
*.bvfs_lsdirs pathid=31 jobid=67
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 31,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 32,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 3037697,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "imgrep/",
        "fullpath": "/imgrep/",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 3037699,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "storage/",
        "fullpath": "/storage/",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      }
    ]
  }
}*

So I went one level deeper into the imgrep folder, but still everything seems to work fine and look valid.
-------------------------------------------------------------------------------------------------------------------------------------------------
-> see attachment ( please note I've shortened the output to a few thousand lines and anonymized all the folder names )


What is interesting to me is that this is the only machine we are having trouble with.
Maybe it is something with the filesystem layout there.
Therefore, to give you a better picture, this is what the backup client looks like:

OS:
Distributor ID: Debian
Description: Debian GNU/Linux 8.7 (jessie)
Release: 8.7
Codename: jessie

Storage mounts:
-------------------------
/dev/vdc1 on /storage/bucket-00 type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
/dev/vdb1 on /imgrep type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)

Disk Free status:
------------------------
/dev/vdc1 5.0T 4.8T 293G 95% /storage/bucket-00
/dev/vdb1 23T 23T 348G 99% /imgrep


Client Fileset:
-------------------
FileSet {
  Name = "xxxxxxxxxxxxxxxxx"
  Include {
    Options {
      Signature = SHA1
      One FS = No
      Checkfilechanges = yes
    }
    File = /imgrep/images
    File = /storage/bucket-00/images
  }
}



Please let me know if you need any additional information or contribution.

Best, Jens

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1188 [bareos-core] webui major always 2020-02-11 21:02 2021-08-10 23:27
Reporter: hostedpower Platform: Linux  
Assigned To: frank OS: Debian  
Priority: urgent OS Version: 9  
Status: assigned Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Cannot restore at all after uprgade to 19.2.6 (php error)
Description: Hi,


Not sure what happened, but after upgrading from 19.2.5 to 19.2.6 the restore screen no longer works at all.

We see php errors as well:

[Tue Feb 11 21:02:07.597134 2020] [proxy_fcgi:error] [pid 762:tid 140551107876608] [client 178.117.59.204:49903] AH01071: Got error 'PHP message: PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91\n', referer: https://xxxxx.hosted-power.com/restore/?jobid=134785&client=xxx.xxxx.com&restoreclient=&restorejob=&where=&fileset=&mergefilesets=0&mergejobs=0&limit=2000
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003792)
hostedpower   
2020-02-11 21:17   
PS: I just found out that it only happens in Internet Explorer; Chrome and Edge work fine.

However, the issue of slow loading persists: the initial loading of the /restore URL is slow; once a client has been selected, it finally gets faster.
(0003797)
hostedpower   
2020-02-12 10:00   
So to conclude, we can restore with Chrome, but Internet Explorer gives the weird PHP error :)
(0004173)
frank   
2021-06-30 17:04   
I'm not able to reproduce the IE issue. I tried it with Microsoft Edge Version 91.0.864.59 (Win10) and Bareos 19.2.10. Does the problem still exist for you?
(0004209)
hostedpower   
2021-08-10 23:27   
OK, this still seems to be present in Bareos 20; however, we have ditched IE now :)

(Just open IE, go to a server and try to restore it; it doesn't seem to open the file tree. You can see the tree, but not browse it.)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1365 [bareos-core] webui minor have not tried 2021-06-23 02:11 2021-07-30 10:53
Reporter: grizly Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: Ubuntu 18.04.5 L  
Status: acknowledged Product Version: 19.2.10  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: apt postinstall is activating php5
Description: Essentially https://github.com/bareos/bareos-webui/blob/890997a2f6c836beaa7a36160c1e1e737b66df1e/debian/postinst#L4 caused a minor outage this morning.

We run php7.2, which works fine for bareos-webui; however, after apt-update ran this morning, the postinst script activated php5, which is not installed. Not sure how it did that, but it did. /etc/apache2/mods-enabled/php5.load and /etc/apache2/mods-enabled/php5.conf were created, and as the php5 packages weren't installed, this prevented apache2 from restarting.


Deleting those two erroneous files allowed apache2 to restart and work just fine.
Tags:
Steps To Reproduce: install apache2 and php7-fpm
install bareos-webui
check apache2
Additional Information: From term.log

Setting up bareos-webui (19.2.7-2) ...
Installing new version of config file /etc/bareos-webui/configuration.ini ...
/usr/sbin/a2enmod
Module rewrite already enabled
/usr/sbin/a2enmod
Module rewrite already enabled
Enabling module php5.
To activate the new configuration, you need to run:
  systemctl restart apache2
/usr/sbin/a2enconf
Conf bareos-webui already enabled


...

Jun 23 06:56:06 server-name systemd[1]: Starting The Apache HTTP Server...
Jun 23 06:56:06 server-name apachectl[21560]: apache2: Syntax error on line 146 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: cannot open shared object file: No such file or directory
Jun 23 06:56:06 server-name apachectl[21560]: Action 'start' failed.
Jun 23 06:56:06 server-name apachectl[21560]: The Apache error log may have more information.
Jun 23 06:56:06 server-name systemd[1]: apache2.service: Control process exited, code=exited status=1
Jun 23 06:56:06 server-name systemd[1]: apache2.service: Failed with result 'exit-code'.
Jun 23 06:56:06 server-name systemd[1]: Failed to start The Apache HTTP Server.


Attached Files:
Notes
(0004158)
grizly   
2021-06-23 02:13   
Well, that was submitted before I was finished, but it's mostly there.

Not sure what the fix would be, but I would guess that if the script can detect PHP, it should just leave the currently installed PHP alone?
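Not part of the original report, but a minimal sketch of that idea for the maintainer script, assuming it keeps its current a2enmod-based approach (the candidate module names and the fallback message are illustrative only):

# Hypothetical postinst guard: only enable an Apache PHP module that actually exists,
# and leave the configuration untouched otherwise (e.g. on php-fpm setups).
enable_php_module() {
    for mod in php7.4 php7.3 php7.2 php7.0 php5; do   # candidate module names (illustrative)
        if [ -e "/etc/apache2/mods-available/${mod}.load" ]; then
            a2enmod "${mod}"
            return 0
        fi
    done
    echo "No Apache PHP module found; not enabling any PHP module." >&2
}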

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
981 [bareos-core] webui minor always 2018-07-10 03:43 2021-07-26 10:54
Reporter: NTANMA Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: can not see file names when restoring
Description: If files with cp866-encoded names get into a backup, the WebUI displays empty fields instead of their names; the restore itself works correctly. Sample files are in the attachment.

Tags:
Steps To Reproduce: make backup file from attachment
Additional Information:
System Description
Attached Files: webUI.png (97,031 bytes) 2018-07-10 03:43
https://bugs.bareos.org/file_download.php?file_id=304&type=bug

cp866.zip (476 bytes) 2018-07-10 03:44
https://bugs.bareos.org/file_download.php?file_id=305&type=bug
Notes
(0003065)
NTANMA   
2018-07-10 04:16   
On the host the files look like this:
Зарплатный проект.egp
Справочник точек.egp
прив зка н.п..xlsx

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1025 [bareos-core] webui major always 2018-10-26 14:32 2021-07-26 10:37
Reporter: Gordon Klimm Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: restore doesn't display certain files and folders
Description: A certain combination of spaces (and hyphens?) prevents files from being displayed for restore in the WebUI.
Tags:
Steps To Reproduce: mkdir d1
touch "d1/f1" "d1/f 2" "d1/f-f"

=> backup
==> list files using bconsole works fine
===> goto webui->restore, select job, no files visible.
Additional Information:
System Description
Attached Files: bareos_restore.jpg (113,423 bytes) 2018-10-26 14:32
https://bugs.bareos.org/file_download.php?file_id=315&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
963 [bareos-core] file daemon major always 2018-06-12 20:44 2021-07-26 00:03
Reporter: assafin Platform:  
Assigned To: arogge OS: Freebsd  
Priority: normal OS Version: 11.0  
Status: new Product Version: 16.2.4  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Windows Backup using VSS
Description: We have a Bareos server running on FreeBSD. We are trying to make a backup of a Windows Server on which an Exchange Server is installed.

Everything is running OK. But after some time we noticed that the Exchange Server transaction logs are not being rolled and that the backup size increases substantially.

After digging around on the Windows Server we saw that the volume shadow copies are not being deleted (yes, we are using Enable VSS = yes). We think that this may cause the growth of the transaction logs.

Here is an example (on the Windows Server via CMD):

diskshadow
add volume f:
begin backup
create
end backup

The "end backup" will delete the shadow copy. And we assume that this will also lead to a truncation of the transaction log.

So, the question is: does Bareos (the Windows client) end the Windows VSS session in a correct way, so that the shadow copy is deleted?

If you need more information, we can gladly help.

-- Alaa
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003548)
arogge   
2019-08-01 10:18   
Sorry, but Bareos is not able to back up an Exchange database using VSS at this time.
Because it doesn't handle Exchange correctly, the logs are not truncated and you'll experience growth in the transaction logs.
(0004193)
Ryushin   
2021-07-26 00:03   
Is there any progress towards backing up Exchange 2016/2019 and having it commit the logs? Windows Server Backup has two options for VSS, Full and Copy. The Full method will truncate the logs. Is there any way to pass the Full method for VSS with Bareos?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1342 [bareos-core] webui crash always 2021-04-23 23:07 2021-06-28 17:27
Reporter: bluecmd Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Invalid login on webui causes apache2 error log to fill disk
Description: Hi,

I am setting up Bareos 20.0.1, coming from Bacula.
I followed the instructions to set it up on my Debian 11 testing system by using Debian 10 packages.

When I have configured the webui to talk to my director and I log in with the right credentials, things work fine.
When I try to log in with the *wrong* credentials, however, the PHP process seems to go haywire and outputs an unending loop of the following:

[Fri Apr 23 22:58:18.627635 2021] [php7:notice] [pid 50019] [client 2a07:redacted:51371] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: https://debian.redacted/bareos-webui/
[Fri Apr 23 22:58:18.627740 2021] [php7:notice] [pid 50019] [client 2a07:redacted:51371] PHP Notice: fwrite(): send of 26 bytes failed with errno=32 Broken pipe in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: https://debian.redacted/bareos-webui/
[Fri Apr 23 22:58:18.627768 2021] [php7:notice] [pid 50019] [client 2a07:redacted:51371] PHP Notice: fwrite(): send of 26 bytes failed with errno=32 Broken pipe in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: https://debian.redacted/bareos-webui/
[this line repeats indefinitely]

Within seconds I have many hundreds of megabytes in the log.
Tags:
Steps To Reproduce: 1. Install 20.0.1 on Debian testing with webui
2. Login using wrong credentials
Additional Information:
System Description
Attached Files:
Notes
(0004116)
bluecmd   
2021-04-23 23:16   
Looking closer at existing bugs, this seems to be the same as https://bugs.bareos.org/view.php?id=1324

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1096 [bareos-core] webui minor always 2019-07-02 21:42 2021-06-28 15:22
Reporter: joergs Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: webui: when logging in as a console user without the "TLS Enable = false" setting, a misleading error message is shown.
Description: When trying to log in to the WebUI as a console user without the "TLS Enable = false" setting, the following misleading error message is shown:

Sorry, can not authenticate. Wrong username and/or password.

In this case, it can be expected that the user will retry the username and password.
However, they will never be successful, as the "TLS Enable = false" setting is missing.

A better error message would be:

Sorry, can not authenticate. TLS Handshake failed.
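
For comparison, a console definition that works with the WebUI would carry the directive explicitly; a minimal sketch (file location and resource layout assumed, roughly matching what "configure add console=..." generates):

# Assumed location: /etc/bareos/bareos-dir.d/console/test1.conf
Console {
  Name = "test1"
  Password = "secret"
  Profile = "webui-admin"
  # The directive whose absence triggers the misleading error message:
  TLS Enable = false
}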
Tags:
Steps To Reproduce: Create a user without the "TLS Enable = false" setting using the bconsole:
* configure add console=test1 password=secret profile=webui-admin

Login to the WebUI as user test1
Additional Information:
Attached Files:
Notes
(0004162)
frank   
2021-06-28 15:22   
Fix committed to bareos master branch with changesetid 15004.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1362 [bareos-core] documentation minor always 2021-06-14 09:46 2021-06-14 09:46
Reporter: Int Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Documentation of job type "Archive" is missing
Description: The documentation of the "Always Incremental Concept"
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#always-incremental-concept
mentions a jobtype "archive":
"Therefore, the jobtype of the longterm job is updated to “archive”, so that it is not taken as base for then next incrementals and the always incremental job will stand alone."

But the documentation of job type
https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_Type
does not mention or explain this jobtype "archive".
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
912 [bareos-core] documentation tweak always 2018-02-15 12:20 2021-06-11 11:08
Reporter: colttt Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: wrong configuration path for apache
Description: Hi,
in http://doc.bareos.org/master/html/bareos-manual-main-reference.html#x1-580003.3.4
you said "/etc/apache2/available-conf/bareos-webui.conf", but "/etc/apache2/conf-enabled/bareos-webui.conf" would be correct.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004133)
frank   
2021-05-27 13:22   
Fix committed to bareos bareos-19.2 branch with changesetid 14854.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1330 [bareos-core] webui text always 2021-03-22 18:56 2021-06-10 16:24
Reporter: sknust Platform: Linux  
Assigned To: frank OS: any  
Priority: low OS Version: 3  
Status: resolved Product Version: 19.2.9  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Wrong German translation of "Terminated normally" job status in tooltip
Description: The tooltip of the status text of jobs which are in status T (terminated normally) in the German localization is wrong. It says "nicht erfolgreich beendet", which translates to "not finished successfully", the "nicht" corresponding to "not".

Correct translation is either "normal beendet" (literally "terminated normally") or "erfolgreich beendet" (literally "terminated successfully"). Typical German use would be "erfolgreich beendet" IMHO, although the translation for status W ("terminated normally with warnings") is "Normal beendet (mit Warnungen)", so the "normal beendet" would be more consistent with existing translation.
Tags: webui
Steps To Reproduce: 1) Login to a bareos-webui connected to a director with at least one job in status T with German locale
2) List that job
3) Hover over the White-on-green "Erfolgreich" status text in the job listing or the job details
Additional Information: Sole source seems to be line 1383 (master branch) in /webui/module/Application/language/de_DE.po:
https://github.com/bareos/bareos/blob/85a2c521c845327d0de525363b394f28ce65bb62/webui/module/Application/language/de_DE.po#L1383

In branch bareos-19.2 it's line 1327 (https://github.com/bareos/bareos/blob/a78cda310412daf06186614c582e5ff4ac29c384/webui/module/Application/language/de_DE.po#L1327)
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: bareos-webui_de-DE_translation-error.png (81,293 bytes) 2021-03-22 18:56
https://bugs.bareos.org/file_download.php?file_id=461&type=bug
Notes
(0004154)
frank   
2021-06-10 16:24   
Will be fixed in the upcoming maintenance releases 20,19 and 18.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
896 [bareos-core] webui major have not tried 2018-01-26 14:30 2021-06-10 13:01
Reporter: rightmirem Platform: Intel  
Assigned To: frank OS: Debian GNU/Linux 8 (jessie)  
Priority: high OS Version: 8  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 18.2.10  
    Target Version:  
Summary: Argument 0000001 is not an array in JobModel.php
Description: After updating to webui 17.2.4, this error keeps repeating in the apache2 error.log:

[Fri Jan 26 14:12:30.038395 2018] [:error] [pid 11793] [client 129.xxx.xxx.70:61978] PHP Notice: Undefined index: result in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 126, referer: http://129.xxx.xxx.186:xxx/bareos-webui/dashboard/
[Fri Jan 26 14:12:30.038412 2018] [:error] [pid 11793] [client 129.xxx.xxx.70:61978] PHP Warning: array_merge(): Argument 0000001 is not an array in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 126, referer: http://129.xxx.xxx.186:xxx/bareos-webui/dashboard/
Tags:
Steps To Reproduce: Attempting to connect via bareos webui
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
747 [bareos-core] documentation feature always 2016-12-30 17:33 2021-06-09 17:44
Reporter: michelv Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 16.4.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Consider moving away from one long page
Description: The whole manual is now in one document/page, making it very hard to navigate and to find parts easily.
A lot of projects use easy-to-read documentation built with tools such as MkDocs. I would argue it would help the project to convert the one-page document into a layout with separate parts.
Tags:
Steps To Reproduce:
Additional Information: MkDocs http://www.mkdocs.org/ (I'm not linked)
Example 1: https://gluster.readthedocs.io/en/latest/
Example 2: https://docs.syncthing.net/
Attached Files:
Notes
(0002495)
joergs   
2017-01-03 13:09   
Bareos Main Manual:

While I fully agree that there is a lot of room for improvement in the Bareos manual, and we have also discussed changing from the current LaTeX backend to some other format, I don't see us changing to this MkDocs Markdown format soon.
That would require a lot of effort, and some features would get lost.
Additionally hosting it as separate sections could be done; this is already on the TODO list. However, I fear this will not greatly improve readability.

Bareos Developer Guide:
The Bareos Developer Guide (http://doc.bareos.org/master/html/bareos-developer-guide.html) has been migrated to Mark Down a while ago. This could be enhanced by mkdocs.
(0002496)
michelv   
2017-01-03 18:02   
Of course the way of writing is up to you.
Converting from LaTeX to Markdown can be done with tools, e.g. http://pandoc.org/demos.html

Another consideration might be to make it easier to help out with the docs.
I have personally seen that this can work nicely on GitHub, where people can relatively easily add/adjust the md files via pull requests.
(0002529)
joergs   
2017-01-25 14:41   
We used pandoc to migrate the Developer Guide from LaTeX to Markdown. However, even this relatively simple document caused a lot of trouble, so this is not an option for the Bareos Main Manual.
(0004147)
arogge   
2021-06-09 17:44   
The documentation has been migrated to ReST and is now split into smaller individual files.
I guess that fixes the issue. If it doesn't and you have additional requirements, feel free to reopen.

Thank you!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1344 [bareos-core] webui minor always 2021-04-27 11:39 2021-04-29 11:00
Reporter: blokhinaleks Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: low OS Version: 7  
Status: assigned Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Last run after WebUI timeout
Description: If the WebUI is left open on a restore or job page and the session times out, I can still press start/restart on any job before the system redirects me to the login page.
Tags:
Steps To Reproduce: http://servername/bareos-webui/job//
Wait for the session timeout
Run/restart a job
Log in
See the running/queued job
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1298 [bareos-core] webui block always 2021-01-04 14:43 2021-04-29 10:58
Reporter: Dragon Platform: Linux  
Assigned To: frank OS: GenToo  
Priority: normal OS Version:  
Status: assigned Product Version: 20.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: WebUI Login shows PHP-Notices then 404 (Dashboard)
Description: In development mode various PHP warnings/notices are shown on the login page

Notice: compact(): Undefined variable: extras in <webroot>/admin/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403

Deprecated: array_key_exists(): Using array_key_exists() on objects is deprecated. Use isset() or property_exists() instead in <webroot>/admin/bareos-webui/vendor/zendframework/zend-i18n/src/Translator/Loader/Gettext.php on line 142

When logging in, I then end up with a 404 page not found. It looks like after login "<baseurl>/admin/bareos-webui/public/dashboard/" is attempted to be loaded and Apache returns 404. The file really does not exist; it should be handled by the PHP application, so something is wrong.

Apache 2.4 contains this config (besides others):
"""
Alias /admin/bareos-webui <webroot>/admin/bareos-webui/public
<Directory <webroot>/admin/bareos-webui/public>
  AllowOverride None
  Options FollowSymLinks
  # ... authorization parameters for restricted internal access only ...
  RewriteEngine on
  RewriteBase /admin/bareos-webui
  RewriteCond %{REQUEST_FILENAME} -s [OR]
  RewriteCond %{REQUEST_FILENAME} -l [OR]
  RewriteCond %{REQUEST_FILENAME} -d
  RewriteRule ^.*$ - [NC,L]
  RewriteRule ^.*$ index.php [NC,L]
</Directory>
"""

WebUI is located under "<webroot>/admin/bareos-webui"

Apache logs:
XXX - xxx [04/Jan/2021:14:34:27 +0100] "GET /admin/bareos-webui/public/dashboard/ HTTP/1.1" 404 254
Tags: webui
Steps To Reproduce: Fresh install
Additional Information:
Attached Files: webui-update-dep.patch (2,388 bytes) 2021-02-25 10:09
https://bugs.bareos.org/file_download.php?file_id=456&type=bug
Notes
(0004092)
Skylord   
2021-02-25 10:09   
Got the same errors on PHP 7.4 - it seems that the ZendFramework dependencies are too old. I updated composer.json in the webui root and ran "composer install" - it is working fine now. There are a few notices on some pages, but nothing very annoying.
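
Not from the ticket itself, but a rough sketch of that workaround, assuming the install layout described in this report (the webroot placeholder is the reporter's; the dependency changes are the ones in the attached webui-update-dep.patch):

# Assumed install location from this report; adjust to your own webroot.
cd <webroot>/admin/bareos-webui
# After updating composer.json (e.g. applying the attached webui-update-dep.patch),
# reinstall the vendored libraries.
composer install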

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1280 [bareos-core] webui minor always 2020-11-09 18:07 2021-04-29 10:53
Reporter: TEKrantz Platform: aarch64  
Assigned To: frank OS: Linux  
Priority: low OS Version: Fedora 33  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Statistics with multiple storage daemons
Description: I have an environment with 1 director, 2 storage daemons and a couple of dozen file daemons. I believe that the 2 SD's are configured with the same parameters as far as statistics go. I only see progress statistics in the webui for one or the other SD when I start backups for all clients. Almost all of the time it is statistics for SD1 and nothing for SD2 but every now and again that reverses. I never get statistics for both SD's at the same time.
Tags: Collect Statistics, statistics multiple storage daemons
Steps To Reproduce: Configure 1 director, 2 SD's and multiple clients on both SD's and start them all at once.
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1277 [bareos-core] webui minor sometimes 2020-10-28 12:14 2021-04-29 10:51
Reporter: steilfirn Platform: VM  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 8  
Status: assigned Product Version: 19.2.8  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Jobs still being displayed while no longer running
Description: I backed up my cloud storage, and according to bconsole & the web-ui the job is still running.
But no progress on my storage.

I decided to cancel the backup job, but as soon as I either click the "cancel" button or run *cancel jobid=xx in bconsole, it says that no job is running with this ID.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1233 [bareos-core] webui trivial always 2020-04-23 20:22 2021-04-29 10:48
Reporter: omarioja Platform:  
Assigned To: frank OS:  
Priority: high OS Version:  
Status: assigned Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Webui shows a blank screen after logging in
Description: After I log into the webui I get a blank screen.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1140 [bareos-core] webui minor always 2019-11-19 15:40 2021-04-29 10:38
Reporter: koef Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: assigned Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restore feature always fails from webui (cats/bvfs.cc:927-0 Can't execute q)
Description: Hello.

The restore feature doesn't create a restore job from the webui, but it works fine from bconsole.
Please ask for additional info if it's needed.

You can see the level-200 debug output and the MySQL query log below.

Thanks.
Tags: director, webui
Steps To Reproduce: Merge all client filesets - No
Merge all related jobs to last full backup of selected backup job - No
Additional Information: bareos-dir debug trace:
19-Nov-2019 15:29:15.191147 bareos-dir (100): lib/bsock.cc:81-0 Construct BareosSocket
19-Nov-2019 15:29:15.191410 bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
19-Nov-2019 15:29:15.191460 bareos-dir (200): lib/bsock.cc:631-0 Identified from Bareos handshake: admin-R_CONSOLE recognized version: 18.2
19-Nov-2019 15:29:15.191491 bareos-dir (110): dird/socket_server.cc:109-0 Conn: Hello admin calling version 18.2.5
19-Nov-2019 15:29:15.191506 bareos-dir (100): include/jcr.h:320-0 Construct JobControlRecord
19-Nov-2019 15:29:15.191528 bareos-dir (100): dird/storage.cc:157-0 write_storage_list=File
19-Nov-2019 15:29:15.191547 bareos-dir (100): dird/storage.cc:166-0 write_storage=File where=Job resource
19-Nov-2019 15:29:15.191559 bareos-dir (100): dird/job.cc:1519-0 JobId=0 created Job=-Console-.2019-11-19_15.29.15_07
19-Nov-2019 15:29:15.191776 bareos-dir (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <1114491002.1574173755@bareos-dir> ssl=0
19-Nov-2019 15:29:15.192019 bareos-dir (50): lib/cram_md5.cc:88-0 Authenticate OK Gd1+i91cs2Tf7pZiQJs+ew
19-Nov-2019 15:29:15.192200 bareos-dir (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <9503288492.1574173755@php-bsock> ssl=0
19-Nov-2019 15:29:15.192239 bareos-dir (99): lib/cram_md5.cc:135-0 sending resp to challenge: 1y/il6/RE9/FU8dciG/X6A
19-Nov-2019 15:29:15.273737 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline llist backups client="someclient.domain.com" fileset="any" order=desc
19-Nov-2019 15:29:15.273867 bareos-dir (100): dird/ua_db.cc:155-0 UA Open database
19-Nov-2019 15:29:15.273903 bareos-dir (100): cats/sql_pooling.cc:61-0 DbSqlGetNonPooledConnection allocating 1 new non pooled database connection to database bareos, backend type mysql
19-Nov-2019 15:29:15.273929 bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename dbi, partly_compare = true
19-Nov-2019 15:29:15.273943 bareos-dir (100): cats/cats_backends.cc:81-0 db_init_database: Trying to find mapping of given interfacename mysql to mapping interfacename mysql, partly_compare = false
19-Nov-2019 15:29:15.273959 bareos-dir (100): cats/mysql.cc:869-0 db_init_database first time
19-Nov-2019 15:29:15.273990 bareos-dir (50): cats/mysql.cc:181-0 mysql_init done
19-Nov-2019 15:29:15.274839 bareos-dir (50): cats/mysql.cc:205-0 mysql_real_connect done
19-Nov-2019 15:29:15.274873 bareos-dir (50): cats/mysql.cc:207-0 db_user=someuser db_name=bareos db_password=somepass
19-Nov-2019 15:29:15.275378 bareos-dir (100): cats/mysql.cc:230-0 opendb ref=1 connected=1 db=7effb000ab20
19-Nov-2019 15:29:15.275854 bareos-dir (150): dird/ua_db.cc:188-0 DB bareos opened
19-Nov-2019 15:29:15.275887 bareos-dir (20): dird/ua_output.cc:579-0 list: llist backups client="someclient.domain.com" fileset="any" order=desc
19-Nov-2019 15:29:15.275937 bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name list_jobs_long (6)
19-Nov-2019 15:29:15.276015 bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC;
19-Nov-2019 15:29:15.276067 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC;
19-Nov-2019 15:29:15.354800 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline llist clients current
19-Nov-2019 15:29:15.354928 bareos-dir (20): dird/ua_output.cc:579-0 list: llist clients current
19-Nov-2019 15:29:15.354968 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client ORDER BY ClientId
19-Nov-2019 15:29:15.355739 bareos-dir (200): dird/ua_output.cc:1626-0 filterit: Filter on resource_type 1002 value bareos-fd, suppress output
19-Nov-2019 15:29:15.355779 bareos-dir (200): dird/ua_output.cc:1626-0 filterit: Filter on resource_type 1002 value bareos-dir-node, suppress output
19-Nov-2019 15:29:15.610801 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_update jobid=142
19-Nov-2019 15:29:15.616201 bareos-dir (100): lib/htable.cc:77-0 malloc buf=7effb006a718 size=9830400 rem=9830376
19-Nov-2019 15:29:15.616266 bareos-dir (100): lib/htable.cc:220-0 Allocated big buffer of 9830400 bytes
19-Nov-2019 15:29:15.616634 bareos-dir (10): cats/bvfs.cc:359-0 Updating cache for 142
19-Nov-2019 15:29:15.616656 bareos-dir (10): cats/bvfs.cc:190-0 UpdatePathHierarchyCache()
19-Nov-2019 15:29:15.616694 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT 1 FROM Job WHERE JobId = 142 AND HasCache=1
19-Nov-2019 15:29:15.617365 bareos-dir (10): cats/bvfs.cc:202-0 Already computed 142
19-Nov-2019 15:29:15.617405 bareos-dir (100): lib/htable.cc:90-0 free malloc buf=7effb006a718
Pool Maxsize Maxused Inuse
NoPool 256 86 0
NAME 1318 16 4
FNAME 2304 75 65
MSG 2634 31 17
EMSG 2299 10 4
BareosSocket 31080 4 2
RECORD 128 0 0

19-Nov-2019 15:29:15.619407 bareos-dir (100): lib/htable.cc:601-0 Done destroy.
19-Nov-2019 15:29:15.620312 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_restore jobid=142 fileid=6914 dirid= path=b2000928016
19-Nov-2019 15:29:15.620348 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId,FileSetId,SchedTime,RealEndTime,ReadBytes,HasBase,PurgedFiles FROM Job WHERE JobId=142
19-Nov-2019 15:29:15.620781 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.ClientId=8
19-Nov-2019 15:29:15.621038 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE btempb2000928016
19-Nov-2019 15:29:15.621252 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE b2000928016
19-Nov-2019 15:29:15.621419 bareos-dir (15): cats/bvfs.cc:924-0 q=CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
19-Nov-2019 15:29:15.621434 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
19-Nov-2019 15:29:15.621634 bareos-dir (10): cats/bvfs.cc:927-0 Can't execute q
19-Nov-2019 15:29:15.621662 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE btempb2000928016
19-Nov-2019 15:29:15.661510 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline restore file=?b2000928016 client=someclient.domain.com restoreclient=someclient.domain.com restorejob="RestoreFiles" where=/tmp/bareos-restores/ replace=never yes
19-Nov-2019 15:29:15.661580 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='someclient.domain.com'
19-Nov-2019 15:29:15.661982 bareos-dir (100): cats/sql_query.cc:96-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) with query name uar_jobid_fileindex_from_table (32)
19-Nov-2019 15:29:15.662011 bareos-dir (100): cats/sql_query.cc:102-0 called: void BareosDb::FillQueryVaList(PoolMem&, BareosDbQueryEnum::SQL_QUERY_ENUM, __va_list_tag*) query is now SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
19-Nov-2019 15:29:15.662022 bareos-dir (100): cats/sql_query.cc:140-0 called: bool BareosDb::SqlQuery(const char*, int (*)(void*, int, char**), void*) with query SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
Pool Maxsize Maxused Inuse
NoPool 256 86 0
NAME 1318 16 5
FNAME 2304 75 65
MSG 2634 31 17
EMSG 2299 10 4
BareosSocket 31080 4 2
RECORD 128 0 0

19-Nov-2019 15:29:15.702682 bareos-dir (10): dird/ua_audit.cc:146-0 : Console [admin] from [10.105.132.139] cmdline .bvfs_cleanup path=b2000928016
19-Nov-2019 15:29:15.702753 bareos-dir (100): cats/sql_query.cc:124-0 called: bool BareosDb::SqlQuery(const char*, int) with query DROP TABLE b2000928016
19-Nov-2019 15:29:15.714596 bareos-dir (100): cats/mysql.cc:252-0 closedb ref=0 connected=1 db=7effb000ab20
19-Nov-2019 15:29:15.714646 bareos-dir (100): cats/mysql.cc:259-0 close db=7effb000ab20
19-Nov-2019 15:29:15.714817 bareos-dir (200): dird/job.cc:1560-0 Start dird FreeJcr
19-Nov-2019 15:29:15.714871 bareos-dir (200): dird/job.cc:1624-0 End dird FreeJcr
19-Nov-2019 15:29:15.714888 bareos-dir (100): lib/jcr.cc:446-0 FreeCommonJcr: 7effb0007898
19-Nov-2019 15:29:15.714909 bareos-dir (100): lib/bsock.cc:129-0 Destruct BareosSocket
19-Nov-2019 15:29:15.714924 bareos-dir (100): include/jcr.h:324-0 Destruct JobControlRecord



Mysql query log:
191119 15:29:15 37 Connect bareos@localhost as anonymous on bareos
                   37 Query SELECT VersionId FROM Version
                   37 Query SET wait_timeout=691200
                   37 Query SET interactive_timeout=691200
                   37 Query SELECT DISTINCT Job.JobId, Job.Job, Job.Name, Job.PurgedFiles, Job.Type, Job.Level, Job.ClientId, Client.Name as Client, Job.JobStatus, Job.SchedTime, Job.StartTime, Job.EndTime, Job.RealEndTime, Job.JobTDate, Job.VolSessionId, Job.VolSessionTime, Job.JobFiles, Job.JobBytes, Job.JobErrors, Job.JobMissingFiles, Job.PoolId, Pool.Name as PoolName, Job.PriorJobId, Job.FileSetId, FileSet.FileSet FROM Job LEFT JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN Pool ON Pool.PoolId=Job.PoolId LEFT JOIN JobMedia ON JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 AND Job.Type='B' AND Client.Name='someclient.domain.com' AND JobStatus IN ('T','W') AND (FileSet='v2iFileset' OR FileSet='SelfTest' OR FileSet='LinuxAll' OR FileSet='InfluxdbFileset' OR FileSet='IcingaFileset' OR FileSet='GraylogFileset' OR FileSet='GrafanaFileset' OR FileSet='Catalog') ORDER BY StartTime DESC
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client ORDER BY ClientId
                   37 Query SELECT 1 FROM Job WHERE JobId = 142 AND HasCache=1
                   37 Query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId,FileSetId,SchedTime,RealEndTime,ReadBytes,HasBase,PurgedFiles FROM Job WHERE JobId=142
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.ClientId=8
                   37 Query DROP TABLE btempb2000928016
                   37 Query DROP TABLE b2000928016
                   37 Query CREATE TABLE btempb2000928016 AS SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)
                   37 Query DROP TABLE btempb2000928016
                   37 Query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='someclient.domain.com'
                   37 Query SELECT JobId, FileIndex FROM b2000928016 ORDER BY JobId, FileIndex ASC
                   37 Query DROP TABLE b2000928016
                   37 Quit
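
Not part of the original report, but a possible next debugging step, since the trace only reports "Can't execute q" without the underlying SQL error: replaying the failing statement from the trace directly in the MySQL client (database name, user and FileId taken from the log above) should surface MySQL's actual error message, for example missing CREATE privileges for the catalog user or a full tmpdir:

# Hypothetical debugging step, not from the report: replay the failing CREATE TABLE
# from the trace to see the real MySQL error behind "Can't execute q".
mysql -u someuser -p bareos -e "
  CREATE TABLE btempb2000928016 AS
  SELECT Job.JobId, JobTDate, FileIndex, File.Name, PathId, FileId
  FROM File JOIN Job USING (JobId) WHERE FileId IN (6914)"

(The temporary table name changes per restore session; the name above is simply the one appearing in this particular trace.)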
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1286 [bareos-core] installer / packages crash always 2020-12-11 13:03 2021-04-26 15:46
Reporter: 4c4712a4141d261ec0ca8f9037950685 Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: urgent OS Version: 20.04  
Status: resolved Product Version: 19.2.8  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Fails to build ceph plugin on Ubuntu 20.04
Description: The Ceph plugin fails to build on Ubuntu 20.04 with Ceph 15.2. The same problem exists on Ubuntu 18.04 with an updated Ceph.
Bareos versions from 18.2 up to the latest git tree are affected.

Build report:

```
[ 96%] Building CXX object src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs-fd.cc.o
cd /root/bareos/bareos19/bareos-19.2.7/obj-x86_64-linux-gnu/src/plugins/filed && /usr/bin/c++ -D_FILE_OFFSET_BITS=64 -Dcephfs_fd_EXPORTS -I/usr/include/tirpc -I/root/bareos/bareos19/bareos-19.2.7/src -isystem /usr/include/python2.7 -g -O2 -fdebug-prefix-map=/root/bareos/bareos19/bareos-19.2.7=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wsuggest-override -Wformat -Werror=format-security -fmacro-prefix-map=/root/bareos/bareos19/bareos-19.2.7=. -Wno-unknown-pragmas -Wall -fPIC -std=gnu++11 -o CMakeFiles/cephfs-fd.dir/cephfs-fd.cc.o -c /root/bareos/bareos19/bareos-19.2.7/src/plugins/filed/cephfs-fd.cc
/root/bareos/bareos19/bareos-19.2.7/src/plugins/filed/cephfs-fd.cc: In function ‘bRC filedaemon::get_next_file_to_backup(bpContext*)’:
/root/bareos/bareos19/bareos-19.2.7/src/plugins/filed/cephfs-fd.cc:457:33: error: cannot convert ‘stat*’ to ‘ceph_statx*’
  457 | &p_ctx->statp, &stmask);
      | ^~~~~~~~~~~~~
      | |
      | stat*
In file included from /root/bareos/bareos19/bareos-19.2.7/src/plugins/filed/cephfs-fd.cc:35:
/usr/include/cephfs/libcephfs.h:564:29: note: initializing argument 4 of ‘int ceph_readdirplus_r(ceph_mount_info*, ceph_dir_result*, dirent*, ceph_statx*, unsigned int, unsigned int, Inode**)’
  564 | struct ceph_statx *stx, unsigned want, unsigned flags, struct Inode **out);
      | ~~~~~~~~~~~~~~~~~~~^~~
make[3]: *** [src/plugins/filed/CMakeFiles/cephfs-fd.dir/build.make:66: src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs-fd.cc.o] Error 1
make[3]: Leaving directory '/root/bareos/bareos19/bareos-19.2.7/obj-x86_64-linux-gnu'
make[2]: *** [CMakeFiles/Makefile2:2803: src/plugins/filed/CMakeFiles/cephfs-fd.dir/all] Error 2
make[2]: Leaving directory '/root/bareos/bareos19/bareos-19.2.7/obj-x86_64-linux-gnu'
make[1]: *** [Makefile:144: all] Error 2
make[1]: Leaving directory '/root/bareos/bareos19/bareos-19.2.7/obj-x86_64-linux-gnu'
```
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004064)
arogge_adm   
2020-12-11 17:38   
You claim that building the latest master from git fails; please provide steps to reproduce that.
Our continuous build infrastructure just built master, bareos-19.2 and bareos-18.2 a few minutes ago, on Ubuntu 20.04 using ceph packages versioned "15.2.5-0ubuntu0.20.04.1". I think you're not building the latest sources. This was a problem in 18.2, 19.2 and master, but it has been fixed for quite a while now.
(0004125)
arogge   
2021-04-26 15:46   
closing due to no feedback from reporter.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1319 [bareos-core] installer / packages block always 2021-02-16 21:38 2021-04-26 15:43
Reporter: KawaiDesu Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Repository key has expired
Description: While trying to run `apt update`:
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://download.bareos.org/bareos/release/17.2/xUbuntu_16.04 Release: The following signatures were invalid: KEYEXPIRED 1613226987

`apt-key list`:
pub 4096R/093BFBA2 2015-08-24 [expired: 2021-02-13]
uid Bareos Packaging Signing Key <signing@bareos.com>
Tags:
Steps To Reproduce: echo 'deb http://download.bareos.org/bareos/release/17.2/xUbuntu_16.04 /' | sudo tee /etc/apt/sources.list.d/bareos.list
sudo apt-key adv --keyserver 'hkp://pool.sks-keyservers.net' --recv F93C028C093BFBA2
sudo apt update
Additional Information:
System Description
Attached Files:
Notes
(0004103)
acimadamore   
2021-03-22 20:07   
Hi, same problem here.
(0004114)
duven   
2021-04-20 11:23   
Hi, same problem for months... no solution?
(0004119)
barcus   
2021-04-25 11:35   
Same here... I cannot build a Docker image with Bareos 16/17
https://github.com/barcus/bareos/issues/101
(0004123)
arogge   
2021-04-26 15:42   
16 and 17 are both EOL.
(0004124)
arogge   
2021-04-26 15:43   
Both Bareos 16.2 and 17.2 are End of Life now. We won't re-sign the packages with a new key.
Starting with 18.2, the new keys won't expire anymore.
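
Not stated in the ticket, but the practical consequence is to move to a still-supported repository. A rough sketch, mirroring the reporter's own steps and assuming an 18.2 repository exists for the distribution in use; the Release.key file name and location are an assumption, so check the repository for the actual key file:

# Switch the APT source from the EOL 17.2 repository to a supported release (18.2 assumed here).
echo 'deb http://download.bareos.org/bareos/release/18.2/xUbuntu_16.04 /' | sudo tee /etc/apt/sources.list.d/bareos.list
# Import that repository's (non-expiring) signing key; the exact key file location is an assumption.
wget -qO- http://download.bareos.org/bareos/release/18.2/xUbuntu_16.04/Release.key | sudo apt-key add -
sudo apt update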

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
792 [bareos-core] director major always 2017-03-01 11:56 2021-04-26 13:22
Reporter: chrisbzc Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 8  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to use "AllowDuplicateJobs = no" with AlwaysIncremental / Consolidate jobs
Description: I have my jobs / jobdefs set up as follows, as output by "show jobdefs" and "show jobs":

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "File"
  Pool = "AI-Incremental"
  FileSet = "LinuxAll"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  AllowDuplicateJobs = no
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
}

Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
}

Job {
  Name = "swift-storage.mycompany.com"
  Pool = "AI-Incremental"
  FullBackupPool = "AI-Full"
  Client = "swift-storage.mycompany.com"
  Schedule = "Nightly"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
  Accurate = yes
  AlwaysIncremental = yes
  AlwaysIncrementalJobRetention = 1 weeks
  AlwaysIncrementalKeepNumber = 7
  AlwaysIncrementalMaxFullAge = 1 months
}


I really need to be able to use "AllowDuplicateJobs = no" to cancel lower-level and queued up duplicates, because a full backup of this client takes a number of days. However, if I have AllowDuplicateJobs turned off then the Consolidate jobs fail because the VirtualFull jobs they create immediately cancel themselves. It looks like it's incorrectly treating the Consolidate job and the VirtualFull job it creates as duplicates:

01-Mar 10:00 thog-dir JobId 842: Start Consolidate JobId 842, Job=Consolidate.2017-03-01_10.00.00_34
01-Mar 10:00 thog-dir JobId 842: Looking at always incremental job swift-storage.mycompany.com
01-Mar 10:00 thog-dir JobId 842: swift-storage.mycompany.com: considering jobs older than 22-Feb-2017 10:00:03 for consolidation.
01-Mar 10:00 thog-dir JobId 842: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:00 thog-dir JobId 842: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:00:03
01-Mar 10:00 thog-dir JobId 842: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:00 thog-dir JobId 842: after ConsolidateFull: jobids: 819,822
01-Mar 10:00 thog-dir JobId 842: swift-storage.mycompany.com: Start new consolidation
01-Mar 10:00 thog-dir JobId 842: Using Catalog "MyCatalog"
01-Mar 10:00 thog-dir JobId 843: Fatal error: JobId 842 already running. Duplicate job not allowed.
01-Mar 10:00 thog-dir JobId 842: Job queued. JobId=843
01-Mar 10:00 thog-dir JobId 842: Consolidating JobId 843 started.
01-Mar 10:00 thog-dir JobId 842: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:00:03
  JobId: 842
  Job: Consolidate.2017-03-01_10.00.00_34
  Scheduled time: 01-Mar-2017 10:00:00
  Start time: 01-Mar-2017 10:00:03
  End time: 01-Mar-2017 10:00:03
  Termination: Consolidate OK

01-Mar 10:00 thog-dir JobId 843: Bareos thog-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 7.0 (wheezy)
  JobId: 843
  Job: swift-storage.mycompany.com.2017-03-01_10.00.03_35
  Backup Level: Virtual Full
  Client: "swift-storage.mycompany.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
  FileSet: "LinuxAll" 2016-09-29 20:43:39
  Pool: "AI-Full" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Storage from Pool's NextPool resource)
  Scheduled time: 01-Mar-2017 10:00:03
  Start time: 01-Mar-2017 10:00:03
  End time: 01-Mar-2017 10:00:03
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Accurate: no
  Termination: Backup Canceled


If I set AllowDuplicateJobs to "yes" in the JobDef, then the Consolidate job works, but then I get duplicate jobs that I have to cancel manually if I ever need to do a new Full backup, which is not ideal.

I have tried setting AllowDuplicateJobs to yes in the _Job_ resource:

root@thog:/etc/bareos/bareos-dir.d# cat jobdefs/DefaultJob.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "LinuxAll"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
}
root@thog:/etc/bareos/bareos-dir.d# cat job/Consolidate.conf
Job {
  Name = "Consolidate"
  JobDefs = "DefaultJob"
  Schedule = Consolidate
  Type = Consolidate
  Storage = File
  Pool = AI-Incremental
  Client = thog-fd
  Maximum Concurrent Jobs = 2
  Allow Duplicate Jobs = yes
}

But this doesn't seem to actually override the JobDef which seems very broken to me:

*reload
reloaded
*mes
You have no messages.
*show jobs
Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
}
*run
A job name must be specified.
The defined Job resources are:
     1: RestoreFiles
     2: Consolidate
     3: swift-storage.mycompany.cob
     4: BackupCatalog
Select Job resource (1-4): 2
Run Consolidate Job
JobName: Consolidate
FileSet: LinuxAll
Client: thog-fd
Storage: File
When: 2017-03-01 10:47:58
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=850
*mess
01-Mar 10:48 thog-dir JobId 850: Start Consolidate JobId 850, Job=Consolidate.2017-03-01_10.47.59_04
01-Mar 10:48 thog-dir JobId 850: Looking at always incremental job swift-storage.mycompany.com
01-Mar 10:48 thog-dir JobId 850: swift-storage.mycompany.com: considering jobs older than 22-Feb-2017 10:48:01 for consolidation.
01-Mar 10:48 thog-dir JobId 850: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:48 thog-dir JobId 850: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:48:01
01-Mar 10:48 thog-dir JobId 850: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:48 thog-dir JobId 850: after ConsolidateFull: jobids: 819,822
01-Mar 10:48 thog-dir JobId 850: swift-storage.mycompany.com: Start new consolidation
01-Mar 10:48 thog-dir JobId 850: Using Catalog "MyCatalog"
01-Mar 10:48 thog-dir JobId 851: Fatal error: JobId 850 already running. Duplicate job not allowed.
01-Mar 10:48 thog-dir JobId 850: Job queued. JobId=851
01-Mar 10:48 thog-dir JobId 850: Consolidating JobId 851 started.
01-Mar 10:48 thog-dir JobId 850: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:48:02
  JobId: 850
  Job: Consolidate.2017-03-01_10.47.59_04
  Scheduled time: 01-Mar-2017 10:47:58
  Start time: 01-Mar-2017 10:48:01
  End time: 01-Mar-2017 10:48:02
  Termination: Consolidate OK



I even tried setting up an entirely new JobDef with AllowDuplicateJobs set to "yes", but this failed too:

JobDefs {
  Name = "DefaultJobAllowDuplicates"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "File"
  Pool = "AI-Incremental"
  FileSet = "LinuxAll"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
}

Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJobAllowDuplicates"
  MaximumConcurrentJobs = 2
}

01-Mar 10:54 thog-dir JobId 852: Start Consolidate JobId 852, Job=Consolidate.2017-03-01_10.53.59_06
01-Mar 10:54 thog-dir JobId 852: Looking at always incremental job swift-storage.trac.jobs
01-Mar 10:54 thog-dir JobId 852: swift-storage.trac.jobs: considering jobs older than 22-Feb-2017 10:54:01 for consolidation.
01-Mar 10:54 thog-dir JobId 852: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:54 thog-dir JobId 852: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:54:01
01-Mar 10:54 thog-dir JobId 852: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:54 thog-dir JobId 852: after ConsolidateFull: jobids: 819,822
01-Mar 10:54 thog-dir JobId 852: swift-storage.trac.jobs: Start new consolidation
01-Mar 10:54 thog-dir JobId 852: Using Catalog "MyCatalog"
01-Mar 10:54 thog-dir JobId 853: Fatal error: JobId 852 already running. Duplicate job not allowed.
01-Mar 10:54 thog-dir JobId 852: Job queued. JobId=853
01-Mar 10:54 thog-dir JobId 852: Consolidating JobId 853 started.
01-Mar 10:54 thog-dir JobId 853: Bareos thog-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 7.0 (wheezy)
  JobId: 853
  Job: swift-storage.trac.jobs.2017-03-01_10.54.01_07
  Backup Level: Virtual Full
  Client: "swift-storage.trac.jobs" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
  FileSet: "LinuxAll" 2016-09-29 20:43:39
  Pool: "AI-Full" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Storage from Pool's NextPool resource)
  Scheduled time: 01-Mar-2017 10:54:01
  Start time: 01-Mar-2017 10:54:01
  End time: 01-Mar-2017 10:54:01
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Accurate: no
  Termination: Backup Canceled

01-Mar 10:54 thog-dir JobId 852: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:54:01
  JobId: 852
  Job: Consolidate.2017-03-01_10.53.59_06
  Scheduled time: 01-Mar-2017 10:53:58
  Start time: 01-Mar-2017 10:54:01
  End time: 01-Mar-2017 10:54:01
  Termination: Consolidate OK
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002641)
therm   
2017-05-11 07:02   
This one affects me as well.
Currently it is impossible to use Bareos in production if you have a lot of data (backup duration > 24h) and AlwaysIncremental is not working. Without getting it to work with AllowDuplicateJobs=no, the AlwaysIncremental feature is nearly useless.
(0002672)
chrisbzc   
2017-06-20 12:03   
Just wondering if anyone has a solution to this? I'd really like to roll out Bareos for all my company's backups but this one fatal flaw in AlwaysIncremental is blocking that...
(0002870)
dextor   
2018-01-06 16:27   
+1

I wrote a quick and dirty patch to fix this issue. Here it is:
--- a/src/dird/job.c 2017-12-14 13:57:08.000000000 +0300
+++ b/src/dird/job.c 2018-01-06 17:27:25.897508235 +0300
@@ -996,4 +996,8 @@
          }

+         if (djcr->is_JobType(JT_CONSOLIDATE)) {
+            break;
+         }
+
          if (cancel_dup || job->CancelRunningDuplicates) {
             /*
(0003069)
bluco   
2018-07-13 08:28   
+1
(0003103)
chrisbzc   
2018-08-29 16:32   
Is anyone from Bareos going to look at this? It's been almost a year and a half now since this was opened and not a word from the authors at all.
Even just a quick yes/no on whether the above three-line patch is good or not would be appreciated because at least then I know whether it's safe to patch my local build.
(0003140)
hsn   
2018-10-14 13:36   
This issue definitely needs a fix. I have a feeling that development is directed more towards new features than bugfixes.
(0003941)
brockp   
2020-04-14 21:28   
Tested on 19.2.6 and can confirm the issue still remains: jobs are canceled as duplicates when running Consolidate jobs, even if you set the Consolidate job to allow duplicates, because the generated jobs all inherit from the jobs they are consolidating.
(0004043)
frank   
2020-09-29 12:39   
FYI: We figured out a configuration workaround which you can use until it is fixed.


Configuration Workaround
======================

In principle we simply use an extra job for AI which is not scheduled but is used by the Consolidate job for the AI cycle. The configuration workaround looks like the following.

We have a normal job named "ai-backup-bareos-fd" which is automatically scheduled: one initial full backup and only incremental backups afterwards, with no AI directives set here.

We introduce another backup job named "ai-backup-bareos-fd-consolidate" which is NOT and NEVER automatically scheduled; it is only used for the Virtual Full generated by the Consolidate job, and we set our AI directives here.

As the normal initial full backup is gone after consolidation, and the generated virtual full has a different name (ai-backup-bareos-fd-consolidate) than the original backup job (ai-backup-bareos-fd),
we simply use a run script after the job is done to rename ai-backup-bareos-fd-consolidate to ai-backup-bareos-fd.

The renaming via the run-after script is required so we don't get a full-backup fallback when the normal backup job ai-backup-bareos-fd is scheduled again.
The initial full backup has been consolidated into the virtual full as mentioned above.

If you look at the jobdefs below you'll see that by this configuration we are able to use the "AllowDuplicateJobs = no" setting in normal backup jobs and "AllowDuplicateJobs = yes" for consolidation.

File: /etc/bareos/bareos-dir.d/job/ai-backup-bareos-fd.conf

Job {
  Name = "ai-backup-bareos-fd"
  JobDefs = "DefaultJob1"
  Client = "bareos-fd"
  Accurate = yes
}

File: /etc/bareos/bareos-dir.d/job/ai-backup-bareos-fd-consolidate.conf

Job {
  Name = "ai-backup-bareos-fd-consolidate"
  JobDefs = "DefaultJob2"
  Client = "bareos-fd"
  Accurate = yes
  AlwaysIncremental = yes
  AlwaysIncrementalJobRetention = 1 seconds
  AlwaysIncrementalKeepNumber = 1
  AlwaysIncrementalMaxFullAge = 1 seconds
  Run Script {
        console = "update jobid=%i jobname=ai-backup-bareos-fd"
        Runs When = After
        Runs On Client = No
        Runs On Failure = No
  }
}

File: /etc/bareos/bareos-dir.d/job/consolidate.conf

Job {
    Name = "Consolidate"
    Type = "Consolidate"
    Accurate = "yes"
    JobDefs = "DefaultJob3"
    Schedule = Consolidate
}

File: /etc/bareos/bareos-dir.d/jobdefs/DefaultJob1.conf

JobDefs {
  Name = "DefaultJob1"
  Type = Backup
  Level = Incremental
  Client = bareos-fd
  FileSet = "SelfTest"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Priority = 10
  Write Bootstrap = "@working_dir@/%c.bsr"
  Full Backup Pool = AI-Consolidated
  AllowDuplicateJobs = no
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
  CancelRunningDuplicates = no
  Schedule = "ai-schedule"
}

File: /etc/bareos/bareos-dir.d/jobdefs/DefaultJob2.conf

JobDefs {
  Name = "DefaultJob2"
  Type = Backup
  Level = Incremental
  Client = bareos-fd
  FileSet = "SelfTest"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Priority = 10
  Write Bootstrap = "@working_dir@/%c.bsr"
  Full Backup Pool = AI-Consolidated
  AllowDuplicateJobs = yes
  CancelLowerLevelDuplicates = no
  CancelQueuedDuplicates = no
  CancelRunningDuplicates = no
}

File: /etc/bareos/bareos-dir.d/jobdefs/DefaultJob3.conf

JobDefs {
  Name = "DefaultJob3"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "file-storage"
  Pool = "Full"
  FullBackupPool = "Full"
  IncrementalBackupPool = "Incremental"
  DifferentialBackupPool = "Differential"
  Client = "bareos-fd"
  FileSet = "linux-server-fileset"
  Schedule = "WeeklyCycle"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  Priority = 99
  AllowDuplicateJobs = no
  SpoolAttributes = yes
}

File: /etc/bareos/bareos-dir.d/schedule/Consolidate.conf

Schedule {
  Name = "Consolidate"
  run = at 12:00
}

File: /etc/bareos/bareos-dir.d/schedule/ai-schedule.conf

Schedule {
  Name = "ai-schedule"
  run = Incremental Mon-Fri at 21:00
}
(0004121)
frank   
2021-04-26 13:22   
Fix committed to bareos master branch with changesetid 14803.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1217 [bareos-core] storage daemon major always 2020-03-24 18:50 2021-01-15 10:15
Reporter: igormedo Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: high OS Version: 7  
Status: feedback Product Version: 18.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: SD error during S3 restore
Description: I have a working bareos installation with AWS S3 storage backend.
The lifecycle policy moves Full backup chunk files to Glacier Deep Archive.

I have a restore request from my client.

Did a restore request from glacier with this command:

aws s3 ls s3://mybucketname/CloudFull-0012/ | awk '{print $4}' | xargs -I{} -L 1 aws s3api restore-object --restore-request '{"Days":3,"GlacierJobParameters":{"Tier":"Standard"}}' --bucket mybucketname --key CloudFull-0012/{}

A few hours later, after completion of my request, I could copy any chunk file, so it is now reachable: aws s3 cp s3://mybucketname/CloudFull-0012/0957 ./

However, bareos cannot restore from it with the following error:

bareos-sd JobId 541: Please mount read Volume "CloudFull-0012" for:
Job: RestoreFiles.2020-03-24_18.32.23_12
Storage: "S3_Full" (AWS S3 Storage)
Pool: CloudIncremental
Media type: S3_Object_Full

bareos-sd JobId 541: Warning: stored/acquire.cc:331 Read acquire: Requested Volume "CloudFull-0012" on "S3_Full" (AWS S3 Storage) is not a Bareos labeled Volume, because: ERR=stored/block.cc:1036 Read zero bytes at 0:0 on device "S3_Full" (AWS S3 Storage).

The weird thing is that it refers to Pool: CloudIncremental, but CloudFull was used during the backup.
Tags: s3;droplet;aws;storage
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003914)
igormedo   
2020-03-24 20:40   
For testing, I did a restore from the Incremental pool with the STANDARD S3 storage tier.

It succeeded:

24-Mar 20:28 bareos-dir JobId 552: Start Restore Job RestoreFiles.2020-03-24_20.00.35_43
24-Mar 20:28 bareos-dir JobId 552: Connected Storage daemon at example.tld:9103, encryption: PSK-AES256-CBC-SHA
24-Mar 20:28 bareos-dir JobId 552: Using Device "S3_Incremental" to read.
24-Mar 20:28 bareos-dir JobId 552: Connected Client: client.tld-fd at client.tld:9102, encryption: ECDHE-RSA-AES256-GCM-SHA384
24-Mar 20:28 bareos-dir JobId 552: Handshake: Immediate TLS
24-Mar 20:28 bareos-sd JobId 552: Connected File Daemon at client.tld:9102, encryption: ECDHE-RSA-AES256-GCM-SHA384
24-Mar 20:28 bareos-sd JobId 552: Ready to read from volume "CloudIncremental-0023" on device "S3_Incremental" (AWS S3 Storage).
24-Mar 20:28 bareos-sd JobId 552: Forward spacing Volume "CloudIncremental-0023" to file:block 0:259569452.
24-Mar 20:28 bareos-sd JobId 552: Releasing device "S3_Incremental" (AWS S3 Storage).
24-Mar 20:28 bareos-dir JobId 552: Bareos bareos-dir 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 552
  Job: RestoreFiles.2020-03-24_20.00.35_43
  Restore Client: client.tld-fd
  Start time: 24-Mar-2020 20:28:23
  End time: 24-Mar-2020 20:28:49
  Elapsed time: 26 secs
  Files Expected: 1
  Files Restored: 1
  Bytes Restored: 171,943
  Rate: 6.6 KB/s
  FD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: Restore OK
(0003915)
arogge   
2020-03-26 10:47   
Can you rerun the restore with the SD put into trace mode with at least debuglevel 100 and take a look at the trace? Maybe it cannot find the volume chunks.
Thank you!
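For reference, trace mode can be switched on from bconsole; a minimal example (assuming the storage resource name "S3_Full" from the log above; the trace output should end up in a .trace file in the storage daemon's working directory):

*setdebug storage=S3_Full level=100 trace=1
(rerun the restore job, then collect and attach the trace file)
*setdebug storage=S3_Full level=0 trace=0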
(0003918)
igormedo   
2020-03-26 11:33   
I had to overwrite the bucket with the restored chunks:
aws s3 cp s3://mybucketname/Volumename/ s3://mybucketname/Volumename/ --storage-class STANDARD_IA --recursive --force-glacier-transfer

Looks like it's not enough that you can copy the chunks with aws cli.

This adds additional transfer costs, so I decided to keep the latest Full and the subsequent Differential backups in the STANDARD_IA and ONEZONE_IA tiers and move them to Glacier with lifecycle rules after a new Full backup is done.
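For illustration only, such a lifecycle rule could look roughly like this (the prefix and the 35-day delay are placeholders, not values from this setup):

aws s3api put-bucket-lifecycle-configuration --bucket mybucketname --lifecycle-configuration '{
  "Rules": [
    {
      "ID": "archive-full-volumes",
      "Filter": { "Prefix": "CloudFull-" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 35, "StorageClass": "DEEP_ARCHIVE" } ]
    }
  ]
}'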
(0003923)
arogge   
2020-03-26 14:23   
Does that mean you could fix the issue yourself?
(0004079)
TheCritter   
2021-01-15 09:50   
(Last edited: 2021-01-15 10:15)
Same error here. I use Bareos 20.0 on Ubuntu 20.04. Backup works fine, but restore does not.
bareos-sd JobId 9387: Warning: stored/acquire.cc:348 Read acquire: Requested Volume "FullS3-6401" on "RadosStorage" (Object S3 Storage) is not a Bareos labeled Volume, because: ERR=stored/block.cc:1074 Read zero bytes at 0:0 on device "RadosStorage" (Object S3 Storage).

I don't use S3 from AWS, I use S3 from the Ceph Rados Gateway.
If I don't use SSL, the restore works fine:
2021-01-15 07:43:13 bareos-sd JobId 9383: Ready to read from volume "FullS3-6401" on device "RadosStorage" (Object S3 Storage).
2021-01-15 07:43:13 bareos-sd JobId 9383: Forward spacing Volume "FullS3-6401" to file:block 0:207.

I'll open a new ticket for this. I think it's a little bit different.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1236 [bareos-core] director minor sometimes 2020-04-27 09:40 2020-12-17 20:47
Reporter: hostedpower Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 10  
Status: assigned Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: COPY from stdin failed: COPY terminated by new PQexec
Description: This query seems to take several minutes sometimes: COPY batch FROM STDIN

We get alerts in our monitoring about this long running query. Now we checked the logs and also see related errors: COPY from stdin failed: COPY terminated by new PQexec

Is this a bug, something that needs fine-tuning, or can it safely be ignored?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003967)
hostedpower   
2020-04-28 10:13   
We even see values of over 38 minutes at the moment for this query: longest query: 2311s (38 minutes 31 seconds)
(0003968)
arogge   
2020-04-28 11:32   
This happens when your PostgreSQL installation is slow on inserting. Did you tune your PostgreSQL installation? Take a look at https://pgtune.leopard.in.ua/ with "Data warehouse" for a good starting point.
The PostgreSQL default configuration (just like with MySQL) is not optimized to work with production workloads.
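For illustration, a pgtune-style starting point in postgresql.conf might look roughly like this (the values assume a dedicated host with 16 GB RAM and are placeholders only; use the numbers pgtune calculates for your hardware):

shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 2GB
work_mem = 64MB
wal_buffers = 16MB
max_wal_size = 8GB
checkpoint_completion_target = 0.9

After changing these, restart PostgreSQL and re-check how long the COPY batch FROM STDIN statements take.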
(0003970)
hostedpower   
2020-04-28 16:24   
we used that and configured quite some settings already :)
(0004072)
hostedpower   
2020-12-17 20:47   
This seems to happen still

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1057 [bareos-core] vmware plugin feature always 2019-02-14 10:00 2020-12-14 17:28
Reporter: alex.kanaykin Platform: Linux  
Assigned To: stephand OS: RHEL  
Priority: normal OS Version: 6  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Option to disable quiesced snapshot
Description: Hi,
I have a VM running RHEL 6 (Cisco UCM) to back up. When I tried to back it up I got the error msg = "An error occurred while quiescing the virtual machine. See the virtual machine's event log for details."

That VM doesn't support quiesced snapshots.

The question is: can you please add an option to the vmware-plugin to avoid quiescing the host, like Bacula's *quiesce_host=no*?

Anyway thank you for your great software!

with best regards, Alex.
Tags: plugin
Steps To Reproduce: Start a job on a VM that can't do quiesced snapshots.
Additional Information: Fatal error: python-fd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 38, in handle_plugin_event
return bareos_fd_plugin_object.handle_plugin_event(context, event)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 549, in handle_plugin_event
return self.start_backup_job(context)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 179, in start_backup_job
return self.vadp.prepare_vm_backup(context)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 764, in prepare_vm_backup
if not self.create_vm_snapshot(context):
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 930, in create_vm_snapshot
self.vmomi_WaitForTasks([self.create_snap_task])
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 1219, in vmomi_WaitForTasks
raise task.info.error
vim.fault.ApplicationQuiesceFault: (vim.fault.ApplicationQuiesceFault) {
dynamicType = ,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = "An error occurred while quiescing the virtual machine. See the virtual machine's event log for details.",
faultCause = ,
faultMessage = (vmodl.LocalizableMessage) [
(vmodl.LocalizableMessage) {
dynamicType = ,
dynamicProperty = (vmodl.DynamicProperty) [],
key = 'msg.checkpoint.save.fail2.std3',
arg = (vmodl.KeyAnyValue) [
(vmodl.KeyAnyValue) {
dynamicType = ,
dynamicProperty = (vmodl.DynamicProperty) [],
key = '1',
value = 'msg.snapshot.error-QUIESCINGERROR'
}
],
message = 'An error occurred while saving the snapshot: Failed to quiesce the virtual machine.'
},
(vmodl.LocalizableMessage) {
dynamicType = ,
dynamicProperty = (vmodl.DynamicProperty) [],
key = 'msg.snapshot.vigor.take.error',
arg = (vmodl.KeyAnyValue) [
(vmodl.KeyAnyValue) {
dynamicType = ,
dynamicProperty = (vmodl.DynamicProperty) [],
key = '1',
value = 'msg.snapshot.error-QUIESCINGERROR'
}
],
message = 'An error occurred while taking a snapshot: Failed to quiesce the virtual machine.'
}
]
System Description
Attached Files:
Notes
(0003451)
arogge   
2019-07-12 10:59   
sounds reasonable, but I don't know when we'll be able to implement it.
(0004065)
stephand   
2020-12-14 17:28   
Fix committed to bareos master branch with changesetid 14414.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1268 [bareos-core] director minor have not tried 2020-08-13 15:52 2020-08-21 12:16
Reporter: RobertF. Platform: x86  
Assigned To: arogge OS: Windows  
Priority: normal OS Version: 2016  
Status: resolved Product Version: 18.2.9  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Catalog error updating file digest
Description: Hello,

I have following warning on a Bareos Job:

Error: Catalog error updating file digest. cats/sql_update.cc:63 Update failed: affected_rows=0 for UPDATE File SET MD5='77zVjnA67LS3/IKQMA++Zg' WHERE FileId=6148914691236517205

Can somebody help me?

thanks Robert
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1212 [bareos-core] bconsole minor always 2020-03-16 15:11 2020-06-25 13:09
Reporter: hostedpower Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: low OS Version: 9  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Can no longer prune volumes
Description: We have a script to prune volumes, but since 19.2.x it returns an error:

for pool, vols in con.call('list volumes')['volumes'].items():
    for v in vols:
        if v['volstatus'] == 'Full' or v['volstatus'] == 'Used':
            print("Pruning %s" % v['volumename'])
            con.call('prune volume=%s yes' % v['volumename'])

bareos.exceptions.JsonRpcErrorReceivedException: failed: {u'jsonrpc': u'2.0', u'id': None, u'error': {u'message': u'failed', u'code': 1, u'data': {u'messages': {u'info': [u'The current Volume retention period is: 3 months 10 days \n']}, u'result': {}}}}

Did anything change in bconsole or is this a bug?



Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: prune_all_vol19.py (1,101 bytes) 2020-04-17 16:09
https://bugs.bareos.org/file_download.php?file_id=437&type=bug
Notes
(0003947)
arogge   
2020-04-17 15:20   
This seems to be broken in 19.2.
I'm trying to find out which change introduced the problem so I can fix it. Can you tell me which version exactly you were running before (where the script worked)?
Thank you!
(0003948)
arogge   
2020-04-17 16:09   
The problem is that the python-bareos library now handles errors differently.
I have attached a newer version of the script that should work with 19.2. This version should handle the retention message correctly.
I removed the functionality with the volume truncation as it should not be required. You can add it back if you want to. You may need to handle the exception when there were no volumes to be truncated.
(0003949)
hostedpower   
2020-04-17 16:38   
ok this part works!! Thanks!! :)

I'm checking:

for s in con.call('.storages')['storages']:
    if s['enabled']:
      print ('Truncating volumes on storage %s' % s['name'])
      con.call('truncate volstatus=Purged storage=%s yes' % s['name'])

But that also raises an exception. I don't know how to handle this in a smart way; any hints on how to properly interpret and handle exceptions here? :)
(0003956)
hostedpower   
2020-04-22 21:43   
Looking into this: if we don't explicitly truncate like you said, will the volumes be cleared/emptied? I think they were not, and that's why it was added in the first place; or what am I missing here? :)
(0003960)
arogge   
2020-04-23 12:48   
The usual mode of operation is that volumes are overwritten once they have been pruned.
(0003961)
hostedpower   
2020-04-23 12:51   
that wouldn't be bad, but it seems they build up too much sometimes, keeping more volumes with data than needed...
(0003962)
hostedpower   
2020-04-23 14:31   
Just added it as a test and it cleared up quite some extra space. My code is however not clean:

for s in con.call('.storages')['storages']:
    if s['enabled']:
        print ('Truncating volumes on storage %s' % s['name'])
        try:
            con.call('truncate volstatus=Purged storage=%s yes' % s['name'])
            print("truncated")
        except JsonRpcErrorReceivedException as e:
            print("excepted")


(It seems to always raise the exception, yet it executes properly.)
(0004012)
arogge   
2020-06-25 13:09   
The newer python-bareos treats some situations as errors that the previous one did not. Thus the client is now responsible for checking what happened and responding accordingly.
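For anyone running into the same thing, here is a minimal sketch of such client-side handling (assumptions: python-bareos from 19.2, a JSON console connection, and that the exception raised by "truncate" in this situation only means there was nothing to truncate; the address and password are placeholders):

import bareos.bsock
import bareos.exceptions

# Placeholder connection parameters; adjust to your director.
password = bareos.bsock.Password("secret")
con = bareos.bsock.DirectorConsoleJson(address="localhost", port=9101, password=password)

for s in con.call(".storages")["storages"]:
    if not s["enabled"]:
        continue
    print("Truncating purged volumes on storage %s" % s["name"])
    try:
        con.call("truncate volstatus=Purged storage=%s yes" % s["name"])
    except bareos.exceptions.JsonRpcErrorReceivedException as exc:
        # Since 19.2 the library raises even when there is simply nothing
        # left to truncate, so log the message and carry on instead of aborting.
        print("truncate reported an error (possibly nothing to do): %s" % exc)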

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1234 [bareos-core] director major always 2020-04-24 18:27 2020-06-22 19:50
Reporter: browcio Platform: Linux  
Assigned To: pstorz OS: Ubuntu  
Priority: normal OS Version: 18.04  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Log spammed with "0 jobs that will be pruned" after upgrade
Description: Hello,

earlier this week I upgraded our bareos instance from 19.2.6 to 19.2.7. Since then every job run causes flood of:
Volume "vol_12419" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned
Volume "vol_12420" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned
Volume "vol_12421" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned
Volume "vol_12423" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned
messages for every volume in the database. Also I observe much higher database load. I suspect that this may be caused by the changes in the pruning code mentioned in the release notes.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003969)
pstorz   
2020-04-28 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 13261.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1215 [bareos-core] director feature have not tried 2020-03-23 12:46 2020-04-20 10:34
Reporter: hostedpower Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: acknowledged Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Support number of jobs / storage daemon
Description: Hi,


We have 3 different storage daemons running now on 3 different servers. We really need a way to limit the number of jobs per storage daemon...

They all have independent hardware resources, their own bandwidth and varying processing power. It's somewhat strange this functionality doesn't even exist yet. It probably doesn't make much sense to have common settings for this?

We'd need this for regular backup jobs and for sure also for consolidates...

Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003944)
arogge   
2020-04-17 15:12   
This is a current limitation and there is no easy way to fix this. The Director only knows about Storages and has no idea on which Storage Daemon that Storage is located.
(0003950)
hostedpower   
2020-04-17 19:53   
can you set a max on the director and also on each storage daemon? The director could then query each storage daemon to see what has been set? :)
(0003951)
arogge   
2020-04-20 08:36   
The director doesn't know there is a storage daemon. It just knows there is a storage and it can be accessed using an address, name and pre-shared key. To apply any limitations on a per storage daemon basis the director first needs to know what a storage daemon is.
Definitely doable, but neither simple nor on our roadmap.
(0003952)
hostedpower   
2020-04-20 10:34   
As soon as it connects to that storage, would it be possible to simply assign jobs (like is now the case)? After that the storage daemon decides how many it wants to run concurrently. Still, the director should be smart enough to more or less evenly distribute jobs to the different storage daemons.

Let's say we put 24 as the max jobs for the director, and the limit on the storage daemon is 8. Then at least the daemons would be protected from overloading.

How do other people with multiple storage daemons cope with this? It's really not very scalable like this, while the architecture seems to be designed for that. :)
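Not a real per-daemon limit, but as a partial mitigation the concurrency can at least be capped per Storage resource on both sides; a rough sketch (names, addresses and numbers are placeholders, not from this setup):

# Director side, e.g. bareos-dir.d/storage/sd1-file.conf
Storage {
  Name = sd1-file
  Address = "sd1.example.org"
  Password = "secret"
  Device = "FileStorage"
  Media Type = File
  Maximum Concurrent Jobs = 8
}

# Storage daemon side, bareos-sd.d/storage/bareos-sd.conf on sd1
Storage {
  Name = sd1
  Maximum Concurrent Jobs = 8
}

The director-side setting only caps that one Storage resource, while the storage-daemon-side setting should cap the daemon as a whole, so at least the daemon itself is protected from overload.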

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
951 [bareos-core] General major always 2018-05-22 18:35 2020-03-20 12:43
Reporter: ameijeiras Platform: Linux  
Assigned To: arogge OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Differential backup compute date from last incremental
Description: I configure a standard backup policy Full (Monthly), Differential (Weekly) and incremental (Daily).

If I configure it using a daily incremental schedule and control Differentials and Fulls using the variables "Max Diff Interval = 6 days" (to perform a Differential backup weekly) and "Max Full Interval = 31 days" (to perform a Full backup monthly), the Differential backup date is wrongly computed from the date of the last Incremental instead of from the last Full backup date. This creates a problem in the policy (Full, Diff, Inc) because Differential backups will not really be differentials and will only contain data since the last Incremental.
If I run a Differential backup manually or schedule it explicitly, for example: "
Schedule {
 Name = Backup_test
 Run = Level=Full 1st sun at 02:00
 Run = Differential 2nd-5th sun at 02:00
 Run = Level=Incremental mon-sat at 02:00
}
" differentials backups date are compute right since last Full backup
Tags:
Steps To Reproduce: Configure a Full, Differential and Incremental backup policy using this method:

# ---------- Jobs config ---------------
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual
  Max Diff Interval = 6 days
  Max Full Interval = 31 days
}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Incremental at 02:00
}
Additional Information: Example config that generates wrong Differential backups:
...
# ---------- Jobs config ---------------
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual
  Max Diff Interval = 6 days
  Max Full Interval = 31 days
}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Incremental at 02:00
}


Example config that generates correct Differential backups:
...
Job {
  Name = Backup_test
  Enabled = yes
  JobDefs = "DefaultJob"
  Messages = Standard
  Type = Backup
  Storage = CabinaFS1
  Client = test-fd
  FileSet = Archivos_test
  Schedule = Backup_test
  Full Backup Pool = test_Mensual
  Differential Backup Pool = test_Semanal
  Incremental Backup Pool = test_Diario
  Pool = test_Mensual


}
# ---------- Schedule config ----------------
Schedule {
 Name = Backup_test
 Run = Level=Full 1st sun at 02:00
 Run = Differential 2nd-5th sun at 02:00
 Run = Level=Incremental mon-sat at 02:00
}
...
System Description
Attached Files:
Notes
(0003147)
comfortel   
2018-10-24 14:17   
Same problem here:
https://bugs.bareos.org/view.php?id=1001
(0003168)
arogge   
2019-01-09 09:04   
I just confirmed that this happens at least in 17.2.4 on Ubuntu 16.04. We will try to fix the issue and to find out which versions are affected.
(0003169)
arogge   
2019-01-10 15:03   
Can you please try my packages with an applied fix in your environment and make sure this actually fixes the issue for you?

http://download.bareos.org/bareos/people/arogge/951/xUbuntu_16.04/amd64/
(0003227)
arogge_adm   
2019-01-30 13:12   
Fix committed to bareos dev/arogge/bareos-18.2/fix-951 branch with changesetid 10978.
(0003269)
comfortel   
2019-02-19 11:42   
when can we expect a change in official repo 18.2 for ubuntu?
(0003270)
arogge   
2019-02-19 11:54   
AFAICT you haven't even tried the test-packages I provided.

The bug itself is not fixed yet, because my patch is flawed (picks up changes from the latest full or differential instead of the last full).
If you need this fixed soon, you can open a support case or take a look for yourself (the branch with my candidate fix is publicly available on github).
(0003312)
comfortel   
2019-04-04 13:33   
Hello.

We tested Incremental and all is OK.
jobfiles shows 7 and 6 changed files.

Automatically selected FileSet: FileSetBackup_ansible.organisation.pro
+--------+-------+----------+-------------+---------------------+---------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+--------+-------+----------+-------------+---------------------+---------------------------------+
| 20,280 | F | 47,522 | 130,433,828 | 2019-04-04 13:38:20 | Full_ansible.organisation.pro_0781 |
| 20,281 | I | 7 | 761,010 | 2019-04-04 13:40:50 | Incr_ansible.organisation.pro_0782 |
| 20,282 | I | 6 | 2,144 | 2019-04-04 13:55:08 | Incr_ansible.organisation.pro_0783 |
+--------+-------+----------+-------------+---------------------+---------------------------------+
(0003335)
comfortel   
2019-04-15 09:17   
Automatically selected FileSet: FileSetBackup_ansible.organisation.pro
+--------+-------+----------+-------------+---------------------+---------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+--------+-------+----------+-------------+---------------------+---------------------------------+
| 20,280 | F | 47,522 | 130,433,828 | 2019-04-04 13:38:20 | Full_ansible.organisation.pro_0781 |
| 20,291 | D | 29 | 23,345,104 | 2019-04-12 01:05:02 | Diff_ansible.organisation.pro_0789 |
| 20,292 | I | 6 | 54,832 | 2019-04-13 01:05:02 | Incr_ansible.organisation.pro_0783 |
| 20,293 | I | 0 | 0 | 2019-04-14 01:05:03 | Incr_ansible.organisation.pro_0784 |
| 20,294 | I | 0 | 0 | 2019-04-15 01:05:03 | Incr_ansible.organisation.pro_0786 |
+--------+-------+----------+-------------+---------------------+---------------------------------+
You have selected the following JobIds: 20280,20291,20292,20293,20294
(0003336)
arogge   
2019-04-15 09:35   
Thanks for the confirmation.
I'll try to get the patch merged for the next release.
(0003337)
comfortel   
2019-04-15 14:43   
Thanks
(0003392)
comfortel   
2019-06-13 16:58   
When will this fix be added?
(0003562)
comfortel   
2019-08-30 12:23   
Hello. When will this fix be added to upstream?
(0003563)
arogge   
2019-08-30 12:27   
The fix didn't pass the review. The implemented behaviour is wrong, so it'll need work.
(0003887)
comfortel   
2020-03-06 13:25   
Hello. Will there be no fix for this?
(0003888)
arogge   
2020-03-06 13:43   
There is no immediate plan to fix this. This is only an issue if your job upgrades to Differential because of "Max Diff Interval" and has Accurate disabled, which is discouraged.
As I already wrote: if you need it fixed soon, you can open a support case or take a look for yourself.
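For completeness, on the configuration side the mitigation is simply enabling Accurate mode in the affected job; a minimal sketch based on the job from this report (all other directives stay as posted above):

Job {
  Name = Backup_test
  JobDefs = "DefaultJob"
  Accurate = yes
  Max Diff Interval = 6 days
  Max Full Interval = 31 days
}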
(0003889)
comfortel   
2020-03-06 14:18   
Thanks, we'll try to enable "Accurate" asap.
(0003898)
comfortel   
2020-03-20 12:43   
We use "Accurate" and all ok. Thanks.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1182 [bareos-core] webui minor always 2020-02-09 20:30 2020-02-20 09:23
Reporter: zendx Platform: Linux  
Assigned To: stephand OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Files list in restore dialog is empty after updating 18.2.5 to 19.2.5
Description: I've upgraded bareos 18.2.5 -> 19.2.5
PostgreSQL.

/usr/lib/bareos/scripts/update_bareos_tables

DB migrated to version 2192

.bvfs_clear_cache yes done without errors.


After .bvfs_update, some error messages appears in the logfile:

09-Feb 13:08 bareos-dir JobId 0: Fatal error: cats/bvfs.cc:244 cats/bvfs.cc:244 query INSERT INTO PathVisibility (PathId, JobId) SELECT DISTINCT PathId, JobId FROM (SELECT PathId, JobId FROM File WHERE JobId = 13066 UNION SELECT PathId, BaseFiles.JobId FROM BaseFiles JOIN File AS F USING (FileId) WHERE BaseFiles.JobId = 13066) AS B failed: ERROR: duplicate key value violates unique constraint "pathvisibility_pkey"


Restore dialog in WebUI shows all jobs for clients, but files list is empty for all of them :(
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003794)
stephand   
2020-02-12 08:35   
Thanks for reporting this. I verified that it is a race condition that unfortunately happens sometimes when multiple .bvfs_update runs execute in parallel. It is perfectly OK to run multiple .bvfs_update in parallel, and using the Bareos webui also causes .bvfs_update to run. In most cases the code prevents running a bvfs update on the same JobId in parallel, see https://github.com/bareos/bareos/blob/master/core/src/cats/bvfs.cc#L191, but I was able to reproduce the error nevertheless, both in 18.2 and 19.2. So this is not related to the upgrade. However, the error does not impact the consistency of the bvfs cache data, so please consider it a warning for the time being. It needs to be fixed in the future.

The other problem, "Restore dialog in WebUI shows all jobs for clients, but files list is empty for all of them :(", is related to the upgrade. It's fixed with https://github.com/bareos/bareos/pull/411 and included in 19.2.6; please update.
(0003795)
stephand   
2020-02-12 08:38   
Could you please check if it works with Bareos 19.2.6?
(0003843)
mfulz   
2020-02-19 23:36   
I can confirm the empty files list is fixed with 19.2.6.
I had the same issue with the 19.2.5 version.

THX
(0003844)
stephand   
2020-02-20 09:23   
Issue is fixed in 19.2.6

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1165 [bareos-core] director major random 2020-01-03 09:57 2020-01-07 15:09
Reporter: franku Platform:  
Assigned To: franku OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Same job name for jobs submitted at the same time
Description: Restore jobs submitted at the same time end up with the same job name, which results in one of the jobs being rejected by the storage daemon and failing.
Tags:
Steps To Reproduce: See attached files:

why.c: test-source file with the code stripped down to the essence
output: log-output of the test that shows up the problem
Additional Information: The seq variable is incremented inside of the mutex, which should be safe, but then its value is read into the JobControlRecord outside of the mutex, which is a race condition if other threads are manipulating the value at the same time.
Attached Files: why.c (1,939 bytes) 2020-01-03 09:57
https://bugs.bareos.org/file_download.php?file_id=398&type=bug
output (1,116 bytes) 2020-01-03 09:57
https://bugs.bareos.org/file_download.php?file_id=399&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1162 [bareos-core] file daemon major always 2019-12-18 16:01 2019-12-18 18:44
Reporter: arogge Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 19.2.4~pre  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 19.2.4~rc1  
    Target Version: 19.2.4~rc1  
Summary: When restoring files without directories, the permissions of the immediate parent directory are wrong
Description: If you restore files (and not directories) with a restore job and the parent directories are created by the filedaemon, then these directories will have weird permission bits set.
Tags:
Steps To Reproduce: 1. start restore browser
2. select a single file in any non-top directory
3. restore to a non-existant location
4. observe weird permission bits on the file's immediate parent directory
Additional Information: It looks like the filedaemon guesses what permission the directory should have based on the file that is being restored. This is inconsistent and the whole behaviour should probably be rewritten sometime.
Attached Files:
Notes
(0003702)
arogge   
2019-12-18 17:22   
Fix committed to bareos master branch with changesetid 12446.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1014 [bareos-core] webui minor always 2018-09-30 15:36 2019-12-12 09:15
Reporter: progserega Platform: amd64  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9.4  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: webui does not show statistics about a running job, but bconsole does
Description: The webui does not show statistics about a running job, but bconsole does.
Tags:
Steps To Reproduce: 1. Start a job
2. Look at the webui statistics for the currently running job - no statistics; the webui says it has not collected enough data
3. At the same time, open bconsole and get the status of this client; you see statistics (bytes/sec, files and others) which change after each "status client=client_name"
4. After some time the statistics window in the webui shows: "Error fetching data."
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1121 [bareos-core] file daemon text always 2019-10-18 16:20 2019-10-18 16:20
Reporter: b.braunger@syseleven.de Platform: Linux  
Assigned To: OS: RHEL  
Priority: low OS Version: 7  
Status: new Product Version: 18.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: documentation of setFileAttributes for plugins not correct
Description: The documentation says that setFileAttributes for FD plugins is not implemented yet. However, in a Python plugin I can write a corresponding function and it gets called.

https://docs.bareos.org/DeveloperGuide/pluginAPI.html#setfileattributes-bpcontext-ctx-struct-restore-pkt-rp

Is the documentation out of date or am I getting something wrong?
Tags:
Steps To Reproduce: * Create a restore job which calls a python plugin
* start a FD with debug output (at least 150)
* watch the logfile of the FD
Additional Information: LOG example:
test-server (150): filed/fd_plugins.cc:1308-326 PluginSetAttributes
test-server (100): filed/python-fd.cc:2740-326 python-fd: set_file_attributes() entry point in Python called with RestorePacket(stream=1, data_stream=2, type=3, file_index=43337, linkFI=0, uid=0, statp="StatPacket(dev=0, ino=0, mode=0644, nlink=0, uid=0, gid=0, rdev=0, size=-1, atime=1571011435, mtime=1563229608, ctime=1571011439, blksize=4096, blocks=1)", attrEx="", ofname="/opt/puppetlabs/puppet/share/doc/openssl/html/man3/SSL_CTX_set_session_id_context.html", olname="", where="", RegexWhere="<NULL>", replace=97, create_status=2)
test-server (150): filed/restore.cc:517-326 Got hdr: Files=43337 FilInx=43338 size=210 Stream=1, Unix attributes.
test-server (130): filed/restore.cc:534-326 Got stream: Unix attributes len=210 extract=0
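For what it's worth, a minimal illustration of overriding this entry point in an 18.2-style Python FD plugin (the class name and message text are made up; only the entry-point name and its context/RestorePacket arguments are taken from the log above):

import bareosfd
import BareosFdPluginBaseclass
from bareos_fd_consts import bRCs

class MyRestorePlugin(BareosFdPluginBaseclass.BareosFdPluginBaseclass):
    def set_file_attributes(self, context, restorepkt):
        # Called by the FD for each restored file; restorepkt carries ofname and statp.
        bareosfd.DebugMessage(context, 100, "set_file_attributes() for %s\n" % restorepkt.ofname)
        return bRCs["bRC_OK"]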
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1087 [bareos-core] director minor always 2019-06-05 15:38 2019-07-03 10:00
Reporter: isi Platform: Linux  
Assigned To: astoorangi OS: Debian  
Priority: normal OS Version: 9  
Status: assigned Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: LDAP Plugin error
Description: When backing up a large group (> 1000 members) I get the following error; we have only one group in LDAP with more than 1000 members:
.
.
.
Error: Read error on file @LDAP/dc=de/dc=elkb/ou=group/cn=g_internet/data.ldif. ERR=Erfolg
Releasing device "FileStorage" (/tank8/bareos/filepool01)
Elapsed time=00:00:02, Transfer rate=1.034 M Bytes/second
Insert of attributes batch table with 4218 entries start
Insert of attributes batch table done
.
.
.
Backup Ends with:
Termination: Backup OK -- with warnings

list files jobid=94 lists a lot of LDAP entries, groups and people
Tags: plugin
Steps To Reproduce: Try to back up a large LDAP group.
The LDAP server is OpenLDAP running on Debian 9:
Package: slapd
Version: 2.4.44+dfsg-5+deb9u2

The Bareos Director is the latest version, 18.2.5.

The Bareos File Daemon on the ldap02 server is:
Package: bareos-filedaemon
Version: 18.2.5-147.2
Additional Information:
System Description
Attached Files:
Notes
(0003397)
joergs   
2019-07-02 15:45   
1000 objects is the default limit of OpenLDAP. You have to increase the server search limit to "unlimited", at least the hard limit, see http://www.openldap.org/doc/admin24/guide.html#Limits

slapd.conf:

# should work:
sizelimit unlimited

# might work:
sizelimit size.soft=1000 size.hard=unlimited
(0003402)
isi   
2019-07-02 16:24   
Thanks, I have tested with:
olcSizeLimit size.soft=2000 size.hard=unlimited
and
olcSizeLimit size.soft=2000 size.hard=2000

got the same error. ldapsearch from the console works fine.

How do I enable debug logging in the LDAP plugin?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
989 [bareos-core] webui trivial always 2018-07-20 00:37 2019-06-03 08:42
Reporter: HiFlyer Platform: Linux  
Assigned To: aron_s OS: Ubuntu  
Priority: low OS Version: 16.04  
Status: assigned Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Sorting on Column "Retention/Expiration" fails
Description: When clicking on Retention/Expiration it should sort the entries according to some logic that shows the days to expiration. No sort occurs. All other columns work correctly.

Not a big deal if it ever gets fixed, but I thought I would note it.
Tags:
Steps To Reproduce: Go to Storage/Pools/"pick any client".
click on "Retention/Expiration".
no sorting occurs even though there are different labels in the column.
if you click on the other columns sorting does occur according to the column.
Additional Information:
System Description
Attached Files:
Notes
(0003083)
aron_s   
2018-07-26 14:25   
The same column also does not work under Storages/Volumes. Will look into this.
(0003144)
HiFlyer   
2018-10-22 13:10   
Found the issue: in that column you actually have to click on the arrow rather than on the name. In the other columns you can just click on the name. It does seem though that if you click on the name, the column gets highlighted but nothing else happens. Hope this helps.
(0003388)
colttt   
2019-06-03 08:42   
Hello,
for me neither clicking on the arrows nor on the name works.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
728 [bareos-core] webui major always 2016-11-25 11:15 2018-08-09 10:25
Reporter: excalibur Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: high OS Version: 7  
Status: acknowledged Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: restore to the different client doesn't work
Description: When I tried to restore a backup to another host (client) in the web-ui I got:

https://www.dropbox.com/s/jh9s33mwzpok2jb/Screenshot%202016-11-25%2012.10.16.png?dl=0

and the restore doesn't start

apache log:

[Fri Nov 25 10:31:15 2016] [error] [client 192.168.1.120] PHP Warning: unpack(): Type N: not enough input, need 4, have 0 in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 170, referer: http://myhost.com/backuper/restore/?type=client&jobid=&client=excalibur-192.168.1.125-fd&restoreclient=&restorejob=&where=&fileset=&mergefilesets=0&mergejobs=0
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002531)
frank   
2017-01-26 14:12   
I'm not able to reproduce what you have reported, yet.

Can you please describe in more detail, and in order, the steps you took to restore to another client (additional screenshots would be great but optional)?

That would be very helpful to reproduce the issue.

Thanks in advance.
(0002578)
achazot   
2017-02-22 08:42   
(Last edited: 2017-02-22 08:44)
Exactly the same trouble today.
I've reproduced this bug with a newly added client, wanting to restore onto it a HUGE backup from another machine.
Tested with Firefox and Opera.

(0002580)
khetzal   
2017-02-23 14:02   
Hello,

I have the same problem restoring a Percona XtraDB backup.
It works for some destination hosts but not all, and I don't know the difference between the ones that work and the ones that don't, so that won't help much.
(0002590)
frank   
2017-03-02 16:51   
You are both able to restore to another client via bconsole, and just the webui is making trouble in that case?
(0002600)
achazot   
2017-03-06 09:25   
It works with bconsole. The trouble is only with the webui.
(0002644)
riot   
2017-05-30 17:29   
(Last edited: 2017-05-30 17:31)
I'm also having this problem. It doesn't seem to write anything useful to any log.

The job keeps running infinitely.

Restoring to the local machine or to the same client the backups were taken from works fine.

(0003097)
lankme   
2018-08-09 10:25   
Same problem:
---------apache errorlog----------------
[Thu Aug 09 10:16:09.287787 2018] [:error] [pid 854] [client 192.168.1.100:60968] PHP Warning: unpack(): Type N: not enough input, need 4, have 0 in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 170, referer: http://bareos/bareos-webui/restore/?type=client&jobid=1186&client=pc-user1-tp-fd&restoreclient=&restorejob=&where=&fileset=&mergefilesets=0&mergejobs=0&limit=2000
-----------------------------------------
root@bareos:/home/# uname -a
Linux bareos 4.9.0-7-amd64 0000001 SMP Debian 4.9.110-3+deb9u1 (2018-08-03) x86_64 GNU/Linux
root@bareos:/home/# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.5 (stretch)
Release: 9.5
Codename: stretch
root@bareos:/home/# dpkg -l | grep bare
ii bareos 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - metapackage
ii bareos-bconsole 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-director 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-storage 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-tools 17.2.4-9.1 amd64 Backup Archiving Recovery Open Sourced - common tools
ii bareos-webui 17.2.4-15.1 all Backup Archiving Recovery Open Sourced - webui

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
913 [bareos-core] documentation trivial always 2018-02-15 12:24 2018-02-15 14:09
Reporter: colttt Platform: Linux  
Assigned To: frank OS: Debian  
Priority: none OS Version: 9  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: incomplete documentation for installing the webui
Description: In 3.3.3 it's not possible to create this user unless you restart bareos-dir, because the profile webui-admin doesn't exist and the command ends with an error:

"configure error: Could not find config Resource "Profile" referenced on line 4 : Profile = webui-admin"
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
849 [bareos-core] webui minor always 2017-08-31 16:03 2017-09-01 12:17
Reporter: chaos_prevails Platform: linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 16.04.3 amd64  
Status: assigned Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: JSON Errors when managing autoloader on remote storage via webui
Description: When using a remote storage with an autoloader, I get JSON errors when trying to manage the autoloader. All 3 tables (import/export slots, drives, slots) stay empty. bareos-dir and bareos-webui are installed on the same machine, bareos-sd with the autoloader is installed on another machine

The JSON error messages are:
"DataTables warning: table id=storage-drives - Invalid JSON response. For more information about this error, please see http://datatables.net/tn/1"

upon clicking OK, I get:
"DataTables warning: table id=storage-slots - Invalid JSON response. For more information about this error, please see http://datatables.net/tn/1"

upon clicking OK, I get:
"DataTables warning: table id=storage-ie-slots - Invalid JSON response. For more information about this error, please see http://datatables.net/tn/1"

via bconsole, I don't see any errors. I can label and list volumes, move them, etc.
Tags:
Steps To Reproduce: 1. configure remote storage autoloader + reload sd and dir
2. login to webui
3. click storages
4. click "manage autoloader"
Additional Information: In the webui, I can successfully import volumes with "import all". The drive and the slots show up, and I get the following message:
Connecting to Storage daemon XXXX_HP_G2_Autochanger at XXXXX.org:9103 ...
3306 Issuing autochanger "listall" command.
Nothing to do

Storage status:
Connecting to Storage daemon remote-sd_HP_G2_Autochanger at remote-sd.address:9103

remote-sd Version: 16.2.4 (01 July 2016) x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
Daemon started 30-Aug-17 17:11. Jobs: run=0, running=0.
 Heap: heap=135,168 smbytes=46,171 max_bytes=2,178,242 bufs=121 max_bufs=135
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0 bwlimit=0kB/s

Running Jobs:
No Jobs running.
====

Jobs waiting to reserve a drive:
====

Terminated Jobs:
====

Device status:
Autochanger "HP_G2_Autochanger" with devices:
   "Ultrium920" (/dev/st2)

Device "Ultrium920" (/dev/st2) is not open.
    Slot 8 was last loaded in drive 0.
==
====

Used Volume status:
VOLUMEX on device "Ultrium920" (/dev/st2)
    Reader=0 writers=0 reserves=0 volinuse=0
====

====

my configuration files:
1) bareos-dir server:
bareos-dir.d/director/bareos-dir.conf
Director { # define myself
  Name = bareos-dir
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "XXXX1" # Console password
  Messages = Daemon
  Auditing = yes

  # Enable the Heartbeat if you experience connection losses
  # (eg. because of your router or firewall configuration).
  # Additionally the Heartbeat can be enabled in bareos-sd and bareos-fd.
  #
  # Heartbeat Interval = 1 min

  # remove comment in next line to load dynamic backends from specified directory
  # Backend Directory = /usr/lib/bareos/backends

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all director plugins (*-dir.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
}

bareos-dir.d/storage/remote-sd.conf
Storage {
  Name = remote-sd_HP_G2_Autochanger
  Address = "remote-sd.address"
  Password = "xxxx2"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
  Autochanger = yes
  TLS Certificate = /etc/bareos/ssl/remote-sd.crt
  TLS Key = /etc/bareos/ssl/remote-sd.key
  TLS CA Certificate File = /etc/bareos/ssl/xx
  TLS DH File = /etc/bareos/ssl/xx
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
}

2) bareos-sd remote machine with autochanger
bareos-sd.d/director/bareos-dir.conf
Director {
  Name = bareos-dir
  Password = "[md5]xxxx2"
  Description = "Director, who is permitted to contact this storage daemon."
  TLS Certificate = /etc/bareos/ssl/remote-sd.crt
  TLS Key = /etc/bareos/ssl/remote-sd.key
  TLS CA Certificate File = /etc/bareos/ssl/xx
  TLS DH File = /etc/bareos/ssl/xx
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
}

bareos-sd.d/storage/bareos-sd.conf
Storage {
  Name = remote-sd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = /etc/bareos/ssl/remote-sd.crt
  TLS Key = /etc/bareos/ssl/remote-sd.key
  TLS DH File = /etc/bareos/ssl/
  TLS CA Certificate File = /etc/bareos/ssl/
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
}

bareos-sd.d/autochanger/HP_G2_Autochanger.conf
Autochanger {
  Name = "HP_G2_Autochanger"
  Device = Ultrium920
  Changer Device = /dev/sg5
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

bareos-sd.d/device/Ultrium920.conf
Device {
  Name = "Ultrium920"
  Media Type = LTO
  Archive Device = /dev/st2
  Autochanger = yes
  LabelMedia = no
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Maximum Spool Size = 50G
  Spool Directory = /tmp
  Maximum Block Size = 2097152
# Maximum Block Size = 4194304
  Maximum File Size = 50G
}
System Description
Attached Files: bareos-webui_error1.jpg (67,727 bytes) 2017-08-31 16:03
https://bugs.bareos.org/file_download.php?file_id=261&type=bug
jpg

bareos-webui_error2.jpg (145,640 bytes) 2017-08-31 16:05
https://bugs.bareos.org/file_download.php?file_id=262&type=bug
jpg
Notes
(0002716)
chaos_prevails   
2017-08-31 16:06   
I forgot to mention: with the director installed on the remote machine, there are no JSON error messages

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
832 [bareos-core] webui minor always 2017-07-14 11:05 2017-07-20 11:12
Reporter: bkasprzyk Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 8.6  
Status: assigned Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Problem listing volumes using autochanger
Description: I am unable to list volumes from the "manage autochanger" tab.
But when I try to list volumes from bconsole it works:
list volumes pool=offsite_copy
Using Catalog "MyCatalog"
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+--------------+
| mediaid | volumename | volstatus | enabled | volbytes | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten | storage |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+--------------+
| 920 | 000101L3 | Full | 1 | 426,288,190,164 | 36 | 1 | 1 | 1 | 1 | LTO-3 | 2017-07-13 14:59:13 | offsite_copy |
| 962 | 000104L3 | Full | 1 | 349,629,967,465 | 36 | 1 | 1 | 2 | 1 | LTO-3 | 2017-07-13 14:27:04 | offsite_copy |
| 983 | 000106L3 | Append | 1 | 203 | 0 | 604,800 | 1 | 7 | 1 | LTO-3 | | offsite_copy |
| 984 | 000105L3 | Append | 1 | 203 | 0 | 604,800 | 1 | 3 | 1 | LTO-3 | | offsite_copy |
| 985 | 000103L3 | Append | 1 | 203 | 0 | 604,800 | 1 | 4 | 1 | LTO-3 | | offsite_copy |
| 986 | 000102L3 | Append | 1 | 203 | 0 | 604,800 | 1 | 5 | 1 | LTO-3 | | offsite_copy |
+---------+------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+--------------+

I also tried to use the same command as specified in the PHP file /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on
line 47: status storage="' . $storage . '" slots
But after typing status storage=offsite_copy slots I get this:

Connecting to Storage daemon offsite_copy at offsite_copy:9103 ...
3306 Issuing autochanger "listall" command.
 Slot | Volume Name | Status | Media Type | Pool |
------+------------------+-----------+----------------+--------------------------|
    1 | 000101L3 | Full | LTO-3 | offsite_copy |
    2 | 000104L3 | Full | LTO-3 | offsite_copy |
    3 | 000105L3 | Append | LTO-3 | offsite_copy |
    4 | 000103L3 | Append | LTO-3 | offsite_copy |
    5*| 000102L3 | Append | LTO-3 | offsite_copy |
    6*| ? | ? | ? | ? |
    7 | 000106L3 | Append | LTO-3 | offsite_copy |
    8*| ? | ? | ? | ? |
Slot 9 greater than max 8 ignored.
Slot 10 greater than max 8 ignored.
Slot 11 greater than max 8 ignored.
Slot 12 greater than max 8 ignored.
Slot 13 greater than max 8 ignored.
Slot 14 greater than max 8 ignored.
Slot 15 greater than max 8 ignored.
Slot 16 greater than max 8 ignored.
Tags:
Steps To Reproduce: When I click on "manage autochanger" from the Storage tab, I get error windows (details in the attachment).
Additional Information: I have some helpful debug information in /var/log/apache/error.log:

[Fri Jul 14 10:46:52.314572 2017] [:error] [pid 10127] [client Client_addr:54106] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 174, referer: http://bareos-webui/storage//
[Fri Jul 14 10:46:52.314621 2017] [:error] [pid 10127] [client Client_addr:54106] PHP Warning: Invalid argument supplied for foreach() in /usr/share/bareos-webui/module/Storage/src/Storage/Controller/StorageController.php on line 120, referer: http://bareos-webui/storage//
[Fri Jul 14 10:46:53.634660 2017] [:error] [pid 10127] [client Client_addr:54106] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 50, referer: http://bareos-webui/storage/details/offsite_copy
[Fri Jul 14 10:46:53.738599 2017] [:error] [pid 10127] [client Client_addr:54106] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 50, referer: http://bareos-webui/storage/details/offsite_copy
[Fri Jul 14 10:46:53.858596 2017] [:error] [pid 10127] [client Client_addr:54106] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 50, referer: http://bareos-webui/storage/details/offsite_copy
[Fri Jul 14 10:46:59.046489 2017] [:error] [pid 11459] [client Client_addr:54112] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 174, referer: http://bareos-webui/storage/details/offsite_copy
[Fri Jul 14 10:46:59.046527 2017] [:error] [pid 11459] [client Client_addr:54112] PHP Warning: Invalid argument supplied for foreach() in /usr/share/bareos-webui/module/Storage/src/Storage/Controller/StorageController.php on line 120, referer: http://bareos-webui/storage/details/offsite_copy
[Fri Jul 14 10:47:04.209490 2017] [:error] [pid 11459] [client Client_addr:54112] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 50, referer: http://bareos-webui/storage/details/offsite_copy?action=updateslots&storage=offsite_copy
[Fri Jul 14 10:47:04.302691 2017] [:error] [pid 10130] [client Client_addr:54114] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 50, referer: http://bareos-webui/storage/details/offsite_copy?action=updateslots&storage=offsite_copy
[Fri Jul 14 10:47:04.398596 2017] [:error] [pid 11459] [client Client_addr:54112] PHP Notice: Undefined index: contents in /usr/share/bareos-webui/module/Storage/src/Storage/Model/StorageModel.php on line 50, referer: http://bareos-webui/storage/details/offsite_copy?action=updateslots&storage=offsite_copy



I also tried bareos-webui in the nightly version, and this error does not appear.
System Description
Attached Files: bareos_screenshots.tar.xz (219,884 bytes) 2017-07-14 11:05
https://bugs.bareos.org/file_download.php?file_id=255&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
831 [bareos-core] webui minor have not tried 2017-07-11 12:39 2017-07-11 12:39
Reporter: frank Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: assigned Product Version: 16.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Update Check: Windows client versions are not determined properly
Description: It looks like Windows client versions are not determined properly. Meaning, even if the client is up to date, it is labeled grey in the client listing of the webui.
Tags:
Steps To Reproduce:
Additional Information:
E.g.

Uname: 16.2.6 (02Jun17) Microsoft Windows Server 2012 Datacenter Edition (build 9200), 64-bit,Cross-compile,Win64
Uname: 16.2.5 (03Mar17) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
Uname: 16.2.5 (03Mar17) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
732 [bareos-core] webui minor always 2016-12-02 18:49 2016-12-07 12:39
Reporter: hk298 Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 8  
Status: confirmed Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: URL should not contain rerun action
Description: - Click on a rerun icon in the jobs/show tab, for example for job 1234
- Close the popup
- The job list now contains a new job icon with status "queued"
- The URL changed to .../bareos-webui/job/index?action=rerun&jobid=1234

In order to see whether the job has finished, I would now simply refresh the browser. However, that reruns the same job again; instead you need to click the jobs tab again. I think the displayed URL should not contain the rerun action, so a browser refresh won't have this side effect.

Tested this with FF45.5 on Debian8 and IE11 on Win7.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002464)
frank   
2016-12-07 12:38   
Yes I see, we could/should redirect to the index action instead in that case.