View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
67 [bareos-core] regression testing feature have not tried 2013-02-13 17:14 2023-09-27 10:49
Reporter: pstorz Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: normal OS Version: 3  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: create regression test for scsi crypto LTO encryption
Description: After we have the mhvtl support on the regression machines, we need to create a regression test for the scsicrypto option.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0000153)
pstorz   
2013-02-25 10:39   
basic check for scsi crypto is done, but we also need a test for disaster recovery.


Marco: I think you should also show that you cannot do a bls of the tape after you clear the encryption key.
(0000508)
maik   
2013-07-05 16:57   
partly done, test for bextract missing


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1555 [bareos-core] webui minor always 2023-09-25 20:00 2023-09-26 11:27
Reporter: Animux Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 22.1.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-webui depends needlessly on libapache2-mod-fcgid
Description: The official debian package from the community repository has the following depends:

> Depends: apache2 | httpd, libapache2-mod-fcgid, php-fpm (>= 7.0), php-date, php-intl, php-json, php-curl

When using any HTTP server other than apache2, the dependency on libapache2-mod-fcgid is wrong and pulls in unnecessary additional dependencies (something like apache2-bin). Other official Debian packages use "Recommends" (e.g. sympa) or "Suggests" (e.g. munin or oar-restful-api) for such dependencies.

Can you downgrade the dependency on libapache2-mod-fcgid to at least Recommends? "Recommends" would still install libapache2-mod-fcgid in default setups, but would allow the server administrator to skip the installation or to remove the package afterwards.
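
As an illustration only, such a split could look like the following (a sketch based on the Depends line quoted above; the exact remaining package list is for the maintainers to decide):

> Depends: apache2 | httpd, php-fpm (>= 7.0), php-date, php-intl, php-json, php-curl
> Recommends: libapache2-mod-fcgid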
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005448)
bruno-at-bareos   
2023-09-26 11:27   
Are you willing to propose a PR to fix this?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1554 [bareos-core] storage daemon crash always 2023-09-21 15:38 2023-09-22 13:14
Reporter: oyxnaut Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 22.04.3  
Status: new Product Version: 22.1.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-sd crashes when bareos-fd can't be reached
Description: When bareos-sd tries to connect to a bareos-fd client, and a firewall on the network path drops (didn't test with reject) packets from the storage daemon, the bareos-sd process crashes.
This happens with `22.1.1~pre88.04d51ceb5-87` from https://download.bareos.org/current/xUbuntu_22.04/\1. Since the product version dropdown doesn't list 22.1.1, I've selected 22.1.0. Sorry if this bug tracker is the wrong place to report this issue.
Tags:
Steps To Reproduce: Configure a firewall that drops connections from bareos-sd to bareos-fd, and run a backup job.
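For example, on the client host something along these lines should simulate the silent drop (a sketch; it assumes the file daemon listens on the default port 9102):
```
iptables -A INPUT -p tcp --dport 9102 -j DROP
```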
Additional Information: The crash happens in __strnlen_sse2 () at ../sysdeps/x86_64/multiarch/../multiarch/strlen-vec.S:123
```
(gdb) disas
Dump of assembler code for function __strnlen_sse2:
<snip>
   0x00007fc4afacabb0 <+64>: and rax,0xfffffffffffffff0
=> 0x00007fc4afacabb4 <+68>: pcmpeqb xmm0,XMMWORD PTR [rax]
</snip>
(gdb) i r
rax 0x30 48
```
You can find attached a backtrace for the crashed thread.
Attached Files: bareos-sd-crash.log (6,258 bytes) 2023-09-21 15:38
https://bugs.bareos.org/file_download.php?file_id=569&type=bug
Notes
(0005444)
bruno-at-bareos   
2023-09-21 16:51   
May I ask you to double-check the version used? In the trace there is a mention which looks weird.

../../../../../../../bareos_PR-1545/TARFILES/bareos-22.1.1~pre86.ca2488010

The bugtracker is the right place if you got a backtrace ;-)

Btw: You certainly want to use the passive client mode which will soon get better advertised in our documentation.
https://docs.bareos.org/TasksAndConcepts/NetworkSetup.html#passive-clients
(0005445)
bruno-at-bareos   
2023-09-21 16:53   
The weird mention is the PR-1545 and the bareos-22.1.1~pre86.ca2488010, while your package mentions 22.1.1~pre88.04d51ceb5-87.
(0005446)
oyxnaut   
2023-09-22 13:14   
At first I thought the version mismatch might be due to unattended updates, so I restarted the storage daemon and tried again.
Unfortunately the same thing happened.

I won't be available to provide more information after today 16:00 GMT+2 for the next two weeks. If you need more details to assist your debugging, kindly let me know before then.

P.S. I noticed I made a copy & paste mistake with the repo url. The trailing \1 should not be there.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1553 [bareos-core] storage daemon major always 2023-09-20 10:18 2023-09-20 10:18
Reporter: robertdb Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: new Product Version: 22.1.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: S3 (droplet) returns an error on Exoscale S3 service
Description: Using [Exoscale Object Storage](https://www.exoscale.com/object-storage/) ("S3 compatible"), I'm getting this error and a failing job:

```text
bareos-storagedaemon JobId 1: Warning: stored/label.cc:358 Open device "EXO_S3_1-00" (Exoscale S3 Storage) Volume "Full-0001" failed: ERR=stored/dev.cc:602 Could not open: Exoscale S3 Storage/Full-0001
```
Tags: "libdroplet", "s3"
Steps To Reproduce: Configure bareos-sd with these files:

/etc/bareos/bareos-sd.d/device/EXO_S3_1-00.conf:

```text
Device {
  Name = "EXO_S3_1-00"
  Description = "ExoScale S3 device."
  Maximum Concurrent Jobs = 1
  Media Type = "S3_Object"
  Archive Device = "Exoscale S3 Storage"
  Device Type = "droplet"
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/exoscale.profile,bucket=bareos-backups,chunksize=100M"
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = No
  AlwaysOpen = No
}
```

/etc/bareos/bareos-sd.d/device/droplet/exoscale.profile:

```text
host = "sos-ch-gva-2.exo.io:443"
use_https = true
access_key = REDACTED
secret_key = REDACTED
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4
aws_region = ch-gva-2
```

Start a backup on this Storage.
Additional Information: The packages are installed from the "subscription repository": (`cat /etc/apt/sources.list.d/bareos.list`)

```text
deb [signed-by=/etc/apt/bareos.gpg] https://download.bareos.com/bareos/release/22/Debian_12 /
```

I'm using these versions: (`dpkg -l | grep bareos`)

```text
ii bareos-common 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-dbg 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - debugging symbols
ii bareos-storage 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-storage-droplet 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon droplet backend
ii bareos-storage-tape 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon tape support
ii bareos-tools 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - common tools
```

On this host: (`cat /etc/os-release`)

```text
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
System Description
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1144 [bareos-core] director minor always 2019-11-27 13:27 2023-09-13 18:56
Reporter: ironiq Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 7  
Status: assigned Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Backtrace if trying to login user with pam authentication
Description: I tried to configure PAM authentication for bconsole and bareos-webui, but following the official docs (https://docs.bareos.org/TasksAndConcepts/PAM.html) I get a huge backtrace after entering username and password. I followed the other docs on GitHub (https://github.com/bareos/bareos-contrib/tree/master/misc/bareos_pam_integration) about the PAM integration and tried the pamtester command with success. The server is an up-to-date CentOS 7; the bareos packages come from the "official" bareos yum repository. SELinux is currently set to "Enforcing", but I also tried "Permissive" with the same result. I also tried to simplify the PAM config to the version from the official docs, with the same result.
Tags:
Steps To Reproduce: 1. Configure the director regarding the docs
2. Run: bconsole -c /etc/bareos/bconsole-pam.conf
3. Enter username and password.
Additional Information: [root@backup /etc/bareos]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
[root@backup /etc/bareos]# yum repolist | grep bareos
bareos_bareos-18.2 bareos:bareos-18.2 (CentOS_7) 45
[root@backup /etc/bareos]# rpm -qa | grep bareos
bareos-filedaemon-18.2.5-144.1.el7.x86_64
bareos-client-18.2.5-144.1.el7.x86_64
bareos-director-18.2.5-144.1.el7.x86_64
bareos-common-18.2.5-144.1.el7.x86_64
bareos-tools-18.2.5-144.1.el7.x86_64
bareos-bconsole-18.2.5-144.1.el7.x86_64
bareos-database-common-18.2.5-144.1.el7.x86_64
bareos-database-tools-18.2.5-144.1.el7.x86_64
bareos-18.2.5-144.1.el7.x86_64
bareos-storage-18.2.5-144.1.el7.x86_64
bareos-database-postgresql-18.2.5-144.1.el7.x86_64
bareos-webui-18.2.5-131.1.el7.noarch
[root@backup /etc/bareos]# cat /etc/pam.d/bareos
auth required pam_env.so
auth sufficient pam_sss.so forward_pass
auth sufficient pam_unix.so try_first_pass
auth required pam_deny.so
[root@backup /etc/bareos]# su - bareos -s /bin/bash
Last login: Tue Nov 26 17:08:42 CET 2019 on pts/0
-bash-4.2$ pamtester bareos foobar authenticate
Password:
pamtester: successfully authenticated
-bash-4.2$ logout
[root@backup /etc/bareos]# cat bconsole-pam.conf
#
# Bareos User Agent (or Console) Configuration File
#

Director {
  Name = bareos-dir
  address = localhost
  Password = "Very Secret Password"
  Description = "Bareos Console credentials for local Director"
}

Console {
  Name = "PamConsole"
  Password = "secret"
}

[root@backup /etc/bareos]# cat bareos-dir.d/console/pam-console.conf
Console {
        Name = "PamConsole"
        Password = "secret"
        UsePamAuthentication = yes
}

[root@backup /etc/bareos]# cat bareos-dir.d/user/foobar.conf
User {
        Name = "foobar"
        Password = ""
        CommandACL = status, .status
    JobACL = *all*
}

[root@backup /etc/bareos]# bconsole -c bconsole-pam.conf
Connecting to Director localhost:9101
 Encryption: PSK-AES256-CBC-SHA
login:foobar
Password: *** Error in `bconsole': double free or corruption (fasttop): 0x0000000001826390 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x81679)[0x7f83fb78c679]
bconsole(_Z22ConsolePamAuthenticateP8_IO_FILEP12BareosSocket+0x9d)[0x40945d]
bconsole(main+0x10c2)[0x405c52]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f83fb72d505]
bconsole[0x4063f8]
======= Memory map: ========
00400000-0040d000 r-xp 00000000 fd:00 188 /usr/sbin/bconsole
0060c000-0060d000 r--p 0000c000 fd:00 188 /usr/sbin/bconsole
0060d000-0060e000 rw-p 0000d000 fd:00 188 /usr/sbin/bconsole
0060e000-0060f000 rw-p 00000000 00:00 0
017e9000-0184c000 rw-p 00000000 00:00 0 [heap]
7f83ec000000-7f83ec021000 rw-p 00000000 00:00 0
7f83ec021000-7f83f0000000 ---p 00000000 00:00 0
7f83f2948000-7f83f2954000 r-xp 00000000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2954000-7f83f2b53000 ---p 0000c000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2b53000-7f83f2b54000 r--p 0000b000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2b54000-7f83f2b55000 rw-p 0000c000 fd:00 8336 /usr/lib64/libnss_files-2.17.so
7f83f2b55000-7f83f2b5b000 rw-p 00000000 00:00 0
7f83f2b5b000-7f83f2b5c000 ---p 00000000 00:00 0
7f83f2b5c000-7f83f335c000 rw-p 00000000 00:00 0
7f83f335c000-7f83f9886000 r--p 00000000 fd:00 8065 /usr/lib/locale/locale-archive
7f83f9886000-7f83f98e6000 r-xp 00000000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f98e6000-7f83f9ae6000 ---p 00060000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f9ae6000-7f83f9ae7000 r--p 00060000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f9ae7000-7f83f9ae8000 rw-p 00061000 fd:00 16996 /usr/lib64/libpcre.so.1.2.0
7f83f9ae8000-7f83f9b0c000 r-xp 00000000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9b0c000-7f83f9d0b000 ---p 00024000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9d0b000-7f83f9d0c000 r--p 00023000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9d0c000-7f83f9d0d000 rw-p 00024000 fd:00 11896 /usr/lib64/libselinux.so.1
7f83f9d0d000-7f83f9d0f000 rw-p 00000000 00:00 0
7f83f9d0f000-7f83f9d25000 r-xp 00000000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9d25000-7f83f9f24000 ---p 00016000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9f24000-7f83f9f25000 r--p 00015000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9f25000-7f83f9f26000 rw-p 00016000 fd:00 8346 /usr/lib64/libresolv-2.17.so
7f83f9f26000-7f83f9f28000 rw-p 00000000 00:00 0
7f83f9f28000-7f83f9f2b000 r-xp 00000000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83f9f2b000-7f83fa12a000 ---p 00003000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83fa12a000-7f83fa12b000 r--p 00002000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83fa12b000-7f83fa12c000 rw-p 00003000 fd:00 17360 /usr/lib64/libkeyutils.so.1.5
7f83fa12c000-7f83fa13a000 r-xp 00000000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa13a000-7f83fa33a000 ---p 0000e000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa33a000-7f83fa33b000 r--p 0000e000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa33b000-7f83fa33c000 rw-p 0000f000 fd:00 17861 /usr/lib64/libkrb5support.so.0.1
7f83fa33c000-7f83fa340000 r-xp 00000000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa340000-7f83fa540000 ---p 00004000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa540000-7f83fa541000 r--p 00004000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa541000-7f83fa542000 rw-p 00005000 fd:00 17146 /usr/lib64/libcap-ng.so.0.0.0
7f83fa542000-7f83fa546000 r-xp 00000000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa546000-7f83fa745000 ---p 00004000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa745000-7f83fa746000 r--p 00003000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa746000-7f83fa747000 rw-p 00004000 fd:00 16978 /usr/lib64/libattr.so.1.1.0
7f83fa747000-7f83fa778000 r-xp 00000000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa778000-7f83fa977000 ---p 00031000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa977000-7f83fa979000 r--p 00030000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa979000-7f83fa97a000 rw-p 00032000 fd:00 17850 /usr/lib64/libk5crypto.so.3.1
7f83fa97a000-7f83fa97d000 r-xp 00000000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fa97d000-7f83fab7c000 ---p 00003000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fab7c000-7f83fab7d000 r--p 00002000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fab7d000-7f83fab7e000 rw-p 00003000 fd:00 16967 /usr/lib64/libcom_err.so.2.1
7f83fab7e000-7f83fac57000 r-xp 00000000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fac57000-7f83fae56000 ---p 000d9000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fae56000-7f83fae64000 r--p 000d8000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fae64000-7f83fae67000 rw-p 000e6000 fd:00 17856 /usr/lib64/libkrb5.so.3.3
7f83fae67000-7f83faeb1000 r-xp 00000000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83faeb1000-7f83fb0b1000 ---p 0004a000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83fb0b1000-7f83fb0b2000 r--p 0004a000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83fb0b2000-7f83fb0b4000 rw-p 0004b000 fd:00 17727 /usr/lib64/libgssapi_krb5.so.2.2
7f83fb0b4000-7f83fb0b6000 r-xp 00000000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb0b6000-7f83fb2b6000 ---p 00002000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb2b6000-7f83fb2b7000 r--p 00002000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb2b7000-7f83fb2b8000 rw-p 00003000 fd:00 8083 /usr/lib64/libdl-2.17.so
7f83fb2b8000-7f83fb2d6000 r-xp 00000000 fd:00 11901 /usr/lib64/libaudit.so.1.0.0BAREOS interrupted by signal 6: IOT trap
bconsole, bconsole got signal 6 - IOT trap. Attempting traceback.
exepath=/etc/bareos
Calling: /etc/bareos/btraceback /etc/bareos/bconsole 10627 /tmp
execv: /etc/bareos/btraceback failed: ERR=No such file or directory
[root@backup /etc/bareos]#
System Description
Attached Files:
Notes
(0005441)
bruno-at-bareos   
2023-09-13 18:51   
Hello, we are cleaning up our bug database entries. I wonder if you can reproduce this with recent code like Bareos 22.1.0?
(0005442)
ironiq   
2023-09-13 18:56   
Several things have happened since then; I replaced the director with an Ubuntu system. I will check tomorrow, but it's possible that this ticket can be closed.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
643 [bareos-core] storage daemon feature always 2016-04-18 12:12 2023-09-13 18:34
Reporter: otto Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: new Product Version: 15.2.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: more efficient use of tape drives
Description: I use a Tape Library with 2 tape drives and spooling.
Later I will upgrade to 4 drives, which would reduce problem 1 but increase problem 2...

 Prefer Mounted Volumes = yes (default)
 

Problem 1:
A long-running job writing to pool A blocks a tape drive for all other pools.
So in my config, 2 jobs with a long (spooling) duration in two different pools
will block the whole backup.

Problem 2:
A lot of jobs in one pool will only use one tape drive.
The other tape drives are idle, and the backup performance of all these jobs
is limited to one drive (some despool-waiting jobs).

Maybe the reservation of a tape drive could occur at the beginning of despooling.

I think this wouldn't be trivial, but it is necessary for efficient use of a tape library with multiple tape drives.
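
For reference, the directive mentioned above is set per Job resource; a minimal sketch (hypothetical job name; whether changing it actually helps depends on the setup and Bareos version):

Job {
  Name = "example-job"
  JobDefs = "DefaultJob"
  # default is yes; with no, a job may reserve a free drive instead of
  # queueing behind the volume that is already mounted
  Prefer Mounted Volumes = no
}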
Tags: bareos-sd
Steps To Reproduce:
Additional Information: *status storage=Scalar6k-TWR
Connecting to Storage daemon Scalar6k-TWR at bareos-sd.mydomain:9103

bareos1-sd Version: 15.2.3 (07 March 2016) x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
Daemon started 13-Apr-16 16:52. Jobs: run=586, running=11.
 Heap: heap=405,504 smbytes=14,445,531 max_bytes=32,272,609 bufs=908 max_bufs=1,220
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0 bwlimit=0kB/s

Running Jobs:
Writing: Full Backup job dxxx-bj_home JobId=18624 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=2,519,006 Bytes=4,176,252,155,687 AveBytes/sec=12,817,909 LastBytes/sec=5,244,472
    FDReadSeqNo=90,013,841 in_msg=82987117 out_msg=6 fd=23
Writing: Full Backup job gxxx-bj1 JobId=18731 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=5,355,457 Bytes=1,217,339,579,708 AveBytes/sec=4,445,027 LastBytes/sec=1,436,783
    FDReadSeqNo=76,768,546 in_msg=61286029 out_msg=6 fd=12
Writing: Full Backup job dxyy-bj_home JobId=18732 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=6,303,219 Bytes=9,513,898,578,229 AveBytes/sec=23,131,529 LastBytes/sec=1,703,936
    FDReadSeqNo=199,457,777 in_msg=180857551 out_msg=6 fd=14
Writing: Incremental Backup job dxyy-bj_user JobId=19046 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=4 in_msg=3 out_msg=4 fd=53
Writing: Full Backup job wxxx-bj1 JobId=19064 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=20,361,201 Bytes=3,918,859,194,423 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=228,578,427 in_msg=171579584 out_msg=6 fd=16
Writing: Full Backup job dxyy-bj_owncloud JobId=19066 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=4 in_msg=3 out_msg=4 fd=74
Writing: Full Backup job biox-bj_raid18 JobId=19067 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=299,270 Bytes=1,224,903,194,950 AveBytes/sec=9,839,289 LastBytes/sec=4,000,380
    FDReadSeqNo=21,378,861 in_msg=20499910 out_msg=6 fd=39
Writing: Full Backup job biox-bj_raid7 JobId=19068 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=1 despool_wait=0
    Files=83,141 Bytes=1,256,041,667,421 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=21,110,883 in_msg=20786954 out_msg=6 fd=62
Writing: Full Backup job biox-bj_raid11 JobId=19069 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=51,732 Bytes=1,088,934,570,327 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=18,436,190 in_msg=18235013 out_msg=6 fd=42
Writing: Incremental Backup job jxxx-bj JobId=19347 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=65,887 Bytes=512,108,906 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=405,080 in_msg=272353 out_msg=6 fd=29
Writing: Full Backup job lxxx-bj JobId=19366 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=17,748 Bytes=244,545,851 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=126,005 in_msg=85004 out_msg=10 fd=7
====

Jobs waiting to reserve a drive:
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
===================================================================
...
====

Device status:
Autochanger "Scalar6k-TWR" with devices:
   "LTO6-twr-05" (/dev/LTO6-twr-05)
   "LTO6-twr-06" (/dev/LTO6-twr-06)

Device "LTO6-twr-05" (/dev/LTO6-twr-05) is mounted with:
    Volume: OF0465L6
    Pool: TWR-3M
    Media type: LTO6
    Slot 171 is loaded in drive 0.
    Total Bytes=2,126,624,100,462 Blocks=15,870,174 Bytes/block=134,001
    Positioned at File=141 Block=11,100
==

Device "LTO6-twr-06" (/dev/LTO6-twr-06) is mounted with:
    Volume: OF0699L6
    Pool: TWR-3W
    Media type: LTO6
    Slot 234 is loaded in drive 1.
    Total Bytes=2,324,103,240,764 Blocks=2,216,501 Bytes/block=1,048,545
    Positioned at File=161 Block=0
==
====

Used Volume status:
OF0465L6 on device "LTO6-twr-05" (/dev/LTO6-twr-05)
    Reader=0 writers=9 reserves=2 volinuse=1
OF0699L6 on device "LTO6-twr-06" (/dev/LTO6-twr-06)
    Reader=0 writers=0 reserves=0 volinuse=0
====

Data spooling: 9 active jobs, 1,421,936,868,161 bytes; 583 total jobs, 3,530,901,944,142 max bytes/job.
Attr spooling: 9 active jobs, 9,187,943,025 bytes; 502 total jobs, 9,187,943,025 max bytes.
====
System Description
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1532 [bareos-core] file daemon major always 2023-04-27 10:22 2023-09-13 11:58
Reporter: hostedpower Platform: x86  
Assigned To: OS: Windows  
Priority: normal OS Version: 2016  
Status: new Product Version: 22.0.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Windows backups fails on a lot of files, seems vss is not used properly
Description: Using Windows 2022

mdf": ERR=The process cannot access the file because it is being used by another process.

Cannot open "E:/SQLLogs/xxxxx/xxxx_Cbro_log.ldf": ERR=The process cannot access the file because it is being used by another proces

But we see lots of these errors (easily hundreds at times, which is quite weird).

I cannot remember seeing as many errors with Windows backups as we experience now on Windows 2022 and Bareos 22.x (not sure which one causes it).

Could you provide me any troubleshooting steps so I can report back?
Tags: mssql, VSS, Windows
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004993)
hostedpower   
2023-04-28 16:21   
Maybe a similar issue here: https://groups.google.com/g/bareos-users/c/F46rRPh7Hf8
(0004997)
bruno-at-bareos   
2023-05-03 15:39   
Are the "E:/SQLLogs" directory pick into account by the VSS SQL Writer ?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
886 [bareos-core] director minor always 2017-12-27 09:14 2023-09-13 11:57
Reporter: bozonius Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 14.04  
Status: feedback Product Version: 16.2.4  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "Run Before Job" in JobDefs is run even though a backup job specifies a different "Run Before Job"
Description: My JobDefs file specifies a script that is to be run before a backup job. This is valid for nearly all of my jobs. However, one backup job should not run that script; instead, I want to run a different script before running that backup. So in that one specific backup job definition, I override the default from JobDefs with a different script. But the JobDefs script still gets run anyway.
Tags:
Steps To Reproduce: Create a JobDefs with a "Run Before Job" set to run "script1."

Create one or more jobs (e.g., maybe "job-general1," "job-general2," "job-general3") for the general case, not setting "Run Before Job" in those, allowing the setting to default to the one specified in JobDefs (i.e., "script1").

Create one job for a specific case ("job-special"), setting "Run Before Job" to "script2."

Run any of the general case jobs ("job-general1," etc.) and "script1" is correctly run, since no override is specified for any of those jobs.

Run "job-special" and BOTH script1 AND script2 are run before the job.
Additional Information: From the documentation, section 9.3, "Only the changes from the defaults need to be mentioned in each Job." (http://doc.bareos.org/master/html/bareos-manual-main-reference.html#QQ2-1-216) I infer that the description of Bareos's defaulting mechanism fits the industry-standard definition of the term "default."

Please note that this was also an issue in Bacula, and one of the (many) reasons I chose to move on to Bareos. I was told by Bacula support that I did not understand the meaning of "default," which actually, I really am sure I do. I've worked with software for over 30 years, and the documented semantics (as per section 9.3) seem to comport with general agreement about the meaning of default mechanisms. To this day, I do not think I've ever seen a single exception to the generally agreed-upon meaning of "default." Even overloading mechanisms in OO languages all appear to comport to this intention and implementation.

I hope this will not result in the same stonewalling of this issue and that this bug, while minor for the most part but potential for disaster for those unawares in other more complicated contexts, can and will be fixed in some upcoming release. Thanks.
System Description
Attached Files:
Notes
(0002845)
joergs   
2017-12-27 12:59   
The "Run Before Job" directive can be specified multiple times in a resource to run multiple run scripts. So the behavior you describe is to be expected. You can extend a DefaultJob with additional run scripts.

To achieve the behavior you requested, you can use the fact that JobDefs can be used recursively.

JobDefs {
  Name = "defaultjob-without-script"
  ...
}

JobDefs {
  Name = "defaultjob-with-script"
  JobDefs = "defaultjob-without-script"
  Run Before Job = "echo 'test'"
}

Jobs can then either refer to "defaultjob-with-script" or "defaultjob-without-script".
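
For the scenario from the report, the special job could then look roughly like this (a sketch reusing the reporter's hypothetical script name):

Job {
  Name = "job-special"
  JobDefs = "defaultjob-without-script"
  # only this script runs, since the JobDefs chain adds no other one
  Run Before Job = "script2"
  ...
}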
(0002848)
bozonius   
2017-12-27 21:30   
How is this reflected in the documentation? I was kind of wondering if one could specify multiple JobDefs (default configurations), which would be handy. However, the documentation, as I read it, does not seem to encourage the reader to consider multiple JobDefs, as you have illustrated.

Thank you for this information, and I will DEFINITELY make use of this facility -- actually, it addresses precisely what I wanted! However, it might be a good idea to add something to the JobDefs documentation to illustrate exactly what you have shown here. Thanks.

I wonder, even had a bareos admin read every last word of the documentation, if this approach to handling job defaults would be obvious to them. This is not a criticism of the current docs; as docs go, this is one of the more complete I've seen. It's just that sometimes it takes more explanation of various options than those one might glean by reading the material quite literally as I have. I also don't believe that one will necessarily read every last bit of the documentation in the first place (though you might be within your right to claim every user really should have in order to leverage the full advantage of this extensive system). Users may be tasked with correcting or adding an urgent matter and not have time to read everything, rather heading straight to the portion of the docs seeming to be most relevant for the task at hand.

In the opening paragraph of JobDefs might be a good place to add this information. It is the place where it would be most likely NOT to be missed. Again, thanks for the info.
(0002853)
joergs   
2018-01-02 16:39   
Already documented at http://doc.bareos.org/master/html/bareos-manual-main-reference.html#directiveDirJobJob%20Defs :

To structure the configuration even more, Job Defs themselves can also refer to other Job Defs.

If you know a better way to describe this, patches to the documentation are always welcome.
(0002856)
bozonius   
2018-01-03 03:35   
From the doc:

Any value that you explicitly define in the current Job resource, will override any defaults specified in the Job Defs resource.

That isn't quite exactly correct. What really happens is something a bit more akin to subclassing I think, albeit these "classes" are linear; there is no "multiple inheritance," so to speak (though THAT might even be useful?).

BareOS JobDefs values are not always replaced; they may be replaced, or appended instead. Appending in this manner is not typical of the way we normally speak of defaults in most of the software kingdom (at least in my own experience).

Not sure how to improve the docs, but just thought I'd put this out there: This notion of single parent subclassing. Perhaps this could lead us toward a better description.
(0003636)
bozonius   
2019-11-20 07:49   
I tried the link in your note https://bugs.bareos.org/view.php?id=886#c2853, but it takes me to the start of the doc, not the specific section, which is what I think your link is supposed to do. Seems like a lot of links (and anchors) in the doc are broken this way. References to the bareos doc from websites (like linuxquestions) are now broken; I realize there might not be much that can be done about external references outside of bareos.org.
(0005434)
bruno-at-bareos   
2023-09-13 11:57   
A lot of stabilization in terms of links has been done in the documentation since then (including how to contribute, by proposing a PR for a change).
For example actual 22x version.
https://docs.bareos.org/bareos-22/Configuration/Director.html#jobdefs-resource

Question, would you like to propose a change?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1069 [bareos-core] General block always 2019-03-20 14:22 2023-09-13 11:47
Reporter: gslongo Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: feedback Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Always Incremental jobs: possible bug? Not backing up the same files as Incremental jobs.
Description: Hi everyone,

We are planning to implement AI jobs for one of our projects, and we decided to test first on a running infrastructure (Version: 18.2.6).

The problem is that the AI jobs are backing up fewer files than the original Incremental job. For our test we copied a normal job, and here is the result of execution:

*list job=denver-always-incremental
Using Catalog "MyCatalog"
+--------+---------------------------+--------+---------------------+------+-------+----------+---------------+-----------+
| jobid | name | client | starttime | type | level | jobfiles | jobbytes | jobstatus |
+--------+---------------------------+--------+---------------------+------+-------+----------+---------------+-----------+
| 21,197 | denver-always-incremental | denver | 2019-03-15 15:03:10 | B | F | 147,998 | 2,079,207,003 | T |
| 21,217 | denver-always-incremental | denver | 2019-03-16 01:51:21 | B | D | 341 | 32,224,492 | T |
| 21,254 | denver-always-incremental | denver | 2019-03-18 01:15:39 | B | I | 17 | 1,524,336 | T |
| 21,290 | denver-always-incremental | denver | 2019-03-19 01:15:34 | B | I | 20 | 4,419,477 | T |
+--------+---------------------------+--------+---------------------+------+-------+----------+---------------+-----------+
*list job=denver
....

| 21,216 | denver | denver | 2019-03-16 01:50:56 | B | D | 341 | 32,223,670 | T |
| 21,253 | denver | denver | 2019-03-18 01:14:34 | B | I | 1,291 | 184,462,873 | T |
| 21,289 | denver | denver | 2019-03-19 01:14:43 | B | I | 2,413 | 132,643,290 | T |
+--------+--------+--------+---------------------+------+-------+----------+---------------+-----------+


As you can see, starting from the same base (the same differential job on 2019-03-16) does not provide the same results. There are a lot of files missing in the AI job.
However, the FileSet is the same and "estimate" provides the same results.
Only the Incremental jobs seem to be "buggy".

Here is a copy of our config files :

job
{
  Name = "denver"
  FileSet = "denver"
  Schedule = "WeeklyCycle"
  Client = "denver"
  JobDefs = "DefaultJob"
  Write bootstrap = "/var/lib/bareos/bstraps/denver.bsr"
  Accurate = yes
}

Job
{
  Name = "denver-always-incremental"
  FileSet = "denver"
  Schedule = "WeeklyCycleAlwaysIncremental"
  Client = "denver"
  JobDefs = "DefaultJob"
  Write bootstrap = "/var/lib/bareos/bstraps/denver-always-incremental.bsr"
 
    # Always incremental settings
    AlwaysIncremental = yes
    AlwaysIncrementalJobRetention = 7 days
    Always Incremental Keep Number = 6
   
    Accurate = yes
 
    Pool = AI-Incremental
    Full Backup Pool = AI-Consolidated
}

 
Job {
    Name = "denver-always-incremental-consolidate"
    Type = "Consolidate"
    Accurate = "yes"
    JobDefs = "DefaultJob"
}
 

FileSet
{
  Name = "denver"
  Ignore FileSet Changes = yes
  Include {
    Options {
      signature = MD5
      compression = gzip
      noatime = yes
    }
    File = /
    File = /boot
    File = /home
    File = /var/log
  }
  Exclude {
    File = /proc
    File = /sys
    File = /tmp
    File = /.journal
    File = /.fsck
  }
}


JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Full
  Client = baloo
  FileSet = "Full Set"
  Schedule = "NONE"
  Storage = File
  Messages = Standard
  Pool = Month
  Priority = 10
  Accurate = yes
}


Is this a bad configuration, or could it be a bug?

Thank you !
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003296)
gslongo   
2019-03-21 15:49   
Hi,

I think I've found the error. This looks like an implementation error, related to the accurate implementation and the situation where 2 incrementals (different jobs, but on the same client and sharing the same accurate information) are run sequentially on the same client.
As a test, I inverted the backup order, launching the AI job before the original Incremental, and... the result is also inverted!

So, I created a "testmodif1.txt" file, then launching AI Job then Incr job with tracelog enabled and the result is :

root@pve-support:~# grep testmo /var/lib/bareos/pve-support-fd.trace.*
/var/lib/bareos/pve-support-fd.trace.ai:pve-support-fd (100): filed/accurate.cc:186-21381 accurate /root/testmodif1.txt (not found)
/var/lib/bareos/pve-support-fd.trace.incr:pve-support-fd (100): filed/accurate_htable.cc:87-21382 add fname=</root/testmodif1.txt> lstat=P0A DAAKQ IGk B A A A A BAA A Bck55J Bck55J Bck55J A A d delta_seq=0 chksum=
/var/lib/bareos/pve-support-fd.trace.incr:pve-support-fd (100): filed/accurate.cc:101-21382 lookup </root/testmodif1.txt> ok

As you can see, the file is not backed up on the second incr job.


Again, I modified the testmodif1.txt file and launched the AI job, then the Incr job:

root@pve-support:~# grep testmod /var/lib/bareos/pve-support-fd.trace*
/var/lib/bareos/pve-support-fd.trace.ai:pve-support-fd (100): filed/accurate_htable.cc:87-21383 add fname=</root/testmodif1.txt> lstat=P0A DAAKQ IGk B A A A A BAA A Bck55J Bck55J Bck55J A A d delta_seq=0 chksum=
/var/lib/bareos/pve-support-fd.trace.ai:pve-support-fd (100): filed/accurate.cc:101-21383 lookup </root/testmodif1.txt> ok
/var/lib/bareos/pve-support-fd.trace.ai:pve-support-fd (99): filed/accurate.cc:276-21383 /root/testmodif1.txt st_mtime differs
/var/lib/bareos/pve-support-fd.trace.ai:pve-support-fd (100): findlib/bfile.cc:1101-21383 bopen: fname /root/testmodif1.txt, flags 00000000, mode 0000, rdev 0
/var/lib/bareos/pve-support-fd.trace.incr:pve-support-fd (100): filed/accurate_htable.cc:87-21384 add fname=</root/testmodif1.txt> lstat=P0A DAAKW IGk B A A A F BAA I Bck6Aw Bck6Aw Bck6Aw A A d delta_seq=0 chksum=
/var/lib/bareos/pve-support-fd.trace.incr:pve-support-fd (100): filed/accurate.cc:101-21384 lookup </root/testmodif1.txt> ok


Then as you can see, as soon as the file has been backed up once, it will never be backed up again, even if the job is not the same. As a result: inconsistent backups.

I know it should not happen with a proper job setup, but in some cases it could.
(0005432)
bruno-at-bareos   
2023-09-13 11:40   
Hello, we are cleaning up our bug db entries; we would like to know if you can still reproduce this with a recent version like 22.1.0?
Regards
(0005433)
bruno-at-bareos   
2023-09-13 11:47   
This should work normally if you adapt your configuration to the rule:
one client + one fileset -> one job, one accurate entity.

So if you duplicate your fileset (one per job), then the Incremental and AI Incremental jobs should react the same.
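
As an illustration, that rule applied to the configuration above could look roughly like this (a sketch; "denver-ai" is a hypothetical name and the Include/Exclude blocks are identical to the existing "denver" FileSet):

FileSet {
  Name = "denver-ai"
  Ignore FileSet Changes = yes
  Include { ... same as in FileSet "denver" ... }
  Exclude { ... same as in FileSet "denver" ... }
}

Job {
  Name = "denver-always-incremental"
  FileSet = "denver-ai"
  # each job now maintains its own accurate file list
  ...
}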


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1038 [bareos-core] file daemon major always 2019-01-27 23:51 2023-09-13 10:55
Reporter: divanikus Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Partial backup of GlusterFS using glusterfind
Description: I have a 2-node GlusterFS cluster. I have created a job to back it up using the glusterfind approach. Unfortunately, it's unable to do a Full backup. I keep getting the following error:

Error: gfapi-fd: glfs_stat() failed: No such file or directory

And instead of backing up 300k+ files, the process stops at 2k+. The job also gets a Warning status, which can be misleading too. It's not really done, so it should be an Error.

I have investigated the problem, and it seems like it is caused by files having a "+" sign in their names.
Tags: gluster
Steps To Reproduce: Set up a glusterfind backup. Add files with "+" in their names. Try to do a backup. Enjoy. The same happens with incremental backups.
Additional Information:
System Description
Attached Files:
Notes
(0005430)
bruno-at-bareos   
2023-09-13 10:55   
Hello, cleaning up entries from our bugs database.
We would like to know if this is still reproducible in a recent version like 22.1.0?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1002 [bareos-core] storage daemon minor always 2018-08-29 08:31 2023-09-12 14:18
Reporter: mschmidone Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: feedback Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: despooling/writing to disk is inefficient
Description: I have several backup-to-disk targets; most of them are on RAID devices (mdadm or LVM), where the backup speed is acceptable, mostly above 10MB/s. But writing to single disks results in poor write speeds below 1MB/s, sometimes below 100kB/s.

I've tested with several configurations on several servers, and all of them show the same slow write speed.

The best performance so far was with XFS as the file system, formatted with the native block size of 4k. Then the write speed is about 1.4MB/s.

This was all tested with the default settings, focusing only on the write speed. And this write speed did not change when spooling was active.

It seems that despooling is as inefficient as writing without spooling. I'd expect despooling to be almost as fast as a file copy, which runs at more than 45MB/s on these types of disk (45MB/s on a 5400rpm disk, more on faster disks).
Writing to an SSD increases the write speed to some 25MB/s, which then could be limited by the fd.

I've then tested by increasing the maximum block size, and this improved the write speed to an acceptable throughput of some 10MB/s.

Without having seen the code, it seems to me that the writes to the disk suffer from repositioning, frequent buffer flushes or re-reads. All of these have less impact on RAID or SSD, since there is a more powerful layer between the physical storage and the logical interface which buffers non-sequential or small writes.

I would expect at least the despooling to run near the device's bandwidth.
Tags:
Steps To Reproduce: see above
Additional Information:
System Description
Attached Files:
Notes
(0005415)
bruno-at-bareos   
2023-09-12 14:18   
Hello, cleaning up old reports here. Did you ever try a recent version like 22.1.0?
Beware that by default Bareos is not block aligned (63k blocks), which of course impacts read/write performance, but the numbers given here are very low.
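
For reference, the block size the reporter experimented with is configured per Device resource of the storage daemon; a minimal sketch (the device name and value are only examples):

Device {
  Name = "FileStorage-singledisk"
  ...
  # larger, aligned blocks instead of the 63k default can reduce the
  # number of small writes during despooling
  Maximum Block Size = 1048576
}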


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1283 [bareos-core] General feature always 2020-12-05 13:03 2023-09-11 17:35
Reporter: rugk Platform:  
Assigned To: arogge OS:  
Priority: high OS Version:  
Status: acknowledged Product Version: 19.2.8  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Remove insecure MD5 hashing
Description: You use/provide CRAM-MD5 hashing:
https://github.com/bareos/bareos/blob/819bba62ebdadd2ac0bd773ac8d26f4f60f5d39e/python-bareos/bareos/util/password.py#L51

However, MD5 is easily brute-forceable nowadays, vulnerable to an (active) MITM attack, and has many more weaknesses:
https://en.wikipedia.org/wiki/CRAM-MD5#Weaknesses

And it has been deprecated since 2008.
https://tools.ietf.org/html/draft-ietf-sasl-crammd5-to-historic-00
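
For context, a generic CRAM-MD5 challenge/response computation looks roughly like this (a sketch of the RFC 2195 style scheme in Python, not necessarily byte-for-byte what the linked password.py does):

import hashlib
import hmac

def cram_md5_response(secret: bytes, challenge: bytes) -> str:
    # The client proves knowledge of the shared secret by returning
    # HMAC-MD5(secret, challenge); the secret never crosses the wire,
    # but the construction inherits MD5's weaknesses.
    return hmac.new(secret, challenge, hashlib.md5).hexdigest()

print(cram_md5_response(b"shared-password", b"<1234.1630000000@example>"))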
Tags:
Steps To Reproduce:
Additional Information: The linked RFC recommends e.g. SCRAM as an alternative.

AFAIK you use TLS, which should mitigate this problem, but then such additional authentication is also quite useless here.
You may consider, if appropriate for your use case and not already done, using password-stretching hashes (PBKDF, Argon2 etc.) on the server for secure storage, or possibly some kind of private/public-key authentication scheme.
These are only ideas for the future though. For now, just remove legacy and insecure algorithms, or – at least – mark them as deprecated as you should have done in 2008! At most, they can give a false sense of security.
Attached Files:
Notes
(0005409)
arogge   
2023-09-11 17:35   
Right now Bareos has two protocol modes to operate in.
The legacy one is what we inherited from the predecessor project. It uses CRAM-MD5 on plaintext connections (even if you have TLS enabled).
The modernized protocol does immediate TLS and then authenticates using CRAM-MD5 inside that TLS-session.
While this is still obviously legacy, we chose to keep it for a few reasons:
- the legacy clients require that type of authentication
- in Bareos context it isn't worse than sending a plain password
- it is still considered safe when used via a TLS connection (which is the default nowadays)

Having said that, the document from 2008 that you're referencing is a draft and was never made a standard.

If we decide to implement another incompatible protocol change, we will definitely get rid of CRAM. We will probably not get rid of the shared secrets, so password stretching won't work.
Concerning PKI we decided against it, as PSK seems to be sufficient for our use-case.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1012 [bareos-core] director minor always 2018-09-25 17:01 2023-09-11 17:21
Reporter: stephand Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: feedback Product Version: 17.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: If a Job terminates with error then status client should not report OK for that job
Description: When a fatal SD error occurs while a restore job is running, the "status client=<ClientName>" output nevertheless shows an OK status for that failed restore job.
Tags:
Steps To Reproduce: While the restore job is running, fill the volume currently being used with zeroes using dd. (Note: Never do this on a production system)

Example:
[root@vgr-c7bpdev2-test-pgsql storage]# ls -lah
total 299M
drwxrwxr-x. 2 bareos bareos 23 Sep 25 10:39 .
drwxrwx---. 3 bareos bareos 210 Sep 25 11:49 ..
-rw-r-----. 1 bareos bareos 299M Sep 25 10:46 Full-0001
[root@vgr-c7bpdev2-test-pgsql storage]# ll /dev/zero
crw-rw-rw-. 1 root root 1, 5 Sep 25 10:24 /dev/zero
[root@vgr-c7bpdev2-test-pgsql storage]# dd if=/dev/zero of=Full-0001 bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 0.168414 s, 1.9 GB/s

This will cause a fatal SD error:

*restore client=bareos-fd select all done yes
Using Catalog "MyCatalog"
Automatically selected FileSet: Data1Set
+-------+-------+-----------+----------+---------------------+------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+-----------+----------+---------------------+------------+
| 1 | F | 1,001,111 | 0 | 2018-09-25 10:39:01 | Full-0001 |
+-------+-------+-----------+----------+---------------------+------------+
You have selected the following JobId: 1

Building directory tree for JobId(s) 1 ... +++++++++++++++++++++++++++++++++++++++++++++++++
1,000,000 files inserted into the tree and marked for extraction.
Bootstrap records written to /var/lib/bareos/bareos-dir.restore.4.bsr

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================
   
    Full-0001 File FileStorage

Volumes marked with "*" are online.


1,001,111 files selected to be restored.

Using Catalog "MyCatalog"
Job queued. JobId=7
*
You have messages.
*mes
25-Sep 12:28 bareos-dir JobId 7: Start Restore Job RestoreFiles.2018-09-25_12.28.13_41
25-Sep 12:28 bareos-dir JobId 7: Using Device "FileStorage" to read.
25-Sep 12:28 bareos-sd JobId 7: Ready to read from volume "Full-0001" on device "FileStorage" (/var/lib/bareos/storage).
25-Sep 12:28 bareos-sd JobId 7: Forward spacing Volume "Full-0001" to file:block 0:219.
25-Sep 12:28 bareos-sd JobId 7: Error: block.c:288 Volume data error at 0:18192512! Wanted ID: "BB02", got "". Buffer discarded.
25-Sep 12:28 bareos-sd JobId 7: Releasing device "FileStorage" (/var/lib/bareos/storage).
25-Sep 12:28 bareos-sd JobId 7: Fatal error: fd_cmds.c:236 Command error with FD, hanging up.
25-Sep 12:28 bareos-dir JobId 7: Error: Bareos bareos-dir 17.2.7 (16Jul18):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS Linux release 7.5.1804 (Core)
  JobId: 7
  Job: RestoreFiles.2018-09-25_12.28.13_41
  Restore Client: bareos-fd
  Start time: 25-Sep-2018 12:28:15
  End time: 25-Sep-2018 12:28:19
  Elapsed time: 4 secs
  Files Expected: 1,001,111
  Files Restored: 74,872
  Bytes Restored: 0
  Rate: 0.0 KB/s
  FD Errors: 0
  FD termination status: OK
  SD termination status: Fatal Error
  Termination: *** Restore Error ***

But the "status client" ouput lists this restore job as OK:

*status client=bareos-fd
Connecting to Client bareos-fd at localhost:9102

vgr-c7bpdev2-test-pgsql-fd Version: 17.2.7 (16 Jul 2018) x86_64-redhat-linux-gnu redhat CentOS Linux release 7.5.1804 (Core)
Daemon started 25-Sep-18 10:25. Jobs: run=7 running=0.
 Heap: heap=135,168 smbytes=113,964 max_bytes=181,086 bufs=89 max_bufs=122
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0 bwlimit=0kB/s

Running Jobs:
bareos-dir (director) connected at: 25-Sep-18 12:28
No Jobs running.
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
======================================================================
     1 Full 1,001,111 0 OK 25-Sep-18 10:39 Data1
     2 Full 106,836 0 Error 25-Sep-18 10:45 Data1
     3 Full 175,194 0 Error 25-Sep-18 10:46 Data1
     4 1,001,111 0 OK 25-Sep-18 10:50 RestoreFiles
     5 36,734 0 Error 25-Sep-18 10:53 RestoreFiles
     6 49,344 0 Error 25-Sep-18 11:49 RestoreFiles
     7 74,872 0 OK 25-Sep-18 12:28 RestoreFiles
====
Additional Information: The expected behavior is that however you look at the status of a job, the reported status should be the same.

Note that this is not reproducible by simply stopping bareos-sd while running a restore job. I also wasn't able to reproduce this kind of status discrepancy for backup jobs. In both cases the "status client" termination status was correctly showing Error.
Attached Files:
Notes
(0005408)
bruno-at-bareos   
2023-09-11 17:21   
Is this still happening?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1550 [bareos-core] installer / packages minor always 2023-08-31 12:06 2023-09-11 17:08
Reporter: roland Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 11  
Status: feedback Product Version: 22.1.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Debian: Build-Depends: openssl missing
Description: Building Bareos 21.x and 22.x (including today's Git HEAD) fails in a clean Debian 11 (bullseye) build environment (using cowbuilder/pbuilder) during the dh_auto_configure run with the following message:

CMake Error at systemtests/CMakeLists.txt:196 (message):
  Creation of certificates failed: 127 /build/bareos-22.1.0/cmake-build

That's because the openssl package is not installed in a clean environment, and it is missing from the Build-Depends in debian/control.
Tags: build, debian 11
Steps To Reproduce: # First get the official source package and rename it:
wget https://github.com/bareos/bareos/archive/refs/tags/Release/22.1.0.tar.gz
mv 22.1.0.tar.gz bareos_22.1.0.orig.tar.gz

# Now get the same via GIT (I could also unpack the above package):
git clone https://github.com/bareos/bareos.git
cd bareos
git checkout Release/22.1.0

# Create the missing debian/changelog file:
dch --create --empty --package bareos -v 22.1.0-rr1+deb11u1
dch -a 'RoRo Build bullseye'

# Create a Debian source package (.dsc, .debian.tar.xz):
env DEB_BUILD_PROFILES="debian bullseye" debuild -us -uc -S -d

# And now finally build the Debian source package in a clean bullseye chroot:
sudo env DEB_BUILD_PROFILES="debian bullseye" cowbuilder --build --basepath /var/cache/pbuilder/base-bullseye.cow ../bareos_22.1.0-rr1+deb11u1.dsc
Additional Information: With Bareos 20.x I did not run into this issue.

Adding "openssl" to the Build-Depends in debian/control debian/control.src avoids running into the above build failure.

I'm not sure whether there are other missing build dependencies; at least the build is complaining about some PAM stuff missing, but that doesn't stop the build.

I still see several automated tests failing, but have to dig deeper there.
System Description
Attached Files:
Notes
(0005403)
bruno-at-bareos   
2023-09-11 17:07   
Would you mind opening a PR on GitHub to fix the issue?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1488 [bareos-core] file daemon block always 2022-10-22 19:29 2023-09-11 15:40
Reporter: tuxmaster Platform: x86_64  
Assigned To: bruno-at-bareos OS: RHEL  
Priority: urgent OS Version: 9  
Status: assigned Product Version: 21.1.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: PostgreSQL Plugin fails on PostgreSQL 15
Description: Because the backup functions were renamed, the plugin can't back up anything.
See:
https://www.postgresql.org/docs/15/functions-admin.html
https://www.postgresql.org/docs/current/release-15.html

Functions pg_start_backup()/pg_stop_backup() have been renamed to pg_backup_start()/pg_backup_stop(),
and the functions pg_backup_start_time() and pg_is_in_backup() have been removed.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004818)
tuxmaster   
2022-10-22 21:12   
Also, the backup_label file is not created any more.
It looks like the backup process has changed.
https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-BASE-BACKUP
See 26.3.3
pg_backup_stop returns two fields that have to be saved into files by the backup tool (the second field -> backup_label, the third -> tablespace_map).
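
For illustration, the non-exclusive backup sequence on PostgreSQL >= 15 looks roughly like this (a Python sketch using psycopg2, not the Bareos plugin code; the session that calls pg_backup_start() must stay connected until pg_backup_stop()):

import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")
conn.autocommit = True
cur = conn.cursor()

# PostgreSQL >= 15: pg_start_backup()/pg_stop_backup() were renamed.
cur.execute("SELECT pg_backup_start(%s, false)", ("bareos-job",))

# ... copy the data directory / tablespaces here ...

# pg_backup_stop() now returns the backup label and tablespace map
# contents, which the backup tool itself must store.
cur.execute("SELECT labelfile, spcmapfile FROM pg_backup_stop(true)")
labelfile, spcmapfile = cur.fetchone()
with open("backup_label", "w") as f:
    f.write(labelfile)
if spcmapfile:
    with open("tablespace_map", "w") as f:
        f.write(spcmapfile)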
(0004822)
bruno-at-bareos   
2022-11-03 10:19   
Seen by developer, but PR from external is welcomed too ;-)
(0005042)
hostedpower   
2023-05-11 16:50   
We have the same problem. We recently upgraded to PostgreSQL 15.2 and now we cannot back up anything any more; what can we do now?
(0005043)
bruno-at-bareos   
2023-05-11 16:54   
Using PostgreSQL 15 as the database for bareos is possible, but it needs the new grant script, which is proposed here:
future new grant script in https://github.com/bareos/bareos/pull/1449/files
the full file is https://github.com/bareos/bareos/blob/3660d4c4b59fd17052d663bd22927fa85b2398e6/core/src/cats/ddl/grants/postgresql.sql

Backup & restore of a PostgreSQL 15 database with the postgresql plugin is not possible due to the complete change of the backup API by the PGDG group starting with 15.
Contributions for improving the code are still awaited ...
(0005044)
hostedpower   
2023-05-11 18:31   
I checked, but fortunately we still use PostgreSQL 14 for Bareos itself :)

However, our servers with PostgreSQL 15 fail to back up. I checked, and probably the easiest way is to switch to non-exclusive backups for PostgreSQL 15 as well. Most likely it would then function in the same way.

The commands are renamed (easy), and the biggest difference beyond that is that the backup_label is no longer written to the file system but is returned when "backup stop" is called.
(0005045)
hostedpower   
2023-05-12 05:26   
https://www.enterprisedb.com/blog/exclusive-backup-mode-finally-removed-postgres-15
https://www.cybertec-postgresql.com/en/exclusive-backup-deprecated-what-now/

The exclusive backup mode had been deprecated since PostgreSQL 9.6, when non-exclusive backups were introduced.

No idea why a brand new plugin was created with a method deprecated for more than 5 years!
(0005060)
hostedpower   
2023-05-15 21:40   
Hi, let us know if there is anything we can do to get the required changes. I think it must be pretty doable for the creator of the original plugin. As far as I know, the older PostgreSQL versions even support the same backup procedures (and even the same names: pg_backup_start etc.)

Maybe we can fund it a bit with the original requester of the functionality?
(0005362)
bruno-at-bareos   
2023-09-04 10:58   
To keep you informed: a new plugin will appear in 23, supporting PostgreSQL >= 10.
The PR is available here: https://github.com/bareos/bareos/pull/1541

Once built, you can test with the packages located in https://download.bareos.org/experimental/PR-1541/
The new documentation will also appear in that repository.

We will appreciate comments, remarks, and tests.
(0005402)
bruno-at-bareos   
2023-09-11 15:39   
Sorry, it took more time than expected, but the testing packages are finally published.
https://download.bareos.org/experimental/PR-1541/

Associated documentation is also available at that address https://download.bareos.org/experimental/PR-1541/BareosMainReference/TasksAndConcepts/Plugins.html#postgresql-plugin


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1164 [bareos-core] director minor always 2019-12-19 17:19 2023-09-05 16:58
Reporter: joergs Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: low OS Version:  
Status: feedback Product Version: 19.2.4~rc1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Inheritance of JobDefs is handled oddly
Description: Jobs can retrieve settings from JobDefs. A JobDefs itself can also retrieve settings from JobDefs.
So, specifying a job via

1. job
2. job -> jobdef
3. job -> jobdef -> jobdef
all works as expected.

However, specifying
job -> jobdef -> jobdef -> jobdef
does not work.
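For illustration, a minimal configuration sketch of such a chain (all resource names are hypothetical):

JobDefs {
  Name = "level3"
  Type = Backup
  Messages = Standard
}
JobDefs {
  Name = "level2"
  JobDefs = "level3"
  Storage = File
}
JobDefs {
  Name = "level1"
  JobDefs = "level2"
  Pool = Full
}
Job {
  Name = "backup-client1"
  JobDefs = "level1"    # expected to inherit Type, Messages, Storage and Pool
  Client = client1-fd
  FileSet = SelfTest
}

With this chain, the directives defined in the innermost JobDefs ("level3") are the ones that would be expected to get lost, since only one level of inheritance inside JobDefs is resolved (see note 0003711 below).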
Tags:
Steps To Reproduce:
Additional Information: https://github.com/joergsteffens/bareos/tree/dev/joergs/master/systemtests-jobdefs-runscripts
does implement a systemtest for part of this behavior. Handling RunScripts from JobDefs is a special, even more complicated case.
Attached Files:
Notes
(0003709)
stephand   
2019-12-20 10:36   
I really doubt that this makes sense. I also dare to say that the possibility to use inheritance or nesting of JobDefs should be considered a bug, as the documentation already quite clearly defines:

"The JobDefs resource permits all the same directives that can appear in a Job resource. However, a JobDefs resource does not create a Job, rather it can be referenced within a Job to provide defaults for that Job. This permits you to concisely define several nearly identical Jobs, each one referencing a JobDefs resource which contains the defaults. Only the changes from the defaults need to be mentioned in each Job."
(See https://docs.bareos.org/Configuration/Director.html#jobdefs-resource)

In this respect, inheritance of JobDefs was obviously never intended. Allowing arbitrary levels of inheritance/nesting of JobDefs could lead to inscrutable configurations, and problems would be hard to reproduce. The first sentence in the documentation should be clarified and instead say:

"The JobDefs resource permits all the same directives that can appear in a Job resource, except JobDefs. JobDefs can not be nested."

If nesting of JobDefs is really to be allowed, there must be a defined maximum nesting depth. In any case it would also require detecting bad configurations such as JobDefs referencing each other or indirect circular references.

Also, the handling of RunScript in JobDefs is rather odd, as it deviates from the expected behaviour of being able to override parameters from a JobDefs in a Job.
(0003711)
joergs   
2019-12-20 12:34   
That is not easy to decide.

The documentation clearly states in https://docs.bareos.org/master/Configuration/Director.html#config-Dir_Job_JobDefs :
"To structure the configuration even more, Job Defs themselves can also refer to other Job Defs."

So it is a documented behavior.

Also, the code tries to handle it. However, it handles it incorrectly (only one level of inheritance inside JobDefs is resolved).

I did not care about JobDefs in the past, but I know of at least 4 installations using JobDefs inheritance; also one pull request and at least one other Mantis ticket refer to this behavior.

Jobs and JobDefs can contain multiple RunScripts, and RunScripts can have different parameters: RunBefore vs. RunAfter, RunOnClient vs. RunOnHost, ...
Should a RunScript directive in a Job really overwrite all RunScript directives from a JobDefs, even if they are of different types?

In my opinion, it would be clearer if inherited RunScripts are added, as it is now.

However, I really agree that this must be obvious for the user. Currently, the ConfigParser already stores information about whether a directive is inherited. This could be extended so that the ConfigParser also stores from which JobDefs a setting is inherited. The console command "show jobs verbose" already gives some more insight into job definitions; its verbose mode could also be extended to show where a directive is defined. Both should be relatively easy to implement. The JobLog could also state where a RunScript is defined.

If there is a decision against inherited JobDefs, I strongly opt for not changing this behavior in the next major release, but marking it as deprecated and removing it in the major release thereafter.
(0003714)
embareossed   
2019-12-22 23:13   
I note that in this discussion, multiple inheritance has not been mentioned. If we are going for feature completeness, it might be something to consider.
(0005386)
bruno-at-bareos   
2023-09-05 16:58   
Can we agree that the documented sentence has to be deprecated and removed as soon as possible?
After 4 years, the test branch doesn't exist anymore, and no code effort will be made to fix/improve the behavior.

Will be closed within 7 days.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
879 [bareos-core] director minor always 2017-11-29 11:25 2023-09-05 16:14
Reporter: chaos_prevails Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04 amd64  
Status: feedback Product Version: 16.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "wrong" end time of VirtualFull job prevents verifying this job
Description: The VirtualFull Job initiated by consolidate receives the "end time" of the last consolidated job.

This leads to the situation that the Verify job of the VirtualFull job verifies the last (non-consolidated) incremental job, but not the VirtualFull job.

E.g. in provided example (see additional information)
1) jobid 603 with end time 10:50:34 is the last incremental job consolidated in jobid 609
2) jobid 604 with end time 10:50:39 is the incremental job that is correctly not consolidated, but is wrongly verified by jobid 610
3) jobid 609 with end time 10:50:34 (I assume inherited from jobid 603) is not verified
4) jobid 610 is the verify job, which verifies 604 but not 609
Tags:
Steps To Reproduce: 1) have an always incremental job with a verify job, e.g. VolumeToCatalog
2) start consolidate
3) -> the initiated VirtualFull job doesn't get verified

Solution:
The verify job should verify the VirtualFull job (e.g. the VirtualFull job should get the real end time, not the end time of the last consolidated job), or there should be a way to tell which job to verify (e.g. via jobid).
Additional Information: #### configuration ####

JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = DIR-fd
  Messages = Standard
  Priority = 40
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #Allow Duplicate Jobs = no #doesn't work with virtualFull Job
  #Cancel Lower Level Duplicates = yes
  #Cancel Queued Duplicates = yes


  #always incremental config
  Pool = disk_ai
  #Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Storage = DIR-file
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 7
  Always Incremental Max Full Age = 14 days
}

JobDefs {
  Name = "default_verify"
  Type = Verify
  Level = Full
  Client = DIR-fd
  FileSet = none
  Schedule = manual
  Accurate = yes
  Storage = DIR-file
  Messages = Standard
  Priority = 10
  Pool = Incremental
  Full Backup Pool = Full # write Full Backups into "Full" Pool (0000005)
  Differential Backup Pool = Differential # write Diff Backups into "Differential" Pool (0000008)
  Incremental Backup Pool = Incremental # write Incr Backups into "Incremental" Pool (0000011)
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}


Job {
  Name = "client1_sys_ai"
  JobDefs = "default_ai"
  Client = "client1-fd"
  FileSet = linux_common
  Schedule = client1_sys_ai
  RunAfterJob = "/bin/bash -c '/bin/echo \"run client1_sys_ai_v yes\" | bconsole >/dev/null'"
}

Job {
  Name = client1_sys_ai_v
  JobDefs = default_verify
  Verify Job = client1_sys_ai
  Level = VolumeToCatalog
  Client = client1-fd
  FileSet = linux_common
  Schedule = manual
  Pool = disk_ai
  Priority = 41
}



#### Joblogs ####


*Joblog - (last consolidated job)*
Connecting to Director localhost:9101
1000 OK: DIR-dir Version: 16.2.4 (01 July 2016)
Enter a period to cancel a command.
list joblog jobid=603
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-11-29 10:50:11 DIR-dir JobId 603: Start Backup JobId 603, Job=client1_sys_ai.2017-11-29_10.48.57_06
 2017-11-29 10:50:11 DIR-dir JobId 603: Using Device "DIR-file" to write.
 2017-11-29 10:50:12 DIR-dir JobId 603: Sending Accurate information.
 2017-11-29 10:50:12 DIR-sd JobId 603: Volume "ai_inc-0038" previously written, moving to end of data.
 2017-11-29 10:50:12 DIR-sd JobId 603: Ready to append to end of Volume "ai_inc-0038" size=367999923
 2017-11-29 10:50:34 DIR-sd JobId 603: Elapsed time=00:00:22, Transfer rate=6.326 M Bytes/second
 2017-11-29 10:50:35 DIR-dir JobId 603: Bareos DIR-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 603
  Job: client1_sys_ai.2017-11-29_10.48.57_06
  Backup Level: Incremental, since=2017-11-29 10:38:31
  Client: "client1-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 14.04 LTS,xUbuntu_14.04,x86_64
  FileSet: "linux_common" 2017-11-28 18:30:00
  Pool: "disk_ai" (From Job resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "DIR-file" (From Pool resource)
  Scheduled time: 29-Nov-2017 10:48:57
  Start time: 29-Nov-2017 10:50:12
  End time: 29-Nov-2017 10:50:34
  Elapsed time: 22 secs
  Priority: 40
  FD Files Written: 127
  SD Files Written: 127
  FD Bytes Written: 139,101,392 (139.1 MB)
  SD Bytes Written: 139,190,405 (139.1 MB)
  Rate: 6322.8 KB/s
  Software Compression: 60.7 % (lz4hc)
  VSS: no
  Encryption: yes
  Accurate: yes
  Volume name(s): ai_inc-0038
  Volume Session Id: 38
  Volume Session Time: 1511884549
  Last Volume Bytes: 507,406,172 (507.4 MB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: Backup OK


*Joblog (job which gets wrongly verified by virtualfull-verification job)*
Connecting to Director localhost:9101
1000 OK: DIR-dir Version: 16.2.4 (01 July 2016)
Enter a period to cancel a command.
list joblog jobid=604
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-11-29 10:50:37 DIR-dir JobId 604: Start Backup JobId 604, Job=client1_sys_ai.2017-11-29_10.49.02_07
 2017-11-29 10:50:37 DIR-dir JobId 604: Using Device "DIR-file" to write.
 2017-11-29 10:50:38 DIR-dir JobId 604: Sending Accurate information.
 2017-11-29 10:50:39 DIR-sd JobId 604: Volume "ai_inc-0038" previously written, moving to end of data.
 2017-11-29 10:50:39 DIR-sd JobId 604: Ready to append to end of Volume "ai_inc-0038" size=507406172
 2017-11-29 10:50:39 DIR-sd JobId 604: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
 2017-11-29 10:50:39 DIR-dir JobId 604: Bareos DIR-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 604
  Job: client1_sys_ai.2017-11-29_10.49.02_07
  Backup Level: Incremental, since=2017-11-29 10:38:31
  Client: "client1-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 14.04 LTS,xUbuntu_14.04,x86_64
  FileSet: "linux_common" 2017-11-28 18:30:00
  Pool: "disk_ai" (From Job resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "DIR-file" (From Pool resource)
  Scheduled time: 29-Nov-2017 10:49:01
  Start time: 29-Nov-2017 10:50:38
  End time: 29-Nov-2017 10:50:39
  Elapsed time: 1 sec
  Priority: 40
  FD Files Written: 0
  SD Files Written: 0
  FD Bytes Written: 0 (0 B)
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: yes
  Accurate: yes
  Volume name(s):
  Volume Session Id: 39
  Volume Session Time: 1511884549
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: Backup OK

*Joblog (of virtualfull job, initiated by consolidate)*
Connecting to Director localhost:9101
1000 OK: DIR-dir Version: 16.2.4 (01 July 2016)
Enter a period to cancel a command.
list joblog jobid=609
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-11-29 10:53:09 DIR-dir JobId 609: Start Virtual Backup JobId 609, Job=client1_sys_ai.2017-11-29_10.53.05_40
 2017-11-29 10:53:09 DIR-dir JobId 609: Consolidating JobIds 593,596,603
 2017-11-29 10:53:09 DIR-dir JobId 609: Bootstrap records written to /var/lib/bareos/DIR-dir.restore.20.bsr
 2017-11-29 10:53:10 DIR-dir JobId 609: Using Device "DIR-file" to read.
 2017-11-29 10:53:10 DIR-dir JobId 609: Using Device "DIR-file-consolidate" to write.
 2017-11-29 10:53:10 DIR-sd JobId 609: Ready to read from volume "ai_inc-0038" on device "DIR-file" (/mnt/client1/mnt_crypt_backup_bareos).
 2017-11-29 10:53:10 DIR-sd JobId 609: Volume "ai_consolidate-0037" previously written, moving to end of data.
 2017-11-29 10:53:10 DIR-sd JobId 609: Ready to append to end of Volume "ai_consolidate-0037" size=432352027
 2017-11-29 10:53:10 DIR-sd JobId 609: Forward spacing Volume "ai_inc-0038" to file:block 0:122455323.
 2017-11-29 10:53:37 DIR-sd JobId 609: End of Volume at file 0 on device "DIR-file" (/mnt/client1/mnt_crypt_backup_bareos), Volume "ai_inc-0038"
 2017-11-29 10:53:37 DIR-sd JobId 609: End of all volumes.
 2017-11-29 10:53:37 DIR-sd JobId 609: Elapsed time=00:00:27, Transfer rate=8.357 M Bytes/second
 2017-11-29 10:53:37 DIR-dir JobId 609: Joblevel was set to joblevel of first consolidated job: Incremental
 2017-11-29 10:53:38 DIR-dir JobId 609: Bareos DIR-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 609
  Job: client1_sys_ai.2017-11-29_10.53.05_40
  Backup Level: Virtual Full
  Client: "client1-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 14.04 LTS,xUbuntu_14.04,x86_64
  FileSet: "linux_common" 2017-11-28 18:30:00
  Pool: "disk_ai_consolidate" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "DIR-file-consolidate" (From Storage from Pool's NextPool resource)
  Scheduled time: 29-Nov-2017 10:53:05
  Start time: 29-Nov-2017 10:50:12
  End time: 29-Nov-2017 10:50:34
  Elapsed time: 22 secs
  Priority: 40
  SD Files Written: 449
  SD Bytes Written: 225,652,009 (225.6 MB)
  Rate: 10256.9 KB/s
  Volume name(s): ai_consolidate-0037
  Volume Session Id: 41
  Volume Session Time: 1511884549
  Last Volume Bytes: 658,344,926 (658.3 MB)
  SD Errors: 0
  SD termination status: OK
  Accurate: yes
  Termination: Backup OK


*Joblog (of verify job, wrongly verifies jobid 604 and not jobid 609)*
Connecting to Director localhost:9101
1000 OK: DIR-dir Version: 16.2.4 (01 July 2016)
Enter a period to cancel a command.
list joblog jobid=610
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-11-29 10:53:44 DIR-dir JobId 610: Verifying against JobId=604 Job=client1_sys_ai.2017-11-29_10.49.02_07
 2017-11-29 10:53:44 DIR-dir JobId 610: No files found to read. No bootstrap file written.
 2017-11-29 10:53:44 DIR-dir JobId 610: Bareos DIR-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 610
  Job: client1_sys_ai_v.2017-11-29_10.53.39_49
  FileSet: linux_system
  Verify Level: VolumeToCatalog
  Client: client1-fd
  Verify JobId: 604
  Verify Job: client1_sys_ai
  Start time: 29-Nov-2017 10:53:44
  End time: 29-Nov-2017 10:53:44
  Files Expected: 0
  Files Examined: 0
  Non-fatal FD errors: 0
  FD termination status:
  SD termination status:
  Termination: Verify OK
Attached Files:
Notes
(0005381)
bruno-at-bareos   
2023-09-05 16:14   
Hello, while cleaning up bug entries: would you mind confirming this with recent code, e.g. version 22.1.0?
Ideally, create a systemtest that reproduces it and propose a PR on GitHub.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1294 [bareos-core] file daemon minor always 2020-12-19 20:39 2023-09-05 14:01
Reporter: bsperduto Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 8  
Status: feedback Product Version: 20.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: LibCloud Accurate Files Memory Leak
Description: Hi,

In the current LibCloud with an accurate backup configuration has a frequent tendency to create an out of space error. In an accurate backup mode files that are determined to be unchanged are still passed up to the worker queue to be downloaded (bareos/core/src/plugins/filed/python/libcloud/bareos_libcloud_api/bucket_explorer.py L141). When the file makes its way through the queue and through start_backup_file it appears the director never triggers any file IOs against this file and never calls IO_CLOSE on the file. This will generate a memory leak on large buckets as these files are never purged until the job finishes and jobs are frequently aborted due to out of memory.

I have a few thoughts on how to work around this but I'd like some input before proceeding.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005376)
bruno-at-bareos   
2023-09-05 14:01   
Hello, cleaning up the bug entries.
May we ask whether this is still reproducible with a recent version like 22.1.0?
If yes, are you able to share some configuration to implement a systemtest that can reproduce the problem?

Of course, if you know how to, please open a PR on GitHub.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1239 [bareos-core] storage daemon major have not tried 2020-05-12 11:27 2023-09-04 17:11
Reporter: kabassanov Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: feedback Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: A running job stole a tape from another running job
Description: Hi,

I'm not really sure if it is a bug or a limitation, but yesterday I saw an unexpected behavior.

There is a tape library with 2 drives.
I started a restore job JOB_REC_1 that needed 4 tapes, let's say T1, T2, T3, T4. While it was reading from T3, another scheduled backup job JOB_BACK_1 started. It was using T4. So when JOB_REC_1 asked for T4, it was not available (because it was in use by JOB_BACK_1). Normal. So it started waiting for T4.
Some minutes later other jobs JOB_BACK_2, JOB_BACK_3, JOB_BACK_4 were queued (max 2 jobs in parallel in the Bareos config). When JOB_BACK_1 finished, JOB_BACK_2 started writing to the tape. At this moment I was a little surprised to see it writing to the tape, because I was expecting that JOB_REC_1 should have priority for the tape. I waited some minutes, then JOB_BACK_3 started. In its logs I saw that it was "Ready to append to end of Volume R0B018L7 at file=1097". At this moment, JOB_REC_1 unloaded the tape from JOB_BACK_3's drive and loaded it into its own drive. Of course JOB_BACK_3 got an I/O error message: "Error: stored/block.cc:804 Write error at 1097:0 on device "Drive-2" (/dev/nst1). ERR=Input/output error." and Bareos changed the tape status to Full.

I think the restore process succeeded (even though the job "failed", because Bareos found an entry in its database that was not written to the tape).
When I manually changed the tape status back to Append, the backup jobs succeeded (with a warning for JOB_BACK_3).

But I wonder if there is a bug in media locking…

Thanks.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005370)
bruno-at-bareos   
2023-09-04 17:11   
Hello, is this still reproducible in 22.1.0?
To be able to reproduce the exact case, please also provide the related configuration.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1122 [bareos-core] General major always 2019-10-18 19:32 2023-09-04 16:40
Reporter: xyros Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Consolidate queues and indefinitely orphans jobs but falsely reports status as "Consolidate OK" for last queued
Description: My Consolidate job never succeeds -- quickly terminating with "Consolidate OK" while leaving all the VirtualFull jobs it started queued and orphaned.

In the WebUI listing for the allegedly successful Consolidate run, it always lists the sequentially last (by job ID) client it queued as being the successful run; however, the level is "Incremental," nothing is actually done, and the client's VirtualFull job is actually still queued up with all the other clients.

In bconsole the status is similar to this:

Running Jobs:
Console connected at 15-Oct-19 15:06
 JobId Level Name Status
======================================================================
   636 Virtual PandoraFMS.2019-10-15_14.33.02_06 is waiting on max Storage jobs
   637 Virtual MongoDB.2019-10-15_14.33.03_09 is waiting on max Storage jobs
   638 Virtual DNS-DHCP.2019-10-15_14.33.04_11 is waiting on max Storage jobs
   639 Virtual Desktop_1.2019-10-15_14.33.05_19 is waiting on max Storage jobs
   640 Virtual Desktop_2.2019-10-15_14.33.05_20 is waiting on max Storage jobs
   641 Virtual Desktop_3.2019-10-15_14.33.06_21 is waiting on max Storage jobs
====


Given the above output, for example, the WebUI would show the following:

    642 Consolidate desktop3-fd.hq Consolidate Incremental 0 0.00 B 0 Success
    641 Desktop_3 desktop3-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    640 Desktop_2 desktop2-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    639 Desktop_1 desktop1-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    638 DNS-DHCP dns-dhcp-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    637 MongoDB mongodb-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    636 PandoraFMS pandorafms-fd.hq Backup VirtualFull 0 0.00 B 0 Queued


I don't know if this has anything to do with the fact that I have multiple storage definitions, one for each VLAN the server is on, and an additional one dedicated to the storage addressable on the default IP (see bareos-dir/storage/File.conf in the attached bareos.zip file). Technically this should not matter, but I get the impression Bareos has not been designed/tested to work elegantly in an environment where the server participates in VLANs.

The reason I'm using VLANs is so that connections do not have to go through a router to reach the clients. Therefore, the full network bandwidth of each LAN segment is available to the Bareos client/server data transfer.

I've tried debugging the Consolidate backup process using "bareos-dir -d 400 >> /var/log/bareos-dir.log"; however, I get nothing that particularly identifies the issue. I have attached a truncated log file that contains activity starting with queuing the second-to-last job. I've cut off the log at the point where it is stuck in endless cycling with output of:

bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN107 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
etc...

For convenience, I have attached all the most relevant excerpts of my configuration files (sanitized for privacy/security reasons).

I suspect there's a bug that is responsible for this; however, I'm unable to make heads or tails of what's going on.

Could someone please take a look?

Thanks
Tags: always incremental, consolidate
Steps To Reproduce: 1. Place Bareos on a network switch (virtual or actual) with tagged VLANS
2. Configure Bareos host to have connectivity on three or more VLANs
3. Make sure you have clients you can backup, on each of the VLANs
4. Use the attached config files as reference for setting up storages and jobs for testing.
Additional Information:
System Description
Attached Files: bareos.zip (9,113 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=391&type=bug
bareos-dir.log (41,361 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=392&type=bug
Notes
(0004008)
xyros   
2020-06-11 10:11   
Figured it out myself. The official documentation needs a full working example, as the always incremental backup configuration is very finicky and the error messages provide insufficient guidance for resolution.
(0005367)
bruno-at-bareos   
2023-09-04 16:40   
Fixed by user-adapted configuration.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1500 [bareos-core] General block always 2022-12-19 13:35 2023-09-04 16:30
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: Windows  
Priority: normal OS Version: Win10  
Status: acknowledged Product Version: 21.1.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Build on Windows using the MSVC fails, because of invalid compiler flags
Description: The cmake configuration part runs fine, but the compile part fails because of invalid compiler options.
On the summary page of the cmake run, the invalid compiler flags are shown:
C Compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe 19.34.31937.0
C++ Compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe 19.34.31937.0
C Compiler flags: /DWIN32 /D_WINDOWS /W3 -Werror -Wall
C++ Compiler flags: /DWIN32 /D_WINDOWS /W3 /GR /EHsc -Werror -Wall -m64 -mwin32 -mthreads
Linker flags: /machine:x64 /machine:x64 /machine:x64 /machine:x64

The build itself fails with:
[ 1%] Building CXX object core/src/lib/CMakeFiles/version-obj.dir/version.cc.obj
cl : command line error D8021 : invalid numeric argument '/Werror'.
NMAKE : fatal error U1077: ""C:\Program Files\CMake\bin\cmake.exe"": return code "0x2"
Stop.
NMAKE : fatal error U1077: ""C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\bin\HostX64\x64\nmake.exe"": return code "0x2"
Stop.
NMAKE : fatal error U1077: ""C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\bin\HostX64\x64\nmake.exe"": return code "0x2"
Stop.


Tags:
Steps To Reproduce: Call:
cmake -B bauen -DENABLE_PYTHON=OFF -DENABLE_JANSSON=OFF -DENABLE_BCONSOLE=OFF -Dclient-only=ON -DCMAKE_PREFIX_PATH="c:\zlib" -DOPENSSL_ROOT_DIR="c:\openssl"
cmake --build bauen
Additional Information: Using Visual Studio Community 2022.

As I understand it, the Microsoft compiler doesn't know the gcc-style "-Wxxx" options.
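For illustration, a minimal CMake sketch (not taken from the Bareos build system) of how warning flags are usually selected per compiler, so that MSVC is not handed gcc-style options:

```
# Sketch only: choose warning/error flags the active compiler understands
# instead of hard-coding gcc-style options.
if(MSVC)
  add_compile_options(/W4 /WX)        # /WX = treat warnings as errors on MSVC
else()
  add_compile_options(-Wall -Werror)  # gcc/clang equivalents
endif()
```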
Attached Files:
Notes
(0005365)
bruno-at-bareos   
2023-09-04 16:30   
Work is in progress to make it possible to build Bareos with Microsoft Visual C++.
It is still uncertain whether this will be done for 23.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1249 [bareos-core] director minor always 2020-06-09 12:02 2023-08-23 14:14
Reporter: browcio Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: feedback Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Mtimeonly do not work
Description: Hello,

I want to back up a directory containing incremental snapshots made with hard links. A standard fileset failed because the ctime of the hard links caused all data to be backed up during an incremental backup.

I found the MtimeOnly fileset option in the manual, but it seems that it does not work. It appears that the director does not forward the mtime_only option to the file daemon.
Tags:
Steps To Reproduce: Have fileset with mtimeonly:
*show fileset=XXX
FileSet {
  Name = "XXX"
  Include {
    Options {
      Signature = MD5
      OneFS = No
      AclSupport = Yes
      MtimeOnly = Yes
(...)
Additional Information: Tried to debug on client.

$ bareos-fd -f -d 99
(...)
(10): filed/dir_cmd.cc:1456-31899 LevelCmd: level = accurate_incremental mtime_only=0
(10): filed/dir_cmd.cc:1456-31899 LevelCmd: level = since_utime 1591695236 mtime_only=0 prev_job=XXX
(...)
(99): filed/accurate.cc:276-31899 XXX st_ctime differs
(99): filed/accurate.cc:276-31899 XXX st_ctime differs
(99): filed/accurate.cc:276-31899 XXX st_ctime differs
(99): filed/accurate.cc:276-31899 XXX st_ctime differs
System Description
Attached Files:
Notes
(0005351)
bruno-at-bareos   
2023-08-23 14:14   
Hello, cleaning up our bug entries: are you still able to reproduce this with a recent version like 22.1.0?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
616 [bareos-core] director feature always 2016-02-09 13:30 2023-08-22 21:04
Reporter: jmt Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 7  
Status: feedback Product Version: 15.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: VirtualFull after a full job with a base job
Description: When we do a VirtualFull backup with base jobs:
* the job writes only files modified after the last base job, as expected,
* but there is no record of the base files, neither in the archive nor in the BaseFiles table.

Then, when trying to restore using this job, we can only restore files modified after the last base job.
Tags:
Steps To Reproduce: * configure for base jobs
* do a base job
* do a full job
* do a virtualfull job
* try to restore with this last job
Additional Information:
System Description
Attached Files:
Notes
(0005343)
bruno-at-bareos   
2023-08-22 21:04   
I think it is fixed, but I prefer to ask: can you still reproduce the issue with recent Bareos 22.1.0?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1534 [bareos-core] storage daemon block always 2023-04-30 20:58 2023-08-22 20:42
Reporter: pavelr Platform: Linux  
Assigned To: bruno-at-bareos OS: RHEL (and clones)  
Priority: immediate OS Version: 9  
Status: feedback Product Version: 22.0.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: s3 backend doesn't work
Description: Integration with AWS S3 does not work. I used https://docs.bareos.org/TasksAndConcepts/StorageBackends.html and no matter what I configure I get the following error:
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
I do have correct permissions and tokens (confirmed with the AWS CLI; I was able to put a file in the bucket).
Tags:
Steps To Reproduce: using this configuration:
host = s3.amazonaws.com # This parameter is only used as baseurl and will be prepended with bucket and location set in device resource to form correct url
use_https = true
access_key = key
secret_key = secret
pricing_dir = "" # If not empty, an droplet.csv file will be created which will record all S3 operations.
backend = s3
aws_auth_sign_version = 4 # Currently, AWS S3 uses version 4. The Ceph S3 gateway uses version 2.
aws_region = us-west-2

throws:
2023-04-30 17:31:32 bareos-sd JobId 512: Warning: stored/label.cc:372 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "s3-longterm-0403" failed: ERR=stored/dev.cc:602 Could not open: AWS S3 Storage/s3-longterm-0403

2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
Additional Information: running on centos 9.
version:

bareos-webui-22.0.3~pre37.3c0624880-27.el9.x86_64
bareos-common-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-database-common-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-database-postgresql-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-database-tools-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-director-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-python-plugins-common-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-python3-plugin-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-bconsole-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-client-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-tools-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-storage-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-storage-droplet-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-percona-xtrabackup-python-plugin-22.0.4~pre78.1134d9896-47.el9.x86_64
System Description
Attached Files:
Notes
(0004994)
pavelr   
2023-04-30 21:05   
Sorry, I mixed up the errors:
#option 1:
host = s3.amazonaws.com:443
#result
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'

#option 2:
host = s3.amazonaws.com
#error:
2023-04-30 19:01:16 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/addrlist.c:600: dpl_addrlist_add: cannot lookup host s3.amazonaws.com
(0004998)
bruno-at-bareos   
2023-05-03 15:44   
Did you try removing the comments?

host = s3.amazonaws.com # This parameter is only used as baseurl and will be prepended with bucket and location set in device resource to form correct url
use_https = true
access_key = key
secret_key = secret
pricing_dir = "" # If not empty, an droplet.csv file will be created which will record all S3 operations.
backend = s3
aws_auth_sign_version = 4 # Currently, AWS S3 uses version 4. The Ceph S3 gateway uses version 2.
aws_region = us-west-2

to

host = s3.amazonaws.com
use_https = true
access_key = key
secret_key = secret
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4
aws_region = us-west-2

It looks like the parser was confused by the previous line?
(0005004)
pavelr   
2023-05-04 09:48   
I tried every possible option during the 2 days I've spent on it (including what you've suggested) - it just fails with the errors above.
A question not directly related to the issue: I did manage to work around it using another option, but I'm not sure it's correct.
What I did is mount a Backblaze B2 bucket locally using s3fs and configure the Bareos storage to work with a local folder. It does work; however, I saw some strange behavior while briefly testing it: I did a backup which saved 4 files, but when I try to restore, it throws a warning that it expected 4 files but restored only 3 - is that a known issue? Is this considered a proper way of working with buckets? This happened with a specific backup where part of the backup (the full) was on an actual local folder and the other part (the incremental) was on the locally mounted B2 bucket. When I did a full backup and restore to and from the bucket, it worked without issues. Again, I didn't test it thoroughly, so I wanted to know if it's OK or not.
(0005041)
arogge   
2023-05-10 14:52   
It's a long shot, but are you using a file with unix line endings?
Because if you had DOS/Windows line endings, then the value is "true<CR>", which is neither "true" nor "false" and thus an invalid value.
Maybe running "dos2unix" on your profile configuration file fixes the problem.
(0005061)
pavelr   
2023-05-16 08:45   
Well, it looks like that was indeed the issue :) However, now I get a different error:
2023-05-16 06:43:07 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:376: init_ssl_conn: SSL certificate verification status: 0: ok
2023-05-16 06:43:07 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:373: init_ssl_conn: SSL connect error: -1 (error:00000001:lib(0)::reason(1))
2023-05-16 06:43:07 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:372: init_ssl_conn: SSL_connect failed with 40F6FF4F617F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:ssl/record/rec_layer_s3.c:320:
(0005062)
glani   
2023-05-26 19:06   
I can confirm the following:
The AWS backend does not work on 22.0.3 / CentOS 9 either.
The documentation is misleading and slightly incorrect. The region is not taken into consideration.
I have tried many ways; the only one that worked for the storage health check was: <bucket-name>.s3.amazonaws.com:443

*status
Status available for:
1: Director
2: Storage
3: Client
4: Scheduler
5: All
Select daemon type for status (1-5): 2
The defined Storage resources are:
1: File
2: S3_ObjectStorage_General_smblob_10.0.0.100
3: S3_ObjectStorage_smblob_10.0.0.3
4: S3_ObjectStorage_smblob_10.0.0.4
Select Storage resource (1-4): 2
Connecting to Storage daemon S3_ObjectStorage_General_smblob_10.0.0.100 at 10.0.0.100:9103
 Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3

bareos-sd Version: 22.0.3 (24 March 2023) CentOS Stream release 9
Daemon started 26-May-23 16:21. Jobs: run=1, running=0, self-compiled binary
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 bwlimit=0kB/s

There are WARNINGS for this storagedaemon's configuration!
See output of 'bareos-sd -t' for details.

Running Jobs:
No Jobs running.
====

Jobs waiting to reserve a drive:
====

Device status:

Device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1) is not open.
Backend connection is working.
No pending IO flush requests.
==

Device "S3_ObjectDevice_General_smblob_10.0.0.100_2" (S3_ObjectDevice_General_smblob_2) is not open.
Backend connection is working.
No pending IO flush requests.
==
====

Used Volume status:
====

====


but when I do backup

I got :
26-May 16:36 bareos-dir JobId 22: No prior Full backup Job record found.
26-May 16:36 bareos-dir JobId 22: No prior or suitable Full backup found in catalog. Doing FULL backup.
26-May 16:36 bareos-dir JobId 22: Start Backup JobId 22, Job=bareos-dir-conf.2023-05-26_16.36.22_07
26-May 16:36 bareos-dir JobId 22: Connected Storage daemon at 10.0.0.100:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Connected Client: client-fd-smblob_10-0-0-100 at 10.0.0.100:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Handshake: Immediate TLS
26-May 16:36 bareos-dir JobId 22: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Using Device "S3_ObjectDevice_General_smblob_10.0.0.100_1" to write.
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: Connected Storage daemon at 10.0.0.100:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: Extended attribute support is enabled
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: ACL support is enabled
26-May 16:36 bareos-sd JobId 22: Warning: Volume "Full-0010" not on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Marking Volume "Full-0010" in Error in Catalog.
26-May 16:36 bareos-sd JobId 22: Warning: Volume "Full-0010" not on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Marking Volume "Full-0010" in Error in Catalog.
26-May 16:36 bareos-dir JobId 22: Created new Volume "Full-0011" in catalog.
26-May 16:36 bareos-sd JobId 22: Labeled new Volume "Full-0011" on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Wrote label to prelabeled Volume "Full-0011" on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1)
26-May 16:36 bareos-sd JobId 22: Releasing device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Fatal error: Failed to flush device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Elapsed time=00:00:30, Transfer rate=1.433 K Bytes/second
26-May 16:36 bareos-dir JobId 22: Insert of attributes batch table with 102 entries start
26-May 16:36 bareos-dir JobId 22: Insert of attributes batch table done
26-May 16:37 bareos-dir JobId 22: Error: Bareos bareos-dir 22.0.3 (24Mar23):

I googled a little bit; it looks like this issue has existed since the fork from Bacula.

I have spent two days on it as well. I cannot mount it via s3fs because the plan is to back up a PostgreSQL cluster with 3 nodes, and only the file daemon is sitting on the replica.
(0005063)
glani   
2023-05-26 19:46   
Looks like I found the issue.

The working config for the device profile is:

host = s3.amazonaws.com:443
use_https = true
backend = s3
aws_region = {{ aws_region }}
aws_auth_sign_version = 4
access_key = "{{ aws_access_id }}"
secret_key = "{{ aws_secret_key }}"
pricing_dir = ""

The AWS bucket should have an ACL that allows requests from the API. The default bucket creation denies this (Bareos hides this information because it is not implemented in https://github.com/scality/Droplet). It is better to test the bucket and AWS credentials with something independent; I tested via the Amazon Ansible module with the same bucket and credentials.

The bucket itself is specified in the device resource. NO COMMENTS! After removing all comments from the profile it started working.
(0005338)
bruno-at-bareos   
2023-08-22 20:42   
#glani, I'm wondering if you would like to add this solution as a remark to the documentation by doing a PR?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1367 [bareos-core] file daemon major always 2021-06-28 12:30 2023-08-17 14:19
Reporter: therm Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Potential memory leak in bareos-fd in version 20.0.1
Description: On big systems (~17 TB disk with 24 million inodes) bareos-fd is consuming a lot of RAM. Restarting the client works in the short run, but the problem is back after the next run.
I thought the RAM usage of bareos-fd would decrease after all jobs have finished, but it doesn't.
On the system mentioned above, bareos-fd takes 5-8 GB of RAM while doing nothing.
It is a simple file backup with no plugins, using accurate mode. The only special thing is that the partition is split up into multiple jobs.

top line:
 51768 root 20 0 6462352 5,197g 0 S 0,0 33,2 57:29.76 bareos-fd

Any idea how to track this down?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos-support-info_0001367_2023-08-07_110333.tgz (5,852 bytes) 2023-08-07 11:54
https://bugs.bareos.org/file_download.php?file_id=566&type=bug
Notes
(0005244)
bruno-at-bareos   
2023-07-20 17:25   
Could you test and report whether a newer version, like the current 22.1.0, still shows the same (mis-)behavior?
(0005252)
therm   
2023-07-24 09:38   
The problem still exists on Ubuntu 20.04 with bareos-client 22.1.1-pre26.
Memory consumption rises rapidly until the OOM killer comes (20 GB after 30 minutes).

Can we assist by providing logs or traces? We would need a howto for that.
(0005254)
bruno-at-bareos   
2023-07-25 15:20   
Thanks for the feedback. We can't promise when it will be processed (it will be best effort), but we are interested in trying to reproduce this.

Please refer to the following page to run our bareos-support-info tool and add its result:
https://servicedesk.bareos.com/help/en-us/2-support/3-how-to-use-the-bareos-support-info-sh-tool

You certainly want to check our page on how to raise the debug level:
https://servicedesk.bareos.com/help/en-us/2-support/2-how-do-i-set-the-debug-level

and the whole debugging chapter in our documentation.

If the resulting file is not too large (<2MB), please attach it here (it can also be made private; passwords, keys etc. are already filtered out by the tool). Otherwise just ask and I will open a temporary upload share.

The idea is to be able to reproduce it, so the developers will be able to create a proper fix.
(0005275)
bruno-at-bareos   
2023-07-31 11:32   
As we are interested in discovering what happens on your side, our dev team is also curious about the following information.

Could you report the output of `find <your fileset> | wc -c`, or, as an equivalent if you don't have huge amounts of changed files after the last full,

echo list files jobid=<last full jobid> | bconsole | wc -c

If you have a lot of changes, you can repeat the command for each incremental and sum up the results.
Thanks
(0005314)
therm   
2023-08-07 11:54   
(Last edited: 2023-08-07 11:56)
The last full backup needed 7 days 19:46:26 to back up 101,431,976 files with a total size of 2.56 TB.
(Just to clarify: it is another system with the same problem.)

Top information after 23 minutes of starting the backup job:

    PID USER PR NI VIRT RES SHR S %CPU %MEM ZEIT+ BEFEHL
1165622 root 20 0 10,1g 10,0g 6884 S 99,3 42,4 4:41.56 /usr/sbin/bareos-fd -f
 
I cancelled the job after that. Attached is the bareos-support-info. The compressed debug trace log is more than 800 MB. Could you give me a temporary upload share?
(0005323)
bruno-at-bareos   
2023-08-17 14:19   
Hello,

The fd has to store the whole path/filename list; with 101,431,976 files this can get big, especially with long path and file names.

You can estimate how much memory you need by summing the size of those strings in the File and Path tables for a given jobid.
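For illustration, a rough sketch of such an estimate against a PostgreSQL catalog (assuming the File and Path tables of recent Bareos versions; the jobid is a placeholder):

```
-- Rough estimate of the accurate-mode string list size for one job:
-- sums the byte length of all path and file names referenced by that job.
SELECT pg_size_pretty(SUM(length(p.path) + length(f.name))::bigint) AS estimated_size
  FROM file f
  JOIN path p ON p.pathid = f.pathid
 WHERE f.jobid = 12345;  -- replace 12345 with the JobId of the last Full
```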


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
979 [bareos-core] documentation text N/A 2018-07-05 14:42 2023-08-02 14:48
Reporter: stephand Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: low OS Version:  
Status: feedback Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 18.3.1  
Summary: Clarify Documentation of Max Wait Time Parameter
Description: Documentation is now:

 Max Wait Time = <time>

    The time specifies the maximum allowed time that a job may block waiting for a resource (such as waiting for a tape to be mounted, or waiting for the storage or file daemons to perform their duties), counted from the when the job starts, (not necessarily the same as when the job was scheduled).

It should be clarified how the time in the waiting state is counted; the phrase "counted from when the job starts" could be misleading, because the relevant time counting only starts when the job enters the waiting state.
It should also be mentioned that the time counter is reset to 0 when the job can continue before reaching Max Wait Time, so in fact it refers to the maximum continuous time the job spends in the waiting state.
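An illustrative example of the intended reading (assuming Max Wait Time = 1 hour and the reset behavior described above): a job that waits 40 minutes for a tape, continues, and later waits another 40 minutes is not cancelled, because neither continuous waiting period exceeds one hour, even though the total waiting time does.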
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0005130)
bruno-at-bareos   
2023-07-04 15:23   
Isn't that the purpose of the diagram below the configuration?
Or should we still rephrase the explanation?
(0005306)
bruno-at-bareos   
2023-08-02 14:48   
Just modify the documentation about the reset if the job continues.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1333 [bareos-core] storage daemon block have not tried 2021-03-27 16:33 2023-07-31 14:10
Reporter: noone Platform: x86_64  
Assigned To: bruno-at-bareos OS: SLES  
Priority: normal OS Version: 15.1  
Status: resolved Product Version: 19.2.9  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: mtx-changer stopped working
Description: mtx-changer stopped working after an update.
Each time a new tape was loaded by bareos I got an error message like:
"""
Connecting to Storage daemon SL3-LTO5-00 at uranus.mcservice.eu:9103 ...
3304 Issuing autochanger "load slot 6, drive 0" command.
3992 Bad autochanger "load slot 6, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

3001 Mounted Volume: ALN043L5
3001 Device "tapedrive-lto-5" (/dev/tape/by-id/scsi-300e09e60001ce29a-nst) is already mounted with Volume "ALN043L5"
"""
In reality the tape was loaded and after 5 minutes the command was killed by timeout. The tape is loaded after roughly 120 seconds and is readable by that time using other applications (like dd or tapeinfo).

I tracked it down to the `wait_for_drive()` function in the script. I modified the function to look like this:
```
wait_for_drive() {
  i=0
  while [ $i -le 300 ]; do # Wait max 300 seconds
    debug "tapeinfo -f $1 2>&1"
    debug `tapeinfo -f $1 2>&1`
    debug "mt -f $1 status 2>&1"
    debug `mt -f $1 status 2>&1`
    if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
      break
    fi
    debug "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}
```
An example log output is (shortened):
"""
20210327-16:20:35 tapeinfo -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst 2>&1
20210327-16:20:35 mtx: Request Sense: Long Report=yes mtx: Request Sense: Valid Residual=no mtx: Request Sense: Error Code=0 (Unknown?!) mtx: Request Sense: Sense Key=No Sense mtx: Request Sense: FileMark=no mtx: Request Sense: EOM=no mtx: Request Sense: ILI=no mtx: Request Sense: Additional Sense Code = 00 mtx: Request Sense: Additional Sense Qualifier = 00 mtx: Request Sense: BPV=no mtx: Request Sense: Error in CDB=no mtx: Request Sense: SKSV=no INQUIRY Command Failed
20210327-16:20:35 mt -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst status 2>&1
20210327-16:20:35 Unknown tape drive: file no= -1 block no= -1
20210327-16:20:35 Device /dev/tape/by-id/scsi-300e09e60001ce29a-nst - not ready, retrying...
"""
I verified via bash, using tapeinfo, that the tape drive was ready at this time.



Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004105)
noone   
2021-03-27 16:39   
For anyone facing this problem:

I found a workaround. The mt command's return value is the problem, so I am now using tapeinfo as a replacement.
At your own risk, you could try to replace the `wait_for_drive` function with
```
wait_for_drive() {
  i=0
  while [ $i -le 300 ]; do # Wait max 300 seconds
    # Code Changed because mt has stopped working in December 2020. This is a provisional fix...
    #if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
    if tapeinfo -f $1 2>&1 | grep "Ready: yes" >/dev/null 2>&1; then
      break
    fi
    debug "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}
```

This might or might not work on different systems. For it to work, bareos-sd has to run as root on my machine.

PS:
I found out that the mt command returns
"""
uranus:~ # mt -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst status
Unknown tape drive:

   file no= -1 block no= -1
"""
so this might be the reason why it stopped working. But I am unable to find out why the output of mt has changed.
(0005279)
bruno-at-bareos   
2023-07-31 14:10   
Thanks for your tips. As it is something we can fix, we mark it as resolved.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
636 [bareos-core] storage daemon minor always 2016-03-30 08:50 2023-07-27 15:51
Reporter: axestr Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 6  
Status: new Product Version: 15.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error on Copy Job / Maybe simmilar to 0000361
Description: For one copy job, I always get the error
 Fatal error: append.c:191 FI=51 from SD not positive or sequential=0
(see the additional information for the full job log).
All other jobs and copy jobs are running fine.
Maybe this is similar to 0000361?

How can I help to resolve this issue?
Tags:
Steps To Reproduce: Scheduler reproduces this error daily.
Additional Information: 30-Mar 01:40 home-dir JobId 9703: Copying using JobId=6613 Job=MsWs1-Users.2016-02-18_22.30.00_32
30-Mar 01:40 home-dir JobId 9703: Bootstrap records written to /var/lib/bareos/home-dir.restore.9.bsr
30-Mar 01:40 home-dir JobId 9703: Start Copying JobId 9703, Job=Copy-HomeLocal-Q-to-Backup1-Q.2016-03-30_01.40.03_21
30-Mar 01:40 home-dir JobId 9703: Using Device "Home-Store1" to read.
30-Mar 01:40 home-dir JobId 9704: Using Device "Backup1-Store1" to write.
30-Mar 01:40 backup1-sd JobId 9704: Volume "Backup1-Q-4" previously written, moving to end of data.
30-Mar 01:40 home-sd JobId 9703: Ready to read from volume "HomeLocal-Q-2" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
30-Mar 01:40 backup1-sd JobId 9704: Ready to append to end of Volume "Backup1-Q-4" size=6782764324
30-Mar 01:40 home-sd JobId 9703: Forward spacing Volume "HomeLocal-Q-2" to file:block 2:2081572941.
30-Mar 01:40 home-sd JobId 9703: End of Volume at file 2 on device "Home-Store1" (/home/system-shares/bareos-storage/store1), Volume "HomeLocal-Q-2"
30-Mar 01:40 home-sd JobId 9703: Ready to read from volume "HomeLocal-Q-3" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
30-Mar 01:40 backup1-sd JobId 9704: Fatal error: append.c:191 FI=51 from SD not positive or sequential=0
30-Mar 01:40 backup1-sd JobId 9704: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
30-Mar 01:40 home-sd JobId 9703: Forward spacing Volume "HomeLocal-Q-3" to file:block 0:11067481.
30-Mar 01:40 home-sd JobId 9703: Error: bsock_tcp.c:422 Write error sending -1 bytes to Storage daemon:backup1.pmit.cc:9103: ERR=Connection reset by peer
30-Mar 01:40 home-sd JobId 9703: Fatal error: mac.c:537 Network send error to SD. ERR=Connection reset by peer
30-Mar 01:40 home-dir JobId 9703: Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 6613
  Prev Backup Job: MsWs1-Users.2016-02-18_22.30.00_32
  New Backup JobId: 9704
  Current JobId: 9703
  Current Job: Copy-HomeLocal-Q-to-Backup1-Q.2016-03-30_01.40.03_21
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Q" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 30-Mar-2016 01:40:05
  End time: 30-Mar-2016 01:40:05
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 2
  SD Bytes Written: 3,078 (3.078 KB)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 50
  Volume Session Time: 1459228020
  Last Volume Bytes: 0 (0 B)
  SD Errors: 1
  SD termination status: Fatal Error
  Termination: *** Copying Error ***
System Description
Attached Files: bareos_volumes.png (33,262 bytes) 2018-05-23 14:13
https://bugs.bareos.org/file_download.php?file_id=292&type=bug
Notes
(0002230)
axestr   
2016-04-08 07:35   
Found a nasty workaround. The problem was the copy of a job which resides on two volumes (HomeLocal-Q-2 and HomeLocal-Q-3).
I deleted the job, and now everything is fine. But deleting the job is not a clean solution ;-)
(0002265)
mvwieringen   
2016-05-09 19:13   
I think this is a corner case where there is not much on the first
volume (e.g. not even a full data record), and as such an ASSERT is triggered
that makes sure that the FI (FileIndex) is progressing. Probably a seriously
difficult one to reproduce in a reliable way, which makes it hard to create a workaround.
As you deleted the job, we also have no real way to get higher debug
output to see if my hunch is right. A rough illustration of the check is sketched below.
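
For context only, here is a simplified, hypothetical Python sketch of what that check enforces (the function name and exact logic are assumptions, not Bareos source code): the appending storage daemon expects every incoming FileIndex to be positive and never to go backwards.

CODE:
def check_file_index(file_index, last_file_index):
    # Simplified sketch, not Bareos code: reject a record whose FileIndex
    # is not positive or goes backwards, which is what the
    # "FI=... from SD not positive or sequential" error reports.
    if file_index <= 0 or file_index < last_file_index:
        raise RuntimeError(
            "FI=%d from SD not positive or sequential" % file_index
        )
    return file_index

last = 0
for fi in (50, 51, 51):      # sequential records are accepted
    last = check_file_index(fi, last)
# check_file_index(0, 51) would raise, like the fatal error in the job log above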
(0002266)
axestr   
2016-05-10 06:21   
If the needed information is in the PostgreSQL database, I can get backups from this database.
Also, I have this information from BAT and the following reports:

Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 6613
  Prev Backup Job: MsWs1-Users.2016-02-18_22.30.00_32
  New Backup JobId: 10355
  Current JobId: 10354
  Current Job: Copy-HomeLocal-Q-to-Backup1-Q.2016-04-07_01.40.02_26
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Q" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 07-Apr-2016 01:40:05
  End time: 07-Apr-2016 01:40:05
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 2
  SD Bytes Written: 3,078 (3.078 KB)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 510
  Volume Session Time: 1459228020
  Last Volume Bytes: 0 (0 B)
  SD Errors: 1
  SD termination status: Fatal Error
  Termination: *** Copying Error ***




27-Mar 01:20 home-dir JobId 9457: Copying using JobId=6613 Job=MsWs1-Users.2016-02-18_22.30.00_32
27-Mar 01:20 home-dir JobId 9457: Bootstrap records written to /var/lib/bareos/home-dir.restore.4.bsr
27-Mar 01:20 home-dir JobId 9457: Start Copying JobId 9457, Job=Copy-HomeLocal-Q-to-Backup1-Q.2016-03-27_01.20.02_58
27-Mar 01:20 home-dir JobId 9457: Using Device "Home-Store1" to read.
27-Mar 01:20 home-dir JobId 9458: Using Device "Backup1-Store1" to write.
27-Mar 01:20 backup1-sd JobId 9458: Volume "Backup1-Q-4" previously written, moving to end of data.
27-Mar 01:20 home-sd JobId 9457: Ready to read from volume "HomeLocal-Q-2" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
27-Mar 01:20 backup1-sd JobId 9458: Ready to append to end of Volume "Backup1-Q-4" size=6143665867
27-Mar 01:20 home-sd JobId 9457: Forward spacing Volume "HomeLocal-Q-2" to file:block 2:2081572941.
27-Mar 01:20 home-sd JobId 9457: End of Volume at file 2 on device "Home-Store1" (/home/system-shares/bareos-storage/store1), Volume "HomeLocal-Q-2"
27-Mar 01:20 home-sd JobId 9457: Ready to read from volume "HomeLocal-Q-3" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
27-Mar 01:20 backup1-sd JobId 9458: Fatal error: append.c:191 FI=51 from SD not positive or sequential=0
27-Mar 01:20 backup1-sd JobId 9458: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
27-Mar 01:20 home-sd JobId 9457: Forward spacing Volume "HomeLocal-Q-3" to file:block 0:11067481.
27-Mar 01:20 home-sd JobId 9457: Error: bsock_tcp.c:422 Write error sending -1 bytes to Storage daemon:backup1.pmit.cc:9103: ERR=Connection reset by peer
27-Mar 01:20 home-sd JobId 9457: Fatal error: mac.c:537 Network send error to SD. ERR=Connection reset by peer
27-Mar 01:20 home-dir JobId 9457: Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 6613
  Prev Backup Job: MsWs1-Users.2016-02-18_22.30.00_32
  New Backup JobId: 9458
  Current JobId: 9457
  Current Job: Copy-HomeLocal-Q-to-Backup1-Q.2016-03-27_01.20.02_58
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Q" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 27-Mar-2016 01:20:03
  End time: 27-Mar-2016 01:20:03
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 2
  SD Bytes Written: 3,078 (3.078 KB)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 44
  Volume Session Time: 1458977449
  Last Volume Bytes: 0 (0 B)
  SD Errors: 1
  SD termination status: Fatal Error
  Termination: *** Copying Error ***



28-Mar 01:40 home-dir JobId 9533: Copying using JobId=6613 Job=MsWs1-Users.2016-02-18_22.30.00_32
28-Mar 01:40 home-dir JobId 9533: Bootstrap records written to /var/lib/bareos/home-dir.restore.3.bsr
28-Mar 01:40 home-dir JobId 9533: Start Copying JobId 9533, Job=Copy-HomeLocal-Q-to-Backup1-Q.2016-03-28_01.40.00_30
28-Mar 01:40 home-dir JobId 9533: Using Device "Home-Store1" to read.
28-Mar 01:40 home-dir JobId 9534: Using Device "Backup1-Store1" to write.
28-Mar 01:40 backup1-sd JobId 9534: Volume "Backup1-Q-4" previously written, moving to end of data.
28-Mar 01:40 home-sd JobId 9533: Ready to read from volume "HomeLocal-Q-2" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
28-Mar 01:40 backup1-sd JobId 9534: Ready to append to end of Volume "Backup1-Q-4" size=6592042035
28-Mar 01:40 home-sd JobId 9533: Forward spacing Volume "HomeLocal-Q-2" to file:block 2:2081572941.
28-Mar 01:40 home-sd JobId 9533: End of Volume at file 2 on device "Home-Store1" (/home/system-shares/bareos-storage/store1), Volume "HomeLocal-Q-2"
28-Mar 01:40 home-sd JobId 9533: Ready to read from volume "HomeLocal-Q-3" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
28-Mar 01:40 backup1-sd JobId 9534: Fatal error: append.c:191 FI=51 from SD not positive or sequential=0
28-Mar 01:40 backup1-sd JobId 9534: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
28-Mar 01:40 home-sd JobId 9533: Forward spacing Volume "HomeLocal-Q-3" to file:block 0:11067481.
28-Mar 01:40 home-sd JobId 9533: Error: bsock_tcp.c:422 Write error sending -1 bytes to Storage daemon:backup1.pmit.cc:9103: ERR=Connection reset by peer
28-Mar 01:40 home-sd JobId 9533: Fatal error: mac.c:537 Network send error to SD. ERR=Connection reset by peer
28-Mar 01:40 home-dir JobId 9533: Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 6613
  Prev Backup Job: MsWs1-Users.2016-02-18_22.30.00_32
  New Backup JobId: 9534
  Current JobId: 9533
  Current Job: Copy-HomeLocal-Q-to-Backup1-Q.2016-03-28_01.40.00_30
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Q" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 28-Mar-2016 01:40:32
  End time: 28-Mar-2016 01:40:33
  Elapsed time: 1 sec
  Priority: 10
  SD Files Written: 2
  SD Bytes Written: 3,078 (3.078 KB)
  Rate: 3.1 KB/s
  Volume name(s):
  Volume Session Id: 27
  Volume Session Time: 1459067007
  Last Volume Bytes: 0 (0 B)
  SD Errors: 1
  SD termination status: Fatal Error
  Termination: *** Copying Error ***



29-Mar 01:40 home-dir JobId 9607: Copying using JobId=6613 Job=MsWs1-Users.2016-02-18_22.30.00_32
29-Mar 01:40 home-dir JobId 9607: Bootstrap records written to /var/lib/bareos/home-dir.restore.1.bsr
29-Mar 01:40 home-dir JobId 9607: Start Copying JobId 9607, Job=Copy-HomeLocal-Q-to-Backup1-Q.2016-03-29_01.40.02_44
29-Mar 01:40 home-dir JobId 9607: Using Device "Home-Store1" to read.
29-Mar 01:40 home-dir JobId 9608: Using Device "Backup1-Store1" to write.
29-Mar 01:40 backup1-sd JobId 9608: Volume "Backup1-Q-4" previously written, moving to end of data.
29-Mar 01:40 home-sd JobId 9607: Ready to read from volume "HomeLocal-Q-2" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
29-Mar 01:40 backup1-sd JobId 9608: Ready to append to end of Volume "Backup1-Q-4" size=6592238917
29-Mar 01:40 home-sd JobId 9607: Forward spacing Volume "HomeLocal-Q-2" to file:block 2:2081572941.
29-Mar 01:40 home-sd JobId 9607: End of Volume at file 2 on device "Home-Store1" (/home/system-shares/bareos-storage/store1), Volume "HomeLocal-Q-2"
29-Mar 01:40 home-sd JobId 9607: Ready to read from volume "HomeLocal-Q-3" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
29-Mar 01:40 backup1-sd JobId 9608: Fatal error: append.c:191 FI=51 from SD not positive or sequential=0
29-Mar 01:40 backup1-sd JobId 9608: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
29-Mar 01:40 home-sd JobId 9607: Forward spacing Volume "HomeLocal-Q-3" to file:block 0:11067481.
29-Mar 01:40 home-sd JobId 9607: Error: bsock_tcp.c:422 Write error sending -1 bytes to Storage daemon:backup1.pmit.cc:9103: ERR=Connection reset by peer
29-Mar 01:40 home-sd JobId 9607: Fatal error: mac.c:537 Network send error to SD. ERR=Connection reset by peer
29-Mar 01:40 home-dir JobId 9607: Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 6613
  Prev Backup Job: MsWs1-Users.2016-02-18_22.30.00_32
  New Backup JobId: 9608
  Current JobId: 9607
  Current Job: Copy-HomeLocal-Q-to-Backup1-Q.2016-03-29_01.40.02_44
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Q" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 29-Mar-2016 01:40:05
  End time: 29-Mar-2016 01:40:05
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 2
  SD Bytes Written: 3,078 (3.078 KB)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 79
  Volume Session Time: 1459067007
  Last Volume Bytes: 0 (0 B)
  SD Errors: 1
  SD termination status: Fatal Error
  Termination: *** Copying Error ***



30-Mar 01:40 home-dir JobId 9703: Copying using JobId=6613 Job=MsWs1-Users.2016-02-18_22.30.00_32
30-Mar 01:40 home-dir JobId 9703: Bootstrap records written to /var/lib/bareos/home-dir.restore.9.bsr
30-Mar 01:40 home-dir JobId 9703: Start Copying JobId 9703, Job=Copy-HomeLocal-Q-to-Backup1-Q.2016-03-30_01.40.03_21
30-Mar 01:40 home-dir JobId 9703: Using Device "Home-Store1" to read.
30-Mar 01:40 home-dir JobId 9704: Using Device "Backup1-Store1" to write.
30-Mar 01:40 backup1-sd JobId 9704: Volume "Backup1-Q-4" previously written, moving to end of data.
30-Mar 01:40 home-sd JobId 9703: Ready to read from volume "HomeLocal-Q-2" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
30-Mar 01:40 backup1-sd JobId 9704: Ready to append to end of Volume "Backup1-Q-4" size=6782764324
30-Mar 01:40 home-sd JobId 9703: Forward spacing Volume "HomeLocal-Q-2" to file:block 2:2081572941.
30-Mar 01:40 home-sd JobId 9703: End of Volume at file 2 on device "Home-Store1" (/home/system-shares/bareos-storage/store1), Volume "HomeLocal-Q-2"
30-Mar 01:40 home-sd JobId 9703: Ready to read from volume "HomeLocal-Q-3" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
30-Mar 01:40 backup1-sd JobId 9704: Fatal error: append.c:191 FI=51 from SD not positive or sequential=0
30-Mar 01:40 backup1-sd JobId 9704: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
30-Mar 01:40 home-sd JobId 9703: Forward spacing Volume "HomeLocal-Q-3" to file:block 0:11067481.
30-Mar 01:40 home-sd JobId 9703: Error: bsock_tcp.c:422 Write error sending -1 bytes to Storage daemon:backup1.pmit.cc:9103: ERR=Connection reset by peer
30-Mar 01:40 home-sd JobId 9703: Fatal error: mac.c:537 Network send error to SD. ERR=Connection reset by peer
30-Mar 01:40 home-dir JobId 9703: Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 6613
  Prev Backup Job: MsWs1-Users.2016-02-18_22.30.00_32
  New Backup JobId: 9704
  Current JobId: 9703
  Current Job: Copy-HomeLocal-Q-to-Backup1-Q.2016-03-30_01.40.03_21
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Q" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 30-Mar-2016 01:40:05
  End time: 30-Mar-2016 01:40:05
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 2
  SD Bytes Written: 3,078 (3.078 KB)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 50
  Volume Session Time: 1459228020
  Last Volume Bytes: 0 (0 B)
  SD Errors: 1
  SD termination status: Fatal Error
  Termination: *** Copying Error ***



07-Apr 01:40 home-dir JobId 10354: Copying using JobId=6613 Job=MsWs1-Users.2016-02-18_22.30.00_32
07-Apr 01:40 home-dir JobId 10354: Bootstrap records written to /var/lib/bareos/home-dir.restore.170.bsr
07-Apr 01:40 home-dir JobId 10354: Start Copying JobId 10354, Job=Copy-HomeLocal-Q-to-Backup1-Q.2016-04-07_01.40.02_26
07-Apr 01:40 home-dir JobId 10354: Using Device "Home-Store1" to read.
07-Apr 01:40 home-dir JobId 10355: Using Device "Backup1-Store1" to write.
07-Apr 01:40 backup1-sd JobId 10355: Volume "Backup1-Q-0" previously written, moving to end of data.
07-Apr 01:40 home-sd JobId 10354: Ready to read from volume "HomeLocal-Q-2" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
07-Apr 01:40 backup1-sd JobId 10355: Ready to append to end of Volume "Backup1-Q-0" size=3490511342
07-Apr 01:40 home-sd JobId 10354: Forward spacing Volume "HomeLocal-Q-2" to file:block 2:2081572941.
07-Apr 01:40 home-sd JobId 10354: End of Volume at file 2 on device "Home-Store1" (/home/system-shares/bareos-storage/store1), Volume "HomeLocal-Q-2"
07-Apr 01:40 home-sd JobId 10354: Ready to read from volume "HomeLocal-Q-3" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
07-Apr 01:40 backup1-sd JobId 10355: Fatal error: append.c:191 FI=51 from SD not positive or sequential=0
07-Apr 01:40 backup1-sd JobId 10355: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
07-Apr 01:40 home-sd JobId 10354: Forward spacing Volume "HomeLocal-Q-3" to file:block 0:11067481.
07-Apr 01:40 home-sd JobId 10354: Error: bsock_tcp.c:422 Write error sending -1 bytes to Storage daemon:backup1.pmit.cc:9103: ERR=Connection reset by peer
07-Apr 01:40 home-sd JobId 10354: Fatal error: mac.c:537 Network send error to SD. ERR=Connection reset by peer
07-Apr 01:40 home-dir JobId 10354: Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 6613
  Prev Backup Job: MsWs1-Users.2016-02-18_22.30.00_32
  New Backup JobId: 10355
  Current JobId: 10354
  Current Job: Copy-HomeLocal-Q-to-Backup1-Q.2016-04-07_01.40.02_26
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Q" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Q" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 07-Apr-2016 01:40:05
  End time: 07-Apr-2016 01:40:05
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 2
  SD Bytes Written: 3,078 (3.078 KB)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 510
  Volume Session Time: 1459228020
  Last Volume Bytes: 0 (0 B)
  SD Errors: 1
  SD termination status: Fatal Error
  Termination: *** Copying Error ***
(0002811)
axestr   
2017-11-04 08:37   
Hi Bareos Team,

The same error pops up again daily. What could I do or track to help identify the cause?

04-Nov 01:20 home-dir JobId 64023: Copying using JobId=62888 Job=Leela-Users.2017-10-22_22.30.00_57
04-Nov 01:20 home-dir JobId 64023: Bootstrap records written to /var/lib/bareos/home-dir.restore.108.bsr
04-Nov 01:20 home-dir JobId 64023: Start Copying JobId 64023, Job=Copy-HomeLocal-Y-to-Backup1-Y.2017-11-04_01.20.02_23
04-Nov 01:20 home-dir JobId 64023: Using Device "Home-Store1" to read.
04-Nov 01:20 home-dir JobId 64024: Using Device "Backup1-Store1" to write.
04-Nov 01:20 home-sd JobId 64023: Ready to read from volume "HomeLocal-Y-10" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
04-Nov 01:20 backup1-sd JobId 64024: Volume "Backup1-Y-11" previously written, moving to end of data.
04-Nov 01:20 backup1-sd JobId 64024: Ready to append to end of Volume "Backup1-Y-11" size=3495845162
04-Nov 01:20 home-sd JobId 64023: Forward spacing Volume "HomeLocal-Y-10" to file:block 2:2100121434.
04-Nov 01:20 home-sd JobId 64023: End of Volume at file 2 on device "Home-Store1" (/home/system-shares/bareos-storage/store1), Volume "HomeLocal-Y-10"
04-Nov 01:20 home-sd JobId 64023: Ready to read from volume "HomeLocal-Y-11" on device "Home-Store1" (/home/system-shares/bareos-storage/store1).
04-Nov 01:20 backup1-sd JobId 64024: Fatal error: append.c:191 FI=217 from SD not positive or sequential=0
04-Nov 01:20 home-sd JobId 64023: Forward spacing Volume "HomeLocal-Y-11" to file:block 0:21805270.
04-Nov 01:20 home-sd JobId 64023: Error: bsock_tcp.c:422 Write error sending 14208 bytes to Storage daemon:backup1.pmit.cc:9103: ERR=Connection reset by peer
04-Nov 01:20 home-sd JobId 64023: Fatal error: mac.c:326 Network send error to SD. ERR=Connection reset by peer
04-Nov 01:20 home-sd JobId 64023: Error: bsock_tcp.c:357 Socket has errors=1 on call to Storage daemon:backup1.pmit.cc:9103
04-Nov 01:20 backup1-sd JobId 64024: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
04-Nov 01:20 home-dir JobId 64023: Error: Bareos home-dir 15.2.2 (16Nov15):
  Build OS: x86_64-redhat-linux-gnu redhat CentOS release 6.6 (Final)
  Prev Backup JobId: 62888
  Prev Backup Job: Leela-Users.2017-10-22_22.30.00_57
  New Backup JobId: 64024
  Current JobId: 64023
  Current Job: Copy-HomeLocal-Y-to-Backup1-Y.2017-11-04_01.20.02_23
  Backup Level: Full
  Client: Dummy
  FileSet: "Dummy"
  Read Pool: "HomeLocal-Y" (From Job resource)
  Read Storage: "Home-Store1" (From Pool resource)
  Write Pool: "Backup1-Y" (From Job Pool's NextPool resource)
  Write Storage: "Backup1-Store1" (From Storage from Pool's NextPool resource)
  Next Pool: "Backup1-Y" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Start time: 04-Nov-2017 01:20:04
  End time: 04-Nov-2017 01:20:05
  Elapsed time: 1 sec
  Priority: 10
  SD Files Written: 1
  SD Bytes Written: 3,606 (3.606 KB)
  Rate: 3.6 KB/s
  Volume name(s):
  Volume Session Id: 315
  Volume Session Time: 1509371965
  Last Volume Bytes: 0 (0 B)
  SD Errors: 2
  SD termination status: Fatal Error
  Termination: *** Copying Error ***

BR, Axel
(0003014)
IvanBayan   
2018-05-23 11:43   
I have the same problem.
Bareos version 17.2.4-9.1 installed from http://download.bareos.org/bareos/release/latest/xUbuntu_16.04.

At first it looked like a network problem, but when I made a dump I found that the TCP connection was closed by the storage daemon without a FIN flag; eventually it just started to send RST flags.
In the debug output from the storage daemon I found the following:
 23-May-2018 04:52:49.623885 ow-backup03-sd (850): message.c:858-6475 Enter dispatch_message type=3 msg=ow-backup03-sd JobId 6475: Fatal error: append.c:192 FI=5 from SD not positive or sequential=0
23-May-2018 04:52:49.623899 ow-backup03-sd (850): message.c:1129-6475 DIRECTOR for following msg: ow-backup03-sd JobId 6475: Fatal error: append.c:192 FI=5 from SD not positive or sequential=0

It only occurs when I try to run a copy job.
(0003015)
IvanBayan   
2018-05-23 14:12   
The copy job is not completely broken; it only fails to copy jobs that share a volume with another job:
*list jobs volume=ord-uploadcdr01-fd-mia-backup03_WV-CDRs_full-20180514-5851-486
+-------+----------------------------+--------------------+---------------------+------+-------+----------+-------------+-----------+
| jobid | name                       | client             | starttime           | type | level | jobfiles | jobbytes    | jobstatus |
+-------+----------------------------+--------------------+---------------------+------+-------+----------+-------------+-----------+
| 5,926 | ow-cdrupload01_CDRs_daily  | ow-cdrupload01-fd  | 2018-05-15 21:00:02 | B    | F     |      108 | 809,354,467 | T         |
| 5,927 | ord-uploadcdr01_CDRs_daily | ord-uploadcdr01-fd | 2018-05-15 21:00:03 | B    | F     |       70 | 334,296,463 | T         |
+-------+----------------------------+--------------------+---------------------+------+-------+----------+-------------+-----------+
*list volumes jobid=5926
Jobid 5926 used 1 Volume(s): ord-uploadcdr01-fd-mia-backup03_WV-CDRs_full-20180514-5851-486
*list volumes jobid=5927
Jobid 5927 used 1 Volume(s): ord-uploadcdr01-fd-mia-backup03_WV-CDRs_full-20180514-5851-486

In the webui the job looks like it uses two volumes.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1110 [bareos-core] file daemon minor always 2019-09-05 13:07 2023-07-20 10:27
Reporter: stephand Platform:  
Assigned To: OS:  
Priority: low OS Version:  
Status: new Product Version: 18.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: BareosFdPluginBaseclass:parse_plugin_definition() does not allow to override plugin options using FD Plugin Options in Job
Description: Currently, the code does not allow overriding plugin options set in the FileSet by using
the FD Plugin Options parameter in a Job resource.
As a user/admin, I want to be able to override the more general settings (FileSet) with more
specific settings (Job).

See
https://github.com/bareos/bareos/blob/02f72235abaa5acacc5e672bbe6af1a9253f9479/core/src/plugins/filed/BareosFdPluginBaseclass.py#L99
Tags: fd, plugin
Steps To Reproduce: Example FileSet:

FileSet {
  Name = "vm_generic_fileset"
  Include {
    Options {
      ...
    }
    Plugin = "python:module_path=/usr/lib64/bareos/plugins/vmware_plugin:module_name=bareos-fd-vmware:dc=mydc:folder=/my/vmfolder:vmname=myvm:vcserver=myvc.example.com:vcuser=bakadm@vsphere.local:vcpass=myexamplepassword"
  }
}

And example Job definition:

Job {
  Name = "vm_test02_job"
  JobDefs = "DefaultJob"
  FileSet = "vm_generic_fileset"
  FD Plugin Options = "python:folder=/dmz:vmname=fw01"
}

The above options from "FD Plugin Options" currently do not override the options declared in the FileSet;
it is only possible to add options that are not yet declared in the FileSet (see the sketch below).
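
For illustration only, here is a minimal Python sketch of the requested precedence (the helper names are assumptions, not the actual BareosFdPluginBaseclass code): options parsed from the FileSet plugin string are applied first, and the Job-level "FD Plugin Options" are merged on top, so duplicate keys are overridden by the Job.

CODE:
def parse_options(plugin_string):
    # "python:folder=/dmz:vmname=fw01" -> {"folder": "/dmz", "vmname": "fw01"}
    parts = plugin_string.split(":")[1:]  # drop the leading "python"
    return dict(p.split("=", 1) for p in parts if "=" in p)

def effective_options(fileset_plugin, job_fd_plugin_options):
    # Hypothetical merge: Job-level options override FileSet-level options.
    options = parse_options(fileset_plugin)
    options.update(parse_options(job_fd_plugin_options))
    return options

opts = effective_options(
    "python:module_name=bareos-fd-vmware:folder=/my/vmfolder:vmname=myvm",
    "python:folder=/dmz:vmname=fw01",
)
assert opts["folder"] == "/dmz" and opts["vmname"] == "fw01"
assert opts["module_name"] == "bareos-fd-vmware"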
Additional Information:
Attached Files:
Notes
(0005236)
bruno-at-bareos   
2023-07-20 10:27   
Would be nice to have.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
978 [bareos-core] director feature N/A 2018-07-04 22:59 2023-07-04 15:26
Reporter: stevec Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 16.2.7  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Define encryption to pool ; auto encrypt tapes when purged
Description:
Ability to migrate to/from encrypted volumes in a pool over time when volumes are purged/overwritten with new data automatically.

The current method of flagging a tape for encryption at label time has issues when you want to migrate pools to/from an encrypted estate over time. Also, the current methods are very 'patchwork' for SCSI encryption and would be better served by having all settings directly in the config files.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005131)
bruno-at-bareos   
2023-07-04 15:26   
Sorry to close this, but this feature will not happen without community work.

As a workaround, a scratch pool dedicated to encrypted tapes can be added and configured in all pools that use such media.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1533 [bareos-core] vmware plugin major always 2023-04-29 13:41 2023-06-26 15:11
Reporter: CirocN Platform: Linux  
Assigned To: stephand OS: RHEL (and clones)  
Priority: urgent OS Version: 8  
Status: resolved Product Version: 22.0.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restoring vmware vms keeps failing, can't restore the data.
Description: First I need to mention I am new to Bareos and I have been working with it for the past couple of weeks to replace our old backup solution.
I am trying to use the VMware plugin to get backups of our vmdks for disaster recovery or if we need to extract a specific file using guestfish.
I have followed the official documents regarding setting up the VMware plugin at https://docs.bareos.org/TasksAndConcepts/Plugins.html#general
The backups of the VM in our vSphere server are successful.
But when I try to restore the backups it keeps failing with the following information:

bareos87.simlab.xyz-fd JobId 3: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk


I have tried the same steps on Bareos 21 and 20, and also tried it on Red Hat 9.1, and I kept getting the exact same error.

Tags:
Steps To Reproduce: After setting up the vmware plugin according to the official documents, I ran the backup using:
run job=vm-websrv1 level=Full
The Web-GUI shows the job instantly, and after about 10 minutes the job's status shows success.
Right after the backup is done, when I try to restore it using the Web-GUI or the console, I keep getting the same error:

19 2023-04-29 07:34:05 bareos-dir JobId 4: Error: Bareos bareos-dir 22.0.4~pre63.807bc5689 (17Apr23):
Build OS: Red Hat Enterprise Linux release 8.7 (Ootpa)
JobId: 4
Job: RestoreFiles.2023-04-29_07.33.59_43
Restore Client: bareos-fd
Start time: 29-Apr-2023 07:34:01
End time: 29-Apr-2023 07:34:05
Elapsed time: 4 secs
Files Expected: 1
Files Restored: 0
Bytes Restored: 0
Rate: 0.0 KB/s
FD Errors: 1
FD termination status: Fatal Error
SD termination status: Fatal Error
Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
Job triggered by: User
Termination: *** Restore Error ***

18 2023-04-29 07:34:05 bareos-dir JobId 4: Warning: File count mismatch: expected=1 , restored=0
17 2023-04-29 07:34:05 bareos-sd JobId 4: Releasing device "FileStorage" (/var/lib/bareos/storage).
16 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:454 Socket has errors=1 on call to client:192.168.111.136:9103
15 2023-04-29 07:34:05 bareos-sd JobId 4: Fatal error: stored/read.cc:146 Error sending to File daemon. ERR=Connection reset by peer
14 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:414 Wrote 65536 bytes to client:192.168.111.136:9103, but only 16384 accepted.
13 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'

12 2023-04-29 07:34:04 bareos-sd JobId 4: Forward spacing Volume "Full-0001" to file:block 0:627.
11 2023-04-29 07:34:04 bareos-sd JobId 4: Ready to read from volume "Full-0001" on device "FileStorage" (/var/lib/bareos/storage).
10 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
9 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
8 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
7 2023-04-29 07:34:02 bareos-dir JobId 4: Handshake: Immediate TLS
6 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Client: bareos-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
5 2023-04-29 07:34:02 bareos-dir JobId 4: Probing client protocol... (result will be saved until config reload)
4 2023-04-29 07:34:02 bareos-dir JobId 4: Using Device "FileStorage" to read.
3 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
2 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
1 2023-04-29 07:34:01 bareos-dir JobId 4: Start Restore Job RestoreFiles.2023-04-29_07.33.59_43
Additional Information:
System Description
Attached Files: Restore_Failure.png (84,322 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=554&type=bug
Backup_Success.png (78,727 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=555&type=bug
BareosFdPluginVMware.png (30,436 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=556&type=bug
docs.bareos.org-instruction.png (36,604 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=557&type=bug
BareosFdPluginVMware.patch (612 bytes) 2023-06-15 16:21
https://bugs.bareos.org/file_download.php?file_id=562&type=bug
Notes
(0004995)
bruno-at-bareos   
2023-05-03 15:37   
As all continuous tests are working perfectly, you are certainly missing something in your configuration; without the configuration nobody will be able to find the problem.

Also, if you want to show a job result, screenshots are certainly the worst way.
Please use the text log result of bconsole <<< "list joblog jobid=2" as an attachment here.
(0005003)
CirocN   
2023-05-03 23:32   
I have found out that this is the result of a mismatch between the backup path and the restore path.
The backup is created with /VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk, while the restore looks for /VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk.
I have noticed that you are trying to strip it off in BareosFdPluginVMware.py on line 366, but it is not working.

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s/%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )

My configuration for the folder is precisely what is suggested in the official document: folder=/


The workaround I have found for now was to edit the BareosFdPluginVMware.py Python script to the following code, but it needs to be properly fixed:

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )
(0005065)
stephand   
2023-06-06 17:58   
Thanks for reporting this; I can confirm that it doesn't work properly when using folder=/ for backups.
The root cause is the double slash in the backup path, like in your example
/VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk

I will provide a proper fix.
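
For illustration, here is a minimal sketch of the idea (a hypothetical helper, not the plugin's actual code or the eventual fix): build the backup path from the non-empty path components, so that folder=/ and nested folders both produce a path without a double slash.

CODE:
def build_backup_path(dc, folder, vmname):
    # Hypothetical helper, not BareosFdPluginVMware.py code: join only
    # non-empty components so folder="/" does not create "//" in the path.
    parts = [dc] + [p for p in folder.split("/") if p] + [vmname]
    return "/VMS/" + "/".join(parts)

assert build_backup_path("Datacenter", "/", "backup_client") == \
    "/VMS/Datacenter/backup_client"
assert build_backup_path("Datacenter", "/my/vmfolder", "myvm") == \
    "/VMS/Datacenter/my/vmfolder/myvm"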
(0005084)
awillem   
2023-06-15 16:21   
Hi,

here is a more elegant solution that does not break the current design in case you are using a mixed with- and without-folder VM hierarchy.

Hope this helps others.

BR
Arnaud
(0005085)
stephand   
2023-06-16 11:20   
Thanks for your proposed solution.

PR 1484 already contains a fix for this issue which will also work when using a mixed with- and without-folder VM hierarchy.
See https://github.com/bareos/bareos/pull/1484

Regards,
Stephan
(0005100)
stephand   
2023-06-26 15:11   
Fix committed to bareos master branch with changesetid 17739.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1523 [bareos-core] file daemon crash always 2023-03-13 13:09 2023-06-26 13:52
Reporter: mp Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: urgent OS Version:  
Status: resolved Product Version: 22.0.2  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: fatal error bareos-fd for full backup postgresql 11
Description: A full backup of PostgreSQL 11 crashes with the error:
Error: python3-fd-mod: Could net get stat-info for file /var/lib/postgresql/11/main/base/964374/t4_384322129: "[Errno 2] No such file or directory: '/var/lib/postgresql/11/main/base/964374/t4_384322129'"

1c-pg11-fd JobId 238: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 61, in start_backup_file
return bareos_fd_plugin_object.start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 396, in start_backup_file
return super(BareosFdPluginPostgres, self).start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", line 118, in start_backup_file
mystatp.st_mode = statp.st_mode
UnboundLocalError: local variable 'statp' referenced before assignment
Tags:
Steps To Reproduce:
Additional Information: The backup fails every time I try to back up PostgreSQL 11. At the same time, a backup of PostgreSQL 14 finishes without any problem.
Attached Files:
Notes
(0005093)
bruno-at-bareos   
2023-06-26 10:10   
(Last edited: 2023-06-26 10:11)
Is this reproducible in one way or another? It looks like your table (or part of it) was dropped during the backup.

Does a VACUUM FULL work on this table, and does the file exist?
(0005099)
bruno-at-bareos   
2023-06-26 13:52   
In issue 1520 we acknowledged that issuing a warning instead of an error should be the right behavior (see the sketch below).
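
For illustration only, here is a minimal sketch of that behavior (an assumed helper, not the plugin's actual code): a file that vanishes between the directory listing and the stat call is reported as a warning and skipped instead of aborting the whole job.

CODE:
import os

def stat_or_skip(fname, warn):
    # Hypothetical sketch: skip a file that disappeared (e.g. a dropped
    # PostgreSQL relation file) with a warning instead of raising and
    # failing the whole backup job.
    try:
        return os.stat(fname)
    except FileNotFoundError:
        warn("Skipping vanished file %s" % fname)
        return None

statp = stat_or_skip(
    "/var/lib/postgresql/11/main/base/964374/t4_384322129",
    warn=lambda msg: print("Warning:", msg),
)
if statp is None:
    pass  # continue with the next file instead of a fatal error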


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1517 [bareos-core] director block always 2023-03-01 17:24 2023-06-18 19:19
Reporter: mschiff Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: assigned Product Version: 21.1.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: compilation fails with GCC-13: cats.h:491:50: error: expected class-name before { token
Description: Reading the changelog from 21.1.5 to 21.1.6, there seems to be no related fix, so I assume this is still relevant.

Compiling bareos on a system with gcc-13 fails with many errors like:

core/src/cats/cats.h:491:50: error: expected class-name before ‘{’ token

Please see this link for a complete log:

https://895192.bugs.gentoo.org/attachment.cgi?id=852366
Tags: broken, cmake compilation, director
Steps To Reproduce: Try to compile bareos 21.1.6 using gcc-13
Additional Information: Gentoo bug: https://bugs.gentoo.org/895192
Attached Files: bareos-build-fail (139,450 bytes) 2023-03-01 17:24
https://bugs.bareos.org/file_download.php?file_id=543&type=bug
Notes
(0004954)
arogge   
2023-03-24 11:34   
Can you do a PR to fix that problem against master? We could then backport it to 22 and 21.
(0004955)
arogge   
2023-03-24 11:38   
AFAICT there is just an "#include <stdexcept>" missing in the include-block at the top of cats.h
(0004958)
bruno-at-bareos   
2023-03-27 13:27   
PR in progress https://github.com/bareos/bareos/pull/1424
(0004973)
mschiff   
2023-04-20 00:46   
I tested with PR 1424 applied and using gcc-13.0.1_pre20230409 I now get this error:

[53/362] /usr/bin/x86_64-pc-linux-gnu-g++ -D_FILE_OFFSET_BITS=64 -Dbareossd_EXPORTS -I/usr/include/tirpc -I/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src -O2 -pipe -Wsuggest-override -Wformat -Wformat-security -fdebug-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -fmacro-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -Wno-unknown-pragmas -Wall -std=gnu++17 -fPIC -Wno-deprecated-declarations -MD -MT core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -MF core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o.d -o core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -c /var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc
FAILED: core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o
/usr/bin/x86_64-pc-linux-gnu-g++ -D_FILE_OFFSET_BITS=64 -Dbareossd_EXPORTS -I/usr/include/tirpc -I/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src -O2 -pipe -Wsuggest-override -Wformat -Wformat-security -fdebug-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -fmacro-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -Wno-unknown-pragmas -Wall -std=gnu++17 -fPIC -Wno-deprecated-declarations -MD -MT core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -MF core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o.d -o core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -c /var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc: In function ‘storagedaemon::Device* storagedaemon::FactoryCreateDevice(JobControlRecord*, DeviceResource*)’:
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:41: error: expected unqualified-id before ‘&’ token
  243 | } catch (const std::out_of_range&) {
      | ^
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:41: error: expected ‘)’ before ‘&’ token
  243 | } catch (const std::out_of_range&) {
      | ~ ^
      | )
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:41: error: expected ‘{’ before ‘&’ token
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:42: error: expected primary-expression before ‘)’ token
  243 | } catch (const std::out_of_range&) {
      |
(0004974)
bruno-at-bareos   
2023-04-20 09:44   
Sorry, no backports or tests have been done, nor will be done, for bareos-21, which is no longer the current code.

The current code/version is 22.x
(0004976)
mschiff   
2023-04-20 10:04   
OK, is there published information somewhere about which branches of bareos are maintained, which are EOL, and when?

Ideally also with a plan for how long each branch will be maintained in the future.

My last information was that you actively support more than one branch and provide at least bugfix releases for the two branches before the current one (so 20 and 21 currently).
(0004977)
bruno-at-bareos   
2023-04-20 10:32   
Your information is correct for the subscription version (paying customers). As none of the supported operating systems has gcc-13 present by default, the fixes for 21 are postponed until there is a paying request (or an external contribution, of course).

Otherwise, the changes for the repositories and versions were published here:
https://www.bareos.com/bareos-release-policy/

Hope this helps you.
(0004978)
mschiff   
2023-04-20 13:23   
Thanks for the clarification. As I am packaging for Gentoo, which is source based, we don't need the binaries at all, so I am really only interested in the sources at this point.

So you are saying that you would accept PRs on GitHub for things like gcc-13 support for the maintained branches (20-22 currently).

But is there some roadmap somewhere where you describe how long each of the active branches will be supported?
Or is there some policy somewhere stating how many months a branch is supported until it is EOL after a new major branch has been released?

Thanks in advance!
(0004979)
bruno-at-bareos   
2023-04-20 13:58   
I'm not aware of anything else published beyond the official documentation and the published policy.

On the community side (download.bareos.org):

Basically there will be, as usual, one major version per year; the next one will be 23. Globally we would like to increase this pace, but for the moment it will stay as it is.
The objective is to get the major version out around the end of October.

When 23 is out, it will replace 22 in current, and people will have to move to that new version as soon as it is out.
Development of 24 will start immediately (in next) at the same time.

So basically, 22 is maintained until 23 is released.
This was decided to get better and more recent feedback instead of having people run very old code.

For you and anybody building from sources, you are of course free to do what you want and copy the policy we have for subscription customers,
meaning we currently support bareos-20, 21 and 22, with 20 being dropped once 23 is released.

For PRs: normally they are made against master, then backported when there is a legitimate interest, but they can also address a specific version, as is maybe the case here.
(0004980)
mschiff   
2023-04-20 15:31   
Thanks again for the clarification!

For the specific error in this bug, I just found out that it is just a missing

#include <stdexcept>

at the top of
  core/src/stored/dev.cc

After adding this, the source compiles fine.
(0004981)
mschiff   
2023-04-20 18:05   
https://github.com/bareos/bareos/pull/1451
(0004982)
mschiff   
2023-04-20 18:24   
And a final comment (sorry): PR 1451 and PR 1424 both need to be applied to the bareos-20 and bareos-22 branches as well, and I guess master. Will you do it, or should I create separate PRs for each branch?

TIA!
(0005072)
arogge   
2023-06-15 10:17   
I don't think we're going to add GCC 13 support to Bareos 20.
For Bareos 21, 22 and the current master it should build now. Could you please verify that it works for you?
(0005086)
mschiff   
2023-06-18 19:19   
master and 22.1 work fine! I did not see a new release of 21 yet


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1542 [bareos-core] General block always 2023-06-14 07:18 2023-06-15 12:24
Reporter: mdc Platform: x86_64  
Assigned To: arogge OS: Fedora  
Priority: normal OS Version: 38  
Status: assigned Product Version: 22.1.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: GCC13 fix looks like not complete
Description: 22.1.0 includes a fix for GCC 13, but a build using it fails with:
[ 63%] Building CXX object core/src/console/CMakeFiles/console_objects.dir/auth_pam.cc.o
cd /builddir/build/BUILD/bareos-Release-22.1.0/redhat-linux-build/core/src/console && ccache /usr/lib64/ccache/g++ -D_FILE_OFFSET_BITS=64 -I/usr/include/tirpc -I/builddir/build/BUILD/bareos-Release-22.1.0/core/s
rc -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat
-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-
leaf-frame-pointer -Wsuggest-override -Wformat -Werror=format-security -fdebug-prefix-map=/builddir/build/BUILD/bareos-Release-22.1.0/core=. -fmacro-prefix-map=/builddir/build/BUILD/bareos-Release-22.1.0/core=. -Wno-invalid-offsetof -Werror -Wall -Wextra -std=gnu++17 -MD -MT core/src/console/CMakeFiles/console_objects.dir/auth_pam.cc.o -MF CMakeFiles/console_objects.dir/auth_pam.cc.o.d -o CMakeFiles/console_objects.dir/auth_pam.cc.o -c /builddir/build/BUILD/bareos-Release-22.1.0/core/src/console/auth_pam.cc
In file included from /usr/include/features.h:491,
                 from /usr/include/bits/libc-header-start.h:33,
                 from /usr/include/stdint.h:26,
                 from /usr/lib/gcc/x86_64-redhat-linux/13/include/stdint.h:9,
                 from /builddir/build/BUILD/bareos-Release-22.1.0/core/src/include/bareos.h:97,
                 from /builddir/build/BUILD/bareos-Release-22.1.0/core/src/findlib/find_one.cc:32:
In function 'readlink',
    inlined from 'process_symlink(JobControlRecord*, FindFilesPacket*, int (*)(JobControlRecord*, FindFilesPacket*, bool), char*, bool)' at /builddir/build/BUILD/bareos-Release-22.1.0/core/src/findlib/find_one.cc:521:18:
/usr/include/bits/unistd.h:119:10: error: '*readlink' specified size 18446744073709551614 exceeds maximum object size 9223372036854775807 [-Werror=stringop-overflow=]
  119 | return __glibc_fortify (readlink, __len, sizeof (char),
      | ^~~~~~~~~~~~~~~
/usr/include/bits/unistd.h: In function 'process_symlink(JobControlRecord*, FindFilesPacket*, int (*)(JobControlRecord*, FindFilesPacket*, bool), char*, bool)':
/usr/include/bits/unistd.h:104:16: note: in a call to function '*readlink' declared with attribute 'access (write_only, 2, 3)'
  104 | extern ssize_t __REDIRECT_NTH (__readlink_alias,
      | ^~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
gmake[2]: *** [core/src/findlib/CMakeFiles/bareosfind.dir/build.make:163: core/src/findlib/CMakeFiles/bareosfind.dir/find_one.cc.o] Error 1
gmake[2]: Leaving directory '/builddir/build/BUILD/bareos-Release-22.1.0/redhat-linux-build'
gmake[1]: *** [CMakeFiles/Makefile2:3983: core/src/findlib/CMakeFiles/bareosfind.dir/all] Error 2
gmake[1]: *** Waiting for unfinished jobs....
[ 63%] Building CXX object core/src/dird/CMakeFiles/dird_objects.dir/backup.cc.o
Tags:
Steps To Reproduce:
Additional Information: gcc version: 13.1.1
It looks like the same problem as 0001535
System Description
Attached Files: build.log (657,808 bytes) 2023-06-15 12:24
https://bugs.bareos.org/file_download.php?file_id=561&type=bug
Notes
(0005073)
arogge   
2023-06-15 10:49   
I just tried to compile on Fedora 38 and it worked.
Could you provide your CMake-commandline and the effective CFLAGS/CXXFLAGS/LDFLAGS so I can reproduce your build-failure?
(0005074)
mdc   
2023-06-15 12:24   
Here is the complete build log.
Fedora sets these compiler options by calling the cmake macro:
 + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CFLAGS
+ CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CXXFLAGS
+ FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FFLAGS
+ FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FCFLAGS
+ VALAFLAGS=-g
+ export VALAFLAGS
+ LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '
+ export LDFLAGS
+ LT_SYS_LIBRARY_PATH=/usr/lib64:
+ export LT_SYS_LIBRARY_PATH
+ CC=gcc
+ export CC
+ CXX=g++
+ export CXX
+ /usr/bin/cmake -S . -B redhat-linux-build -DCMAKE_C_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_Fortran_FLAGS_RELEASE:STRING=-DNDEBUG -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_INSTALL_DO_STRIP:BOOL=OFF -DCMAKE_INSTALL_PREFIX:PATH=/usr -DINCLUDE_INSTALL_DIR:PATH=/usr/include -DLIB_INSTALL_DIR:PATH=/usr/lib64 -DSYSCONF_INSTALL_DIR:PATH=/etc -DSHARE_INSTALL_PREFIX:PATH=/usr/share -DLIB_SUFFIX=64 -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_VERBOSE_MAKEFILE=ON -DBUILD_SHARED_LIBS:BOOL=ON -DUSE_SYSTEM_CLI11:BOOL=ON -DUSE_SYSTEM_FMT:BOOL=ON -DUSE_SYSTEM_XXHASH:BOOL=ON -Dbatch-insert=yes -Ddynamic-cats-backends=yes -Ddynamic-storage-backends=yes -Dscsi-crypto=yes -Dlmdb=yes -Dndmp=yes -Dbuild_ndmjob=yes -Dacl=yes -Dlockmgr=yes -Dtraymonitor=yes -Dpostgresql=yes -Ddir-user=bareos -Ddir-group=bareos -Dsd-user=bareos -Dsd-group=bareos -Dfd-user=root -Dfd-group=bareos -Dscriptdir=/usr/share/bareos/scripts -Dbindir=/usr/bin -Dsbindir=/usr/bin -Ddir-password=XXX_REPLACE_WITH_DIRECTOR_PASSWORD_XXX -Dfd-password=XXX_REPLACE_WITH_CLIENT_PASSWORD_XXX -Dsd-password=XXX_REPLACE_WITH_STORAGE_PASSWORD_XXX -Dmon-dir-password=XXX_REPLACE_WITH_DIRECTOR_MONITOR_PASSWORD_XXX -Dmon-fd-password=XXX_REPLACE_WITH_CLIENT_MONITOR_PASSWORD_XXX -Dmon-sd-password=XXX_REPLACE_WITH_STORAGE_MONITOR_PASSWORD_XXX -Dopenssl=yes -Dbasename=XXX_REPLACE_WITH_LOCAL_HOSTNAME_XXX -Dhostname=XXX_REPLACE_WITH_LOCAL_HOSTNAME_XXX -Dsystemd=yes -Dvddk_incl=/usr/include/VMware-vix-disklib -Dvddk_lib=/usr/lib64/VMware-vix-disklib

I hope this will help.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1541 [bareos-core] General trivial have not tried 2023-06-13 12:16 2023-06-13 12:16
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 22.1.1
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 22.1.1
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
690 [bareos-core] General crash always 2016-08-23 18:19 2023-05-09 16:58
Reporter: tigerfoot Platform: x86_64  
Assigned To: bruno-at-bareos OS: openSUSE  
Priority: low OS Version: Leap 42.1  
Status: resolved Product Version: 15.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bcopy segfault
Description: With a small volume Full001 I'm using a bsr (Catalog job)
to try to copy this job to another medium (and device).

bcopy starts to work but segfaults at the end.
Tags:
Steps To Reproduce: Pick a bsr of a job in a volume containing multiple jobs, then run bcopy with a new volume on a different destination (different media).
Additional Information: gdb /usr/sbin/bcopy
GNU gdb (GDB; openSUSE Leap 42.1) 7.9.1
Copyright (C) 2015 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://bugs.opensuse.org/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/bcopy...Reading symbols from /usr/lib/debug/usr/sbin/bcopy.debug...done.
done.
(gdb) run -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Starting program: /usr/sbin/bcopy -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Missing separate debuginfos, use: zypper install glibc-debuginfo-2.19-22.1.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Detaching after fork from child process 3016.
bcopy (90): stored_conf.c:837-0 Inserting Device res: Default
bcopy (90): stored_conf.c:837-0 Inserting Director res: earth-dir
Detaching after fork from child process 3019.
bcopy (50): plugins.c:222-0 load_plugins
bcopy (50): plugins.c:302-0 Found plugin: name=autoxflate-sd.so len=16
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:299-0 Rejected plugin: want=-sd.so name=bpipe-fd.so len=11
bcopy (50): plugins.c:302-0 Found plugin: name=scsicrypto-sd.so len=16
bcopy (200): scsicrypto-sd.c:137-0 scsicrypto-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:302-0 Found plugin: name=scsitapealert-sd.so len=19
bcopy (200): scsitapealert-sd.c:99-0 scsitapealert-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (8): crypto_cache.c:55-0 Could not open crypto cache file. /var/lib/bareos/bareos-sd.9103.cryptoc ERR=No such file or directory
bcopy (100): bcopy.c:194-0 About to setup input jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:271-0 Using device: "Default" for reading.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/default
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/default dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=61fc28 size=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:171-0 add read_vol=Full-0001 JobId=0
bcopy (100): butil.c:164-0 Acquire device for read
bcopy (100): acquire.c:63-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:64-0 MediaType dcr= dev=default
bcopy (100): acquire.c:92-0 Want Vol=Full-0001 Slot=0
bcopy (100): acquire.c:106-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:174-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:193-0 dir_get_volume_info vol=Full-0001
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (100): mount.c:600-0 Must load "Default" (/var/lib/bareos/storage/default)
bcopy (100): autochanger.c:99-0 Device "Default" (/var/lib/bareos/storage/default) is not an autochanger
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (100): acquire.c:235-0 stored: open vol=Full-0001
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="Default" (/var/lib/bareos/storage/default) vol=Full-0001 mode=OPEN_READ_ONLY
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_ONLY
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_ONLY open(/var/lib/bareos/storage/default/Full-0001, 0x0, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=3 opened
bcopy (100): dev.c:580-0 preserve=0xffffde60 fd=3
bcopy (100): acquire.c:243-0 opened dev "Default" (/var/lib/bareos/storage/default) OK
bcopy (100): acquire.c:257-0 calling read-vol-label
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="Default" (/var/lib/bareos/storage/default) vol=Full-0001 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:213-0 Compare Vol names: VolName=Full-0001 hdr=Full-0001

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=Full-0001
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=Full-0001 drive="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=Full-0001 at 625718 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): dev.c:432-0 Device "Default" (/var/lib/bareos/storage/default) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): acquire.c:263-0 Got correct volume.
23-aoû 18:16 bcopy JobId 0: Ready to read from volume "Full-0001" on device "Default" (/var/lib/bareos/storage/default).
bcopy (100): acquire.c:370-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:371-0 MediaType dcr=default dev=default
bcopy (100): bcopy.c:212-0 About to setup output jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:274-0 Using device: "FileStorage" for writing.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/file/
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/file/ dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=6260d8 size=0 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:169-0 read_vol=Full-0001 JobId=0 already in list.
bcopy (120): device.c:266-0 start open_output_device()
bcopy (129): device.c:275-0 Device is file, deferring open.
bcopy (100): bcopy.c:225-0 About to acquire device for writing
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 mode=OPEN_READ_WRITE
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_WRITE
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_WRITE open(/var/lib/bareos/storage/file/catalog01, 0x2, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=4 opened
bcopy (100): dev.c:580-0 preserve=0xffffe0b0 fd=4
bcopy (100): acquire.c:400-0 acquire_append device is disk
bcopy (190): acquire.c:435-0 jid=0 Do mount_next_write_vol
bcopy (150): mount.c:71-0 Enter mount_next_volume(release=0) dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:84-0 mount_next_vol retry=0
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (200): mount.c:390-0 Before dir_find_next_appendable_volume.
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (150): mount.c:124-0 After find_next_append. Vol=catalog01 Slot=0
bcopy (100): autochanger.c:99-0 Device "FileStorage" (/var/lib/bareos/storage/file/) is not an autochanger
bcopy (150): mount.c:173-0 autoload_dev returns 0
bcopy (150): mount.c:209-0 want vol=catalog01 devvol= dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:213-0 Compare Vol names: VolName=catalog01 hdr=catalog01

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=catalog01
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=catalog01 drive="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List begin reserve_volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=catalog01 at 6258c8 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=catalog01 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:638-0 Inc walk_next use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List end new volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:438-0 Want dirVol=catalog01 dirStat=
bcopy (150): mount.c:446-0 Vol OK name=catalog01
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): mount.c:289-0 applying vol block sizes to device "FileStorage" (/var/lib/bareos/storage/file/): dcr->VolMinBlocksize set to 0, dcr->VolMaxBlocksize set to 0
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): mount.c:323-0 Device previously written, moving to end of data. Expect 0 bytes
23-aoû 18:16 bcopy JobId 0: Volume "catalog01" previously written, moving to end of data.
bcopy (100): dev.c:749-0 Enter eod
bcopy (200): dev.c:761-0 ====== Seek to 14465367
23-aoû 18:16 bcopy JobId 0: Warning: For Volume "catalog01":
The sizes do not match! Volume=14465367 Catalog=0
Correcting Catalog
bcopy (150): mount.c:341-0 update volinfo mounts=1
bcopy (150): mount.c:351-0 set APPEND, normal return from mount_next_write_volume. dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (190): acquire.c:448-0 Output pos=0:14465367
bcopy (100): acquire.c:459-0 === nwriters=1 nres=0 vcatjob=1 dev="FileStorage" (/var/lib/bareos/storage/file/)
23-aoû 18:16 bcopy JobId 0: Forward spacing Volume "Full-0001" to file:block 0:1922821345.
bcopy (100): dev.c:892-0 ===== lseek to 1922821345
bcopy (10): bcopy.c:384-0 Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1335
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2407
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3479
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4551
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5623
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6695
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7767
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8839
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9911
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10983
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12055
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=13127
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=14199
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=15271
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=16343
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=17415
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=18487
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=19559
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=20631
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=21703
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=22775
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=23847
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=24919
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=25991
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=27063
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=28135
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=29207
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=30279
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=31351
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=32423
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=33495
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=34567
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=35639
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=36711
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=37783
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=38855
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=39927
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=40999
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=42071
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=43143
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=44215
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=45287
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=46359
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=47431
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=48503
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=49575
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=50647
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=51719
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=52791
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=53863
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=54935
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=56007
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=57079
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=58151
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=59223
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=60295
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=61367
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=62439
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=63511
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=64583
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=107
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1179
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2251
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3323
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4395
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5467
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6539
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7611
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8683
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9755
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10827
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=11899
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12971
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=600 rem=200
bcopy (10): bcopy.c:384-0 End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
bcopy (200): read_record.c:243-0 End of file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
23-aoû 18:16 bcopy JobId 0: End of Volume at file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
bcopy (150): vol_mgr.c:713-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:732-0 === set not reserved vol=Full-0001 num_writers=0 dev_reserved=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:763-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:777-0 === remove volume Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List free_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:928-0 NumReadVolumes=1 CurReadVolume=1
bcopy (150): vol_mgr.c:705-0 vol_unused: no vol on "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List null vol cannot unreserve_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:947-0 End of Device reached.
23-aoû 18:16 bcopy JobId 0: End of all volumes.
bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 377 records copied.
bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
[New Thread 0x7ffff53e7700 (LWP 3015)]

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
319 lockmgr.c: No such file or directory.
Missing separate debuginfos, use: zypper install libcap2-debuginfo-2.22-16.1.x86_64 libgcc_s1-debuginfo-5.3.1+r233831-6.1.x86_64 libjansson4-debuginfo-2.7-3.2.x86_64 liblzo2-2-debuginfo-2.08-4.1.x86_64 libopenssl1_0_0-debuginfo-1.0.1i-15.1.x86_64 libstdc++6-debuginfo-5.3.1+r233831-6.1.x86_64 libwrap0-debuginfo-7.6-885.4.x86_64 libz1-debuginfo-1.2.8-6.4.x86_64
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
#1  0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x61fc58, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
#2  0x00007ffff7b9e72f in free_dcr (dcr=0x61fc28) at acquire.c:839
#3  0x00007ffff7baa43f in my_free_jcr (jcr=0x622bc8) at butil.c:215
#4  0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x622bc8) at jcr.c:641
#5  0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 3015) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
Attached Files:
Notes
(0002413)
tigerfoot   
2016-10-27 12:07   
End of the trace with full debuginfo packages installed.

27-oct-2016 12:05:29.103637 bcopy (90): mount.c:947-0 End of Device reached.
27-oct 12:05 bcopy JobId 0: End of all volumes.
27-oct-2016 12:05:29.103642 bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
27-oct-2016 12:05:29.103650 bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 309113 records copied.
27-oct-2016 12:05:29.103656 bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
27-oct-2016 12:05:29.103703 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103707 bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
27-oct-2016 12:05:29.103713 bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file)
27-oct-2016 12:05:29.103716 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103719 bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
27-oct-2016 12:05:29.103726 bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
319 lockmgr.c: No such file or directory.

Thread 1 "bcopy" received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
#1  0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x623008, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
#2  0x00007ffff7b9e72f in free_dcr (dcr=0x622fd8) at acquire.c:839
#3  0x00007ffff7baa43f in my_free_jcr (jcr=0x623558) at butil.c:215
#4  0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x623558) at jcr.c:641
#5  0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 6975) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.


(full stack will be attached)
(0002414)
tigerfoot   
2016-10-27 12:18   
trace can't be attached (log.xz is 24Mb)
you can have it here
https://dav.ioda.net/index.php/s/V7RPrq6M3KtbFc0/download
(0005031)
bruno-at-bareos   
2023-05-09 16:58   
Has been fixed in recent version 21.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
587 [bareos-core] director minor always 2015-12-22 16:48 2023-05-09 16:55
Reporter: joergs Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: low OS Version: 3  
Status: resolved Product Version: 15.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: joblog has "Backup Error", but jobstatus is set to successful ('T') if writing the bootstrap file fails
Description: If the director can't write the bootstrap file, the joblog says:

...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

however, the jobstatus is 'T':

+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| JobId | Name | Client | StartTime | Type | Level | JobFiles | JobBytes | JobStatus |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| 225 | BackupClient1 | ting.dass-it-fd | 2015-12-22 16:32:13 | B | I | 2 | 44 | T |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
Tags:
Steps To Reproduce: configure a job with

  Write Bootstrap = "/NONEXISTINGPATH/%c.bsr"

and run the job.

Compare status from "list joblog" with "list jobs".
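
For reference, a minimal Job resource sketch of this reproduction (resource names and the JobDefs are placeholders, not the reporter's actual setup; %c expands to the client name):

Job {
  Name = "BackupClient1"
  JobDefs = "DefaultJob"
  # this directory intentionally does not exist, so writing the bootstrap file fails
  Write Bootstrap = "/NONEXISTINGPATH/%c.bsr"
}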
Additional Information: list joblog jobid=...

will show something like:

...
 2015-12-22 16:46:12 ting.dass-it-dir JobId 226: Error: Could not open WriteBootstrap file:
/NONEXISTINGPATH/ting.dass-it-fd.bsr: ERR=No such file or directory
...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

However "list jobs" will show 'T'.
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0005029)
bruno-at-bareos   
2023-05-09 16:55   
Fixed in recent version.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
460 [bareos-core] file daemon minor N/A 2015-04-27 15:01 2023-05-09 16:50
Reporter: qwerty Platform: Slackware Linux  
Assigned To: pstorz OS: any  
Priority: normal OS Version: 3  
Status: feedback Product Version: 14.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: SHA1 checksum sometimes wrong
Description: For some files the checksum is wrong when doing an incremental backup in accurate mode.
Note that it is just for some files, and always the same files.
These files are not touched in any way.
They get backed up on every backup, full and incremental.
The problem is that this eats up backup media.

The FileSet setup look like this:
FileSet {
  Name = "Server1-data_opt_u"
  Include {
    Options {
      signature = SHA1
      compression = GZIP
      accurate = pins1u
      basejob = pins1u
      verify = pins1
      sparse = yes
      noatime = yes
      checkfilechanges = no
      exclude = yes

      @/opt/bareos/etc/clients.d/srv1.files/data_opt_u.wild_exclude
    }

    @/opt/bareos/etc/clients.d/srv1.files/data_opt_u.include
  }

  Exclude {
    @/opt/bareos/etc/clients.d/srv1.files/data_opt_u.exclude
  }
}

Shorted debug output from the file daemon.
One file as example.
.
.
.
srv1-fd: accurate_htable.c:97-0 add fname=</opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso> lstat=P0E Jxf IHk B H0 H0 A DpqbgA BAA dPIV BUCE69 BMKTCp BU2xiJ A A g delta_seq=0 chksum=qCVteboqSK23Btz+1XrFVVb+KpA
.
.
.
srv1-fd: verify.c:322-0 === read_digest
srv1-fd: accurate.c:55-0 lookup </opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso> ok
srv1-fd: verify.c:262-0 === digest_file
srv1-fd: bfile.c:1101-0 bopen: fname /opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso, flags 262144, mode 0, rdev 0
srv1-fd: verify.c:322-0 === read_digest
srv1-fd: verify.c:437-0 /opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso SHA1 chksum diff. Cat: qCVteboqSK23Btz+1XrFVVb+KpA File: WNmTLFTpUj5hZwRbIWmuqsx6n50
.
.
.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0001741)
pstorz   
2015-05-26 12:00   
Hello,

we fixed some problems with the digest code for 0000462, and I think that this should also solve this problem.
(see
https://github.com/bareos/bareos/commit/b75dbb84551b93a4f2359db71dea7527edfd0541 )

Please check with the newest master code if your problem is now fixed.

Thank you
(0001765)
qwerty   
2015-06-02 21:00   
No. The problem is still there.
(0001822)
backelj   
2015-09-09 10:49   
Hello, I'm experiencing the same thing with these settings:

FileSet {
  Name = test-data-fileset
  Ignore FileSet Changes = yes
  Include {
    Options {
      Compression = GZIP
      Signature = MD5
      BaseJob = pmugcs5
      Accurate = mcs5
      Verify = pins5
      sparse = yes
      aclsupport = yes
      xattrsupport = yes
      hfsplussupport = yes
      checkfilechanges = yes
    }

The initial checksum is different from all subsequent checksums which are all the same.
That is, given the output above, the checksum that is shown in the debug output for accurate_htable.c is different from the one in the debug output of verify.c. In subsequent backups, the checksum in the debug output of verify.c does not change.

I'm wondering how these checksums are computed (are they computed on the original file or on the gzipped-file?) and why the initial checksum is different...
(0001828)
backelj   
2015-09-10 11:12   
Hello again,

I've found an old thread on the bacula mailing lists that is about a similar issue:

http://sourceforge.net/p/bacula/mailman/message/20282084/

I followed the thread and did the following:
- Create a full backup.
- Run a VolumeToCatalog verify job = OK.
- Run a DiskToCatalog verify job = OK.
- Run an incremental backup = files are still copied.

At the end they talk about the "sparse = yes" setting.
If I remove this setting, all is fine again!
Moreover, the checksums in the initial full backup are different from the ones taken with the setting "sparse=yes".
In this thread, they mention that "the Verify code that creates the hash
does not take account of the Sparse flag". Maybe this is the case for the bareos code as well?

So qwerty, would you be able and willing to test this as well (i.e., without the "sparse=yes" setting)?

Many thanks,

Franky
(0001830)
qwerty   
2015-09-11 14:01   
Hi

Yes, with "Sparse = no" it works.
I missed that bacula thread.
It looks like that old bug is still there.
Thanks Franky.
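
For reference, a minimal sketch of the confirmed workaround (a shortened version of the reporter's FileSet; whether dropping sparse is acceptable depends on whether sparse-file handling is actually needed for the data):

FileSet {
  Name = "Server1-data_opt_u"
  Include {
    Options {
      signature = SHA1
      compression = GZIP
      accurate = pins1u
      verify = pins1
      # sparse = yes   # disabled: with sparse enabled the verify digest no longer matches the stored one
      noatime = yes
    }
  }
}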
(0002025)
backelj   
2015-12-03 11:34   
I'm wondering: will there be some action taken to resolve this issue, or will using "sparse=no" be the only solution to this problem?
(0005027)
bruno-at-bareos   
2023-05-09 16:50   
Is this still reproducible with a newer version like 22.x?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
458 [bareos-core] director major random 2015-04-17 10:30 2023-05-09 16:43
Reporter: Dmitriy Platform: x64  
Assigned To: pstorz OS: CentOS  
Priority: high OS Version: release 6.6  
Status: feedback Product Version: 14.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: During job execution, a volume in Used status is mistakenly recycled
Description: During job execution, a volume in Used status is mistakenly recycled.
No errors show up in bconsole, the web interface or the database, and the job finishes with status OK. The error can only be seen in the logs or when restoring data from the affected job.

In the log file, bareos-dir says: Recycled volume "krus-fs-vol-0176"
15-Апр 00:00 backup-is.enforta.net-dir JobId 8476: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0482" as Used.
16-Апр 00:00 backup-is.enforta.net-dir JobId 8530: Recycled volume "krus-fs-vol-0176"

bareos-sd, however, reports that a different volume, "krus-fs-vol-0436", was recycled (its data lost):
16-Апр 00:00 backup-is-sd JobId 8530: Recycled volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.
Tags:
Steps To Reproduce:
Additional Information: 15.04.2015 00:00: job 8476 ran and returned status OK.

15-Апр 00:00 backup-is.enforta.net-dir JobId 8476: Using Device "krus-fd-dev" to write.
15-Апр 00:00 siebel-oracle-fd JobId 8476: shell command: run ClientBeforeJob "/opt/krus/bin/bacula_pg_backup.sh Incremental start 8476"
15-Апр 00:00 backup-is.enforta.net-dir JobId 8476: Sending Accurate information.
15-Апр 00:00 backup-is.enforta.net-dir JobId 8476: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0482" as Used.
15-Апр 00:00 backup-is.enforta.net-dir JobId 8476: Recycled volume "krus-fs-vol-0436"
15-Апр 00:00 backup-is-sd JobId 8476: Recycled volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.
15-Апр 00:01 backup-is-sd JobId 8476: Elapsed time=00:01:05, Transfer rate=85.88 M Bytes/second
  Volume name(s): krus-fs-vol-0436
  Termination: Backup OK

16.04.2015 00:00: job 8530 ran and returned status OK, but in fact it destroyed the data of job 8476 on volume krus-fs-vol-0436.
16-Апр 00:00 backup-is.enforta.net-dir JobId 8530: Using Device "krus-fd-dev" to write.
16-Апр 00:00 backup-is.enforta.net-dir JobId 8530: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0436" as Used.
16-Апр 00:00 backup-is.enforta.net-dir JobId 8530: Recycled volume "krus-fs-vol-0176"
16-Апр 00:00 backup-is-sd JobId 8530: Recycled volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.
16-Апр 00:00 backup-is.enforta.net-dir JobId 8530: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0436" as Used.
16-Апр 00:01 siebel-oracle-fd JobId 8530: shell command: run ClientAfterJob "/opt/krus/bin/bacula_pg_backup.sh Incremental finish 8530"
16-Апр 00:01 backup-is-sd JobId 8530: Elapsed time=00:01:08, Transfer rate=88.62 M Bytes/second
  Volume name(s): krus-fs-vol-0176|krus-fs-vol-0436
  Termination: Backup OK

In the bareos database, job 8476 exists on volume krus-fs-vol-0436:
bareos=# select m.mediaid, m.volumename from jobmedia jm, media m where jobid=8476 and m.mediaid=jm.mediaid group by m.mediaid, m.volumename;
 mediaid | volumename
---------+------------------
     436 | krus-fs-vol-0436

bareos=# select m.mediaid, m.volumename from jobmedia jm, media m where jobid=8530 and m.mediaid=jm.mediaid group by m.mediaid, m.volumename;
 mediaid | volumename
---------+------------------
     176 | krus-fs-vol-0176
     436 | krus-fs-vol-0436

But reading the volume shows that only one job is present - 8530 (job 8476 is absent; restoring job 8476's data from volume krus-fs-vol-0436 failed):
bls -c /home/cruel/bareos-sd.conf -V krus-fs-vol-0436 krus-fd-dev -j
bls: butil.c:298-0 Using device: "krus-fd-dev" for reading.
17-Апр 10:41 bls JobId 0: Ready to read from volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/).
Volume Record: File:blk=0:226 SessId=409 SessTime=1428425778 JobId=0 DataLen=191
Begin Job Session Record: File:blk=0:64738 SessId=409 SessTime=1428425778 JobId=8530
   Job=siebel-oracle-krus-fs-log-job.2015-04-16_00.00.01_38 Date=16-Апр-2015 00:00:21 Level=I Type=B
End Job Session Record: File:blk=1:1735869214 SessId=409 SessTime=1428425778 JobId=8530
   Date=16-Апр-2015 00:01:29 Level=I Type=B Files=383 Bytes=6,026,360,124 Errors=0 Status=T
17-Апр 10:43 bls JobId 0: End of Volume at file 1 on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), Volume "krus-fs-vol-0436"
17-Апр 10:43 bls JobId 0: End of all volumes.

Volume settings:
llist volume=krus-fs-vol-0176
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
          mediaid: 176
       volumename: krus-fs-vol-0176
             slot: 0
           poolid: 13
        mediatype: File
     firstwritten: 2015-04-16 13:20:06
      lastwritten: 2015-04-17 00:00:39
        labeldate: 2015-04-17 00:00:03
          voljobs: 2
         volfiles: 4
        volblocks: 324,295
        volmounts: 6
         volbytes: 20,920,886,821
        volerrors: 0
        volwrites: 1,525,891
volcapacitybytes: 0
        volstatus: Append
          enabled: 1
          recycle: 1
     volretention: 1,209,600
   voluseduration: 86,400
       maxvoljobs: 0
      maxvolfiles: 0
      maxvolbytes: 53,687,091,200
       inchanger: 0
          endfile: 4
         endblock: 3,741,017,636
        labeltype: 0
        storageid: 11
         deviceid: 0
       locationid: 0
     recyclecount: 3
     initialwrite:
    scratchpoolid: 0
    recyclepoolid: 0
          comment:

llist volume=krus-fs-vol-0436
          mediaid: 436
       volumename: krus-fs-vol-0436
             slot: 0
           poolid: 13
        mediatype: File
     firstwritten: 2015-04-15 00:00:19
      lastwritten: 2015-04-16 00:01:29
       labeldate: 2015-04-16 13:14:27
          voljobs: 1
         volfiles: 1
        volblocks: 93,484
        volmounts: 5
         volbytes: 6,030,836,511
        volerrors: 0
        volwrites: 1,295,080
volcapacitybytes: 0
        volstatus: Used
          enabled: 1
          recycle: 1
     volretention: 1,209,600
   voluseduration: 86,400
       maxvoljobs: 0
      maxvolfiles: 0
      maxvolbytes: 53,687,091,200
        inchanger: 0
          endfile: 1
         endblock: 1,735,869,214
        labeltype: 0
        storageid: 11
         deviceid: 0
       locationid: 0
     recyclecount: 1
     initialwrite:
    scratchpoolid: 0
    recyclepoolid: 0
          comment:
System Description
Attached Files:
Notes
(0001708)
Dmitriy   
2015-05-05 08:16   
The problem keeps recurring and is observed on storage located on a NAS connected via iSCSI.

There is a suspicion that the storage goes to sleep from inactivity. At the start of the job the NAS wakes up after 30-60 seconds, and at that moment the failure occurs.
(0001742)
pstorz   
2015-05-26 12:06   
Hello,

your last comment seems to show that the problem is caused by your NAS system.

If your NAS system does not behave correctly this is not a bug in bareos.

Do you have more info? Otherwise we will close this bug

Best regards,

Philipp
(0001743)
Dmitriy   
2015-05-26 13:19   
Hello,
I turned off the drive spin-down on the NAS, so there is no sleep timeout anymore.
The problem persists.
Do you need more information to diagnose the problem?
(0001827)
pstorz   
2015-09-10 09:22   
Hello,

can you send your configuration please?

Especially the configuration of the Pools would be interesting.

Also, you can run the sd with high debug level (e.g. 500), then it will tell you why it thinks it can recycle a volume.
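
For reference, two common ways to obtain such a trace (a sketch; the storage resource name, paths and flags are placeholders and must be adapted to the actual setup):

# from bconsole, against the running storage daemon; output goes to a .trace file
# in the storage daemon's working directory
setdebug storage=<your-storage-resource> level=500 trace=1

# or run the storage daemon in the foreground with the debug level on the command line
bareos-sd -f -d 500 -c /etc/bareos/bareos-sd.conf

Remember to lower the level again afterwards (setdebug storage=<your-storage-resource> level=0 trace=0), since level 500 produces very large trace files.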
(0001835)
Dmitriy   
2015-09-16 13:58   
Hello,

I launched the bareos-sd daemon with debug level 500.
The problem recurs every 1-2 weeks.
I am waiting for the next failure and will then send the configuration and debug info.
(0001848)
Dmitriy   
2015-09-22 10:54   
Hello,

The problem occurs only on volumes in the pool krus-fs-pool when the job siebel-oracle-krus-fs-log-job (Type=Incremental) starts.

By mistake it recycled krus-fs-vol-0187, which contained the previous incremental backup.

jobid 18190
22-Сен 00:00 backup-is.enforta.net-dir JobId 18190: Recycled volume "krus-fs-vol-0189"
22-Сен 00:00 backup-is-sd JobId 18190: Recycled volume "krus-fs-vol-0187" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.

Job log
https://drive.google.com/file/d/0BwXK64WOr_77NkcxRXNzNHF6cVk/view?usp=sharing

Full debug bareos-sd level 500
https://drive.google.com/file/d/0BwXK64WOr_77U3JNbV96dmpiMGc/view?usp=sharing

grep 'krus-fs-vol-0187\|krus-fs-vol-0187\|=18190' bareos-sd1.debug >1.log
https://drive.google.com/file/d/0BwXK64WOr_77Y2tFMlZmdWZpQms/view?usp=sharing

Config file
https://drive.google.com/file/d/0BwXK64WOr_77UERqSEl5NzhFV28/view?usp=sharing
(0005024)
bruno-at-bareos   
2023-05-09 16:43   
Is this still reproducible with a recent version like 22.x?


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
301 [bareos-core] director tweak always 2014-05-27 17:02 2023-05-09 16:41
Reporter: alexbrueckel Platform:  
Assigned To: bruno-at-bareos OS: Debian 7  
Priority: low OS Version:  
Status: resolved Product Version: 13.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Inconsistency when configuring bandwith limitation
Description: Hi,
while configuring different jobs for a client, some with bandwidth limitation, I noticed that every configuration item can be placed in quotation marks except the desired maximum bandwidth.

It's a bit inconsistent this way, so it would be great if this could be fixed.

Thank you very much
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0000889)
mvwieringen   
2014-05-31 22:04   
An example and the exact error would be handy. It's probably some missing
parsing, as all config code uses the same config parser. But without a
clear example and the exact error it's not something we can go on.
(0000899)
alexbrueckel   
2014-06-04 17:37   
Hi,

here's the example that works:
Job {
  Name = "myhost-backupjob"
  Client = "myhost.mydomain.tld"
  JobDefs = "default"
  FileSet = "myfileset"
  Maximum Bandwidth = 10Mb/s
}

Note that the bandwidth value has no quotation marks.

That's an example that doesn't work:
Job {
  [same as above]
  Maximum Bandwidth = "10Mb/s"
}

The error message i get in this case is:
ERROR in parse_conf.c:764 Config error: expected a speed, got: 10Mb/s

Hope that helps and thanks for your work.
Alex
(0000900)
mvwieringen   
2014-06-06 15:39   
It seems that the config engine only allows quoting for strings, i.e. numbers
are not allowed to have quotation marks. As the speed gets parsed by the same function
as a number, it currently doesn't allow you to use quotes. You can indeed argue
that it's inconsistent, but it seems to be as envisioned by the original creator of
the config engine. We might change this one day, but for now I wouldn't hold my
breath for it to occur any time soon. There are just more important things to do.
(0001086)
joergs   
2014-12-01 16:13   
I added some notes about this to the documentation.
(0005023)
bruno-at-bareos   
2023-05-09 16:41   
documentation updated.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1528 [bareos-core] General trivial have not tried 2023-03-24 11:32 2023-03-24 11:32
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 21.1.8
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 21.1.8
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
782 [bareos-core] director minor have not tried 2017-02-09 16:12 2023-03-23 16:20
Reporter: hostedpower Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Disk full not adequately detected
Description: Hi,


Once the disk holding the volumes is full, Bareos keeps creating ghost volumes over and over instead of stopping when writing doesn't work. Is there some setting to prevent this, is this a bug, or is it by design somehow?

For example, the disk was full and it created several volumes which could never actually be written because of that. I have the feeling there should be a better method?


09-Feb 09:20 bareos-sd JobId 809: End of medium on Volume "vol-cons-0768" Bytes=158,430,174,241 Blocks=2,455,825 at 09-Feb-2017 09:20.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0773" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0773" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0773" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0773" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0774" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0774" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0774" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0775" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0775" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0775" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0776" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0776" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0776" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0777" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0777" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0777" in Error in Catalog.
09-Feb 09:20 bareos-sd JobId 809: Please mount append Volume "vol-cons-0777" or label a new one for:
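
As for "some setting to prevent this": it does not fix the missing disk-full detection, but the number and size of volumes the director may create can be capped in the Pool resource. A hedged sketch (pool name and limits are examples, not the reporter's values):

Pool {
  Name = "ConsolidationPool"
  Pool Type = Backup
  Maximum Volumes = 10          # stop labeling new volumes once 10 exist in the pool
  Maximum Volume Bytes = 50G    # cap each volume so a full disk is hit predictably
}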
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1524 [bareos-core] file daemon major always 2023-03-14 16:11 2023-03-14 16:11
Reporter: Petya Genius Platform: Linux  
Assigned To: OS: RHEL (and clones)  
Priority: high OS Version: 8  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bpipe does not track the status of the running program
Description: Hello!
When using the plugin bpipe, I found that this plugin does not track the execution status of the running program. If a running program in bpipe fails, the backup task itself completes successfully without checking the status of the running program.

For example:
Plugin = "bpipe:file=/var/backup:reader=bash -c 'pg_basebackup --host=testhost --username=postgres --wal-method=none --format=tar -D -':writer=bash -c 'tar xf - -C /var/backup'"

If pg_basebackup fails with an error (the host is unavailable, the host turned off during the backup, etc.), the backup task itself will succeed.
Anything can be written in bash -c; the execution status will not be tracked. If the program fails, nothing is written to stderr.

Thank you in advance!
Tags:
Steps To Reproduce: File Set {
  Name = "test-fileset"
  Description = "FileSet for tests"
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "bpipe:file=/var/backup:reader=bash -c 'pg_basebackup --host=testhost --username=postgres --wal-method=none --format=tar -D -':writer=bash -c 'tar xf - -C /var/backup'"
    }
}
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1447 [bareos-core] file daemon tweak always 2022-04-06 14:12 2023-03-07 12:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restore of unencrypted files on an encrypted fd throws an error, but works.
Description: When restoring files from a client that stores its files unencrypted onto a client that normally only runs encrypted backups, the restore will work, but an error is thrown.
Tags:
Steps To Reproduce: Sample config:
Client A:
Client {
...
PKI Signatures = Yes
PKI Encryption = Yes
PKI Cipher = aes256
PKI Master Key = ".../master.key"
PKI Keypair = ".../all.keys"
}
Client B:
Client {
...
# without the cryptor config
}

Both can do their own backup and restore to the storage. But when a restore of files from client B is done on client A, the files are restored as requested, yet for every file an error is logged:
clienta JobId 72: Error: filed/crypto.cc:168 Missing cryptographic signature for /var/tmp/bareos/var/log/journal/e882cedd07af40b386b29cfa9c88466f/user-70255@bdb4fa2d506c45ba8f8163f7e4ee7dac-0000000000b6f8c1-0005d99dd2d23d5a.journal
and the whole job is marked as failed.
Additional Information: Because the restore itself works, I think the job should only be marked as "OK with warnings", and the "Missing cryptographic signature ..." message should be a warning instead of an error.
System Description
Attached Files:
Notes
(0004902)
bruno-at-bareos   
2023-03-07 12:09   
Thank you for your report. In a bug triage session, we came to the following conclusion for this case.
We completely understand the case and agree it should be handled better by the code.

The workaround is to change your configuration: with the parameter PKI Signatures = Yes you are requesting that you normally care about the signature for all data, so the job gets its failing status. If you need to restore unencrypted data to that client, you should comment out that parameter for the duration of the restore.

On our side, nobody will work on that improvement, but feel free to propose a fix in a PR on github.
Thanks
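
For reference, a minimal sketch of that workaround applied to the reporter's client A resource (only the signature directive is touched, exactly as described in the note above; revert it after the restore):

Client {
  ...
  # PKI Signatures = Yes   # temporarily commented out while restoring client B's unsigned data
  PKI Encryption = Yes
  PKI Cipher = aes256
  PKI Master Key = ".../master.key"
  PKI Keypair = ".../all.keys"
}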


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1512 [bareos-core] installer / packages major always 2023-02-07 04:48 2023-02-07 13:37
Reporter: MarceloRuiz Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Updating Bareos pollutes/invalidates user settings
Description: Updating Bareos recreates all the sample configuration files inside '/etc/bareos/' even if the folder exists and contains a working configuration.
Tags:
Steps To Reproduce: Install Bareos, configure it using custom filenames, update Bareos.
Additional Information: Bareos should not touch an existing '/etc/bareos/' folder. As it is, a user spends a considerable amount of time configuring the program, and a simple system update can invalidate the whole configuration so that Bareos won't even start.
If there is a need to provide a sample configuration, do it in a separate folder, like '/etc/bareos-sample-config', so it won't break a working configuration. The installer/updater could even delete that folder before the install/update and recreate it to provide an up-to-date example of the configuration for the current version without risking breaking anything.
Attached Files:
Notes
(0004874)
bruno-at-bareos   
2023-02-07 13:37   
What OS are you using ?

The documentation states that you should not remove any files installed by your package manager, as they will be reinstalled if you delete them.
rpm will create rpmnew or rpmold files for you when packaged files have been changed.
make install creates .new and .old files if existing files are already there or have been changed.

One of the best ways is to simply comment out the contents or keep the files empty, so no changes will happen on update.

This is how it currently is, as we haven't found a better way to make the product as easy as possible for newcomers while proposing a ready-to-use configuration.

On the expert side, you can also simply create your personal /etc/bareos-production structure and create a systemd override for the service to point to that location.
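
A sketch of that expert-side approach (the unit name, binary path, foreground flag and the use of -c pointing at a directory are assumptions about a typical installation, not something stated in this report; check the real unit name with systemctl list-unit-files):

# /etc/systemd/system/bareos-director.service.d/override.conf  (hypothetical drop-in)
[Service]
ExecStart=
# whether -f (foreground) is appropriate depends on the unit's Type= setting
ExecStart=/usr/sbin/bareos-dir -f -c /etc/bareos-production

After systemctl daemon-reload and a restart of the director, the configuration is read from the alternate directory and the packaged files under /etc/bareos can stay untouched.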
(0004875)
bruno-at-bareos   
2023-02-07 13:37   
No changes will occur.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1503 [bareos-core] storage daemon major always 2022-12-25 04:13 2023-01-26 10:36
Reporter: cmlara Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 20.04.4  
Status: new Product Version: 22.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Bareos-sd with libdroplet(s3) storage leaves sockets in CLOSE_WAIT state (FD exhaustion/resource leak)
Description: bareos-dir Version: 22.0.1~pre (21 December 2022) Ubuntu 20.04.4 LTS
s3-us-east-2-sd Version: 22.0.1~pre (21 December 2022) Ubuntu 20.04.4 LTS

Director and SD are on different hosts.

On a system with bareos-sd configured for S3 storage via libdroplet, it appears that each individual job causes at least one FD to remain open in CLOSE_WAIT state. I've been tracking this on 21.0.0 for a while. Now that 22.0.1-pre is published as the current release, I upgraded and tested against it to confirm it was not fixed in the past year.

---- Logs from 21.0.1-pre

$ ps afux |grep -i bareos-sd
ubuntu 556449 0.0 0.0 8164 660 pts/0 S+ 02:49 0:00 \_ grep --color=auto -i bareos-sd
bareos 555014 0.0 2.4 307808 24432 ? Ssl Dec24 0:00 /usr/sbin/bareos-sd -f

$ sudo lsof -p 555014
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
bareos-sd 555014 bareos 3u IPv4 75608807 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 4u IPv6 75608808 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 6u IPv4 75609055 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:40150->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 7u IPv4 75608950 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:39234->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 8u REG 202,1 30374841 512018 /var/lib/bareos/s3-us-east-2-sd.trace
bareos-sd 555014 bareos 9u IPv4 75611304 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:45236->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)

<ran a small backup>


$ sudo lsof -p 555014
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
bareos-sd 555014 bareos 3u IPv4 75608807 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 4u IPv6 75608808 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 6u IPv4 75609055 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:40150->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 7u IPv4 75608950 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:39234->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 8u REG 202,1 30438986 512018 /var/lib/bareos/s3-us-east-2-sd.trace
bareos-sd 555014 bareos 9u IPv4 75611304 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:45236->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 11u IPv4 75624999 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:42052->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 12u IPv4 75625002 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:33668->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 13u IPv4 75625007 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:59050->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 14u IPv4 75625012 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:42476->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
Tags:
Steps To Reproduce: Run a backup where storage is backed by S3.
Additional Information:
Attached Files:
Notes
(0004857)
bruno-at-bareos   
2023-01-12 16:08   
Hello,

TCP/IP state is not handled by Bareos but by your own operating system.
It looks like a FIN/CLOSE is sent but no FIN-ACK is received back, and thus the CLOSE_WAIT state remains.

We have several test setups and running OSes, but we are not able to reproduce this.
Could you check with your network admin whether something is blocking the normal TCP/IP workflow?
(0004861)
cmlara   
2023-01-12 22:04   
(Last edited: 2023-01-12 22:07)
"TCP/IP state is not handled by Bareos but by your own operating system."
True, however on Linux CLOSE_WAIT generally means that the remote side has closed the socket and the local system is waiting on the user software to either close() or shutdown() the socket it has open for the connection.

I'm running these connections inside the same region of AWS services, with a Lightsail instance hosting the Bareos-SD and the VPC Gateway configured with an S3 target (the S3 connections are routed by AWS internally on their fabric at the gateway/router). No firewall blocking enabled.

I performed an unencrypted network capture to get a better idea of what is going on (contains credentials and real data so I can't post the file)

What I observed for a sequence of operations was:

1) SD: HTTP HEAD of /S3-AI-Incremental-0159/0000 (this is the current active object chunk for this particular media)
2) AWS_S3: HTTP 200 Response with head results
3) SD: HTTP PUT /S3-AI-Incremental-0159/0000 (this is the backup data being uploaded)
4) AWS_S3: HTTP OK Response
5) SD: HEAD of /S3-AI-Incremental-0159/0000 (I presume validating the upload was truly successful, or getting latest data about the file to avoid possible write collisions)
6) AWS_S3: HTTP 200 with Head results
7) SD: HTTP HEAD of /S3-AI-Incremental-0159/0001 --- this would be the next chunk file if it crosses the chunksize threshold, not sure why this occurred at this point. This file shouldn't exist yet.
8) AWS_S3: HTTP 404 to HEAD request (expected)
9) OS of server hosting the SD sends a TCP ACK to the packet in step 8 (note: these have been sent for other packets as well, this is just the first relevant packet for discussion)

Approximately 23 seconds later (a timeout has likely occurred at the S3 bucket web server related to keeping the connection open):

10) AWS_S3: Sends FIN+ACK acknowledging the ACK from step 9 and requesting to close the connection.
11) OS of server with SD: Sends an ACK to the packet in step 10, allowing the S3 bucket to close the connection. Locally the connection moves to CLOSE_WAIT.

Now that the connection has been closed on the bucket the OS is waiting on Bareos/libdroplet to read the last of the buffers (if any data is in them) and close() or shutdown() the socket which will generate another FIN/ACK cycle for the two-way shutdown. This does not appear to ever occur and as such the FD is left open and the connection remains in CLOSE_WAIT until the Bareos SD is restarted.

I will note that by the time step 9 occurs, but before step 10, the Bareos console already indicates the backup was a success, which makes sense as the data is fully stored in the bucket at this time. This makes me suspect that some sort of socket shutdown by the SD/libdroplet should happen after step 9 but isn't occurring, and instead the connection is timed out by the S3 bucket. Alternately, whatever task should occur after step 11 isn't happening and the socket remains consumed.

If there are any config files or additional logs that could be useful please let me know.
(0004862)
cmlara   
2023-01-14 03:37   
Looking at the raw HTTP, coupled with some trace logging I had done in the past, helped me get a better understanding of what was going on.

Some context on the steps above:
Steps 3-4 are likely DropletDevice::d_write->ChunkedDevice::WriteChunked->ChunkedDevice::FlushChunk->DropletDevice::FlushRemoteChunk() which eventually leads to dpl_s3_put().
Steps 5-8 are likely DropletDevice::d_flush->ChunkedDevice::WaitUntilChunksWritten()->ChunkedDevice::is_written()->DropletDevice::RemoteVolumeSize() being called which leads eventually to dpl_s3_head_raw(). It is expected behavior that it keeps going until it can't find a chunk to know that all chunks are accounted for.

What I'm observing on a preliminary review is that in src/droplet/libdroplet/src/backend/s3/backend/*.c connections are opened by calling `dpl_try_connect(ctx, req, &conn);` where &conn is returned as the connection to use for communicating with the S3 bucket.

At a quick glance, all the source files tend to include this piece of code:
```
  if (NULL != conn) {
    if (1 == connection_close)
      dpl_conn_terminate(conn);
    else
      dpl_conn_release(conn);
  }
```
The value `connection_close` appears to be set to `1` only in cases of unexpected errors, so after a successful backup and the volume size validation at step 8, dpl_conn_release(conn) is used again, which returns the connection to the pool for future use, where it eventually dies from non-use.

I'm suspecting that DropletDevice::d_close() may be missing a step that would lead to calling dpl_conn_terminate() to close the connection and cleanup the sockets.
(0004870)
arogge   
2023-01-26 09:19   
You're probably right.
However, AFAICT droplet is supposed to handle this on its own which it doesn't seem to do (correctly). At least not in your use-case.
(0004871)
arogge   
2023-01-26 09:44   
After having yet another look, the connection pooling in droplet/libdroplet/src/conn.c is simply not able to handle that.
We would have to have some kind of job that looks at all connections in the connection pool and shuts them down if the remote end closed the connection.

If I understand the current code correctly, it will only look at a connection again when it tries to re-use it, and then it should clean it up. So while you're seeing some sockets in CLOSE_WAIT, the number of sockets in that state should not steadily increase, but max out at some arbitrary limit (probably the number of devices times the number of threads).
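As a generic illustration of that kind of check (again not libdroplet code, and the function name is made up): a pooled connection whose peer has already closed can be detected with a non-blocking MSG_PEEK probe, so a maintenance pass over the pool could close such descriptors instead of leaving them in CLOSE_WAIT until the next reuse attempt.

```
/* Hypothetical sketch of a liveness probe for a pooled connection.
 * recv() returning 0 means the peer already sent FIN (CLOSE_WAIT locally),
 * so the fd should be closed rather than handed out again. */
#include <sys/socket.h>
#include <unistd.h>
#include <errno.h>

/* Returns 1 if the connection is still usable, 0 if it was closed here. */
static int pooled_conn_usable(int fd)
{
    char byte;
    ssize_t n = recv(fd, &byte, 1, MSG_PEEK | MSG_DONTWAIT);

    if (n > 0)
        return 1;                       /* unread data pending, still open */
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 1;                       /* nothing pending, still open */

    close(fd);                          /* peer closed (n == 0) or error */
    return 0;
}
```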
(0004872)
cmlara   
2023-01-26 10:36   
I currently have 3 'S3 devices' configured (each one limited to a single job at a time due to the S3 interleaving restrictions).

Just did a restart of the SD (the last restart was on the 12th), at which point I was at 1009 FDs, 6 of them needed by bareos-sd and the remaining 1003 being CLOSE_WAIT sockets. If I don't proactively restart the service I always hit the configured FD ulimit of 1024 and backups fail.

If there is an upper limit it appears to be quite high.

On my initial look I did wonder why I wasn't simply seeing a reuse of the socket that would lead to an error and its closing. My only theory is that 'ctx', as passed to dpl_try_connect(), is reset somewhere (freed or lost) and that each 'session' (whatever a session is in this case) gets a new context with no knowledge of the previous sockets, and as such never tries to reuse or clean them up. However, I haven't been able to fully follow the code flow to confirm that this is accurate.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1502 [bareos-core] General trivial have not tried 2022-12-21 13:42 2022-12-21 13:42
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 23.0.0
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 23.0.0
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1498 [bareos-core] webui minor random 2022-12-15 15:00 2022-12-21 11:47
Reporter: alexanderbazhenov Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Failed to send result as json. Maybe the result message is too long?
Description: Got something like this again in 21.0 version: https://bugs.bareos.org/view.php?id=719

Failed to retrieve data from Bareos director
Error message received from director:

Failed to send result as json. Maybe the result message is too long?
Tags: director, job, postgresql, ubuntu20.04, webui
Steps To Reproduce: Don't know the exact steps, but as I understand it, it happens when more volumes or output are involved, especially when you run a script on a client, e.g. a GitLab dump:

sudo mkdir /var/opt/gitlab/backups
sudo chown git /var/opt/gitlab/backups
sudo gitlab-rake gitlab:backup:create SKIP=artifacts

Once you get this, you will not be able to open any job details (the same message appears every time) until all backup jobs have finished.
Additional Information: I don't know whether it is a webui bug or just the director, but there is no error in the director logs.

Additional info:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep bareos
ii bareos 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - metapackage
ii bareos-bconsole 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-director 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-storage 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common tools
ii bareos-traymonitor 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - tray monitor
ii bareos-webui 21.0.0-4 all Backup Archiving Recovery Open Sourced - webui

PostgreSQL installed with defaults:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep postgresql
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii pgdg-keyring 2018.2 all keyring for apt.postgresql.org
ii postgresql-14 14.1-2.pgdg20.04+1 amd64 The World's Most Advanced Open Source Relational Database
ii postgresql-client-14 14.1-2.pgdg20.04+1 amd64 front-end programs for PostgreSQL 14
ii postgresql-client-common 232.pgdg20.04+1 all manager for multiple PostgreSQL client versions
ii postgresql-common 232.pgdg20.04+1 all PostgreSQL database-cluster manager

root@bareos:/etc/bareos/bareos-dir.d/catalog# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

Any ideas? Or what other info should I provide?
Attached Files: joblog_jobid4230_json.log (282,928 bytes) 2022-12-15 19:15
https://bugs.bareos.org/file_download.php?file_id=539&type=bug
Notes
(0004839)
bruno-at-bareos   
2022-12-15 16:54   
To help debugging, it would be nice to have at least one of the offending joblogs, which can be extracted with bconsole.
Please do so and attach the output here (if < 2MB) or on an accessible share.

Developers may also be interested in the same output as JSON; to get it you can switch the bconsole output to ".api 2":

bconsole <<<"@output /var/tmp/joblog_jobidXXXX_json.log
.api 2
list joblog jobid=XXXX
"

where XXXX is the problematic jobid.
(0004840)
alexanderbazhenov   
2022-12-15 19:15   
Here is one of them.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1450 [bareos-core] documentation tweak always 2022-04-20 10:12 2022-11-10 16:53
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Wrong link to git hub
Description: The GH link in
https://docs.bareos.org/TasksAndConcepts/Plugins.html#python-fd-plugin
points to:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/options-plugin-sample
But correct will be:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/bareos_option_example
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004572)
bruno-at-bareos   
2022-04-20 11:44   
Thanks for your report.
We have a fix in progress for that in https://github.com/bareos/bareos/pull/1165
(0004573)
bruno-at-bareos   
2022-04-21 10:21   
PR1165 merged (master), PR1167 Bareos-21 in progress
(0004576)
bruno-at-bareos   
2022-04-21 15:16   
Fix for bareos-21 (default) documentation has been merged too.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1445 [bareos-core] bconsole minor always 2022-03-31 08:35 2022-11-10 16:52
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.1.3  
    Target Version:  
Summary: Quotes are missing at the director name on export
Description: When calling configure export client="Foo" on the console, the quotes around the director name are missing in the exported file.
Instead of:
Director {
  Name = "Bareos Director"
this will be exported:
Director {
  Name = Bareos Director

As written in the documentation, quotes must be used when the string contains a space.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004562)
bruno-at-bareos   
2022-03-31 10:06   
Hello, I've just confirmed the missing quotes on export.
But even if spaces are allowed in such resource names, we really advise you to avoid them; they will hurt you in a lot of situations.
Spaces in names, for example, also don't work well with autocompletion in bconsole, etc.

It is safer to treat a resource Name like an FQDN, using only ASCII alphanumeric characters and .-_ as special characters.


Regards
(0004577)
bruno-at-bareos   
2022-04-25 16:49   
PR1171 in progress.
(0004590)
bruno-at-bareos   
2022-05-04 17:10   
PR 1171 merged + backport PR 1173 for Bareos 21 merged;
will appear in the next 21.1.3 release.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1429 [bareos-core] documentation major have not tried 2022-02-14 16:29 2022-11-10 16:52
Reporter: abaguinski Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Mysql to Postgres migration howto doesn't explain how to initialise the postgres database
Description: I'm trying to figure out how to migrate the catalog from mysql to postgres but I think I'm missing something. The howto (https://docs.bareos.org/Appendix/Howtos.html#prepare-the-new-database) suggests: "Firstly, create a new PostgreSQL database as described in Prepare Bareos database" and links to this document: "https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#prepare-bareos-database", which in turn instructs to run a series of commands that would initialize the database (https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#id9):

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
su postgres -c /usr/lib/bareos/scripts/make_bareos_tables
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges

However, these commands assume that I mean the currently configured MySQL catalog and fail because the MySQL backend is deprecated:

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
Creating mysql database
The MySQL database backend is deprecated. Please use PostgreSQL instead.
Creating of bareos database failed.

Does that mean I first have to "Add the new PostgreSQL database to the current Bareos Director configuration" (second sentence of the Howto section) and only then go back to the first sentence? Shouldn't the sentences be swapped then (except for "Firstly, ")? And will the create_bareos_database understand which catalog I mean when I configure two catalogs at the same time?

Tags:
Steps To Reproduce: 1. Install bareos 19 with mysql catalog
2. upgrade to bareos 20
3. try to follow the howto exactly as it is written
Additional Information:
Attached Files:
Notes
(0004527)
bruno-at-bareos   
2022-02-24 15:56   
I've been able to reproduce the problem, which is due to missing keywords in the documentation (passing the db driver to the scripts).

At the PostgreSQL database creation stage, could you retry using these commands:

  su - postgres /usr/lib/bareos/scripts/create_bareos_database postgresql
  su - postgres /usr/lib/bareos/scripts/make_bareos_tables postgresql
  su - postgres /usr/lib/bareos/scripts/grant_bareos_privileges postgresql

After that you should be able to use bareos-dbcopy as documented.
Please confirm this works for you; I will then propose an update to the documentation.
(0004528)
abaguinski   
2022-02-25 08:51   
Hi

Thanks for your reaction!

In the meantime we were able to migrate to PostgreSQL with a slight difference in the order of steps: 1) added the new catalog resource to the director configuration, 2) created and initialized the PostgreSQL database using these scripts. Indeed, we found that the 'postgresql' argument was necessary.

Since we have already done it in this order, I unfortunately cannot confirm whether adding the argument alone was enough (i.e. whether the scripts with the extra argument would work without the catalog resource).

Greetings,
Artem
(0004529)
bruno-at-bareos   
2022-02-28 09:29   
Thanks for your feedback,
Yes, the scripts would have worked without the second catalog resource when you give them the db type.

I will update the documentation to be more precise in that sense.
(0004530)
bruno-at-bareos   
2022-03-01 15:33   
PR#1093 and PR#1094 are currently in review.
(0004543)
bruno-at-bareos   
2022-03-21 10:56   
PR1094 for updating documentation has been merged.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1480 [bareos-core] documentation minor always 2022-08-30 12:33 2022-11-10 16:51
Reporter: crameleon Platform: Bareos 21.1.3  
Assigned To: frank OS: SUSE Linux Enterprise Server  
Priority: low OS Version: 15 SP4  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: password string length limitation
Description: Hi,

if I try to log into the web console with the following configuration snippet active:

Console {
  Name = "mygreatusername"
  Password = "SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq"
  Profile = "mygreatwebuiprofile"
  TLS Enable = No
}

The web UI prints the following message:

"Please provide a director, username and password."

If I change the password line to something more simple:

Console {
  Name = "suse-superuser"
  Password = "12345"
  Profile = "webui-superadmin"
  TLS Enable = No
}

Login works as expected.

Since the system does not seem to print any error messages about invalid passwords in its configuration, it would be nice if the allowed characters and lengths (and possibly a sample `pwgen -r <forbidden characters> <length> 1` command) were documented.

Best,
Georg
Tags:
Steps To Reproduce: 1. Configure a web UI user with a complex password such as SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq
2. Copy paste username and password into the browser
3. Try to log in
Additional Information:
Attached Files:
Notes
(0004737)
bruno-at-bareos   
2022-08-31 11:16   
Thanks for your report; the title is a bit misleading, as the problem seems to be present only with the webui.
Having a strong password like the one described works perfectly with dir<->bconsole, for example.

We are now checking where the problem really occurs.
(0004738)
bruno-at-bareos   
2022-08-31 11:17   
Long or complicated passwords are truncated during the POST operation of the login form.
Those passwords work well with bconsole, for example.
(0004739)
crameleon   
2022-08-31 11:28   
Apologies, I did not consider it to be specific to the webui. Thanks for looking into this! Maybe the POST truncation could be adjusted in my Apache web server?
(0004740)
bruno-at-bareos   
2022-08-31 11:38   
Further research has shown that the length is what matters: the password for a webui console should be at most 64 characters.
Maybe you can also confirm this on your installation, so that when our devs look into it the symptoms will be more precise.
(0004741)
crameleon   
2022-09-02 19:00   
Can confirm, with 64 characters it works fine!
(0004742)
crameleon   
2022-09-02 19:02   
And I can also confirm, with one more character, so 65 in total, it returns the "Please provide a director, username and password." message.
(0004744)
frank   
2022-09-08 15:23   
(Last edited: 2022-09-08 16:33)
The form data input filter for the password input is set to validate a password length between 1 and 64. We can simply remove the max value from the filter to avoid problems like this, or set it to a value corresponding to what is allowed in configuration files.
(0004747)
frank   
2022-09-13 18:11   
Fix committed to bareos master branch with changesetid 16581.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1489 [bareos-core] webui minor always 2022-11-02 06:23 2022-11-09 14:11
Reporter: dimmko Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 22.04  
Status: resolved Product Version: 21.1.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: broken storage pool link
Description: Hello!
Sorry for my very bad English!

I get an error when going to see the details at
bareos-webui/pool/details/Diff

Tags:
Steps To Reproduce: 1) login in webui
2) click on jobid
3) click on "+"
4) click on pool - Full (for example).
Additional Information: Error:

An error occurred
An error occurred during execution; please try again later.
Additional information:
Exception
File:
/usr/share/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php:94
Message:
Missing argument.
Stack trace:
#0 /usr/share/bareos-webui/module/Pool/src/Pool/Controller/PoolController.php(137): Pool\Model\PoolModel->getPool()
#1 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Pool\Controller\PoolController->detailsAction()
#2 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch()
#3 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
#4 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
#5 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger()
#6 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch()
#7 [internal function]: Zend\Mvc\DispatchListener->onDispatch()
#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
#9 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger()
#11 /usr/share/bareos-webui/public/index.php(46): Zend\Mvc\Application->run()
#12 {main}
Attached Files: bareos_webui_error.png (63,510 bytes) 2022-11-02 06:23
https://bugs.bareos.org/file_download.php?file_id=538&type=bug
Notes
(0004821)
bruno-at-bareos   
2022-11-03 10:18   
To try to understand the error we need your pool configuration; maybe you can also use your browser console to log the POST and GET responses and headers.
Maybe you can also make the effort to check the php-fpm log (if used) and the Apache logs (access and error) when the problem occurs.

Thanks.
(0004824)
dimmko   
2022-11-07 09:01   
(Last edited: 2022-11-07 09:05)
bruno-at-bareos, thanks for your comment.

1) my pool - Diff
Pool {
  Name = Diff
  Pool Type = Backup
  RecyclePool = Diff
  Purge Oldest Volume = yes
  Recycle = no
  Recycle Oldest Volume = no
  AutoPrune = no
  Volume Retention = 21 days
  ActionOnPurge = Truncate
  Maximum Volume Jobs = 1
  Label Format = "${Client}_${Level}_${Pool}.${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}-${Minute:p/2/0/r}_${JobId}"
}

apache2 access.log
[07/Nov/2022:10:40:58 +0300] "GET /pool/details/Diff HTTP/1.1" 500 3225 "http://192.168.5.16/job/?period=1&status=Success" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"

apache error.log
[Mon Nov 07 10:50:09.844798 2022] [php:warn] [pid 1340] [client 192.168.1.13:61800] PHP Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /usr/share/bareos-webui/vendor/zendframework/zend-stdlib/src/ArrayObject.php on line 426, referer: http://192.168.5.16/job/?period=1&status=Success


In Chrome (103):
General:
Request URL: http://192.168.5.16/pool/details/Diff
Request Method: GET
Status Code: 500 Internal Server Error
Remote Address: 192.168.5.16:80
Referrer Policy: strict-origin-when-cross-origin

Response Headers:
HTTP/1.1 500 Internal Server Error
Date: Mon, 07 Nov 2022 07:59:54 GMT
Server: Apache/2.4.52 (Ubuntu)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Length: 2927
Connection: close
Content-Type: text/html; charset=UTF-8

Request Headers:
GET /pool/details/Diff HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7
Cache-Control: no-cache
Connection: keep-alive
Cookie: bareos=o87i7ftkdsf2r160k2j0g5vic2
DNT: 1
Host: 192.168.5.16
Pragma: no-cache
Referer: http://192.168.5.16/job/?period=1&status=Success
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
(0004825)
dimmko   
2022-11-07 09:18   
After enabling display_errors in PHP:

[Mon Nov 07 11:17:57.573002 2022] [php:error] [pid 1545] [client 192.168.1.13:63174] PHP Fatal error: Uncaught Zend\\Session\\Exception\\InvalidArgumentException: 'session.name' is not a valid sessions-related ini setting. in /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/SessionConfig.php:90\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(266): Zend\\Session\\Config\\SessionConfig->setStorageOption()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(114): Zend\\Session\\Config\\StandardConfig->setName()\n#2 /usr/share/bareos-webui/module/Application/Module.php(154): Zend\\Session\\Config\\StandardConfig->setOptions()\n#3 [internal function]: Application\\Module->Application\\{closure}()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(939): call_user_func()\n#5 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#9 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#10 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#11 [internal function]: Application\\Module->onBootstrap()\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#14 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#15 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#16 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#17 {main}\n\nNext Zend\\ServiceManager\\Exception\\ServiceNotCreatedException: An exception was raised while creating "Zend\\Session\\SessionManager"; no instance returned in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php:946\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#2 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#3 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#4 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#5 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#6 [internal function]: 
Application\\Module->onBootstrap()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#11 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#12 {main}\n thrown in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php on line 946, referer: http://192.168.5.16/job/?period=1&status=Success
(0004826)
bruno-at-bareos   
2022-11-07 09:55   
I was able to reproduce it; what is funny is that if you go to Storage -> Pool tab -> pool name, it works.
We will transfer that to a developer.
(0004827)
bruno-at-bareos   
2022-11-07 09:57   
There's a subtle difference in the URL being called:

by Storage -> Pool -> poolname the URL is bareos-webui/pool/details/?pool=Full
by jobid -> "+" details -> pool it is bareos-webui/pool/details/Full
-> which triggers the "Missing argument" error
(0004828)
dimmko   
2022-11-07 10:34   
bruno-at-bareos, your method works, thanks.
(0004830)
frank   
2022-11-08 15:11   
Fix committed to bareos master branch with changesetid 16853.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1357 [bareos-core] director crash have not tried 2021-05-18 10:53 2022-11-09 14:09
Reporter: harm Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-dir: ERROR in lib/mem_pool.cc:215 Failed ASSERT: obuf
Description: Hello folks,

when I try to make a long-term copy of an always incremental backup, the Bareos director crashes.

Version: 20.0.1 (02 March 2021) Ubuntu 20.04.1 LTS

Please let me know what more information you need.

Best regards
Harm
Tags:
Steps To Reproduce: Follow the instructions of https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html
Additional Information:
Attached Files:
Notes
(0004130)
harm   
2021-05-19 15:15   
The problem seems to occur when a client is selected. I don't seem to have quite grasped the concept yet, but shouldn't the error be handled?
(0004149)
arogge   
2021-06-09 17:48   
We need a meaningful backtrace to debug that. Please install a debugger and the debug packages (or tell me what system your director runs on so I can provide you with the commands) and reproduce the issue, so we can see what goes wrong.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1492 [bareos-core] General trivial have not tried 2022-11-09 12:01 2022-11-09 12:01
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 20.0.9
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 20.0.9
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1487 [bareos-core] webui text have not tried 2022-10-13 14:21 2022-11-04 11:54
Reporter: fcolista Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: feedback Product Version: 21.1.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: webui support for PHP 8.1
Description: Hello. I'm the maintainer of BareOS for Alpine Linux.
We are almost ready to go ahead with the new release of Alpine 3.17, where we are going to drop PHP 8 support in favor of PHP 8.1.
Can you please confirm that in BareOS webui 21.1.4 PHP 8.1 is fully supported?
Thank you.

.: Francesco
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004820)
fcolista   
2022-10-31 14:57   
Any update on this, please?
(0004823)
frank   
2022-11-04 11:54   
> Can you please confirm that in BareOS webui 21.1.4 PHP 8.1 is fully supported?

Yes, currently there are no known issues that break functionality.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
854 [bareos-core] director tweak have not tried 2017-09-21 10:21 2022-10-11 09:43
Reporter: hostedpower Platform: Linux  
Assigned To: stephand OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version: 16.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Problem with always incremental virtual full
Description: Hi,


All of a sudden we have issues with virtual full (for consolidation) no longer working.

We have 2 pools for each customer. One is for the full (consolidate) and the other for the incremental.

We used to have the option to limit a single job to a single volume; we removed that a while ago, so maybe there is a relation.

We also had to downgrade from 16.2.6 to 16.2.5 because of the MySQL slowness issues; that happened recently, so that's also a possibility.

We have the feeling this software is not very reliable, or at least very complex to get somewhat stable.

PS: The volume it asks for is there, but it doesn't want to use it :(

We used this config for a long time without this particular issue, except for adding

        Action On Purge = Truncate
and commenting out:
# Maximum Volume Jobs = 1 # a new file for each backup that is done
Tags:
Steps To Reproduce: Config:



Storage {
    Name = customerx-incr
    Device = customerx-incr # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

Storage {
    Name = customerx-cons
    Device = customerx-cons # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

cat /etc/bareos/pool-defaults.conf
#pool defaults used for always incremental scheme
        Pool Type = Backup
# Maximum Volume Jobs = 1 # a new file for each backup that is done
        Recycle = yes # Bareos can automatically recycle Volumes
        Auto Prune = yes # Prune expired volumes
        Volume Use Duration = 23h
        Action On Purge = Truncate
#end


Pool {
    Name = customerx-incr
    Storage = customerx-incr
    LabelFormat = "vol-incr-"
    Next Pool = customerx-cons
    @/etc/bareos/pool-defaults.conf
}

Pool {
    Name = customerx-cons
    Storage = customerx-cons
    LabelFormat = "vol-cons-"
    @/etc/bareos/pool-defaults.conf
}



# lb1.customerx.com
Job {
    Name = "backup-lb1.customerx.com"
    Client = lb1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb1.customerx.com
    Address = lb1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# lb2.customerx.com
Job {
    Name = "backup-lb2.customerx.com"
    Client = lb2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb2.customerx.com
    Address = lb2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app1.customerx.com
Job {
    Name = "backup-app1.customerx.com"
    Client = app1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app1.customerx.com
    Address = app1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app2.customerx.com
Job {
    Name = "backup-app2.customerx.com"
    Client = app2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app2.customerx.com
    Address = app2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# app3.customerx.com
Job {
    Name = "backup-app3.customerx.com"
    Client = app3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app3.customerx.com
    Address = app3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# db1.customerx.com
Job {
    Name = "backup-db1.customerx.com"
    Client = db1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db1.customerx.com
    Address = db1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# db2.customerx.com
Job {
    Name = "backup-db2.customerx.com"
    Client = db2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db2.customerx.com
    Address = db2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# db3.customerx.com
Job {
    Name = "backup-db3.customerx.com"
    Client = db3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db3.customerx.com
    Address = db3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# Backup for customerx
Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

Device {
  Name = customerx-cons
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}
Additional Information: 2017-09-21 10:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 10:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:57:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:52:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:47:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:42:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:37:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:32:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:27:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:22:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:17:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:12:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.107.bsr
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-incr" to read.
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-cons" to write.
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0378 on "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Start Virtual Backup JobId 2467, Job=backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Consolidating JobIds 2392,971
System Description
Attached Files:
Notes
(0002744)
joergs   
2017-09-21 16:08   
I see some problems in this configuration.

You should check section http://doc.bareos.org/master/html/bareos-manual-main-reference.html#ConcurrentDiskJobs from the manual.

Each device can only read/write one volume at a time. VirtualFull requires multiple volumes.

Basically, you need multiple devices pointing to the same storage directory, each with "Maximum Concurrent Jobs = 1", to make it work.

Your setting

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

will use interleaving instead, which could cause performance issues on restore.
(0002745)
hostedpower   
2017-09-21 16:14   
So you suggest just making the device:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Would this fix all issues?

Before, we had Maximum Volume Jobs = 1 and I think that also worked, but it seems discouraged...
(0002746)
joergs   
2017-09-21 16:26   
No, this will not fix the issue, but it prevents interleaving, which might cause other problems.

I suggest, by pointing to the documentation, that you set up multiple Devices all pointing to the same Archive Device. Then attach them all to one Director Storage, like

Storage {
    Name = customerx
    Device = customerx-dev1, customerx-dev2, ... # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}
(0002747)
joergs   
2017-09-21 16:26   
With Maximum Concurrent Jobs = 8 you should use 8 Devices.
(0002748)
hostedpower   
2017-09-21 16:37   
Hi Joergs,

Thanks a lot, that documentation makes sense and seems to have improved since I last read it (or I missed it somehow).

Will test it like that, but it looks very promising :)
(0002754)
hostedpower   
2017-09-21 22:57   
I've read it, but it would mean I need to create multiple storage devices in this case? That's a lot of extra definitions just to back up one customer. It would be nice if you could simply declare the device object and say there should be 8 of it, for example, all in just one definition. That shouldn't be too hard, I suppose?

Something like:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8 --> this automatically creates customerx-incr1 --> customerx-incr8, probably with some extra setting to allow it.
}



For now, would it be a solution to set

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # just use one at the same time
}

Since we have so many clients, it would still run multiple backups at the same time for different clients, I suppose? (Each client has its own media, device and storage.)


PS: We want to keep about 20 days of data; is the following config OK together with the above scenario?


JobDefs {
    Name = "HPJobInc"
    Type = Backup
    Level = Full
    Write Bootstrap = "/var/lib/bareos/%c.bsr"
    Accurate=yes
    Level=Full
    Messages = Standard
    Always Incremental = yes
    Always Incremental Job Retention = 20 days
    # The resulting interval between full consolidations when running daily backups and daily consolidations is Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job.
    Always Incremental Max Full Age = 35 days # should NEVER be less then Always Incremental Job Retention -> Every 15 days the full backup is also consolidated ( Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir )
    Always Incremental Keep Number = 5 # Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than "Always Incremental Job Retention".
    Maximum Concurrent Jobs = 5 # Let up to 5 jobs run
}
(0002759)
hostedpower   
2017-09-25 09:50   
We used this now:

  Maximum Concurrent Jobs = 1 # just use one at the same time

But some jobs also have the same error:

2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 
2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)


It looks almost like multiple jobs can't exist together in one volume (well, they can, but then issues like this start to occur).

Before, probably with "Maximum Volume Jobs = 1", we never encountered these problems.
(0002760)
joergs   
2017-09-25 11:17   
Only one access per volume is possible at a time.
So reading and writing to the same volume is not possible.

I thought you covered this with "Volume Use Duration = 23h". Have you disabled it, or did you run multiple jobs during one day?

However, this is a bug tracker. I think further questions about Always Incrementals are best handled using the bareos-users mailing list or a bareos.com support ticket.
(0002761)
hostedpower   
2017-09-25 11:37   
Yes, I was wondering why we encountered it now and never before.

It wants to swap the consolidation pool for an incremental pool (or vice versa). I don't understand why.
(0002763)
joergs   
2017-09-25 12:15   
Please attach the output of

list joblog jobid=2668
(0002764)
hostedpower   
2017-09-25 12:22   
Enter a period to cancel a command.
*list joblog jobid=2668
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Start Virtual Backup JobId 2668, Job=backup-web.hosted-power.com.2017-09-24_09.00.03_27
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Consolidating JobIds 2593,1173
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.87.bsr
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-incr" to read.
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-cons" to write.
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0344 on "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 09:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 10:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the same vol_mgr.c:557 "Need volume from other drive, but swap not possible" warning repeats every 5 minutes, unchanged except for the timestamp ...]
 2017-09-24 11:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 12:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the same vol_mgr.c:557 "Need volume from other drive, but swap not possible" warning repeats every 5 minutes, unchanged except for the timestamp ...]
 2017-09-24 15:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 16:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 16:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the same vol_mgr.c:557 "Need volume from other drive, but swap not possible" warning repeats every 5 minutes, unchanged except for the timestamp ...]
 2017-09-24 23:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-25 00:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [... the same vol_mgr.c:557 "Need volume from other drive, but swap not possible" warning repeats every 5 minutes, unchanged except for the timestamp ...]
 2017-09-25 04:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:18:14 bareos-sd JobId 2668: acquire.c:221 Job 2668 canceled.
 2017-09-25 09:18:14 bareos-sd JobId 2668: Elapsed time=418423:18:14, Transfer rate=0 Bytes/second
 2017-09-25 09:18:14 hostedpower-dir JobId 2668: Bareos hostedpower-dir 16.2.5 (03Mar17):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
  JobId: 2668
  Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
  Backup Level: Virtual Full
  Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
  FileSet: "linux-all-mysql" 2017-08-09 00:15:00
  Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
  Scheduled time: 24-Sep-2017 09:00:03
  Start time: 04-Sep-2017 02:15:02
  End time: 04-Sep-2017 02:42:30
  Elapsed time: 27 mins 28 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 194
  Volume Session Time: 1506027627
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status: Canceled
  Accurate: yes
  Termination: Backup Canceled

You have messages.
*
(0002765)
hostedpower   
2017-09-27 11:57   
Hi,

Now jobs seem to succeed for the moment.

They also always seem to be set to Incremental now, whereas before they were set to Full after a consolidation.

Example of such a job:

2017-09-27 09:02:07 hostedpower-dir JobId 2892: Joblevel was set to joblevel of first consolidated job: Incremental
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2892
 Job: backup-web.hosted-power.com.2017-09-27_09.00.02_47
 Backup Level: Virtual Full
 Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
 FileSet: "linux-all-mysql" 2017-08-09 00:15:00
 Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 27-Sep-2017 09:00:02
 Start time: 07-Sep-2017 02:15:03
 End time: 07-Sep-2017 02:42:52
 Elapsed time: 27 mins 49 secs
 Priority: 10
 SD Files Written: 2,803
 SD Bytes Written: 8,487,235,164 (8.487 GB)
 Rate: 5085.2 KB/s
 Volume name(s): vol-cons-0010
 Volume Session Id: 121
 Volume Session Time: 1506368550
 Last Volume Bytes: 8,495,713,067 (8.495 GB)
 SD Errors: 0
 SD termination status: OK
 Accurate: yes
 Termination: Backup OK

 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: purged JobIds 2817,1399 as they were consolidated into Job 2892
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Jobs older than 6 months .
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Jobs found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Files.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Files found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: End auto prune.

 
2017-09-27 09:02:03 bareos-sd JobId 2892: Elapsed time=00:01:46, Transfer rate=80.06 M Bytes/second
 
2017-09-27 09:00:32 bareos-sd JobId 2892: End of Volume at file 1 on device "hostedpower-incr" (/home/hostedpower/bareos), Volume "vol-cons-0344"
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Ready to read from volume "vol-incr-0097" on device "hostedpower-incr" (/home/hostedpower/bareos).
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Forward spacing Volume "vol-incr-0097" to file:block 0:4709591.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Recycled volume "vol-cons-0010" on device "hostedpower-cons" (/home/hostedpower/bareos), all previous data lost.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Forward spacing Volume "vol-cons-0344" to file:block 0:215.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Start Virtual Backup JobId 2892, Job=backup-web.hosted-power.com.2017-09-27_09.00.02_47
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Consolidating JobIds 2817,1399
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.48.bsr
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-incr" to read.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-0344" as Used.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: There are no more Jobs associated with Volume "vol-cons-0010". Marking it purged.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: All records pruned from Volume "vol-cons-0010"; marking it "Purged"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Recycled volume "vol-cons-0010"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-cons" to write.
 
2017-09-27 09:00:14 bareos-sd JobId 2892: Ready to read from volume "vol-cons-0344" on device "hostedpower-incr" (/home/hostedpower/bareos).
(0002766)
hostedpower   
2017-09-28 12:39   
Very strange: most jobs work for the moment, it seems (but for how long?).

Some show Full, others Incremental.


hostedpower-dir JobId 2956: Joblevel was set to joblevel of first consolidated job: Full
 
2017-09-28 09:05:03 hostedpower-dir JobId 2956: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2956
 Job: backup-vps53404.2017-09-28_09.00.02_20
 Backup Level: Virtual Full
 Client: "vps53404" 16.2.4 (01Jul16) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
 FileSet: "windows-all" 2017-08-08 22:15:00
 Pool: "vps53404-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "vps53404-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 28-Sep-2017 09:00:02

Before, they always all showed Full.
(0002802)
hostedpower   
2017-10-19 09:43   
Well, I downgraded to 16.2.5 and, guess what, the issue was gone for a few weeks.

Now I tried out 16.2.7 and the issue is back again ...

2017-10-19 09:40:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:35:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:30:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:25:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)

Coincidence? I'm starting to doubt it.
(0002830)
hostedpower   
2017-12-02 12:19   
Hi,


I'm still on 17.2.7 and have to say it's an intermittent error. It goes fine for days and then, all of a sudden, one or more jobs suffer from it.

I never had it in the past until a certain version, I'm pretty sure of that.
(0002949)
hostedpower   
2018-03-26 20:22   
The problem was gone for a long time, but now it's back in full force. Any idea what the cause could be?
(0002991)
stephand   
2018-05-04 10:57   
With larger MySQL databases and Bareos 17.2, for incremental jobs with accurate=yes, it seems to help to add the following index:

CREATE INDEX jobtdate_idx ON Job (JobTDate);
ANALYZE TABLE Job;

Could you please check if that works for you?
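
As a quick sanity check after creating it (a sketch, assuming the default MySQL catalog database name "bareos"; adjust the database name and credentials to your setup), the index and its use can be verified with:

USE bareos;                                          -- assumed default catalog database name
SHOW INDEX FROM Job;                                 -- jobtdate_idx should now be listed
EXPLAIN SELECT JobId FROM Job WHERE JobTDate > 0;    -- jobtdate_idx should appear under possible_keys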
(0002992)
hostedpower   
2018-05-04 11:16   
OK thanks, we added the index, but it took only 0.5 seconds. Usually this means there was not an issue :)

When creating an index is slow, it usually means there is a (serious) performance gain to be had.
(0002994)
stephand   
2018-05-04 17:56   
It certainly depends on the size of the Job table. I measured it 25% faster with this index and 10,000 records in the Job table.

However, looking at the logs like
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
that's a different problem; it has nothing to do with that index.
As Joerg already suggested using multiple storage devices, I'd propose increasing their number. This is meanwhile documented at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
(0003047)
stephand   
2018-06-21 07:59   
Were you able to solve the problem by using multiple storage devices?
(0003053)
hostedpower   
2018-06-27 23:56   
While this might work, we use at least 50-60 devices atm, so it would be a lot of extra work to add extra storage devices.

Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring Bareos into 2018 ...

Anything that can be done to get this supported? We would want to sponsor this if needed...
(0003145)
stephand   
2018-10-22 15:54   
Hi,

When you say "a lot of extra work to add extra storage devices" are you talking about the Storage Daemon configuration mentioned at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
which is always a Device resource like

Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

and only the number in Name = FileStorage1 is increased for each Device?

Are you already using configuration management tools like Ansible, Puppet, etc.? Then it shouldn't be too hard to get this done.

Or what exactly do you mean by "Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring bareos into the 2018 ..."?

Let me guess: let's imagine we had MultiDevice in the SD configuration; then the example config from http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices could be written like this:

The Bareos Director config:

Director {
  Name = bareos-dir.example.com
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "<secret>"
}
 
Storage {
  Name = File
  Address = bareos-sd.bareos.com
  Password = "<sd-secret>"
  Device = MultiFileStorage
  # number of devices = Maximum Concurrent Jobs
  Maximum Concurrent Jobs = 4
  Media Type = File
}

The Bareos Storagedaemon config:

Storage {
  Name = bareos-sd.example.com
  # any number >= 4
  Maximum Concurrent Jobs = 20
}
 
Director {
  Name = bareos-dir.example.com
  Password = "<sd-secret>"
}
 
MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}
 
Do you mean that?

Or if not, please give an example of how the config should look to make things easier for you.
(0003146)
hostedpower   
2018-10-22 16:25   
We're indeed looking into Ansible to automate this, but still, something like:

MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}

would be more than fantastic!!

Just a single device supporting concurrent access in an easy fashion!

Probably we could then also set "Maximum Concurrent Jobs = 4" pretty safely?


I can imagine that if you're used to Bareos (and tapes), this maybe seems a strange way of working.

However, if you're used to (most) other backup software that supports hard drives by original design, the way it's designed now for disks is just way too complicated :(
(0003768)
hostedpower   
2020-02-10 21:08   
Hi,


I think this feature has been implemented: https://docs.bareos.org/Configuration/StorageDaemon.html#storageresourcemultiplieddevice

Does it require an autochanger to work, as discussed in this thread? Or would simply having more devices thanks to the count parameter be sufficient?

I ask since lately we see a lot of the errors reported here again :| :)
(0003774)
arogge   
2020-02-11 09:34   
The autochanger is optional, but the feature won't help you if you don't configure an autochanger.
With the autochanger you only need to configure one storage device in your Director. Otherwise you'd need to configure each of the multiplied devices in your Director separately.

This is - of course - not a physically existing autochanger; it is just an autochanger configuration in the storage daemon that groups the different storage devices together.
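
A minimal sketch of that layout, assuming the multiplied-device feature (the Count directive) is available in the storage daemon; all resource names and the address below are placeholders:

# bareos-sd: one device definition, multiplied via Count
Device {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1   # per multiplied device
  Count = 4                     # creates 4 numbered copies of this device
}

# bareos-sd: autochanger that only groups the multiplied devices (no real hardware)
Autochanger {
  Name = MultiFileChanger
  Device = MultiFileStorage
  Changer Device = /dev/null
  Changer Command = ""
}

# bareos-dir: a single Storage resource that references only the autochanger
Storage {
  Name = File
  Address = bareos-sd.example.com    # placeholder
  Password = "<sd-secret>"
  Device = MultiFileChanger
  Media Type = File
  Autochanger = yes
  Maximum Concurrent Jobs = 4
}

The Director then only ever references "File", while the storage daemon distributes concurrent jobs across the multiplied devices.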
(0003775)
hostedpower   
2020-02-11 10:00   
ok, but in our case we ha
(0003776)
hostedpower   
2020-02-11 10:02   
OK, but in our case we have more than 100 storages configured with different names.

Do we need multiple autochangers as well, or just one autochanger for all these devices? I'm afraid it's the former, so we'd have to add tons of autochangers as well, right? :)
(0003777)
arogge   
2020-02-11 10:30   
If you have more than 100 storages, I hope you're generating your configuration with something like puppet or ansible.
Why exactly do you need such a large amount of individual storages?
Usually, if you're using only File-based storage, a single storage (or file autochanger) per RAID array is enough. Everything else usually just overcomplicates things in the end.
(0003798)
hostedpower   
2020-02-12 10:04   
Well, it's because we allocate storage space for each customer, so each customer pays for his own storage. Putting everything into one large storage wouldn't show us anymore who is using what exactly.

Is there a better way to "allocate" storage for individual customers while at the same time using one large storage as you suggest?

PS: Yes, we generate the config, but updating it now to include an autochanger would still be quite some work, since we generate this config only once.

Just adding a device count is easy since we use an include file. So adding an autochanger now isn't really what we hoped for :)
(0004060)
hostedpower   
2020-12-01 12:43   
Hi,

We still have this: Need volume from other drive, but swap not possible

The strange thing is that it works 99% of the time, but then we have periods where we see this error a lot. I don't understand why it works so well most of the time and so badly at other times.

It's one of the primary reasons we're now looking at other backup solutions. Besides that, we have many storage servers and Bareos currently has no way to let "x number of tasks" run on a per-storage-server basis.
(0004217)
hostedpower   
2021-08-25 11:27   
(Last edited: 2021-08-26 23:41)
OK, we finally re-architected our whole backup infrastructure, only to find that this problem/bug hits us hard again.

We use the latest Bareos 20.2 version.

We now use one large folder for all backups with 10 concurrent consolidations (max). We use PostgreSQL as our database engine (so it cannot be because of MySQL). We tried to follow all best practices; I don't understand what is wrong with it :(


Storage {
        Name = AI-Incremental
        Device = AI-Incremental-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Storage {
        Name = AI-Consolidated
        Device = AI-Consolidated-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Device {
  Name = AI-Incremental
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Incremental-Autochanger"
  Device = AI-Incremental

  Changer Device = /dev/null
  Changer Command = ""
}


Device {
  Name = AI-Consolidated
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Consolidated-Autochanger"
  Device = AI-Consolidated

  Changer Device = /dev/null
  Changer Command = ""
}


I suppose the error must be easy to spot? Otherwise everyone would have this problem :(

(0004218)
hostedpower   
2021-08-25 11:32   
3838 machine.example.com-files machine.example.com Backup VirtualFull 0 0.00 B 0 Running
52 2021-08-25 11:25:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 51-48: the same warning repeated every 5 minutes]
47 2021-08-25 11:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
46 2021-08-25 10:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 45-24: the same warning repeated every 5 minutes]
23 2021-08-25 09:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
22 2021-08-25 08:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 21-12: the same warning repeated every 5 minutes]
11 2021-08-25 08:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
10 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0007" (/var/lib/bareos/storage)
9 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
8 2021-08-25 08:00:24 backup1-sd JobId 3838: Ready to append to end of Volume "vol-cons-0287" size=12609080131
7 2021-08-25 08:00:24 backup1-sd JobId 3838: Volume "vol-cons-0287" previously written, moving to end of data.
6 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Consolidated0007" to write.
5 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Incremental0007" to read.
4 2021-08-25 08:00:23 backup1-dir JobId 3838: Connected Storage daemon at backupx.xxxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-25 08:00:23 backup1-dir JobId 3838: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.379.bsr
(0004219)
hostedpower   
2021-08-25 11:48   
I see now that it tries to mount consolidation volumes on the incremental devices; you can see it in the sample above, but also below:
25-Aug 08:02 backup1-dir JobId 3860: Start Virtual Backup JobId 3860, Job=machine.example.com-files.2021-08-25_08.00.31_02
25-Aug 08:02 backup1-dir JobId 3860: Consolidating JobIds 3563,724
25-Aug 08:02 backup1-dir JobId 3860: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.394.bsr
25-Aug 08:02 backup1-dir JobId 3860: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Incremental0005" to read.
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Consolidated0005" to write.
25-Aug 08:02 backup1-sd JobId 3860: Volume "vol-cons-0292" previously written, moving to end of data.
25-Aug 08:02 backup1-sd JobId 3860: Ready to append to end of Volume "vol-cons-0292" size=26118365623
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Please mount read Volume "vol-cons-0287" for:
    Job: machine.example.com-files.2021-08-25_08.00.31_02
    Storage: "AI-Incremental0005" (/var/lib/bareos/storage)
    Pool: AI-Incremental
    Media type: AI

Might this be the cause? What could be causing this?
(0004220)
hostedpower   
2021-08-25 11:51   
This is the first job today with these messages, but it succeeded anyway; maybe you can see what is going wrong here?

2021-08-24 15:53:54 backup1-dir JobId 3549: console command: run AfterJob ".bvfs_update JobId=3549"
30 2021-08-24 15:53:54 backup1-dir JobId 3549: End auto prune.

29 2021-08-24 15:53:54 backup1-dir JobId 3549: No Files found to prune.
28 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Files.
27 2021-08-24 15:53:54 backup1-dir JobId 3549: No Jobs found to prune.
26 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Jobs older than 6 months .
25 2021-08-24 15:53:54 backup1-dir JobId 3549: purged JobIds 3237,648 as they were consolidated into Job 3549
24 2021-08-24 15:53:54 backup1-dir JobId 3549: Bareos backup1-dir 20.0.2 (10Jun21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 3549
Job: another.xxx-files.2021-08-24_15.48.51_46
Backup Level: Virtual Full
Client: "another.xxx" 20.0.2 (10Jun21) Debian GNU/Linux 10 (buster),debian
FileSet: "linux-files" 2021-07-20 16:03:24
Pool: "AI-Consolidated" (From Job Pool's NextPool resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "AI-Consolidated" (From Storage from Pool's NextPool resource)
Scheduled time: 24-Aug-2021 15:48:51
Start time: 03-Aug-2021 23:08:50
End time: 03-Aug-2021 23:09:30
Elapsed time: 40 secs
Priority: 10
SD Files Written: 653
SD Bytes Written: 55,510,558 (55.51 MB)
Rate: 1387.8 KB/s
Volume name(s): vol-cons-0288
Volume Session Id: 2056
Volume Session Time: 1628888564
Last Volume Bytes: 55,596,662 (55.59 MB)
SD Errors: 0
SD termination status: OK
Accurate: yes
Bareos binary info: official Bareos subscription
Job triggered by: User
Termination: Backup OK

23 2021-08-24 15:53:54 backup1-dir JobId 3549: Joblevel was set to joblevel of first consolidated job: Incremental
22 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table done
21 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table with 653 entries start
20 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Incremental0008" (/var/lib/bareos/storage).
19 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Consolidated0008" (/var/lib/bareos/storage).
18 2021-08-24 15:53:54 backup1-sd JobId 3549: Elapsed time=00:00:01, Transfer rate=55.51 M Bytes/second
17 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-incr-0135" to file:block 0:2909195921.
16 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-incr-0135" on device "AI-Incremental0008" (/var/lib/bareos/storage).
15 2021-08-24 15:53:54 backup1-sd JobId 3549: End of Volume at file 0 on device "AI-Incremental0008" (/var/lib/bareos/storage), Volume "vol-cons-0284"
14 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-cons-0284" to file:block 0:307710024.
13 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-cons-0284" on device "AI-Incremental0008" (/var/lib/bareos/storage).
12 2021-08-24 15:48:54 backup1-sd JobId 3549: Please mount read Volume "vol-cons-0284" for:
Job: another.xxx-files.2021-08-24_15.48.51_46
Storage: "AI-Incremental0008" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
11 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0284 on "AI-Incremental0008" (/var/lib/bareos/storage)
10 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-cons-0284 from dev="AI-Incremental0005" (/var/lib/bareos/storage) to "AI-Incremental0008" (/var/lib/bareos/storage)
9 2021-08-24 15:48:54 backup1-sd JobId 3549: Wrote label to prelabeled Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage)
8 2021-08-24 15:48:54 backup1-sd JobId 3549: Labeled new Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage).
7 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Consolidated0008" to write.
6 2021-08-24 15:48:53 backup1-dir JobId 3549: Created new Volume "vol-cons-0288" in catalog.
5 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Incremental0008" to read.
4 2021-08-24 15:48:52 backup1-dir JobId 3549: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-24 15:48:52 backup1-dir JobId 3549: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.331.bsr
2 2021-08-24 15:48:52 backup1-dir JobId 3549: Consolidating JobIds 3237,648
1 2021-08-24 15:48:52 backup1-dir JobId 3549: Start Virtual Backup JobId 3549, Job=another.xxx-files.2021-08-24_15.48.51_46
(0004221)
hostedpower   
2021-08-25 11:54   
I just saw this "swap not possible" error also happen occasionally when the same device/storage/pool was used:

5 2021-08-24 15:54:03 backup1-sd JobId 3553: Ready to read from volume "vol-incr-0136" on device "AI-Incremental0002" (/var/lib/bareos/storage).
14 2021-08-24 15:49:03 backup1-sd JobId 3553: Please mount read Volume "vol-incr-0136" for:
Job: xxx.xxx.bxxe-files.2021-08-24_15.48.52_50
Storage: "AI-Incremental0002" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
13 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-incr-0136 on "AI-Incremental0002" (/var/lib/bareos/storage)
12 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-incr-0136 from dev="AI-Incremental0006" (/var/lib/bareos/storage) to "AI-Incremental0002" (/var/lib/bareos/storage)
11 2021-08-24 15:49:03 backup1-sd JobId 3553: End of Volume at file 0 on device "AI-Incremental0002" (/var/lib/bareos/storage), Volume "vol-cons-0285"
(0004222)
hostedpower   
2021-08-25 12:20   
PS: The Consolidate job was missing from the config posted above:

Job {
        Name = "Consolidate"
        Type = "Consolidate"
        Accurate = "yes"

        JobDefs = "DefaultJob"

        Schedule = "ConsolidateCycle"
        Max Full Consolidations = 200

        Maximum Concurrent Jobs = 1
        Prune Volumes = yes

        Priority = 11
}
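
For reference, a Consolidate job like the one above can also be started manually from bconsole while debugging (job name as defined in the config above):

run job=Consolidate yes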
(0004227)
hostedpower   
2021-08-26 13:11   
2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0282 on "AI-Incremental0004" (/var/lib/bareos/storage) <-------
10 2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0282 from dev="AI-Consolidated0010" (/var/lib/bareos/storage) to "AI-Incremental0004" (/var/lib/bareos/storage)
9 2021-08-26 11:34:12 backup1-sd JobId 4151: Wrote label to prelabeled Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage)
8 2021-08-26 11:34:12 backup1-sd JobId 4151: Labeled new Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage).
7 2021-08-26 11:34:12 backup1-dir JobId 4151: Using Device "AI-Incremental0001" to write.
6 2021-08-26 11:34:12 backup1-dir JobId 4151: Created new Volume "vol-cons-0298" in catalog.


None of the jobs even continue after this "event"...
(0004494)
hostedpower   
2022-01-31 08:33   
This still happens, even after using separate devices, labels, etc.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1474 [bareos-core] storage daemon crash always 2022-07-27 16:12 2022-10-04 10:28
Reporter: jens Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.12  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-sd crashing on VirtualFull with SIGSEGV ../src/lib/serial.cc file not found
Description: When running 'always incremental' backup scheme the storage daemon crashes with segmentation fault
on VirtualFull backup triggered by consolidation.

Job error:
bareos-dir JobId 1267: Fatal error: Director's comm line to SD dropped.

GDB debug:
bareos-sd (200): stored/mac.cc:159-154 joblevel from SOS_LABEL is now F
bareos-sd (130): stored/label.cc:672-154 session_label record=ec015288
bareos-sd (150): stored/label.cc:718-154 Write sesson_label record JobId=154 FI=SOS_LABEL SessId=1 Strm=154 len=165 remainder=0
bareos-sd (150): stored/label.cc:722-154 Leave WriteSessionLabel Block=1351364161d File=0d
bareos-sd (200): stored/mac.cc:221-154 before write JobId=154 FI=1 SessId=1 Strm=UNIX-Attributes-EX len=123
Thread 4 "bareos-sd" received signal SIGSEGV, Segmentation fault.

[Switching to Thread 0x7ffff4c5b700 (LWP 2271)]
serial_uint32 (ptr=ptr@entry=0x7ffff4c5aa70, v=<optimized out>) at ../../../src/lib/serial.cc:76
76 ../../../src/lib/serial.cc: No such file or directory.


I am running daily incrementals into the 'File' pool, consolidating every 4 days into the 'FileCons' pool, a virtual full every 1st Monday of the month into the 'LongTerm-Disk' pool,
and finally a migration to tape every 2nd Monday of the month from the 'LongTerm-Disk' pool into the 'LongTerm-Tape' pool.

BareOS version: 19.2.7
BareOS director and storage daemon on the same machine.
Disk storage on CEPH mount
Tape storage on Fujitsu Eternus LT2 tape library with 1 LTO-7 drive

---------------------------------------------------------------------------------------------------
Storage Device config:

FileStorage with 10 devices, all into same 1st folder:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /storage/backup/bareos_Incremental # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

FileStorageCons with 10 devices, all into same 2nd folder

Device {
  Name = FileStorageCons
  Media Type = FileCons
  Archive Device = /storage/backup/bareos_Consolidate # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
...

FileStorageVault with 10 devices, all into same 3rd folder

Device {
  Name = FileStorageVault
  Media Type = FileVLT
  Archive Device = /storage/backup/bareos_LongTermDisk # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

Tape Device:

Device {
  Name = IBM-ULTRIUM-HH7
  Device Type = Tape
  DriveIndex = 0
  ArchiveDevice = /dev/nst0
  Media Type = IBM-LTO-7
  AutoChanger = yes
  AutomaticMount = yes
  LabelMedia = yes
  RemovableMedia = yes
  Autoselect = yes
  MaximumFileSize = 10GB
  Spool Directory = /storage/scratch
  Maximum Spool Size = 2199023255552 # maximum total spool size in bytes (2Tbyte)
}

---------------------------------------------------------------------------------------------------
Pool Config:

Pool {
  Name = AI-Incremental # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 72 days
  Storage = File # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 500 # maximum allowed total number of volumes in pool
  Label Format = "AI-Incremental_" # volumes will be labeled "AI-Incremental_-<volume-id>"
  Volume Use Duration = 36 days # volume will be no longer used than
  Next Pool = AI-Consolidate # next pool for consolidation
  Job Retention = 72 days
  File Retention = 36 days
}

Pool {
  Name = AI-Consolidate # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 366 days
  Job Retention = 180 days
  File Retention = 93 days
  Storage = FileCons # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "AI-Consolidate_" # volumes will be labeled "AI-Consolidate_-<volume-id>"
  Volume Use Duration = 2 days # volume will be no longer used than
  Next Pool = LongTerm-Disk # next pool for long term backups to disk
}

Pool {
  Name = LongTerm-Disk # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 732 days
  Job Retention = 732 days
  File Retention = 366 days
  Storage = FileVLT # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "LongTerm-Disk_" # volumes will be labeled "LongTerm-Disk_<volume-id>"
  Volume Use Duration = 2 days # volume will be no longer used than
  Next Pool = LongTerm-Tape # next pool for long term backups to disk
  Migration Time = 2 days # Jobs older than 2 days in this pool will be migrated to 'Next Pool'
}

Pool {
  Name = LongTerm-Tape
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 732 days # How long should the Backups be kept?
  Job Retention = 732 days
  File Retention = 366 days
  Storage = TapeLibrary # Physical Media
  Maximum Block Size = 1048576
  Recycle Pool = Scratch
  Cleaning Prefix = "CLN"
}

---------------------------------------------------------------------------------------------------
JobDefs:

JobDefs {
  Name = AI-Incremental
  Type = Backup
  Level = Incremental
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Accurate = yes
  Allow Mixed Priority = yes
  Always Incremental = yes
  Always Incremental Job Retention = 36 days
  Always Incremental Keep Number = 14
  Always Incremental Max Full Age = 31 days
}

JobDefs {
  Name = AI-Consolidate
  Type = Consolidate
  Storage = File
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 25
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Max Full Consolidations = 1
  Prune Volumes = yes
  Accurate = yes
}

JobDefs {
  Name = LongTermDisk
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 30
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Accurate = yes
  Run Script {
    console = "update jobid=%1 jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
}

JobDefs {
  Name = "LongTermTape"
  Pool = LongTerm-Disk
  Messages = Standard
  Type = Migrate
}


---------------------------------------------------------------------------------------------------
Job Config ( per client )

Job {
  Name = "Incr-<client>"
  Description = "<client> always incremental 36d retention"
  Client = <client>
  Jobdefs = AI-Incremental
  FileSet="fileset-<client>"
  Schedule = "daily_incremental_<client>"
  # Write Bootstrap file for disaster recovery.
  Write Bootstrap = "/storage/bootstrap/%j.bsr"
  # The higher the number the lower the job priority
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "AI-Consolidate"
  Description = "consolidation of 'always incremental' jobs"
  Client = backup.mgmt.drs
  FileSet = SelfTest
  Jobdefs = AI-Consolidate
  Schedule = consolidate

  # The higher the number the lower the job priority
  Priority = 25
}

Job {
  Name = "VFull-<client>"
  Description = "<client> monthly virtual full"
  Messages = Standard
  Client = <client>
  Type = Backup
  Level = VirtualFull
  Jobdefs = LongTermDisk
  FileSet=fileset-<client>
  Pool = AI-Consolidate
  Schedule = virtual-full_<client>
  Priority = 30
  Run Script {
    Console = ".bvfs_update"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "migrate-2-tape"
  Description = "monthly migration of virtual full backups from LongTerm-Disk to LongTerm-Tape pool"
  Jobdefs = LongTermTape
  Selection Type = PoolTime
  Schedule = "migrate-2-tape"
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

---------------------------------------------------------------------------------------------------
Schedule config:

Schedule {
  Name = "daily_incremental_<client>"
  Run = daily at 02:00
}

Schedule {
  Name = "consolidate"
  Run = Incremental 3/4 at 00:00
}

Schedule {
  Name = "virtual-full_<client>"
  Run = 1st monday at 10:00
}

Schedule {
  Name = "migrate-2-tape"
  Run = 2nd monday at 8:00
}

---------------------------------------------------------------------------------------------------
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos_sd_debug.zip (3,771 bytes) 2022-07-27 16:59
https://bugs.bareos.org/file_download.php?file_id=530&type=bug
Notes
(0004688)
bruno-at-bareos   
2022-07-27 16:43   
Could you check the SD's working directory (/var/lib/bareos) for any other trace, backtrace and debug files?
If you have them, please attach them (possibly compressed).
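
For readers in a similar situation, a minimal way to collect such files before attaching them (file names vary by version; the trace/backtrace patterns and archive path below are illustrative):

# list candidate trace/backtrace files in the SD working directory
ls -ltr /var/lib/bareos/*trace* 2>/dev/null
# bundle whatever was found so it can be attached to the issue
files=$(ls /var/lib/bareos/*trace* 2>/dev/null)
[ -n "$files" ] && tar czf /tmp/bareos-sd-debug.tar.gz $files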
(0004690)
jens   
2022-07-27 17:00   
debug files attached in private note
(0004697)
bruno-at-bareos   
2022-07-28 09:34   
What is the reason behind running 19.2 instead of upgrading to 21?
(0004699)
jens   
2022-07-28 13:06   
1. missing a comprehensive and easy-to-follow step-by-step guide on how to upgrade
2. lack of confidence that the upgrade procedure will go flawlessly without rendering backup data unusable
3. lack of experience and skilled personnel, resulting in a major effort to roll out a new version
4. limited access to online repositories to update local mirrors -> very long lead time to get new versions
(0004700)
jens   
2022-07-28 13:09   
(Last edited: 2022-07-28 13:12)
For the above reasons I am a little hesitant to take on the effort of upgrading.
Currently I am considering an upgrade only if it is the only way to get the issue resolved.
I need confirmation from your end first.
My hope is that there is simply something wrong in my configuration, or that I am running an adverse setup, and that changing either one might resolve the issue.
(0004701)
bruno-at-bareos   
2022-08-01 11:59   
Hi Jens,

Thanks for the additional information.

Did this crash happen each time a consolidation VirtualFull was created?
(0004702)
bruno-at-bareos   
2022-08-01 12:04   
Maybe this is related to a fix in 19.2.9 (available with subscription):
 - fix a memory corruption when autolabeling with increased maximum block size
https://docs.bareos.org/bareos-19.2/Appendix/ReleaseNotes.html#id12
(0004703)
jens   
2022-08-01 12:05   
Hi Bruno,

so far, yes, that is my experience.
It always fails.
Also when repeating or manually rescheduling the failed job through the web UI during idle hours when nothing else is running on the director.
(0004704)
jens   
2022-08-01 12:14   
The "fix a memory corruption when autolabeling with increased maximum block size" could indeed be a lead,
as I see the following in the job logs:

Warning: For Volume "AI-Consolidate_0118": The sizes do not match!
Volume=64574484 Catalog=32964717
Correcting Catalog
(0004705)
bruno-at-bareos   
2022-08-02 13:42   
Hi Jens, a quick note about the "sizes do not match" warning: it is unrelated. Aborted or failed jobs can have this effect.

This fix was introduced with commit https://github.com/bareos/bareos/commit/0086b852d, and 19.2.9 contains it.
(0004800)
bruno-at-bareos   
2022-10-04 10:27   
Closing as a fix already exists
(0004801)
bruno-at-bareos   
2022-10-04 10:28   
Fix is present in source code and published subscription binaries.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1465 [bareos-core] file daemon feature always 2022-05-23 13:31 2022-09-21 16:02
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Use the virtual file path + prefix when no writer is set
Description: Creating the backup works fine, but on restore there is often the problem that the restore must be done to a normal file, as with the PostgreSQL or MariaDB add-ons.
But when using dd or the demo code from the documentation, the file can only be created at an absolute path.
I think it would be better, when no writer command is set (or :writer=none), for the file to be restored to the file prefix (the Where setting of the restore job) + the path of the virtual file, as it is done in the two Python add-ons.

Sample:
bpipe:file=/_mongobackups_/foo_db.archive:reader=sh -c 'cat /foo/backup_pw_file | mongodump <OPT ... > --archive':writer=none"
Then the file would be written to <where>/_mongobackups_/foo_db.archive
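
For context, such a bpipe plugin string is normally placed in a FileSet Include section; a minimal sketch (FileSet name and options are illustrative, the plugin string is the sample from above):

FileSet {
  Name = "mongo-bpipe"
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "bpipe:file=/_mongobackups_/foo_db.archive:reader=sh -c 'cat /foo/backup_pw_file | mongodump <OPT ... > --archive':writer=none"
  }
}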
Tags: bpipe
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004755)
bruno-at-bareos   
2022-09-21 16:02   
As this looks more like a feature request, why not propose a PR for it? ;-)


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1476 [bareos-core] file daemon major always 2022-08-03 16:01 2022-08-23 12:08
Reporter: support@ingenium.trading Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Backups for Full and Incremental are approx 10 times bigger than the server used
Description: Whenever a backup job runs, it takes very long, finishes without errors, and the backup size is about 10 times larger than what the server actually uses.

Earlier we had connectivity issues, so I enabled the heartbeat in the client/myself.conf => Heartbeat Interval = 1 min
Tags:
Steps To Reproduce: Manually start the job via webui or bconsole.
Additional Information: Backup Server:
OS: Fedora 35
Bareos Version: 22.0.0~pre613.d7109f123

Client Server:
OS: Alma Linux 9 / CentOS7
Bareos Version: 22.0.0~pre613.d7109f123

Backup job:
03-Aug 09:48 bareos-dir JobId 565: No prior Full backup Job record found.
03-Aug 09:48 bareos-dir JobId 565: No prior or suitable Full backup found in catalog. Doing FULL backup.
03-Aug 09:48 bareos-dir JobId 565: Start Backup JobId 565, Job=td02.example.com.2022-08-03_09.48.28_03
03-Aug 09:48 bareos-dir JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Max configured use duration=82,800 sec. exceeded. Marking Volume "AI-Example-Consolidated-0490" as Used.
03-Aug 09:48 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0584" in catalog.
03-Aug 09:48 bareos-dir JobId 565: Using Device "FileStorage01" to write.
03-Aug 09:48 bareos-dir JobId 565: Probing client protocol... (result will be saved until config reload)
03-Aug 09:48 bareos-dir JobId 565: Connected Client: td02.example.com at td02.example.com:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Handshake: Immediate TLS
03-Aug 09:48 bareos-dir JobId 565: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Extended attribute support is enabled
03-Aug 09:48 trade02-fd JobId 565: ACL support is enabled
03-Aug 09:48 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 09:48 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 09:48 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /dev
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /run
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /sys
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
03-Aug 10:27 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0584" Bytes=107,374,159,911 Blocks=1,664,406 at 03-Aug-2022 10:27.
03-Aug 10:27 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0585" in catalog.
03-Aug 10:27 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 10:27 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0585" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 10:27.
03-Aug 11:07 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0585" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:07.
03-Aug 11:07 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0586" in catalog.
03-Aug 11:07 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:07 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0586" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:07.
03-Aug 11:46 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0586" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:46.
03-Aug 11:46 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0587" in catalog.
03-Aug 11:46 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:46 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0587" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:46.
03-Aug 12:25 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0587" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 12:25.
03-Aug 12:25 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0588" in catalog.
03-Aug 12:25 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 12:25 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0588" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 12:25.
03-Aug 12:56 bareos-sd JobId 565: Releasing device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:56 bareos-sd JobId 565: Elapsed time=03:08:04, Transfer rate=45.57 M Bytes/second
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table with 188627 entries start
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table done
03-Aug 12:56 bareos-dir JobId 565: Bareos bareos-dir 22.0.0~pre613.d7109f123 (01Aug22):
  Build OS: Fedora release 35 (Thirty Five)
  JobId: 565
  Job: td02.example.com.2022-08-03_09.48.28_03
  Backup Level: Full (upgraded from Incremental)
  Client: "td02.example.com" 22.0.0~pre553.6a41db3f7 (07Jul22) CentOS Stream release 9,redhat
  FileSet: "ExampleLinux" 2022-08-03 09:48:28
  Pool: "AI-Example-Consolidated" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Pool resource)
  Scheduled time: 03-Aug-2022 09:48:27
  Start time: 03-Aug-2022 09:48:31
  End time: 03-Aug-2022 12:56:50
  Elapsed time: 3 hours 8 mins 19 secs
  Priority: 10
  FD Files Written: 188,627
  SD Files Written: 188,627
  FD Bytes Written: 514,227,307,623 (514.2 GB)
  SD Bytes Written: 514,258,382,470 (514.2 GB)
  Rate: 45510.9 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: yes
  Volume name(s): AI-Example-Consolidated-0584|AI-Example-Consolidated-0585|AI-Example-Consolidated-0586|AI-Example-Consolidated-0587|AI-Example-Consolidated-0588
  Volume Session Id: 4
  Volume Session Time: 1659428963
  Last Volume Bytes: 85,150,808,401 (85.15 GB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: pre-release version: Get official binaries and vendor support on bareos.com
  Job triggered by: User
  Termination: Backup OK

03-Aug 12:56 bareos-dir JobId 565: shell command: run AfterJob "echo '.bvfs_update jobid=565' | bconsole"
03-Aug 12:56 bareos-dir JobId 565: AfterJob: .bvfs_update jobid=565 | bconsole

Client Alma Linux 9:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 196K 47G 1% /dev/shm
tmpfs 19G 2.3M 19G 1% /run
/dev/mapper/almalinux-root 12G 3.5G 8.6G 29% /
/dev/sda1 2.0G 237M 1.6G 13% /boot
/dev/mapper/almalinux-opt 8.0G 90M 7.9G 2% /opt
/dev/mapper/almalinux-home 12G 543M 12G 5% /home
/dev/mapper/almalinux-var 8.0G 309M 7.7G 4% /var
/dev/mapper/almalinux-opt_ExampleAd 8.0G 373M 7.7G 5% /opt/ExampleAd
/dev/mapper/almalinux-opt_ExampleEn 32G 7.5G 25G 24% /opt/ExampleEn
/dev/mapper/almalinux-var_log 20G 8.1G 12G 41% /var/log
/dev/mapper/almalinux-var_lib 12G 259M 12G 3% /var/lib
tmpfs 9.3G 0 9.3G 0% /run/user/1703000011
tmpfs 9.3G 0 9.3G 0% /run/user/1703000004



Server JobDefs:
JobDefs {
  Name = "ExampleLinux"
  Type = Backup
  Client = bareos-fd
  FileSet = "ExampleLinux"
  Storage = File
  Messages = Standard
  Schedule = "BasicBackup"
  Pool = AI-Example-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Full Backup Pool = AI-Example-Consolidated # write Full Backups into "Full" Pool (#05)
  Incremental Backup Pool = AI-Example-Incremental # write Incr Backups into "Incremental" Pool (#11)
}
System Description
Attached Files:
Notes
(0004710)
bruno-at-bareos   
2022-08-03 18:15   
With the bconsole command show FileSet="ExampleLinux" we will better understand what you've tried to do.

In bconsole,
estimate job=td02.example.com listing
will show you all the files included.
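
A minimal way to capture that listing for inspection (assuming bconsole runs on the director host; the output file name is arbitrary):

echo "estimate job=td02.example.com listing" | bconsole > estimate-listing.txt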
(0004731)
bruno-at-bareos   
2022-08-23 12:08   
No further information was given, so we cannot go further.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1471 [bareos-core] installer / packages tweak N/A 2022-07-13 11:39 2022-07-28 09:27
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Shell example script for Bareos installation on Debian / Ubuntu uses deprecated "apt-key"
Description: Using the script raises the warning: "apt-key is deprecated". In order to correct this, it is suggested to change
---
# add package key
wget -q $URL/Release.key -O- | apt-key add -
---
to
+++
# add package key
wget -q $URL/Release.key -O- | gpg --dearmor -o /usr/share/keyrings/bareos.gpg
sed -i -e 's#deb #deb [signed-by=/usr/share/keyrings/bareos.gpg] #' /etc/apt/sources.list.d/bareos.list
+++
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004665)
bruno-at-bareos   
2022-07-14 10:09   
Would this be valid for any Debian/Ubuntu version in use (e.g. Debian 9 and Ubuntu 18.04)?
(0004666)
bruno-at-bareos   
2022-07-14 10:44   
We appreciate any effort made to make our software better.
This would be a nice improvement.
Testing on old systems seems OK; we are checking how much effort it takes to change the code, handle the update/upgrade process on existing user installations, and adjust the documentation.
(0004667)
bruno-at-bareos   
2022-07-14 11:01   
Adding a public reference on why apt-key should be changed and how:
https://askubuntu.com/questions/1286545/what-commands-exactly-should-replace-the-deprecated-apt-key/1307181#1307181

Maybe changing to Deb822 .sources files is the way to go.
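
For illustration only, a Deb822-style /etc/apt/sources.list.d/bareos.sources could look roughly like this (the repository URL and flat-repo suite are placeholders derived from the example above; verify them against the actual repository layout):

Types: deb
URIs: https://download.bareos.org/current/Debian_11
Suites: /
Signed-By: /usr/share/keyrings/bareos.gpg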
(0004669)
amodia   
2022-07-14 13:06   
I ran into this issue on the update from Bareos 20 to 21. So I can't comment on earlier versions.
My "solution" was the first that worked. Any solution that is better, more compatible and/or requires less effort is appreciated.
(0004695)
bruno-at-bareos   
2022-07-28 09:26   
Changes applied to future documentation
commit c08b56c1a
PR1203
(0004696)
bruno-at-bareos   
2022-07-28 09:27   
Follow status in PR1203 https://github.com/bareos/bareos/pull/1203


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1253 [bareos-core] webui major always 2020-06-17 09:58 2022-07-20 14:09
Reporter: tagort214 Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: acknowledged Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Can't restore files from Webui
Description: When I try to restore files from Webui it returns this error:

There was an error while loading data for this tree.

Error: ajax

Plugin: core

Reason: Could not load node

Data:

{"id":"#","xhr":{"readyState":4,"responseText":"<HTML error page, contents below>","status":500,"statusText":"Internal Server Error"}}

The embedded HTML error page contains (labels translated from Russian):

An error occurred
An error occurred during execution; please try again later.

Additional information:
Zend\Json\Exception\RuntimeException

File:
/usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68

Message:
Decoding failed: Syntax error

Stack trace:
#0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\Json\Json::decode('', 1)
#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\Model\RestoreModel->getDirectories(Object(Bareos\BSock\BareosBSock), '207685', '#')
#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\Controller\RestoreController->getDirectories()
#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\Controller\RestoreController->buildSubtree()
#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\Controller\RestoreController->filebrowserAction()
#5 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch(Object(Zend\Mvc\MvcEvent))
#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent))
#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch(Object(Zend\Http\PhpEnvironment\Request), Object(Zend\Http\PhpEnvironment\Response))
#10 [internal function]: Zend\Mvc\DispatchListener->onDispatch(Object(Zend\Mvc\MvcEvent))
#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent))
#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure))
#14 /usr/share/bareos-webui/public/index.php(24): Zend\Mvc\Application->run()
#15 {main}

Also, in apache2 error logs i see this strings:
[:error] [pid 13597] [client 172.32.1.51:56276] PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91, referer: http://bareos.ivt.lan/bareos-webui/client/details/clientname
 [:error] [pid 14367] [client 172.32.1.51:56278] PHP Warning: unpack(): Type N: not enough input, need 4, have 0 in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 172, referer: http://bareos.ivt.lan/bareos-webui/restore/?mergefilesets=1&mergejobs=1&client=clientname&jobid=207728


Tags:
Steps To Reproduce: 1) Login to webui
2) Select job and click show files (or select client from restore tab)
Additional Information:
System Description
Attached Files: Снимок экрана_2020-06-17_10-57-42.png (37,854 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=443&type=bug
Снимок экрана_2020-06-17_10-57-24.png (47,279 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=444&type=bug
Notes
(0004242)
frank   
2021-08-31 16:14   
tagort214:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue, 142 in this example. Replace the jobid from the example below with your specific jobid(s).

*.bvfs_get_jobids jobid=142 all
1,55,142
*.bvfs_lsdirs path= jobid=1,55,142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes problems by using pathid, pathids will differ at yours.

*.bvfs_lsdirs pathid=37 jobid=1,55,142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=1,55,142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=1,55,142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we can not list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=1,55,142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=1,55,142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
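
As a minimal example of that last validation step (assuming jq is installed; python3 -m json.tool works as an alternative):

jq . out.txt    # pretty-prints the JSON on success, reports the position of a syntax error otherwise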
(0004681)
khvalera   
2022-07-20 14:09   
Try to increase in configuration.ini:
[restore]
; Restore filetree refresh timeout after n milliseconds
; Default: 120000 milliseconds
filetree_refresh_timeout=220000


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1472 [bareos-core] General tweak have not tried 2022-07-13 11:53 2022-07-19 14:55
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: No explanation on "delcandidates"
Description: After an upgrade to Bareos v21 the following message appeared in the status of bareos-director:

HINWEIS: Tabelle »delcandidates« existiert nicht, wird übersprungen
(engl.: NOTE: Table »delcandidates« does not exist, will be skipped)

Searching the Bareos website for "delcandidates" does not return any matches!

It would be nice to give a hint to update the tables in the database by running:

su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: log_dbcheck_2022-07-18.log (35,702 bytes) 2022-07-18 15:42
https://bugs.bareos.org/file_download.php?file_id=528&type=bug
bareos_dbcheck_debian11.log.xz (30,660 bytes) 2022-07-19 14:55
https://bugs.bareos.org/file_download.php?file_id=529&type=bug
Notes
(0004664)
bruno-at-bareos   
2022-07-14 10:08   
From which version did you do the update ?
It is clearly stated in the documentation to run update_table and grant on any update (especially major version).
(0004668)
amodia   
2022-07-14 12:51   
The update was from 20 to 21.
I missed the "run update_table" statement in the documentation.
The documentation regarding "run grant" is misleading:

"Warning:
When using the PostgreSQL backend and updating to Bareos < 14.2.3, it is necessary to manually grant database permissions (grant_bareos_privileges), normally by

su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
"
Because I wondered who might want to upgrade "to Bareos < 14.2.3" when version 21 is available, I thought what was meant is "updating from Bareos < 14.2.3 to a later version". So I skipped the "run grant" step for my update, and it worked.
(0004670)
bruno-at-bareos   
2022-07-14 14:06   
I don't know which documentation part you are talking about.

The Updating Bareos chapter has the following on database updates:
https://docs.bareos.org/bareos-21/IntroductionAndTutorial/UpdatingBareos.html#other-platforms which talk about update & grant.

Maybe you can share a link here ?
(0004671)
amodia   
2022-07-14 14:15   
https://docs.bareos.org/TasksAndConcepts/CatalogMaintenance.html

Firstwarning just before the "Manual Configuration"
(0004672)
bruno-at-bareos   
2022-07-14 14:25   
Ha ok I understand, that's related to dbconfig.
Are you using dbconfig for your installation (for Bareos 20 and 21) ?
(0004673)
amodia   
2022-07-14 16:34   
Well ...
During the update from Bareos 16 to 20 I selected "Yes" for the dbconfig-common option. Unfortunately the database got lost.
This time (Bareos 20 to 21) I selected "No", hoping that a manual update would be more successful. So I have a backup of the database just before the update, but unfortunately I had no success with the manual update. So the "old" data is lost, and the 'bareos' database (bareos-db) gets filled with "new" data since the update.

In the mean time I am able to get some commands working from the command line, at least for user 'bareos':
- bareos-dbcheck *)
- bareos-dir -t -f -d 500

*): selecting test no. 12 "Check for orphaned storage records" crashes bareos-dbcheck with a "memory access error".

The next experiment is to
- create a new database (bareos2-db) from the backup before the update
- run update_table & grant & bareos-dbcheck on this db
- change the MyCatalog.conf accordingly (dbname = bareos2)
- test, if everything is working again

The hope is to "merge" this bareos2-db (data before the update) with the bareos-db (see above), which collects the data since the update.
Is this possible?
(0004674)
bruno-at-bareos   
2022-07-14 17:34   
Not sure what happened for you; the upgrade process is quite well tested here, both manually and with dbconfig. (Maybe the switch from MySQL to PostgreSQL?)

Did you run bareos-dbcheck or bareos in a container? (Beware: by default containers have a low memory limit, which is often not enough.)

As you have the dump, I would simply restore it, run the manual update & grant, and then bareos-dir -t should work with all the previous data preserved.
(To restore, of course, you first create the database.)

Then run dbcheck against it (advice: next time run dbcheck before the dump, so you save time and space by not dumping orphan records).
If it fails again, we would be interested in a copy of the storage definition and the output of
bareos-dbcheck -v -dt -d1000
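
For readers following along, a rough sketch of that manual sequence (database name, dump file path and script locations are illustrative and should be adjusted to the actual setup):

su - postgres -c /usr/lib/bareos/scripts/create_bareos_database    # create an empty bareos database
su - postgres -c 'psql bareos < /path/to/bareos-dump.sql'          # restore the dump taken before the update
su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables      # bring the schema up to the new version
su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges   # (re)apply database permissions
bareos-dir -t                                                      # check that the director accepts config and catalog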
(0004675)
amodia   
2022-07-15 09:22   
Here Bareos runs on a virtual machine (KVM, no container) with limited resources (total memory: 473MiB, swap: 703MiB, storage: 374GiB). The files are stored on an external NAS (6TB) mounted with autofs. This seemed to be enough for "normal" operations.

Appendix "Hardware sizing" has no recommendation on memory. What do you recommend?
(0004676)
bruno-at-bareos   
2022-07-18 10:14   
The Hardware Sizing chapter has quite a number of recommendations for the database (which is what the director uses); it highly depends, of course, on the number of files backed up. PostgreSQL should have 1/4 of the RAM, and/or at least enough to hold the file index. Then, if the FD also runs here with Accurate, it needs enough memory to keep track of the Accurate file list.
(0004677)
amodia   
2022-07-18 12:33   
Update:
bareos-dbcheck (Interactive mode) runs only with the following command:
su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ...'

Every test runs smoothly EXCEPT test no.12: "Check for orphaned storage records".
Test no. 12 fails regardless of the memory size (orig: 473MiB, Increased: 1,9GiB).
The failure ("Memory Access Error") occurs immediately (no gradual filling of memory and then failure).
The database being checked is only a few days old, so there seems to be another issue besides the DB size.

All tests but no. 12 run even with low memory setup.
Here the Director and both Daemons (Storage and File) are on the same virtual machine.
(0004678)
bruno-at-bareos   
2022-07-18 13:36   
Without the requested log, we won't be able to check what happened.
(0004679)
amodia   
2022-07-18 15:42   
Please find the log file attached of

su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ... -v -dt -d1000' 2>&1 |tee log_dbcheck_2022-07-18.log
(0004680)
bruno-at-bareos   
2022-07-19 14:55   
Unfortunately the problem you are seeing on your installation can't be reproduced on several installations here. Tested: RHEL_8, Xubuntu 22.04, Debian 11.

See full log attached.
Maybe you have some extra tools restricting the normal workflow too much (AppArmor, SELinux or similar).
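
For what it's worth, a quick way to check whether such restrictions are active (assuming the respective tools are installed):

getenforce     # SELinux: prints Enforcing / Permissive / Disabled
aa-status      # AppArmor: lists loaded profiles (may require root)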


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1464 [bareos-core] director major always 2022-05-23 10:52 2022-07-05 14:53
Reporter: meilihao Platform: linux  
Assigned To: bruno-at-bareos OS: oracle linux  
Priority: urgent OS Version: 7.9  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: director can't connect filedaemon
Description: The director can't connect to the file daemon; it gets an SSL error.
Tags:
Steps To Reproduce: env:
- filedaemon: v21.0.0 on win10, x64
- director: v21.1.2, x64

bconsole run: `status client=xxx`, get error:
```bash
# tail -f /var/log/bareos.log
Network error during CRAM MD5 with 192.168.0.130
Unable to authenticate with File daemon at "192.168.0.130:9102"
```

filedaemon error: `TLS negotiation failed` and `error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac`
Additional Information:
Attached Files:
Notes
(0004630)
meilihao   
2022-05-31 04:12   
Has anyone encountered this?
(0004656)
bruno-at-bareos   
2022-07-05 14:53   
After restarting both director and client, did you still have any trouble?
I'm not able to reproduce this here with Win10 64-bit and CentOS 8 Bareos binaries from download.bareos.org.

Where does your director come from, then?
- director: v21.1.2, x64
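
For reference, restarting the daemons on a typical Linux installation looks like the following (service names can differ per distribution; on the Windows client, restart the Bareos file daemon service via the Services panel instead):

systemctl restart bareos-dir
systemctl restart bareos-sd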
(0004657)
bruno-at-bareos   
2022-07-05 14:53   
Can't be reproduced


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
874 [bareos-core] director minor always 2017-11-07 12:12 2022-07-04 17:12
Reporter: chaos_prevails Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04 amd64  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: VirtualFull fails when read and write devices are on different storage daemons (different machines)
Description: The Virtual full backup fails with

...
2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269, Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
 2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds 254,251,252,255,256,257
 2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
 2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
 2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4 (01Jul16):
...

When changing the storage daemon for the VirtualFull backup to the same machine as the always-incremental and consolidate backups, the VirtualFull backup works:
...
 2017-11-07 10:43:20 pavlov-dir JobId 320: Start Virtual Backup JobId 320, Job=pavlov_sys_ai_vf.2017-11-07_10.43.18_05
 2017-11-07 10:43:20 pavlov-dir JobId 320: Consolidating JobIds 317,314,315
 2017-11-07 10:43:20 pavlov-dir JobId 320: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file-consolidate" to read.
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file" to write.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to read from volume "ai_consolidate-0023" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:21 pavlov-sd JobId 320: Volume "Full-0032" previously written, moving to end of data.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to append to end of Volume "Full-0032" size=97835302
 2017-11-07 10:43:21 pavlov-sd JobId 320: Forward spacing Volume "ai_consolidate-0023" to file:block 0:7046364.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_consolidate-0023"
 2017-11-07 10:43:22 pavlov-sd JobId 320: Ready to read from volume "ai_inc-0033" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:22 pavlov-sd JobId 320: Forward spacing Volume "ai_inc-0033" to file:block 0:1148147.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_inc-0033"
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of all volumes.
 2017-11-07 10:43:22 pavlov-sd JobId 320: Elapsed time=00:00:01, Transfer rate=7.029 M Bytes/second
 2017-11-07 10:43:22 pavlov-dir JobId 320: Joblevel was set to joblevel of first consolidated job: Full
 2017-11-07 10:43:23 pavlov-dir JobId 320: Bareos pavlov-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 320
  Job: pavlov_sys_ai_vf.2017-11-07_10.43.18_05
  Backup Level: Virtual Full
  Client: "pavlov-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
  FileSet: "linux_system" 2017-10-19 16:11:21
  Pool: "Full" (From Job's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "pavlov-file" (From Storage from Job's NextPool resource)
  Scheduled time: 07-Nov-2017 10:43:18
  Start time: 07-Nov-2017 10:29:38
  End time: 07-Nov-2017 10:29:39
  Elapsed time: 1 sec
  Priority: 13
  SD Files Written: 148
  SD Bytes Written: 7,029,430 (7.029 MB)
  Rate: 7029.4 KB/s
  Volume name(s): Full-0032
  Volume Session Id: 1
  Volume Session Time: 1510047788
  Last Volume Bytes: 104,883,188 (104.8 MB)
  SD Errors: 0
  SD termination status: OK
  Accurate: yes
  Termination: Backup OK

 2017-11-07 10:43:23 pavlov-dir JobId 320: console command: run AfterJob "update jobid=320 jobtype=A"
Tags:
Steps To Reproduce: 1. create always incremental, consolidate jobs, pools, and make sure they are working. Use storage daemon A (pavlov in my example)
2. create VirtualFull Level backup with Storage attribute pointing to a device on a different storage daemon B (delaunay in my example)
3. start always incremental and consolidate job and verify that they are working as expected
4. start VirtualFull Level backup
5. fails with error message:
...
delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
...
Additional Information: A) configuration with working always incremental and consolidate jobs, but failing virtualFull level backup:

A) director pavlov (to disk storage daemon + director)
1) template for always incremental jobs
JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = pavlov-fd
  Storage = pavlov-file
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #always incremental config
  Pool = disk_ai
  Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 20 seconds # 7 days
  Always Incremental Keep Number = 2 # 7
  Always Incremental Max Full Age = 1 minutes # 14 days
}


2) template for virtual full jobs, should run on read storage pavlov and write storage delaunay:
JobDefs {
  Name = "default_ai_vf"
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Priority = 13
  Accurate = yes
 
  Storage = delaunay_HP_G2_Autochanger
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
  

  # run after Consolidate
  Run Script {
   console = "update jobid=%i jobtype=A"
   Runs When = After
   Runs On Client = No
   Runs On Failure = No
  }

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
}

3) consolidate job
Job {
  Name = ai_consolidate
  Type = Consolidate
  Accurate = yes
  Max Full Consolidations = 1
  Client = pavlov-fd #value which should be ignored by Consolidate job
  FileSet = "none" #value which should be ignored by Consolidate job
  Pool = disk_ai_consolidate #value which should be ignored by Consolidate job
  Incremental Backup Pool = disk_ai_consolidate
  Full Backup Pool = disk_ai_consolidate
# JobDefs = DefaultJob
# Level = Incremental
  Schedule = "ai_consolidate"
  # Storage = pavlov-file-consolidate #commented out for VirtualFull-Tape testing
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

4) always incremental job for client pavlov (works)
Job {
  Name = "pavlov_sys_ai"
  JobDefs = "default_ai"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
}


5) virtualfull job for pavlov (doesn't work)
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Storage = delaunay_HP_G2_Autochanger
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

6) pool always incremental
Pool {
  Name = disk_ai
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_inc-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file
  Next Pool = disk_ai_consolidate
}

7) pool always incremental consolidate
Pool {
  Name = disk_ai_consolidate
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_consolidate-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file-consolidate
  Next Pool = tape_automated
}

8) pool tape_automated (for virtualfull jobs to tape)
Pool {
  Name = tape_automated
  Pool Type = Backup
  Storage = delaunay_HP_G2_Autochanger
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
}

9) 1st storage device for disk backup (writes always incremental jobs + other normal jobs)
Storage {
  Name = pavlov-file
  Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = X
  TLS Require = X
  TLS Verify Peer = X
  TLS Allowed CN = pavlov.X
}

10) 2nd storage device for disk backup (consolidates AI jobs)
Storage {
  Name = pavlov-file-consolidate
  Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file-consolidate
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}

11) 3rd storage device for tape backup
Storage {
  Name = delaunay_HP_G2_Autochanger
  Address = "delaunay.XX"
  Password = "X"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
  Autochanger = yes
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = delaunay.X
}


B) storage daemon pavlov (to disk)
1) to disk storage daemon

Storage {
  Name = pavlov-sd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
  TLS Allowed CN = delaunay.X
}

2) to disk device (AI + others)
Device {
  Name = pavlov-file
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

3) consolidate to disk
Device {
  Name = pavlov-file-consolidate
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

C) to tape storage daemon (different server)
1) allowed director
Director {
  Name = pavlov-dir
  Password = "[md5]X"
  Description = "Director, who is permitted to contact this storage daemon."
  TLS Certificate = X
  TLS Key = /X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}


2) storage daemon config
Storage {
  Name = delaunay-sd
  Maximum Concurrent Jobs = 20
  Maximum Network Buffer Size = 32768
# Maximum Network Buffer Size = 65536

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS DH File = X
  TLS CA Certificate File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
}


3) autochanger config
Autochanger {
  Name = "HP_G2_Autochanger"
  Device = Ultrium920
  Changer Device = /dev/sg5
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

4) device config
Device {
  Name = "Ultrium920"
  Media Type = LTO
  Archive Device = /dev/st2
  Autochanger = yes
  LabelMedia = no
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Maximum Spool Size = 50G
  Spool Directory = /var/lib/bareos/spool
  Maximum Block Size = 2097152
# Maximum Block Size = 4194304
  Maximum Network Buffer Size = 32768
  Maximum File Size = 50G
}


B) changes to make VirtualFull level backup working (using device on same storage daemon as always incremental and consolidate job) in both Job and pool definitions.

1) change virtualfull job's storage
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

2) change virtualfull pool's storage
Pool {
  Name = tape_automated
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
  Label Format = "Full-" # <-- !! add this because now we write to disk, not to tape
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
}
Attached Files:
Notes
(0002815)
chaos_prevails   
2017-11-15 11:08   
Thanks to a comment on the bareos-users Google group I found out that this is an unimplemented feature, not a bug.

I think it would be important to mention this in the documentation. I think VirtualFull would be a good solution for offsite backups (e.g. in another building or another server room), which involves another storage daemon.

I looked at different ways to export the tape drive on the offsite-backup machine to the local machine (e.g. iSCSI, ...). However, this adds extra complexity and might cause shoe-shining (the connection to the offsite-backup machine has to be really fast, because spooling would happen on the local machine). In my case (~10MB/s) the tape and drive would definitely suffer from shoe-shining. So currently, besides always incremental, I do another full backup to the offsite-backup machine.
(0004651)
sven.compositiv   
2022-07-04 16:48   
> Thanks to a comment on the bareos-users Google group I found out that this is an unimplemented feature, not a bug.

If it is an unimplemented feature, I'd expect that no backups are chosen from other storages. We have the problem that we copy jobs from AI-Consolidated to a tape. After doing that, all VirtualFull jobs fail because backups from our tape storage have been selected.
(0004652)
bruno-at-bareos   
2022-07-04 17:02   
Could you explain a bit more (configuration example maybe?)

Having an Always Incremental rotation using one storage like File and then creating a VirtualFull archive to another storage resource (on the same SD daemon) works very well, as documented.
Maybe you forgot to update your VirtualFull to be an archive job. Then yes, the next AI will use the most recent VF.
But this is also documented.
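
For reference, a minimal sketch of doing that update manually from a shell; jobid 320 is just an example value taken from the joblog above, and the RunScript in the configuration above does the same thing automatically after each VirtualFull:

# mark the finished VirtualFull as an Archive job so the next
# Always Incremental / Consolidate cycle does not pick it up again
echo "update jobid=320 jobtype=A" | bconsole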
(0004655)
bruno-at-bareos   
2022-07-04 17:12   
Not implemented.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1459 [bareos-core] installer / packages major always 2022-05-09 16:37 2022-07-04 17:11
Reporter: khvalera Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Fails to build ceph plugin on Archlinux
Description: Ceph plugin cannot be built on Archlinux with ceph 15.2.14

Build report:

```
[ 73%] Building CXX object core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o
In file included from /data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.cc:33:
/data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.h:31:10: fatal error: cephfs/libcephfs.h: No such file or directory
    31 | #include <cephfs/libcephfs.h>
       | ^~~~~~~~~~~~~~~~~~~~
compilation aborted.
make[2]: *** [core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/build.make:76: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3157: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
```
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: 009-fix-timer_thread.patch (551 bytes) 2022-05-27 23:58
https://bugs.bareos.org/file_download.php?file_id=518&type=bug
Notes
(0004605)
bruno-at-bareos   
2022-05-10 13:03   
Maybe you can describe a bit more your setup, from where come cephfs
maybe the result of find libcephfs.h can be useful
(0004606)
khvalera   
2022-05-10 15:12   
You can fix this error by installing ceph-libs, but the build still fails:

[ 97%] Building CXX object core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc: In the "bRC filedaemon::get_next_file_to_backup(PluginContext*)" function:
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:421:33: error: cannot convert "stat*" to "ceph_statx*"
  421 | &p_ctx->statp, &stmask);
      | ^~~~~~~~~~~~~
      | |
      | stat*
In file included from /data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:35:
/usr/include/cephfs/libcephfs.h:564:43: note: when initializing the 4th argument "int ceph_readdirplus_r(ceph_mount_info*, ceph_dir_result*, dirent*, ceph_statx*, unsigned int, unsigned int, Inode**)"
  564 | struct ceph_statx *stx, unsigned want, unsigned flags, struct Inode **out);
      | ~~~~~~~~~~~~~~~~~~^~~
make[2]: *** [core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/build.make:76: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3908: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
(0004610)
bruno-at-bareos   
2022-05-10 17:31   
When we ask for a bit more information about your setup, please make the effort to give useful information such as the compiler used, the cmake output, etc.
Otherwise we can close this, noting that it builds fine with newer cephfs versions like 15.2.15 or 16.2.7 ...
(0004628)
khvalera   
2022-05-27 23:58   
After updating the system and applying the attached patch, bareos builds again.
(0004653)
bruno-at-bareos   
2022-07-04 17:10   
I will mark this as closed, fixed by

commit ce3339d28
Author: Andreas Rogge <andreas.rogge@bareos.com>
Date: Wed Feb 2 19:41:25 2022 +0100

    lib: fix use-after-free in timer_thread

diff --git a/core/src/lib/timer_thread.cc b/core/src/lib/timer_thread.cc
index 7ec802198..1624ddd4f 100644
--- a/core/src/lib/timer_thread.cc
+++ b/core/src/lib/timer_thread.cc
@@ -2,7 +2,7 @@
    BAREOS® - Backup Archiving REcovery Open Sourced

    Copyright (C) 2002-2011 Free Software Foundation Europe e.V.
- Copyright (C) 2019-2019 Bareos GmbH & Co. KG
+ Copyright (C) 2019-2022 Bareos GmbH & Co. KG

    This program is Free Software; you can redistribute it and/or
    modify it under the terms of version three of the GNU Affero General Public
@@ -204,6 +204,7 @@ static bool RunOneItem(TimerThread::Timer* p,
       = std::chrono::steady_clock::now();

   bool remove_from_list = false;
+ next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   if (p->is_active && last_timer_run_timepoint > p->scheduled_run_timepoint) {
     LogMessage(p);
     p->user_callback(p);
@@ -215,7 +216,6 @@ static bool RunOneItem(TimerThread::Timer* p,
       p->scheduled_run_timepoint = last_timer_run_timepoint + p->interval;
     }
   }
- next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   return remove_from_list;
 }
(0004654)
bruno-at-bareos   
2022-07-04 17:11   
Fixed with https://github.com/bareos/bareos/pull/1060


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1470 [bareos-core] webui minor always 2022-06-28 09:16 2022-06-30 13:41
Reporter: ffrants Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: low OS Version: 20.04  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Update information could not be retrieved
Description: Update information could not be retrieved and also unknown update status on clients
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: Снимок экрана 2022-06-28 в 10.11.01.png (22,345 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=524&type=bug
Снимок экрана 2022-06-28 в 10.15.12.png (28,921 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=525&type=bug
Снимок экрана 2022-06-30 в 14.04.09.png (14,330 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=526&type=bug
Снимок экрана 2022-06-30 в 14.06.27.png (21,387 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=527&type=bug
Notes
(0004648)
bruno-at-bareos   
2022-06-29 17:03   
Works here (maybe a transient error in certificate) could you recheck please?
(0004649)
ffrants   
2022-06-30 13:07   
Here's what I found out:
My IP is blocked by bareos.com (I can't open www.bareos.com). If I open the webui via VPN it doesn't show the red exclamation mark near the version.
But the problem on the "Clients" tab persists, although not for all versions (see attachment).
(0004650)
bruno-at-bareos   
2022-06-30 13:41   
Only the Russian authorities can create a fix so that the blacklisting would be dropped.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1460 [bareos-core] storage daemon block always 2022-05-10 17:46 2022-05-11 13:08
Reporter: alistair Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 21.10  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to Install bareos-storage-droplet
Description: Apt returns the following

The following packages have unmet dependencies:
bareos-storage-droplet : Depends: libjson-c4 (>= 0.13.1) but it is not installable

libjson-c4 seems to have been superseded by libjson-c5 in newer versions of Ubuntu.
Tags: droplet, s3;droplet;aws;storage, storage
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004615)
bruno-at-bareos   
2022-05-11 13:07   
Don't know what you are expecting here; Ubuntu 21.10 is not a supported build distribution.
As such we don't know which package you are trying to install.

The subscription channel will soon offer Ubuntu 22.04; you can contact sales if you want more information.
(0004616)
bruno-at-bareos   
2022-05-11 13:08   
Not a supported distribution.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1458 [bareos-core] webui major always 2022-05-09 13:32 2022-05-10 13:01
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to view the pool details.
Description: With the last update the pool page is completely broken.
When the pool name contains a space, a 404 error is returned.
On a pool without a space in the name, the error shown in the attached picture happens.
Before 21.1.3 only pools with a space in the name were broken.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Screenshot 2022-05-09 at 13-29-52 Bareos.png (152,604 bytes) 2022-05-09 13:32
https://bugs.bareos.org/file_download.php?file_id=512&type=bug
Notes
(0004600)
mdc   
2022-05-09 13:40   
It looks like a caching problem. Open the webui in a private session, then it will work.
A relogin or a new tab does not help.
(0004601)
bruno-at-bareos   
2022-05-09 14:30   
Did you restart the web server (and/or php-fpm if used)? Browsers recently have a tendency to not clean up their disk cache correctly; it may be necessary to manually clear the cached content for the webui website.
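
A minimal sketch of that on CentOS Stream 8, assuming Apache httpd with php-fpm serves the webui (service names may differ on other setups):

# restart the web server and the PHP backend, then force-reload the page in the browser
systemctl restart httpd php-fpm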
(0004603)
mdc   
2022-05-10 11:43   
Yes, that was my first idea: restarting the web server and the backend PHP service.
Now, after approximately 48 hours, the correct page is loaded.
(0004604)
bruno-at-bareos   
2022-05-10 13:01   
The personal browser cache needs to be cleaned up.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1453 [bareos-core] director major always 2022-05-02 15:56 2022-05-02 15:56
Reporter: inazo Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 20.0.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Rate issue when prune and truncate volume
Description: Hi,

Every day of the month I run a job with a pool that is limited to 31 volumes; each volume is used once and the maximum retention is 30 days. Every day since the pool reached its 31 volumes, the rate has decreased from 14150 KB/s to 141 KB/s... So my backup, which initially took 5 minutes to run, now takes 30 minutes... I think this happens when the job has to truncate / recycle / prune a volume.

On the first day of the month I run another pool in full mode. It is not affected by the rate decrease because, for the moment, that job doesn't have to recycle/prune/truncate a volume.

Other information: I use S3 storage.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos-dir.conf (893 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=503&type=bug
client.conf (79 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=504&type=bug
fileset.conf (333 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=505&type=bug
job.conf (435 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=506&type=bug
jobdefs.conf (655 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=507&type=bug
pool.conf (2,697 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=508&type=bug
schedule.conf (167 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=509&type=bug
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1446 [bareos-core] bconsole crash always 2022-04-01 13:24 2022-04-01 13:24
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bconsole will crash when the unneeded password is missing
Description: When using pam authentication, only the console password is needed for the connection.
But when the configuration is set up that way, bconsole crashes with:
 bconsole -c /etc/bareos/bconsole-tui.conf
bconsole: ABORTING due to ERROR in console/console_conf.cc:181
Password item is required in Director resource, but not found.
BAREOS interrupted by signal 6: IOT trap
bconsole, bconsole got signal 6 - IOT trap. Attempting traceback.

So an empty and unneeded Password entry must be added as a workaround.
Tags:
Steps To Reproduce:
Additional Information: Sample crash config:
Director {
  Name = bareos-dir
  Address = localhost
  }
Console {
  Name = "PamConsole"
  @/etc/bareos/pam.pw
}
Sample working config:
Director {
  Name = bareos-dir
  Address = localhost
  Password = ""
}
Console {
  Name = "PamConsole"
  @/etc/bareos/pam.pw
}

System Description
Attached Files:
There are no notes attached to this issue.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1421 [bareos-core] storage daemon minor always 2022-01-17 17:06 2022-03-30 12:14
Reporter: DemoFreak Platform: x86_64  
Assigned To: bruno-at-bareos OS: Opensuse  
Priority: normal OS Version: Leap 15.3  
Status: new Product Version: 21.0.0  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: MTEOM on LTO-3 fails with Bareos 21, but works on older Bacula
Description: After migrating a file server, the backup was switched from Bacula 5.2 to Bareos 21.0. Transferring the configuration works flawlessly, everything works as desired except for the tape drive.

Appending another backup on an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found
the tape is marked as "Error" in the catalog.

Consequently, the test with btape also shows a problem with EOD (MTEOM). After completing the storage configuration with
Hardware End of Medium = no
Fast Forward Space File = no
appending works, but is extremely slow, as also mentioned in the documentation.

Hardware:
- Fibre Channel: QLogic Corp. ISP2312-based 2Gb Fibre Channel to PCI-X HBA
- Drive 'HP Ultrium 3-SCSI Rev. L63S'

The drive and HBA were transferred from the old system to the new system without any changes.

How can I further isolate the problem?
Does Bareos work differently than Bacula 5.2 regarding EOD?
Tags: storage MTEOM
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004478)
DemoFreak   
2022-01-18 18:38   
(Last edited: 2022-01-18 23:15)
It seems that even the slow (software) method sometimes fails. Here is the corresponding excerpt from the log.

First job on the tape:
17-Jan 11:00 bareos-sd JobId 81: Wrote label to prelabeled Volume "Band4" on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst)
...
17-Jan 13:51 bareos-sd JobId 81: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 344,018,607,484 (344.0 GB)

Second job:
17-Jan 13:54 bareos-sd JobId 83: Volume "Band4" previously written, moving to end of data.
17-Jan 14:39 bareos-sd JobId 83: Ready to append to end of Volume "Band4" at file=65.
...
17-Jan 14:39 bareos-sd JobId 83: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 140,473,627 (140.4 MB)

Third job:
17-Jan 14:42 bareos-sd JobId 85: Volume "Band4" previously written, moving to end of data.
17-Jan 15:27 bareos-sd JobId 85: Ready to append to end of Volume "Band4" at file=66.
...
17-Jan 15:32 bareos-sd JobId 84: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 9,954,169,360 (9.954 GB)

Fourth job:
17-Jan 15:33 bareos-sd JobId 87: Volume "Band4" previously written, moving to end of data.
17-Jan 16:20 bareos-sd JobId 87: Ready to append to end of Volume "Band4" at file=68.
...
17-Jan 16:20 bareos-sd JobId 87: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 141,727,215 (141.7 MB)

Everything works fine up to this point.
The file size on the tape is 5GB (Maximum File Size = 5G), so the next job should be appended at file number 69.

Fifth job:
18-Jan 11:00 bareos-sd JobId 92: Volume "Band4" previously written, moving to end of data.
18-Jan 12:03 bareos-sd JobId 92: Error: Unable to position to end of data on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): ERR=backends/generic_tape_device.cc:496 read error on "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst). ERR=Input/output error.
18-Jan 12:03 bareos-sd JobId 92: Marking Volume "Band4" in Error in Catalog.

This fails with an input/output error. Possibly no EOD marker was written during the fourth job.

Neither "mtst -f /dev/nst0 eod" nor "echo eod | btape" find EOD, they abort with error and the tape is read to the physical end.
Complete reading of the tape with "echo scanblocks | btape" works absolutely correct up to file number 68, different groups of blocks and one EOF marker each are read. In file number 69 no EOF is read, instead the drive keeps reading until the end of the medium.

...
1 block of 64508 bytes in file 66
2 blocks of 64512 bytes in file 66
1 block of 64508 bytes in file 66
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
(At this point, nothing more happens until the end of the tape. Note that in the btape log, for whatever reason, the first line of a new file and the EOF marker of the previous file appear swapped, so the last EOF marker here belongs to file number 68.)

Any ideas?

(0004479)
DemoFreak   
2022-01-18 19:25   
(Last edited: 2022-01-18 23:14)
As an attempt to narrow down the problem, I wrote an EOF marker to file number 69 with mtst:

miraculix:~ # mtst -f /dev/nst0 rewind
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (41010000):
 BOT ONLINE IM_REP_EN
miraculix:~ # time mtst -f /dev/nst0 fsf 69

real 0m29.927s
user 0m0.002s
sys 0m0.001s
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=69, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 weof
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=70, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 rewind

Note the extreme difference in required time for spacing forward to file number 69:

miraculix:~ # time echo -e "status\nfsf 69\nstatus\n" | btape TapeStorageLTO3
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:306-0 Using device: "TapeStorageLTO3" for writing.
btape: stored/btape.cc:490-0 open device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): OK
* Bareos status: file=0 block=0
 Device status: BOT ONLINE IM_REP_EN file=0 block=0
Device status: TAPE BOT ONLINE IMMREPORT. ERR=
*btape: stored/btape.cc:1774-0 Forward spaced 69 files.
* EOF Bareos status: file=69 block=0
 Device status: EOF ONLINE IM_REP_EN file=69 block=0
Device status: TAPE EOF ONLINE IMMREPORT. ERR=
**
real 48m8.811s
user 0m0.006s
sys 0m0.014s
miraculix:~ #

After writing the EOF marker, btape "scanblocks" works as expected:
...
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
Total files=69, blocks=5495758, bytes = 354,542,114,821

btape "eod" works as well:

*eod
btape: stored/btape.cc:619-0 Moved to end of medium.



All in all, it seems to me that under circumstances that are not yet clear to me, sometimes no EOF is written on the tape.

Where am I wrong here?

(0004480)
DemoFreak   
2022-01-19 01:35   
Starting a migration job on this "repaired" tape triggers two migration worker jobs, the first of them works well, the second fails, and I don't understand why.

First job:
18-Jan 23:29 bareos-sd JobId 98: Volume "Band4" previously written, moving to end of data.
19-Jan 00:17 bareos-sd JobId 98: Ready to append to end of Volume "Band4" at file=69.
19-Jan 00:17 bareos-sd JobId 97: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 247,515,896 (247.5 MB)


Second job:
19-Jan 00:18 bareos-sd JobId 100: Volume "Band4" previously written, moving to end of data.
19-Jan 01:06 bareos-sd JobId 100: Error: Bareos cannot write on tape Volume "Band4" because:
The number of files mismatch! Volume=69 Catalog=70
19-Jan 01:06 bareos-sd JobId 100: Marking Volume "Band4" in Error in Catalog.

Why does the second job still find the end of the tape at file number 69, although this file was already written in the first job? EOD should be at file number 70, as it is also noted in the catalog.

Where is my error?
(0004481)
bruno-at-bareos   
2022-01-20 17:14   
Just a quick note: having

Appending another backup on an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found

means hardware trouble, be it the medium (tape), the drive, or some other component in the SCSI chain.
These are never fun to debug.
(0004482)
DemoFreak   
2022-01-20 17:48   
The hardware is completely unchanged. HBA, drive and tapes are the same. They are even still in the same place, only the HBA is now in a different computer.

To be on the safe side, I will rebuild everything and run a test in the old system. That setup ran for several years without problems until a week ago, but with Bacula.

I am surprised about the lack of an EOF marker after some migration jobs.
(0004519)
DemoFreak   
2022-02-19 04:21   
(Last edited: 2022-02-19 04:23)
Sorry, I was unfortunately busy in the meantime, hence the long response time.

I have just done the test and rebuilt everything in the old system; there it runs as expected, completely without problems.

After switching back to the new system, it now runs perfectly here as well.

So it was probably really a problem with the LC cabling.

So this can be closed, thanks for the help.
(0004520)
bruno-at-bareos   
2022-02-21 09:40   
Hardware problem.
(0004556)
DemoFreak   
2022-03-30 12:14   
I think I have found the real cause.

I use an after-job script which shuts down the tape drive after the migration. It waits 30s, then checks whether there are more jobs in the queue, and only if there are no more waiting or running jobs is the drive switched off.

echo "Checking for pending bacula jobs..."

sleep 30

if echo "status dir" | /usr/sbin/bconsole | /usr/bin/grep "^ " | /usr/bin/egrep -q "(is waiting|is running)"; then
        echo "Pending bacula jobs found, leaving tape device alone!"
else
        echo "Switching off tape device..."
        $DEBUG $SISPMCTLBIN -qf 1
fi

Apparently job processing is more concurrent with Bareos than with Bacula: since I temporarily suspended the shutdown of the drive, no more MTEOM errors have occurred. So I suspect that sometimes the drive was already powered off while the storage daemon was still writing the last data to it. Of course, this also means that no EOF was written.

Is it possible that the Director reports jobs as finished while the SD is still writing?
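
A possible mitigation (only a sketch, not a verified fix) would be to also query the storage daemon itself before powering off, since it is the SD that still holds the drive. This assumes the Storage resource is named TapeStorageLTO3 and that the SD status output contains the line "No Jobs running" when it is idle (check the exact wording on your version):

echo "Checking storage daemon before switching off the drive..."

if echo "status storage=TapeStorageLTO3" | /usr/sbin/bconsole | /usr/bin/grep -q "No Jobs running"; then
        echo "Switching off tape device..."
        $DEBUG $SISPMCTLBIN -qf 1
else
        echo "Storage daemon still busy, leaving tape device alone!"
fi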


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1441 [bareos-core] webui minor always 2022-03-22 13:59 2022-03-29 14:13
Reporter: mdc Platform: x86_64  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to view the pool details when the pool name contains a space character.
Description: The resulting URL will be:
"https:/XXXX/pool/details/Bareos database" for example, when the pool is named "Bareos database"
And the call fails with:

A 404 error occurred
Page not found.

The requested URL could not be matched by routing.
No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004555)
frank   
2022-03-29 14:13   
Fix committed to bareos master branch with changesetid 16093.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1440 [bareos-core] director minor always 2022-03-22 13:42 2022-03-23 15:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Only 127.0.0.1 is logged in the audit log when the access comes from the webui
Description: Instead of the real IP of the user's device, only 127.0.0.1 is logged.
22-Mar 13:31 Bareos Director: Console [foo] from [127.0.0.1] cmdline list jobtotals

I think the director sees only the source IP of the webui server; the real IP is not forwarded to the director.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004549)
bruno-at-bareos   
2022-03-23 15:08   
The audit log is used to log the remote (here local) IP of the initiator of the command.
Think about remote bconsole access, etc.
So here localhost is the right agent.

You're welcome to propose an enhanced version of the code by making a PR on our GitHub project.
(0004550)
bruno-at-bareos   
2022-03-23 15:09   
Won't be fixed without an external code proposal.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1134 [bareos-core] vmware plugin feature always 2019-11-06 15:32 2022-03-14 15:42
Reporter: ratacorbo Platform: Linux  
Assigned To: stephand OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
Summary: Installing bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 python-pyvmomi error
Description: When trying to install bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 on CentOS 7, yum gives an error that python-pyvmomi does not exist.
Tags:
Steps To Reproduce: yum install bareos-vmware-plugin
Additional Information: Error: Package: bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 (bareos_bareos-18.2)
           Requires: python-pyvmomi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
System Description
Attached Files:
Notes
(0003856)
stephand   
2020-02-26 00:23   
The python-pyvmomi package for CentOS 7/RHEL 7 is available in EPEL.
On CentOS 7 the EPEL repo can be added by running

yum install epel-release

For RHEL 7 see https://fedoraproject.org/wiki/EPEL

Does it work once the EPEL repo has been added to your system?
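
For reference, the full sequence on CentOS 7 would roughly be (assuming the Bareos repository itself is already configured):

yum install epel-release           # provides python-pyvmomi
yum install bareos-vmware-plugin   # the dependency can now be resolved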
(0003997)
Rotnam   
2020-06-02 18:05   
I installed a fresh RedHat 8.1 to test the bareos vmware plugin. I ran into the same issue running
yum install bareos-vmware-plugin
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides python-pyvmomi needed by bareos-vmware-plugin-19.2.7-2.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

So far I tried to install
python-pyvmomi with pip3.6 install pyvmomi -> installed successfully, no luck
Downloaded the GitHub package and did a python3.6 setup.py install, this installs version 7.0, no luck
Adding EPEL -> yum install python3-pyvmomi, this installs version 6.7.3-3, no luck with yum

Downloading the rpm (19.2.7-2) and trying manually, it required:
yum install python2
yum install bareos-filedaemon-python-plugin
yum install bareos-vadp-dumper
Did a pip2 install pyvmomi, still no luck
python2 setup.py install, installed a bunch of files under python2.7, still no luck for the rpm

At this point, I will just do a --nodeps and see if it works; hope this helps resolve the package issue
(0004039)
stephand   
2020-09-16 13:10   
You are right, we have a problem here for RHEL/CentOS 8 because EPEL 8 does not provide a python2-pyvmomi package.
It's also not easily possible to build a python2-pyvmomi package for el8 due to its missing python2 package dependencies.

Currently indeed the only way is to ignore dependencies for the package installation and use pip2 install pyvmomi.
Does that work for you?

I think we should remove the dependency on python-pyvmomi and add a hint in the documentation to use pip2 install pyvmomi.

For the upcoming Bareos version 20, we are already working on Python3 plugins, this will also fix the dependency problem.
(0004040)
Rotnam   
2020-09-16 15:22   
For the test I did, it worked fine, so I assume you can do it that way with --nodeps. I ended up putting this on hold; backing up just the disks and not the VM was a bit strange. Restoring locally worked, but not directly on vCenter (can't remember which one I tried). Will revisit this solution later.
(0004536)
stephand   
2022-03-14 15:42   
Since Bareos Version >= 21.0.0 the package bareos-vmware-plugin no longer includes a dependency on a pyVmomi package, because some Linux distributions don’t provide current versions. Consequently, pyVmomi must be either installed by using pip install pyvmomi or by manually installing a distribution provided pyVmomi package.
See https://docs.bareos.org/TasksAndConcepts/Plugins.html#vmware-plugin
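
A minimal sketch of the two options (the package name python3-pyvmomi is only an example of what a distribution might provide):

pip install pyvmomi                # install pyVmomi via pip
# or, where the distribution ships a suitable package:
# dnf install python3-pyvmomi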


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1431 [bareos-core] General major always 2022-03-08 20:37 2022-03-11 03:32
Reporter: backup1 Platform: Linux  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Newline characters stripped from configuration strings
Description:  Hi,

I'm trying to set a config value that includes newline characters (a.k.a. \n). This worked in Bareos 19.2, but the same config does not work in 21. It seems that the newlines are stripped when loading the config. I note that the docs say that strings can now be entered using a multi-line quoted format (for Bareos 20+).

The actual config setting is for NDMP files and specifying the NDMP environment MULTI_SUBTREE_NAMES.

This is what the config looks like:

FileSet {
  Name = "user_01"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA
userB
userC
userD"
    }
    File = "/vol0/user"
  }
}

The correctly formatted value will have newlines between the "userA", "userB", "userC" subdir names.

In bconsole "show filesets" has the names all concatenated together and the (NetApp) filer rejects the job saying "no directory userAuserBUserCUserD".
Tags:
Steps To Reproduce: Configure fileset with options string including newlines.

Load configuration.

Review configuration using "show filesets" and observe that newlines have been stripped.

I've also reviewed NDMP commands sent to NetApp and (with wireshark) and observe that the newlines are missing.
Additional Information: I believe the use-case for config file strings to include newlines was not considered in parser changes for multi-line quoted format. I'm no longer able to use MULTI_SUBTREE_NAMES for NDMP and have reverted to just doing full volume backups, which limits flexibility, but is working reliably.

Thanks,
Tom Rockwell
Attached Files:
Notes
(0004533)
bruno-at-bareos   
2022-03-09 11:40   
Inconsistencies between documentation / expectation / behaviour
loss of functionality between versions

The documentation at https://docs.bareos.org/master/Configuration/CustomizingTheConfiguration.html?highlight=multiline#quotes shows multi-line strings in an example, which leads to the expectation that those are kept as multi-line values.

Having a configured fileset with new multiline syntax

FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA"
             "userB"
             "userC"
             "userD"
    }
    File = "/vol0/user"
  }
}

when displayed in bconsole
*show fileset=NDMP_test
FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userAuserBuserCuserD"
    }
    File = "/vol0/user"
  }
}
(0004534)
backup1   
2022-03-11 03:32   
Hi,

Thanks for looking at this. For reference, the newlines are needed to use the MULTI_SUBTREE_NAMES functionality on NetApp. https://library.netapp.com/ecmdocs/ECMP1196992/html/GUID-DE8BF53F-706A-48CA-A6FD-ACFDC2D0FE8A.html

From the linked doc, "Multiple subtrees are specified in the string which is a newline-separated, null-terminated list of subtree names."

I looked for other use-cases to put newlines into strings in Bareos config, but didn't find any, so I realize this is a bit of a corner-case. Still, NDMP is useful for NetApp, and it would be unfortunate to lose this functionality.

Thanks again,
Tom Rockwell


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1430 [bareos-core] webui major always 2022-02-23 20:19 2022-03-03 15:11
Reporter: jason.agilitypr Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: 20.04  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Version of Jquery is old and vulnerable
Description: the version of jquery that bareos webui is running is old and out of date and has known security vulnerabilities (xss attacks)

/*! jQuery v3.2.0 | (c) JS Foundation and other contributors | jquery.org/license */
v3.2.0 was release on March 16, 2017

https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/
"The HTML parser in jQuery <=3.4.1 usually did the right thing, but there were edge cases where parsing would have unintended consequences. "

the current version of jquery is 3.6.0


Tags:
Steps To Reproduce: check version of jquery loaded in bareos webui via browser right click -> view source
Additional Information: the related libraries, including moment and excanvas, may also need updating.
Attached Files:
Notes
(0004531)
frank   
2022-03-03 11:11   
Fix committed to bareos master branch with changesetid 15977.
(0004532)
frank   
2022-03-03 15:11   
Fix committed to bareos bareos-19.2 branch with changesetid 15981.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1426 [bareos-core] director minor always 2022-02-07 10:37 2022-02-24 11:46
Reporter: mschiff Platform: Linux  
Assigned To: stephand OS: any  
Priority: normal OS Version: 3  
Status: acknowledged Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos send useless operator "mount" messages
Description: The default configuration has messages/Standard.conf which contains:

operator = <email> = mount

which should send an email if an operator is required for a job to continue.

But these mails will also be triggered on a busy bareos-sd with multiple virtual drives and multiple jobs running, when a job just needs to wait a bit for a volume to become available.
Every month, when our systems are doing virtual full backups at night, we get lots of mails like:

06-Feb 23:37 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0034" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File

But in the morning, all jobs are finished successfully.

So when one job is reading a volume and another job is waiting for the same volume, an email is triggered. But after waiting a couple of minutes, this "issue" solves itself.

It should be possible to set some timeout after which such messages are sent, so that they are only sent for really hanging jobs.

This is part of the joblog:

 2022-02-06 23:25:38 kilo-dir JobId 58793: Start Virtual Backup JobId 58793, Job=BackupIndia.2022-02-06_23.15.01_31
 2022-02-06 23:25:38 kilo-dir JobId 58793: Consolidating JobIds 58147,58164,58182,58200,58218,58236,58254,58272,58290,58308,58326,58344,58362,58380,58398,58416,58434,58452,58470,58488,58506
,58524,58542,58560,58578,58596,58614,58632,58650,58668,58686,58704,58722,58740,58758,58764
 2022-02-06 23:25:40 kilo-dir JobId 58793: Bootstrap records written to /var/lib/bareos/kilo-dir.restore.16.bsr
 2022-02-06 23:25:40 kilo-dir JobId 58793: Connected Storage daemon at kilo.sys4.de:9103, encryption: TLS_AES_256_GCM_SHA384 TLSv1.3
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0001" to read.
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0002" to write.
 2022-02-06 23:26:42 kilo-sd JobId 58793: Ready to read from volume "VolFull-0165" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:42 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0165" to file:block 0:3367481982.
 2022-02-06 23:26:53 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0165"
 2022-02-06 23:26:53 kilo-sd JobId 58793: Ready to read from volume "VolFull-0168" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:53 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0168" to file:block 2:1033779909.
 2022-02-06 23:26:54 kilo-sd JobId 58793: End of Volume at file 2 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0168"
 2022-02-06 23:26:54 kilo-sd JobId 58793: Ready to read from volume "VolFull-0169" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:54 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0169" to file:block 0:64702.
 2022-02-06 23:27:03 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0169"
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/vol_mgr.cc:542 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=VolIncr-0
022 from dev="MultiFileStorage0004" (/srv/backup/bareos) to "MultiFileStorage0001" (/srv/backup/bareos)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:268 Could not reserve volume VolIncr-0022 on "MultiFileStorage0001" (/srv/backup/bareo
s)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0022" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0022" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0022" to file:block 0:3331542115.
 2022-02-06 23:32:03 kilo-sd JobId 58793: End of Volume at file 0 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolIncr-0022"
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0023" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0023" to file:block 0:750086502.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004526)
stephand   
2022-02-24 11:46   
Thanks for reporting this issue. I also already noticed that problem.
It will be very hard to fix this properly without a complete redesign of the whole reservation logic, which would be a huge effort.
But meanwhile we could think about a workaround to mitigate this somehow.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1349 [bareos-core] file daemon major always 2021-05-07 18:29 2022-02-02 10:47
Reporter: oskarsr Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: urgent OS Version: 9  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Fatal error in bareos-fd on the backup following a successful backup of a PostgreSQL database using the PostgreSQL Plugin
Description: bareos-fd reports a fatal error on the backup following a successful backup of a PostgreSQL database using the PostgreSQL Plugin.
When the client daemon is restarted, the backup of the PostgreSQL database runs without the error, but just once. On the second attempt, the error appears again.

it-fd JobId 118: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in
import BareosFdPluginPostgres
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception
Tags:
Steps To Reproduce: When the backup is executed right after the client daemon restart, the debug log is as follows:

it-fd (100): filed/fileset.cc:271-150 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-150 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-150 plugin_ctx=7f3964015250 JobId=150
it-fd (150): filed/fd_plugins.cc:229-150 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-150 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-150 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1006-150 python3-fd: Successfully loaded module with name bareos-fd-postgres
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginPostgres with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginLocalFilesBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 2
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=2
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 4
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=4
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 16
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=16
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 19
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=19
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 3
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=3
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 5
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=5


But when the backup is started again for the same client, the log contains the following:

it-fd (100): filed/fileset.cc:271-151 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-151 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-151 plugin_ctx=7f39641d1b60 JobId=151
it-fd (150): filed/fd_plugins.cc:229-151 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-151 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-151 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1000-151 python3-fd: Failed to load module with name bareos-fd-postgres
it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

it-fd (150): filed/fd_plugins.cc:480-151 Cancel return from GeneratePluginEvent
it-fd (100): filed/fileset.cc:271-151 N
it-fd (100): filed/dir_cmd.cc:462-151 <dird: getSecureEraseCmd
Additional Information:
System Description
Attached Files:
Notes
(0004129)
oskarsr   
2021-05-12 17:33   
(Last edited: 2021-05-12 17:34)
Has anybody tried to back up a PostgreSQL DB using the bareos-fd-postgres python plugin?

(0004263)
perkons   
2021-09-13 15:38   
We are experiencing exactly the same issue on Ubuntu 18.04.
(0004297)
bruno-at-bareos   
2021-10-11 13:31   
To both of you: could you share the installed bareos packages (and confirm they come from bareos.org), the python3 version used,
and also the related python packages (main core + psycopg) and where they come from?
(0004298)
perkons   
2021-10-11 14:52   
We installed the bareos-filedaemon from https://download.bareos.org
The python modules are installed from Ubuntu repositories. The reason we use both python and python3 modules is that if one is missing the backups fail. This seems pretty wrong to me, but as I understand it there is active work to migrate to python3.
We also have both of these python modules (python2 and python3) on our RHEL based hosts and have not had any problems with the PostgreSQL Plugin.

# dpkg -l | grep psycopg
ii python-psycopg2 2.8.4-1~pgdg18.04+1 amd64 Python module for PostgreSQL
ii python3-psycopg2 2.8.6-2~pgdg18.04+1 amd64 Python 3 module for PostgreSQL
# dpkg -l | grep dateutil
ii python-dateutil 2.6.1-1 all powerful extensions to the standard Python datetime module
ii python3-dateutil 2.6.1-1 all powerful extensions to the standard Python 3 datetime module
# dpkg -l | grep bareos
ii bareos-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-filedaemon 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-filedaemon-postgresql-python-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon PostgreSQL plugin
ii bareos-filedaemon-python-plugins-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin common files
ii bareos-filedaemon-python3-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin
# dpkg -s bareos-filedaemon
Package: bareos-filedaemon
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 384
Maintainer: Joerg Steffens <joerg.steffens@bareos.com>
Architecture: amd64
Source: bareos
Version: 20.0.1-3
Replaces: bacula-fd
Depends: bareos-common (= 20.0.1-3), lsb-base (>= 3.2-13), lsof, libc6 (>= 2.14), libgcc1 (>= 1:3.0), libjansson4 (>= 2.0.1), libstdc++6 (>= 5.2), zlib1g (>= 1:1.1.4)
Pre-Depends: debconf (>= 1.4.30) | debconf-2.0, adduser
Conflicts: bacula-fd
Conffiles:
 /etc/init.d/bareos-fd bcc61ad57fde8a771a5002365130c3ec
Description: Backup Archiving Recovery Open Sourced - file daemon
 Bareos is a set of programs to manage backup, recovery and verification of
 data across a network of computers of different kinds.
 .
 The file daemon has to be installed on the machine to be backed up. It is
 responsible for providing the file attributes and data when requested by
 the Director, and also for the file system-dependent part of restoration.
 .
 This package contains the Bareos File daemon.
Homepage: http://www.bareos.org/
# cat /etc/apt/sources.list.d/bareos-20.list
deb https://download.bareos.org/bareos/release/20/xUbuntu_18.04 /
#
(0004299)
bruno-at-bareos   
2021-10-11 15:48   
Thanks for your report. As you stated, the python/python3 situation is far from ideal, but PRs are progressing and the end of the tunnel is near.
Also, as you mentioned, there is no trouble on RHEL systems; I'm aware of that too.

I would try to use only the python2 code on such a version.
I have made a note about testing that with the future new code on Ubuntu 18... but I just can't say when.
(0004497)
bruno-at-bareos   
2022-02-02 10:46   
For the issue reported there is something that looks wrong:

File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

Here psycopg2 is loaded from /usr/local/lib/python3.5.

And then

it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

Here psycopg2 is loaded from /usr/lib/python3.

So it seems you have a mixed python environment, which creates strange behaviour because the module loaded is not always the same.
Our best advice would be to clean up the global environment and make sure only one consistent version is used for bareos.

Also, python3 support has been greatly improved in Bareos 21.
Closing, as we are not able to reproduce such an environment.

By the way, the postgresql plugin is tested each time the code is updated.
(0004498)
bruno-at-bareos   
2022-02-02 10:47   
Mixed python versions used with different psycopg2 installations: /usr/local/lib/python3.5 and /usr/lib/python3
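A quick way to confirm which psycopg2 installation a given interpreter actually picks up (a minimal check added for illustration, not part of the original report; run it with the same interpreter the file daemon plugin uses):

```
# Print the Python version and the path of the psycopg2 module being imported.
python3 -c 'import sys, psycopg2; print(sys.version); print(psycopg2.__file__)'

# If this prints a path under /usr/local/lib/... while the packaged module lives
# under /usr/lib/python3/dist-packages, two installations are mixed on the host.
```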


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1418 [bareos-core] storage daemon major always 2022-01-04 14:23 2022-01-31 09:34
Reporter: Scorpionking83 Platform: Linux  
Assigned To: bruno-at-bareos OS: RHEL  
Priority: immediate OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Still autoprune and recycle not working in Bareos 19.2.7
Description: Dear developers,

I still have a problem with autoprune and recycling tapes:
1. Everything works, but when the maximum number of volume tapes is reached, with a retention of 90 days, no backup can be created any more. I then update the Incremental pool:
update --> Option 2 "Pool from resource" --> Option 3 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool --> Option 1 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 2 Full
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 3 Incremental

2. I get the following error:
Volume "Incrementail-0001" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned

Max volume tapes is set to 400

But why do autoprune and recycle not work if the maximum number of volume tapes has been reached and the retention period has not yet expired?
Is it also possible to delete old tapes from disk and from the database?

I need an answer soon.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004451)
Scorpionking83   
2022-01-04 14:36   
Why close this? The issue is not resolved.
(0004452)
bruno-at-bareos   
2022-01-04 14:39   
This issue is the same as report 001318, which was made by the same user.
This is clearly a duplicate case.
(0004493)
Scorpionking83   
2022-01-29 17:14   
Can someone please check my other bug report 0001318?
I am still looking for a solution.
(0004496)
bruno-at-bareos   
2022-01-31 09:34   
duplicate of 0001318


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1422 [bareos-core] General major always 2022-01-20 11:58 2022-01-27 11:49
Reporter: niklas.skog Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 11  
Status: confirmed Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Bareos Libcloud Plugin incompatible
Description: The goal is to back up S3 buckets using Bareos.

Situation:

Installed the Bareos 21.0.0-4 and "bareos-filedaemon-libcloud-python-plugin" on Debian 11 from "https://download.bareos.org/bareos/release/21/Debian_11"

Installed the "python3-libcloud" package on which the Plugin "bareos-filedaemon-libcloud-python-plugin" depends.

Configured the plugin according to https://docs.bareos.org/TasksAndConcepts/Plugins.html

Trying to start a job that should back up the data from S3, I get the following error in the bconsole output:
---
20-Jan 08:27 bareos-dir JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Using Device "FileStorage" to write.
20-Jan 08:27 bareos-dir JobId 13: Connected Client: backup-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Handshake: Immediate TLS
20-Jan 08:27 backup-fd JobId 13: Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)
20-Jan 08:27 backup-fd JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 backup-fd JobId 13: Fatal error: TwoWayAuthenticate failed, because job was canceled.
20-Jan 08:27 backup-fd JobId 13: Fatal error: Failed to authenticate Storage daemon.
20-Jan 08:27 bareos-dir JobId 13: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage
[...]
---

and the job fails.

Thus, main message:

"Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)"

which is understandable, because Debian 11 brings python3.9.*:

---
root@backup:/etc/bareos/bareos-dir.d/fileset# apt-cache policy python3
python3:
  Installed: 3.9.2-3
  Candidate: 3.9.2-3
  Version table:
 *** 3.9.2-3 500
        500 http://cdn-aws.deb.debian.org/debian bullseye/main amd64 Packages
        100 /var/lib/dpkg/status
root@backup:/etc/bareos/bareos-dir.d/fileset#
---


Accordingly, the plugin is incompatible with the current Debian version.
Tags: libcloud, plugin, s3
Steps To Reproduce: * install stock debian 11
* install & configure bareos 21, "python3-libcloud" and "bareos-filedaemon-libcloud-python-plugin"
* configure the plugin according to https://docs.bareos.org/TasksAndConcepts/Plugins.html
* try to run a job that is backing up an S3-bucket
* this will fail
Additional Information:
Attached Files:
Notes
(0004487)
arogge   
2022-01-27 11:49   
You cannot use Python 3.9 or newer with the python libcloud plugin due to a limitation in Python 3.9.
We're looking into this, but it isn't easy to work around that limitation.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1409 [bareos-core] director tweak always 2021-12-19 00:33 2022-01-13 14:22
Reporter: jalseos Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: low OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: DB error on restore with ExitOnFatal=true
Description: I was trying to use ExitOnFatal=true in director and noticed a persistent error when trying to initiate a restore:

bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error

The error does not happen with unset/default ExitOnFatal=false

The postgresql (11) log reveals:
STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist
STATEMENT: DROP TABLE temp1
ERROR: table "temp1" does not exist

I found the SQL statements in these files in the code:
/core/src/cats/dml/0018_uar_del_temp
/core/src/cats/dml/0019_uar_del_temp1

I am wondering if something like the following might be in order (similar to 0012_drop_deltabs.postgresql):
/core/src/cats/dml/0018_uar_del_temp.postgres
DROP TABLE IF EXISTS temp
Tags:
Steps To Reproduce: $ bconsole
* restore
9
bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error
Additional Information:
System Description
Attached Files:
Notes
(0004400)
bruno-at-bareos   
2021-12-21 15:58   
The behavior is to exit in case of an error when ExitOnFatal = true.

STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist

This is an error, and the product strictly obeys the Exit On Fatal parameter.

In future versions, where only PostgreSQL will be kept as the database and older PostgreSQL releases will no longer need to be supported, the code can be reviewed to chase every DROP TABLE that lacks an IF EXISTS.

Files to change

```
core/src/cats/dml/0018_uar_del_temp:DROP TABLE temp
core/src/cats/dml/0019_uar_del_temp1:DROP TABLE temp1
core/src/cats/mysql_queries.inc:"DROP TABLE temp "
core/src/cats/mysql_queries.inc:"DROP TABLE temp1 "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp1 "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp1 "
core/src/dird/query.sql:!DROP TABLE temp;
core/src/dird/query.sql:!DROP TABLE temp2;
```
Do you want to propose a PR for this?
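For illustration, a minimal sketch of the kind of change the listed files would need, using PostgreSQL's standard conditional drop syntax (the statements below follow the reporter's suggestion and are not the actual committed fix):

```
-- 0018_uar_del_temp / 0019_uar_del_temp1 (sketch): drop the scratch tables
-- only if they exist, so the restore dialog no longer raises a fatal error
-- when ExitOnFatal = true.
DROP TABLE IF EXISTS temp;
DROP TABLE IF EXISTS temp1;
```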
(0004405)
bruno-at-bareos   
2021-12-21 16:50   
PR proposed:
https://github.com/bareos/bareos/pull/1035

Once the PR has been built there will be some testing packages available; would you like to test them?
(0004443)
jalseos   
2022-01-02 16:52   
Hi, thank you for looking into this issue! I will try to test the built package (deb preferred) if a subsequent code/package "downgrade" (i.e. no Catalog DB changes, ...) to a published Community Edition release remains possible afterwards.
(0004473)
bruno-at-bareos   
2022-01-13 14:22   
Fix committed to bareos master branch with changesetid 15753.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1416 [bareos-core] General minor have not tried 2021-12-30 11:43 2022-01-11 21:50
Reporter: hostedpower Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: low OS Version: 10  
Status: assigned Product Version: 21.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Bareos python3 contrib plugin filedaemon
Description: Hi,


We used a version of the bareos contrib mysql plugin which seems to support Python3; however, in recent builds the file seems to have regressed to being only compatible with python2 again.

Tags:
Steps To Reproduce:
Additional Information: Attached you can find the python3-compatible version, which was previously found on git in the "dev/joergs/master/contrib-build" branch; however, since that branch was updated, the older python2 version is back in there.
System Description
Attached Files: MySQL-Python3.zip (3,594 bytes) 2021-12-30 11:43
https://bugs.bareos.org/file_download.php?file_id=488&type=bug
Notes
(0004469)
joergs   
2022-01-11 21:32   
I just verified this. In my environment, the module is working fine with Python3.
I even added a systemtest to verify this: https://github.com/bareos/bareos/tree/dev/joergs/master/contrib-build/systemtests/tests/py3plug-fd-contrib-mysql_dump

However, I guess you have already noticed that the path and the initialisation of the module have changed to the bareos_mysql_dump directory. Maybe this is not reflected in your environment?

Please be aware that we are currently in the process of finding a reasonable file and directory structure for these plugins.

Without further information, I'd judge this bug entry as invalid.
(0004470)
hostedpower   
2022-01-11 21:39   
I think you could be right; I tried the v21 one: https://github.com/bareos/bareos/blob/bareos-21/contrib/fd-plugins/mysql-python/BareosFdMySQLclass.py

So master is working, but not v21?
(0004471)
joergs   
2022-01-11 21:50   
Correct. v21 should be identical to v20, and both versions only work with Python 2.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1389 [bareos-core] installer / packages minor always 2021-09-20 12:23 2022-01-05 13:23
Reporter: colttt Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: no repository for debian 11
Description: Debian 11 (bullseye) was released on 14th August 2021, but there is no bareos repository for it yet.
I would appreciate it if Debian 11 were supported as well.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004276)
bruno-at-bareos   
2021-09-27 13:37   
Thanks for your report.

Starting September 14th, Debian 11 is available for all customers with a subscription contract.
Nightly builds are also made for Debian 11, and it will be part of the Bareos 21 release.
(0004292)
brechsteiner   
2021-10-02 22:51   
What about the Community Repository? https://download.bareos.org/bareos/release/20/
(0004293)
bruno-at-bareos   
2021-10-04 09:30   
Sorry if it wasn't clear in my previous statement: Debian 11 will be available for the next release, Bareos 21.
(0004455)
bruno-at-bareos   
2022-01-05 13:23   
Community repository published: https://download.bareos.org/bareos/release/21/Debian_11/


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1408 [bareos-core] director minor have not tried 2021-12-18 20:32 2021-12-28 09:44
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "Backup OK" email message subject line no longer displays the job name
Description: In bareos 18, backups which concluded successfully would be followed up by an email with a subject line indicating the name of the specific job that ran. However, in bareos 20, the subject line now only indicates the name of the client for which the job ran.

This is a minor nuisance, but I found the more distinguishing subject line more useful. In a case where there are multiple backup jobs for a single client and one but not all of them fail, it is not immediately obvious -- as it was in bareos 18 -- which job for that client failed.
Tags:
Steps To Reproduce: Run two jobs on a host which has more than 1 backup job associated with it.
The email subject lines will be identical even though they are for 2 different jobs.
Additional Information:
System Description
Attached Files:
Notes
(0004401)
bruno-at-bareos   
2021-12-21 16:05   
Maybe an example of the configuration file used would help.

From the code we can see that the line has not changed since 2016:
67ad14188a src/defaultconfigs/bareos-dir.d/messages/Standard.conf.in (Joerg Steffens 2016-08-01 14:03:06 +0200 5) mailcommand = "@bindir@/bsmtp -h @smtp_host@ -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
(0004415)
embareossed   
2021-12-24 17:58   
Here is what my configs look like:
# grep mailcommand *
Daemon.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
Standard.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"

All references to message resources are for Standard, except for the director, which uses Daemon. I copied most of my config files from the old director (bareos 18) to the setup for the new director (bareos 20); I did not make any changes to messages, as far as I recall. I'll take a deeper look at this and see what I can figure out. Maybe bsmtp semantics have changed?
(0004416)
embareossed   
2021-12-24 18:12   
OK, it appears that in bareos 20, as per the documentation, %c stands for the client, not the job name (which would be %n). However, in bareos 18 and prior, this same setup seems to have generated the job name, not the client name. So it appears that the semantics have changed to properly implement the documented purpose of the %c macro (and perhaps others; I haven't tested those).

Changing the macro to %n works as desired.
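As a minimal sketch of the adjustment described above (assuming the rest of the line stays exactly as in the reporter's Standard.conf), only the %c placeholder changes to %n:

```
# Standard.conf (sketch): use %n (job name) instead of %c (client name)
# in the mail subject so multiple jobs per client stay distinguishable.
mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %n %l\" %r"
```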
(0004428)
bruno-at-bareos   
2021-12-28 09:44   
Adapting the configuration to follow the documentation resolves this.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1407 [bareos-core] General minor always 2021-12-18 20:26 2021-12-28 09:43
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Run before script hangs unless debug tracing enabled in script
Description: A "run before job script" which has been working since bareos 18 no longer works in bareos 20 (20.0.3 or 20.0.4). This is a simple bash shell script that performs some chores before the backup. It only sends output to stdout, not stderr (I've checked this). The script works properly in bareos 18, but causes the job to hang in bareos 20.

The script is actually being run on a remote file daemon. This may be a clue to the behavior. But again, this has been working in bareos 18.

Interestingly, I found that enabling bash tracing (the -xv options) inside the script itself, to try to see what was causing the hang, actually alleviated the hang!
Tags:
Steps To Reproduce: Create a bash shell script on a remote bareos 20 client.
Create a job in a bareos 20 director on a local system that calls a "run before job script" on the remote client.
Run the job.
If this is reproducible, the job will hang when it reaches the call to the remote script.

If it is reproducible, try setting traces in the bash script.

Additional Information: I built the 20.0.3 executables from the git source code on a devuan beowulf host and distributed the packages to the bareos director server and the bareos file daemon client, both of which are also devuan beowulf.
System Description
Attached Files:
Notes
(0004403)
bruno-at-bareos   
2021-12-21 16:10   
Would you mind sharing the job definition so we can try to reproduce?
The script would be nice too, but perhaps it does something secret.
(0004404)
bruno-at-bareos   
2021-12-21 16:17   
I can't reproduce it; it works here

with the following job definition:

```
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Start Backup JobId 8204, Job=yoda.2021-12-21_16.14.10_06
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Using Device "admin" to write.
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Probing client protocol... (result will be saved until config reload)
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Client: yoda-fd at yoda.labaroche.ioda.net:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Handshake: Immediate TLS 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: shell command: run ClientBeforeJob "sh -c 'snapper list && snapper -c ioda list'"
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+----------------------------------+------+------------+----------+-----------------------+--------------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 1* | single | | Sun 21 Jun 2020 05:17:47 PM CEST | root | 92.00 KiB | | first root filesystem |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 4803 | single | | Fri 01 Jan 2021 12:00:23 AM CET | root | 13.97 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 10849 | single | | Wed 01 Sep 2021 12:00:02 AM CEST | root | 12.58 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 11582 | single | | Fri 01 Oct 2021 12:00:01 AM CEST | root | 7.90 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 12342 | single | | Mon 01 Nov 2021 12:00:08 AM CET | root | 8.07 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Wed 01 Dec 2021 12:00:07 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13272 | pre | | Wed 08 Dec 2021 06:23:04 PM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13273 | post | 13272 | Wed 08 Dec 2021 06:46:13 PM CET | root | 3.28 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13278 | pre | | Wed 08 Dec 2021 10:11:11 PM CET | root | 304.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13279 | post | 13278 | Wed 08 Dec 2021 10:11:26 PM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13447 | pre | | Wed 15 Dec 2021 09:57:35 PM CET | root | 48.00 KiB | number | zypp(zypper) | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13448 | post | 13447 | Wed 15 Dec 2021 09:57:42 PM CET | root | 48.00 KiB | number | | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13499 | single | | Sat 18 Dec 2021 12:00:06 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13523 | single | | Sun 19 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13547 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13557 | pre | | Mon 20 Dec 2021 09:27:21 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13559 | pre | | Mon 20 Dec 2021 10:30:43 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13560 | post | 13559 | Mon 20 Dec 2021 10:52:01 AM CET | root | 1.76 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13562 | pre | | Mon 20 Dec 2021 11:53:40 AM CET | root | 352.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13563 | post | 13562 | Mon 20 Dec 2021 11:53:56 AM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13576 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13585 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13586 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13587 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13588 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13589 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13590 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13591 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13592 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | 92.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+---------------------------------+------+----------+-------------+---------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13050 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13061 | single | | Mon 20 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13062 | single | | Mon 20 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13063 | single | | Mon 20 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13064 | single | | Mon 20 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13065 | single | | Mon 20 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13066 | single | | Mon 20 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13067 | single | | Mon 20 Dec 2021 05:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13068 | single | | Mon 20 Dec 2021 06:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13069 | single | | Mon 20 Dec 2021 07:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13070 | single | | Mon 20 Dec 2021 08:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13071 | single | | Mon 20 Dec 2021 09:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13072 | single | | Mon 20 Dec 2021 10:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13073 | single | | Mon 20 Dec 2021 11:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13074 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13075 | single | | Tue 21 Dec 2021 01:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13076 | single | | Tue 21 Dec 2021 02:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13077 | single | | Tue 21 Dec 2021 03:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13078 | single | | Tue 21 Dec 2021 04:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13079 | single | | Tue 21 Dec 2021 05:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13080 | single | | Tue 21 Dec 2021 06:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13081 | single | | Tue 21 Dec 2021 07:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13082 | single | | Tue 21 Dec 2021 08:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13083 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13084 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13086 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13087 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13088 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13089 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13090 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: Extended attribute support is enabled
 2021-12-21 16:14:36 yoda-fd JobId 8204: ACL support is enabled
 
 RunScript {
    RunsWhen = Before
    RunsOnClient = Yes
    FailJobOnError = No
    Command = "sh -c 'snapper list && snapper -c ioda list'"
  }

```
(0004414)
embareossed   
2021-12-24 17:45   
Nothing secret really. It's just a script that runs "estimate" and parses the output for the size of the backup. Then it decides (based on a value in a config file for that backup name) whether to proceed. This way, estimates can be used to decide whether to run a backup at all. This was my workaround for my request in https://bugs.bareos.org/view.php?id=1135.

I did some upgrades recently and the problem has disappeared. So you can close this.
(0004427)
bruno-at-bareos   
2021-12-28 09:43   
The upgrade solved this.
estimate can take time, and from the bconsole point of view it can look stalled or blocked; when you use the "listing" instruction you'll see the file-by-file output as it proceeds.
Closing.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1413 [bareos-core] bconsole major always 2021-12-27 15:29 2021-12-28 09:38
Reporter: jcottin Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: high OS Version: 10  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.
Description: I configured the Always Incremental scheme using 2 different storages (FILE), as advised in the documentation.
-----------------------------------
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html?highlight=job#storages-and-pools

While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

It looks for AI-Incremental-vm-aiqi-linux-test-backup0012 in FileStorage-AI-Consolidated.
It should look for it in FileStorage-AI-Incremental.

Is there a problem with my setup?
Tags: always incremental, storage
Steps To Reproduce: Using bconsole I target a backup before: 2021-12-27 19:00:00
I can find 3 backups (1 Full, 2 Incremental)
=======================================================
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| 24 | F | 108,199 | 13,145,763,765 | 2021-12-25 08:06:41 | AI-Consolidated-vm-aiqi-linux-test-backup-0006 |
| 27 | I | 95 | 68,530 | 2021-12-25 20:00:04 | AI-Incremental-vm-aiqi-linux-test-backup0008 |
| 32 | I | 40 | 1,322,314 | 2021-12-26 20:00:09 | AI-Incremental-vm-aiqi-linux-test-backup0012 |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
-----------------------------------
$ cd /var/lib/mysql.dumps/wordpressdb/
cwd is: /var/lib/mysql.dumps/wordpressdb/
-----------------------------------
$ dir
-rw-r--r-- 1 0 (root) 112 (bareos) 1830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%create.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 149 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%tables
-rw-r--r-- 1 0 (root) 112 (bareos) 783 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_commentmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1161 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_comments.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 869 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_links.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 235966 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_options.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_postmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 3470 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_posts.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 770 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_relationships.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 838 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_taxonomy.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 780 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_termmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 814 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_terms.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1404 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_usermeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 983 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_users.sql.gz
-----------------------------------
$ cd ..
cwd is: /var/lib/mysql.dumps/
-----------------------------------
I mark the folder:
$ mark /var/lib/mysql.dumps/wordpressdb
15 files marked.
$ done
-----------------------------------
The job will require the following
   Volume(s) Storage(s) SD Device(s)
============================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

Volumes marked with "*" are online.
18 files selected to be restored.

Using Catalog "MyCatalog"
Run Restore job
JobName: RestoreFiles
Bootstrap: /var/lib/bareos/bareos-dir.restore.2.bsr
Where: /tmp/bareos-restores
Replace: Always
FileSet: LinuxAll
Backup Client: vm-aiqi-linux-test-backup-fd
Restore Client: vm-aiqi-linux-test-backup-fd
Format: Native
Storage: FileStorage-AI-Consolidated
When: 2021-12-27 22:10:13
Catalog: MyCatalog
Priority: 10
Plugin Options: *None*
OK to run? (yes/mod/no): yes

I get these two messages.
============================================
27-Dec 22:15 bareos-sd JobId 43: Warning: stored/acquire.cc:286 Read open device "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated) Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" failed: ERR=stored/dev.cc:716 Could not open: /var/lib/bareos/storage-AI-Consolidated/AI-Incremental-vm-aiqi-linux-test-backup0012, ERR=No such file or directory

27-Dec 22:15 bareos-sd JobId 43: Please mount read Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" for:
    Job: RestoreFiles.2021-12-27_22.15.29_31
    Storage: "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated)
    Pool: Incremental-BareOS
    Media type: File
============================================

Bareos tries to find AI-Incremental-vm-aiqi-linux-test-backup0012 in the wrong storage.
Additional Information:
===========================================
Job {
  Name = vm-aiqi-linux-test-backup-job
  Client = vm-aiqi-linux-test-backup-fd

  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 30 days
  Always Incremental Keep Number = 15
  Always Incremental Max Full Age = 60 days

  Level = Incremental
  Type = Backup
  FileSet = "LinuxAll-vm-aiqi-linux-test-backup" # LinuxAll fileset (0000013)
  Schedule = "WeeklyCycleCustomers"
  Storage = FileStorage-AI-Incremental
  Messages = Standard
  Pool = AI-Incremental-vm-aiqi-linux-test-backup
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = AI-Consolidated-vm-aiqi-linux-test-backup # write Full Backups into "Full" Pool (0000005)
  Incremental Backup Pool = AI-Incremental-vm-aiqi-linux-test-backup # write Incr Backups into "Incremental" Pool (0000011)

  Enabled = yes

  RunScript {
    FailJobOnError = Yes
    RunsOnClient = Yes
    RunsWhen = Before
    Command = "sh /SCRIPTS/mysql/pre.mysql.sh"
  }

  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}
===========================================
Pool {
  Name = AI-Consolidated-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Full Backups be kept? (0000006)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Consolidated-vm-aiqi-linux-test-backup-" # Volumes will be labeled "Full-<volume-id>"
  Storage = FileStorage-AI-Consolidated
}
===========================================
Pool {
  Name = AI-Incremental-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Incremental Backups be kept? (0000012)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Incremental-vm-aiqi-linux-test-backup" # Volumes will be labeled "Incremental-<volume-id>"
  Volume Use Duration = 23h
  Storage = FileStorage-AI-Incremental
  Next Pool = AI-Consolidated-vm-aiqi-linux-test-backup
}

Both volume are available in their respective storage.

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006
-rw-r----- 1 bareos bareos 26349467738 Dec 25 08:09 /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
-rw-r----- 1 bareos bareos 1329612 Dec 26 20:00 /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
System Description
Attached Files: Bareos-always-incremental-restore-fail.txt (7,259 bytes) 2021-12-27 15:53
https://bugs.bareos.org/file_download.php?file_id=487&type=bug
Notes
(0004421)
jcottin   
2021-12-27 15:53   
Output with TXT might be easier to read.
(0004422)
jcottin   
2021-12-27 16:32   
Device {
  Name = FileStorage-AI-Consolidated
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Consolidated
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Device {
  Name = FileStorage-AI-Incremental
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Incremental
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Consolidated
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Incremental
  Media Type = File
}
(0004423)
jcottin   
2021-12-27 16:43   
The documentation says 2 storages,
but I had created 2 devices:

1 storage => 1 device.

I moved the data from one device (FILE: directory) to the other, so now it is:
2 storages => 1 device.

Problem solved.
(0004425)
bruno-at-bareos   
2021-12-28 09:37   
Thanks for sharing. Yes, when the documentation talks about 2 storages it means from the director's point of view, not the bareos storage daemon having 2 devices.
I'll close the issue.
(0004426)
bruno-at-bareos   
2021-12-28 09:38   
AI needs 2 storages on the director, but one device able to read/write both the Incremental and the Full volumes.
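A minimal sketch of that layout, assuming a single hypothetical SD device called "FileStorage-AI" (resource names, passwords and paths below are illustrative placeholders, not the reporter's actual configuration):

```
# Director: two Storage resources, as the Always Incremental docs expect,
# both pointing at the same storage daemon device.
Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server
  Password = "secret"            # placeholder
  Device = FileStorage-AI        # same SD device for both storages
  Media Type = File
}
Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server
  Password = "secret"            # placeholder
  Device = FileStorage-AI        # same SD device for both storages
  Media Type = File
}

# Storage daemon: one device that can read and write both the
# Incremental and the Consolidated volumes.
Device {
  Name = FileStorage-AI
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```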


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1339 [bareos-core] webui minor always 2021-04-19 11:49 2021-12-23 08:39
Reporter: khvalera Platform:  
Assigned To: frank OS:  
Priority: normal OS Version: archlinux  
Status: assigned Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: When going to the Run jobs tab I get an error
Description: When going to the Run jobs tab (https://127.0.0.1/bareos-webui/job/run/) I get an error:
Notice: Undefined index: value in /usr/share/webapps/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php on line 152
Tags: webui
Steps To Reproduce: https://127.0.0.1/bareos-webui/job/run/
Additional Information:
Attached Files: Снимок экрана_2021-04-19_12-52-56.png (110,528 bytes) 2021-04-19 11:54
https://bugs.bareos.org/file_download.php?file_id=464&type=bug
Notes
(0004112)
khvalera   
2021-04-19 11:54   
I am attaching a screenshot:
(0004156)
khvalera   
2021-06-11 22:36   
You need to correct the expression: preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches);
(0004157)
khvalera   
2021-06-11 22:39   
I temporarily corrected it myself so that the error does not appear: preg_match('/\s*Pool\s*=?(?<value>.*)(?(1)\1|)/i', $result, $matches);
But most likely this is not the right solution.
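Whatever pattern is finally used, a defensive check around the match would avoid the notice when the directive is missing from a resource. A minimal sketch (it reuses the pattern quoted in note 0004156; the $nextPool variable is hypothetical, not the actual PoolModel.php code):

```
// PoolModel.php (sketch): guard the named capture group so a resource
// without a "Next Pool" line no longer triggers "Undefined index: value".
preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches);
$nextPool = isset($matches['value']) ? trim($matches['value']) : null;
```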


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1397 [bareos-core] documentation minor always 2021-11-01 16:45 2021-12-21 16:07
Reporter: Norst Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "Tapespeed and blocksizes" chapter location is wrong
Description: "Tapespeed and blocksizes" chapter is a general topic. Therefore, it must be moved away from "Autochanger Support" page/category.
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#setblocksizes
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004360)
bruno-at-bareos   
2021-11-25 10:35   
Would you like to propose a PR changing its place? It would be really appreciated.
Are you doing backups to tape on a single drive? (Most of the use cases we have seen actually use an autochanger; that's why the chapter is currently there.)
(0004376)
Norst   
2021-11-30 21:01   
(Last edited: 2021-11-30 21:03)
Yes, I use a standalone tape drive, but for infrequent, long-term archiving rather than regular backups.

PR to move "Tapespeed and blocksizes" one level up, to "Tasks and Concepts": https://github.com/bareos/bareos/pull/1009

(0004383)
bruno-at-bareos   
2021-12-09 09:42   
Did you see the comment in the PR?
(0004402)
bruno-at-bareos   
2021-12-21 16:07   
PR#1009 merged last week.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1369 [bareos-core] webui crash always 2021-07-12 11:54 2021-12-21 13:58
Reporter: jarek_herisz Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: webui tries to load a nonexistent file
Description: When the Polish language is chosen at the login stage, the webui tries to load the file:
bareos-webui/js/locale/pl_PL/LC_MESSAGES/pl_PL.po

Such a file does not exist, which results in an error:
i_gettext.js:413 iJS-gettext:'try_load_lang_po': failed. Unable to exec XMLHttpRequest for link

The remaining javascript is terminated and the interface becomes inoperable.
Tags: webui
Steps To Reproduce: With version 20.0.1
On the webui login page, select Polish.
Additional Information:
System Description
Attached Files: Przechwytywanie.PNG (78,772 bytes) 2021-07-19 10:36
https://bugs.bareos.org/file_download.php?file_id=472&type=bug
Notes
(0004182)
jarek_herisz   
2021-07-19 10:36   
System:
root@backup:~# cat /etc/debian_version
10.10
(0004206)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15093.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1324 [bareos-core] webui major always 2021-03-02 10:26 2021-12-21 13:57
Reporter: Emmanuel Garette Platform: Linux Ubuntu  
Assigned To: frank OS: 20.04  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Infinite loop when trying to log in with an invalid account
Description: I'm using this community version of Webui: http://download.bareos.org/bareos/release/20/xUbuntu_20.04/

When I try to log in with an invalid account, the webui returns nothing and apache seems to run in an infinite loop. The log file grows rapidly.

I think the problem is in these two lines:

          $send = fwrite($this->socket, $msg, $str_length);
         if($send === 0) {

The fwrite function returns false when an error occurs (see: https://www.php.net/manual/en/function.fwrite.php).

If I replace 0 with false, everything is OK.

Attached is a patch to solve this issue.
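A minimal sketch of the comparison change the attached patch describes (the surrounding code is simplified and the error-handling branch is a placeholder):

```
// Sketch: fwrite() returns false on failure, not 0, so the strict comparison
// must be against false; otherwise the error branch is never taken and the
// caller keeps writing to the broken socket in a loop.
$send = fwrite($this->socket, $msg, $str_length);
if ($send === false) {
    return false; // placeholder for the existing error handling
}
```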
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: webui.patch (483 bytes) 2021-03-02 10:26
https://bugs.bareos.org/file_download.php?file_id=458&type=bug
Notes
(0004163)
frank   
2021-06-28 15:22   
Fix committed to bareos master branch with changesetid 15006.
(0004165)
frank   
2021-06-29 14:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15017.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1316 [bareos-core] storage daemon major always 2021-01-30 10:01 2021-12-21 13:57
Reporter: kardel Platform:  
Assigned To: franku OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: storage daemon loses a configured device instance causing major confusion in device handling
Description: After startup, "status storage=<name>" shows the device as not open or with its parameters - that is expected.

After the first backup with spooling, "status storage=<name>" shows the device as "not open or does not exist" - that is a hint
=> that the configured "device_resource->dev" value is nullptr.

The follow-up effects are that the reservation code is unable to match the same active device and the same volume in all cases.
When the match fails (the log shows "<name> (/dev/<tapename>)" and "<name> (/dev/<tapename>)" with no differences) it attempts to allocate new volumes, possibly requiring operator intervention, even though the expected volume is available in the drive.

The root cause is a temporary device created in spool.cc::295 => auto rdev(std::make_unique<SpoolDevice>());
Line 302 sets device resource rdev->device_resource = dcr->dev->device_resource;
When rdev leaves scope the Device::~Device() Dtor is called which happily sets this.device_resource->dev = nullptr in
dev.cc:1281 if (device_resource) { device_resource->dev = nullptr; } (=> potential memory leak)

At this point the configured device_resource is lost (even though it might still be known by active volume reservations).
After that the reservation code is completely confused due to new default allocations of devices (see additional info).

A fix is provided as patch against 20.0.0. It only clears this.device_resource->dev when
this.device_resource->dev references this instance.
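A minimal sketch of the guard described by the attached patch, based on the dev.cc:1281 line quoted above (an illustration of the idea, not necessarily the exact committed code):

```
// Device::~Device() (sketch): only detach the configured device resource when
// it still points at this instance, so a temporary SpoolDevice going out of
// scope cannot null out the pointer owned by the configured device.
if (device_resource && device_resource->dev == this) {
  device_resource->dev = nullptr;
}
```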
Tags:
Steps To Reproduce: start bareos system.
observe "status storage=..."
run a spooling job
observer "status storage=..."

If you want to see the confusion, it requires a more elaborate test setup with multiple jobs, where a spooling job finishes before
another job for the same volume and device begins to run.
Additional Information: It might be worthwhile to check the validity of creating a device in dir_cmd.cc:932. During testing,
a difference in the device pointer was seen in vol_mgr.cc:916 although the device parameters were the same.
This is most likely caused by Device::this.device_resource->dev being a nullptr and the device creation
in dir_cmd.cc:932. The normal expected lifetime of a device is from reading the configuration until the
program termination. Autochanger support might change that rule though - I didn't analyze that far.
Attached Files: dev.cc.patch (568 bytes) 2021-01-30 10:01
https://bugs.bareos.org/file_download.php?file_id=455&type=bug
Notes
(0004088)
franku   
2021-02-12 12:15   
Thank you for your deep analysis and the proposed fix which solves the issue.

See github PR https://github.com/bareos/bareos/pull/724/commits for more information on the fix and systemtests (which is draft at the time of adding this note).
(0004089)
franku   
2021-02-15 11:38   
Experimental binaries with the proposed bugfix can be found here: http://download.bareos.org/bareos/experimental/CD/PR-724/
(0004091)
franku   
2021-02-24 13:22   
Fix committed to bareos master branch with changesetid 14543.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1300 [bareos-core] webui minor always 2021-01-11 16:27 2021-12-21 13:57
Reporter: fapg Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: some job status are not categorized properly
Description: In the dashboard, when we click on waiting jobs, the URL is:

https://bareos-server/bareos-webui/job//?period=1&status=Waiting
but should be:
https://bareos-server/bareos-webui/job//?period=1&status=Queued

Best Regards,
Fernando Gomes
Tags:
Steps To Reproduce:
Additional Information: affects table column filter
System Description
Attached Files:
Notes
(0004168)
frank   
2021-06-29 18:45   
It's not a query parameter issue. The WebUI categorizes all the different job status flags into groups. I had a look into it: some job statuses are not categorized properly, so the column filter on the table does not work as expected in those cases. A fix will follow.
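For illustration only, a rough PHP sketch of such a grouping (the letters are the commonly used Director job status codes: T=OK, W=OK with warnings, E/e/f=error, A=canceled, R=running, the rest created/queued/waiting); the exact set and grouping used by bareos-webui may differ:

    <?php
    function jobStatusGroup(string $jobStatus): string
    {
        switch ($jobStatus) {
            case 'T':
            case 'W':
                return 'Success';
            case 'E':
            case 'e':
            case 'f':
                return 'Failed';
            case 'A':
                return 'Canceled';
            case 'R':
                return 'Running';
            default:
                return 'Waiting'; // created, queued, waiting on media/mount, etc.
        }
    }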
(0004175)
frank   
2021-07-06 11:22   
Fix committed to bareos master branch with changesetid 15053.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1251 [bareos-core] webui tweak always 2020-06-11 09:13 2021-12-21 13:57
Reporter: juanpebalsa Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: low OS Version: 7  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error when displaying pool detail
Description: When I try to see the detail of a pool under Storage -> Pools -> 15-Days (one of my pools), I get an error message because the page cannot be found.

http://xxxxxxxxx.com/bareos-webui/pool/details/15-Days:
|A 404 error occurred
|Page not found.
|The requested URL could not be matched by routing.
|
|No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Captura de pantalla 2020-06-11 a las 9.13.02.png (20,870 bytes) 2020-06-11 09:13
https://bugs.bareos.org/file_download.php?file_id=442&type=bug
Notes
(0004207)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15094.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1232 [bareos-core] installer / packages minor always 2020-04-21 09:26 2021-12-21 13:57
Reporter: rogern Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 8  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos logrotate errors
Description: The problem with logrotate seems to be back (previously addressed and fixed in 0000417) due to the missing

su bareos bareos

in /etc/logrotate.d/bareos-dir

Logrotate gives "error: skipping "/var/log/bareos/bareos.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation."
The same applies to bareos-audit.log.
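For reference, a hedged sketch of what the reporter is asking for in /etc/logrotate.d/bareos-dir (the su directive added to the existing stanza); the rotation options other than su are illustrative, not the packaged defaults:

    /var/log/bareos/bareos.log /var/log/bareos/bareos-audit.log {
        # Run rotation as the bareos user/group, since the log directory is
        # owned by bareos rather than root (this is what the error asks for).
        su bareos bareos
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }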
Tags:
Steps To Reproduce: Two fresh installs of 19.2.7 show the same error from logrotate and lack "su bareos bareos" in /etc/logrotate.d/bareos-dir.
Additional Information:
Attached Files:
Notes
(0004256)
bruno-at-bareos   
2021-09-08 13:46   
A PR is now proposed, with backports to the supported versions:
https://github.com/bareos/bareos/pull/918
(0004259)
bruno-at-bareos   
2021-09-09 15:07   
PR#918 has been merged; backports will be made to 20, 19 and 18 on Monday the 13th and will be available in the next minor release.
(0004260)
bruno-at-bareos   
2021-09-09 15:22   
Fix committed to bareos master branch with changesetid 15139.
(0004261)
bruno-at-bareos   
2021-09-09 16:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15141.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1205 [bareos-core] webui minor always 2020-02-28 09:42 2021-12-21 13:57
Reporter: Ryushin Platform: Linux  
Assigned To: frank OS: Devuan (Debian)  
Priority: normal OS Version: 10  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: HeadLink.php error with PHP 7.3
Description: I received this error when trying to connect to the webui:
Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403

It seems to be related to this issue:
https://github.com/zendframework/zend-view/issues/172#issue-388080603
though the line numbers of their fix are not the same.
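For context, a small PHP illustration of why this notice appears (not the HeadLink.php code itself): since PHP 7.3, compact() raises a notice for every listed variable that is undefined, which is exactly the "Undefined variable: extras" message above.

    <?php
    $rel  = 'stylesheet';
    $href = 'site.css';

    // $extras is never assigned on this code path:
    $attributes = compact('rel', 'href', 'extras'); // PHP 7.3+: Notice: compact(): Undefined variable: extras

    // Defensive variant: define the variable before calling compact().
    $extras = null;
    $attributes = compact('rel', 'href', 'extras'); // no notice
    var_dump($attributes);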
Tags:
Steps To Reproduce:
Additional Information: I solved the issue by replacing the HeadLink.php file with an updated version from here:
https://raw.githubusercontent.com/zendframework/zend-view/f7242f7d5ccec2b8c319634b4098595382ef651c/src/Helper/HeadLink.php
Attached Files:
Notes
(0004144)
frank   
2021-06-08 12:22   
Fix committed to bareos bareos-19.2 branch with changesetid 14922.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1191 [bareos-core] webui crash always 2020-02-12 15:40 2021-12-21 13:57
Reporter: khvalera Platform: Linux  
Assigned To: frank OS: Arch Linux  
Priority: high OS Version: x64  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: The web interface runs under any login and password
Description: Logging in to the web interface succeeds with any arbitrary username and password.
How can this be fixed?
Tags: webui
Steps To Reproduce: /etc/bareos/bareos-dir.d/console/web-admin.conf

Console {
  Name = web-admin
  Password = "123"
  Profile = "webui-admin"
}

/etc/bareos/bareos-dir.d/profile/webui-admin.conf

Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

/etc/bareos-webui/directors.ini

[bareos_dir]
enabled = "yes"
diraddress = "localhost"
dirport = 9101
;UsePamAuthentication = yes
pam_console_name = "web-admin"
pam_console_password = "123"
Additional Information:
Attached Files:
Notes
(0003936)
khvalera   
2020-04-10 00:10   
UsePamAuthentication = yes
#pam_console_name = "web-admin"
#pam_console_password = "123"
(0004289)
frank   
2021-09-29 18:22   
Fix committed to bareos master branch with changesetid 15252.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1020 [bareos-core] webui major always 2018-10-10 09:37 2021-12-21 13:56
Reporter: linkstat Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 18.2.4-rc1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Can not restore a client with spaces in its name
Description: All my clients have names with spaces in them, like "client-fd using Catalog-XXX". Correctly handled (i.e. enclosing the name in quotation marks, or escaping the space with \), this has never been a problem... until now. Webui can even perform backup jobs (previously defined in the configuration files) and has had no problems with the spaces. But when it came time to restore something... it just does not seem able to handle strings that contain spaces. Apparently it cuts the string at the first space found (as you can see in the attached image).
Tags:
Steps To Reproduce: Define a client whose name contains spaces, such as "hostname-fd Testing Client".
Try to restore a backup from Webui to that client (it does not matter whether the backup was originally made on that client or whether the newly defined client is a new destination for restoring a backup previously made on another client).
Webui will fail, saying "invalid client argument: hostname-fd". As you can see, Webui "cuts" the client's name at the first space, and since there is no client named hostname-fd, the job fails; or worse, if there happened to be a client whose name matched the part before the first space, Webui would restore to the wrong client.
Additional Information: bconsole has no problem with clients that contain spaces in their names (provided, of course, that the spaces are correctly handled by the human operator who writes the commands, either by enclosing the name in quotation marks or by escaping the spaces with a backslash).
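For comparison, illustrative bconsole input (not output from the reporter's system) showing how quoting or escaping makes such a name unambiguous:

    *status client="hostname-fd Testing Client"
    *restore client="hostname-fd Testing Client" select current
    *status client=hostname-fd\ Testing\ Client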
System Description
Attached Files: Bareos - Can not restore when a client name has spaces in their name.jpg (139,884 bytes) 2018-10-10 09:37
https://bugs.bareos.org/file_download.php?file_id=311&type=bug
Notes
(0003546)
linkstat   
2019-07-31 18:03   
Hello!

Any news regarding this problem? (or any ideas about how to patch it temporarily so that you can use webui for the case described)?
Sometimes it is tedious to use bconsole all the time instead of webui ...

Regards!
(0004185)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15068.
(0004188)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15079.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
971 [bareos-core] webui major always 2018-06-25 11:54 2021-12-21 13:56
Reporter: Masanetz Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 17.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error building tree for filenames with backslashes
Description: WebUI Restore fails to build the tree if a directory contains filenames with backslashes.

Some time ago the Adobe Reader plugin created a file named "C:\nppdf32Log\debuglog.txt" in the working directory.
Building the restore tree in WebUI fails with the popup "Oops, something went wrong, probably too many files.".

Filename handling should be adapted for backslashes (e.g. like https://github.com/bareos/bareos-webui/commit/ee232a6f04eaf2a7c1084fee981f011ede000e8a).
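For illustration, a minimal PHP sketch of the general idea (escaping backslashes in file names before they are embedded in the JSON/JavaScript used to build the tree); the helper name is hypothetical and this is neither the referenced commit nor the attached diff:

    <?php
    // Double every backslash so a name such as "C:\nppdf32Log\debuglog.txt"
    // survives the string handling used to build the restore tree.
    function escapeFilenameForTree(string $name): string
    {
        return str_replace('\\', '\\\\', $name);
    }

    echo escapeFilenameForTree('C:\nppdf32Log\debuglog.txt');
    // prints: C:\\nppdf32Log\\debuglog.txt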
Tags:
Steps To Reproduce: 1. Put an empty file with a filename with backslashes (e.g. C:\nppdf32Log\debuglog.txt) in your home directory
2. Backup
3. Try to restore any file from your home directory from this backup via WebUI
Additional Information: Attached diff of my "workaround"
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: RestoreController.php.diff (1,669 bytes) 2018-06-25 11:54
https://bugs.bareos.org/file_download.php?file_id=299&type=bug
Notes
(0004184)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15067.
(0004189)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15080.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
871 [bareos-core] webui block always 2017-11-04 16:10 2021-12-21 13:56
Reporter: tuxmaster Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.4.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: UI will not load complete
Description: After login, the website does not load completely.
Only the spinner is shown (see picture).

The PHP error log is flooded with:
PHP Notice: Undefined index: meta in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 120

The Bareos director is running version 16.2.7.
Tags:
Steps To Reproduce:
Additional Information: PHP 7.1 via fpm
System Description
Attached Files: Bildschirmfoto von »2017-11-04 16-06-19«.png (50,705 bytes) 2017-11-04 16:10
https://bugs.bareos.org/file_download.php?file_id=270&type=bug
Notes
(0002812)
frank   
2017-11-09 15:35   
DIRD and WebUI need to have the same version currently.

WebUI 17.2.4 is not compatible with a 16.2.7 director yet, which may change in the future.
(0002813)
tuxmaster   
2017-11-09 17:36   
Thanks for the information.
But this should be noted in the release notes, or better, result in an error message about an unsupported version.
(0004169)
frank   
2021-06-30 11:49   
There is a note in the installation chapter, see https://docs.bareos.org/master/IntroductionAndTutorial/InstallingBareosWebui.html#system-requirements .
Nevertheless, I'm going to have a look at whether we can somehow improve the error handling regarding version compatibility.
(0004176)
frank   
2021-07-06 17:22   
Fix committed to bareos master branch with changesetid 15057.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
579 [bareos-core] webui block always 2015-12-06 12:41 2021-12-21 13:56
Reporter: tuxmaster Platform: x86_64  
Assigned To: frank OS: Fedora  
Priority: normal OS Version: 22  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to connect to the director from webui via ipv6
Description: The web ui and the director are running on the same system.
After entering the password, the error message "Error: , director seems to be down or blocking our request." is presented.
Tags:
Steps To Reproduce: Open the website enter the credentials and try to log in.
Additional Information: getsebool httpd_can_network_connect
httpd_can_network_connect --> on

Error from the apache log file:
[Sun Dec 06 12:37:32.658104 2015] [:error] [pid 2642] [client ABC] PHP Warning: stream_socket_client(): unable to connect to tcp://[XXX]:9101 (Unknown error) in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 521, referer: http://CDE/bareos-webui/

XXX=ip6addr of the director.

Connecting (from the web server) via telnet to the IPv6 address on port 9101 works.
bconsole also works.
Attached Files:
Notes
(0002323)
frank   
2016-07-15 16:07   
Note: When specifying a numerical IPv6 address (e.g. fe80::1), you must enclose the IP in square brackets—for example, tcp://[fe80::1]:80.

http://php.net/manual/en/function.stream-socket-client.php

You could try setting your IPv6 address in your directors.ini into square brackets until we provide a fix, that might already work.
(0002324)
tuxmaster   
2016-07-15 17:04   
I have tried setting it to:
diraddress = "[XXX]"
where XXX is the IPv6 address.

But the error is the same.
(0002439)
tuxmaster   
2016-11-06 12:09   
Same on Fedora 24 using php 7.0
(0004159)
pete   
2021-06-23 12:41   
(Last edited: 2021-06-23 12:55)
This is still present in version 20 of the Bareos WebUI, on all RHEL variants I tested (CentOS 8, AlmaLinux 8).

It results from a totally unnecessary "bindto" configuration in line 473 of /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php:

      $opts = array(
          'socket' => array(
              'bindto' => '0:0',
          ),
      );

This unnecessarily limits PHP socket binding to IPv4 interfaces as documented in https://www.php.net/manual/en/context.socket.php. The simplest solution is to just comment out the "bindto" line:

      $opts = array(
          'socket' => array(
              // 'bindto' => '0:0',
          ),
      );

Restart php-fpm and now IPv6 works perfectly

(0004167)
frank   
2021-06-29 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15043.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
565 [bareos-core] file daemon feature N/A 2015-11-16 08:33 2021-12-07 14:24
Reporter: joergs Platform: Linux  
Assigned To: OS: SLES  
Priority: none OS Version: 12  
Status: acknowledged Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: use btrfs features to efficiently detect changed files
Description: btrfs (a default filesystem on SLES 12) provides features to generate a list of changed files between snapshots. This can be useful for Bareos to efficiently generate the file lists for Incremental and Differential backups.
Tags:
Steps To Reproduce:
Additional Information: btrfs send operation compares two subvolumes and writes a description of how to convert one subvolume (the parent subvolume) into the other (the sent subvolume).
btrfs receive does the opposite.

SLES 12 comes with the tool snapper, which provides an abstraction for this functionality (and should work in the same way for LVM and ext4?).
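For illustration, a hedged shell sketch of one way btrfs already exposes changed-file information (btrfs subvolume find-new lists files modified since a given generation); the path and the generation handling are illustrative and this is not an existing Bareos feature:

    # Record the current generation of the subvolume at the time of the full backup.
    # With an impossibly high generation, find-new only prints the transid marker.
    gen=$(btrfs subvolume find-new /data 9999999 | awk '{print $NF}')

    # Later, list files changed since that generation (candidate incremental set).
    btrfs subvolume find-new /data "$gen"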
System Description
Attached Files:
Notes
(0004378)
colttt   
2021-12-03 11:08   
Six years later, any news or plans for this?
The same can be done with zfs and bcachefs (in the near future),
so it would be great if Bareos could support zfs/btrfs/bcachefs send/receive/snapshot.
(0004382)
bruno-at-bareos   
2021-12-07 14:24   
Note:
+ To work, the snapshots have to be configured and taken on time for the desired FS (this uses a subvolume plus disk space).
+ To be benchmarked on a heavily used FS compared to the traditional Accurate mechanism.
+ Keep in mind that Accurate is needed for Always Incremental (AI).


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1388 [bareos-core] regression testing block always 2021-09-20 12:02 2021-11-29 09:12
Reporter: mschiff Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: /.../sd_reservation.cc: error: sleep_for is not a member of std::this_thread (maybe gcc-11 related)
Description: It seems like the tests do not build when using gcc-11:


/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc: In function ‘void WaitThenUnreserve(std::unique_ptr<TestJob>&)’:
/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc:147:21: error: ‘sleep_for’ is not a member of ‘std::this_thread’
  147 | std::this_thread::sleep_for(std::chrono::milliseconds(10));
      | ^~~~~~~~~
ninja: build stopped: subcommand failed.
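For context, a hedged C++ sketch of a common cause of this particular error under gcc 11: libstdc++ 11 cleaned up its transitive includes, so std::this_thread::sleep_for needs an explicit #include <thread>. Whether that applies to this build environment is an assumption; the notes below show the report was ultimately closed because newer releases build fine.

    // Minimal stand-alone example; without the explicit <thread> include,
    // gcc 11 can report "'sleep_for' is not a member of 'std::this_thread'".
    #include <chrono>
    #include <thread>   // declares std::this_thread::sleep_for

    int main()
    {
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
      return 0;
    }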
Tags:
Steps To Reproduce:
Additional Information: Please see: https://bugs.gentoo.org/786789
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004288)
bruno-at-bareos   
2021-09-29 13:58   
Would you mind testing with the new 19.2.11 release, or even with the not yet released branch-12.2?
Building here with gcc-11 under openSUSE Tumbleweed works as expected.
(0004328)
bruno-at-bareos   
2021-11-10 10:05   
Hello, did you make any progress on this?

As 19.2 will soon be obsolete, did you try to compile version 20?
(0004368)
mschiff   
2021-11-27 12:23   
Hi!

Sorry for the late answer. All current versions build fine with gcc-11 here:
 - 18.2.12
 - 19.2.11
 - 20.0.3

Thanks!
(0004369)
bruno-at-bareos   
2021-11-29 09:11   
OK, thanks, I will close this then.
(0004370)
bruno-at-bareos   
2021-11-29 09:12   
Gentoo got gcc-11 fixed.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1151 [bareos-core] webui feature always 2019-12-12 09:25 2021-11-26 13:22
Reporter: DanielB Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos webui does not show the inchanger flag for volumes
Description: The Bareos webui does not show the InChanger flag for tape volumes, although the flag is visible in bconsole.
The flag should be shown as an additional column to help with volume management for tape changers.
Tags: volume, webui
Steps To Reproduce: Log into the webui.
Select Storage -> Volumes
Additional Information:
Attached Files:
Notes
(0004367)
frank   
2021-11-26 13:22   
Fix committed to bareos master branch with changesetid 15491.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1396 [bareos-core] bconsole minor always 2021-10-31 21:36 2021-11-24 14:43
Reporter: nelson.gonzalez6 Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: HOW TO REMOVE FROM DB ALL ERROR, CANCELED, FAILED JOBS
Description: I have noticed, when listing pools and media IDs in bconsole, that there are volumes with jobs from 3 to 4 years ago, and that clients which are no longer in use or have been removed are still listed in the catalog. How can I remove these volumes whose VolStatus is Error, Expired or Canceled?

Any suggestions are welcome.

Thanks.


+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
| MediaId | VolumeName           | VolStatus | Enabled | VolBytes   | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType   | LastWritten         | Storage |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
| 698     | Differentialvpn-0698 | Error     | 1       | 35,437,993 | 0        | 864,000      | 1       | 0    | 0         | GlusterFile | 2021-07-23 04:42:41 | Gluster |
| 900     | Differentialvpn-0900 | Error     | 1       | 3,246,132  | 0        | 864,000      | 1       | 0    | 0         | GlusterFile | 2021-08-29 14:56:06 | Gluster |
| 1,000   | Differentialvpn-1000 | Append    | 1       | 226,375    | 0        | 864,000      | 1       | 0    | 0         | GlusterFile | 2021-10-31 06:11:56 | Gluster |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004313)
bruno-at-bareos   
2021-11-02 13:21   
Did you already try dbcheck? This tool is normally the right one to clean up orphaned records stored in the DB.
(0004320)
nelson.gonzalez6   
2021-11-08 13:01   
Hi, yes, I have run dbcheck -fv and it took a long time due to lots of orphaned records, but when listing volumes in bconsole it still shows the Error volumes.
(0004325)
bruno-at-bareos   
2021-11-10 10:01   
So in that case, the best thing to do is to remove them manually with the delete volume command in bconsole.
You will normally also have to remove the volume file from the filesystem manually.

This is because volumes in the Error state are locked and therefore never pruned, since we can't touch them anymore.
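For illustration, a hedged sketch of that manual cleanup for one of the volumes listed above; the storage path in the second step is an assumption, use the actual Archive Device / Gluster location of the device:

    # In bconsole: delete the catalog record for a volume in Error state
    *delete volume=Differentialvpn-0698 yes

    # On the storage host: remove the corresponding volume file if it still exists
    rm /var/lib/bareos/storage/Differentialvpn-0698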
(0004341)
bruno-at-bareos   
2021-11-16 15:42   
Beside removing manually those records, in bconsole or by scripting the delete volume line, you have to remember to delete the corresponding file if it still exist in your storage place.
(0004357)
bruno-at-bareos   
2021-11-24 14:42   
Final note before closing. The manual process is required.
(0004358)
bruno-at-bareos   
2021-11-24 14:43   
A manual process, as described above, is needed to remove volumes in Error state.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1390 [bareos-core] installer / packages minor always 2021-10-07 16:39 2021-11-24 10:49
Reporter: tastydr Platform: Linux  
Assigned To: arogge OS: Debian