View Issue Details
ID: 1533
Category: [bareos-core] vmware plugin
Severity: major
Reproducibility: always
Date Submitted: 2023-04-29 13:41
Last Update: 2023-06-06 17:58
Reporter: CirocN
Assigned To: stephand
Platform: Linux
OS: RHEL (and clones)
OS Version: 8
Priority: urgent
Status: acknowledged
Product Version: 22.0.3
Resolution: open
Projection: none
ETA: none
Summary: Restoring VMware VMs keeps failing; can't restore the data.
Description: First I need to mention that I am new to Bareos and have been working with it for the past couple of weeks to replace our old backup solution.
I am trying to use the VMware plugin to take backups of our vmdks for disaster recovery, or in case we need to extract a specific file using guestfish.
I have followed the official documentation on setting up the VMware plugin at https://docs.bareos.org/TasksAndConcepts/Plugins.html#general
The backup of the VM on our vSphere server is successful.
But when I try to restore the backup, it keeps failing with the following information:

bareos87.simlab.xyz-fd JobId 3: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'


I have tried the same steps on Bareos 21 and 20, and also on Red Hat 9.1, and I kept getting the exact same error.

Steps To Reproduce: After setting up the VMware plugin according to the official documentation, I ran the backup using:
run job=vm-websrv1 level=Full
The Web-GUI shows the job immediately, and after about 10 minutes the job's status shows success.
Right after the backup is done, when I try to restore it using the Web-GUI or the console, I keep getting the same error:

19 2023-04-29 07:34:05 bareos-dir JobId 4: Error: Bareos bareos-dir 22.0.4~pre63.807bc5689 (17Apr23):
Build OS: Red Hat Enterprise Linux release 8.7 (Ootpa)
JobId: 4
Job: RestoreFiles.2023-04-29_07.33.59_43
Restore Client: bareos-fd
Start time: 29-Apr-2023 07:34:01
End time: 29-Apr-2023 07:34:05
Elapsed time: 4 secs
Files Expected: 1
Files Restored: 0
Bytes Restored: 0
Rate: 0.0 KB/s
FD Errors: 1
FD termination status: Fatal Error
SD termination status: Fatal Error
Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
Job triggered by: User
Termination: *** Restore Error ***

18 2023-04-29 07:34:05 bareos-dir JobId 4: Warning: File count mismatch: expected=1 , restored=0
17 2023-04-29 07:34:05 bareos-sd JobId 4: Releasing device "FileStorage" (/var/lib/bareos/storage).
16 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:454 Socket has errors=1 on call to client:192.168.111.136:9103
15 2023-04-29 07:34:05 bareos-sd JobId 4: Fatal error: stored/read.cc:146 Error sending to File daemon. ERR=Connection reset by peer
14 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:414 Wrote 65536 bytes to client:192.168.111.136:9103, but only 16384 accepted.
13 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'

12 2023-04-29 07:34:04 bareos-sd JobId 4: Forward spacing Volume "Full-0001" to file:block 0:627.
11 2023-04-29 07:34:04 bareos-sd JobId 4: Ready to read from volume "Full-0001" on device "FileStorage" (/var/lib/bareos/storage).
10 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
9 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
8 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
7 2023-04-29 07:34:02 bareos-dir JobId 4: Handshake: Immediate TLS
6 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Client: bareos-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
5 2023-04-29 07:34:02 bareos-dir JobId 4: Probing client protocol... (result will be saved until config reload)
4 2023-04-29 07:34:02 bareos-dir JobId 4: Using Device "FileStorage" to read.
3 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
2 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
1 2023-04-29 07:34:01 bareos-dir JobId 4: Start Restore Job RestoreFiles.2023-04-29_07.33.59_43
Attached Files: Restore_Failure.png (84,322 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=554&type=bug

Backup_Success.png (78,727 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=555&type=bug

BareosFdPluginVMware.png (30,436 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=556&type=bug

docs.bareos.org-instruction.png (36,604 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=557&type=bug
Notes
(0004995)
bruno-at-bareos   
2023-05-03 15:37   
Since all our continuous tests are working perfectly, you are certainly missing something in your configuration; without the configuration, nobody will be able to find the problem.

Also, if you want to show a job result, screenshots are certainly the worst way.
Please attach the text log output of bconsole <<< "list joblog jobid=2" here.
(0005003)
CirocN   
2023-05-03 23:32   
I have found out that this is the result of a mismatch between the backup path and the restore path.
The backup is created with /VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk, while the restore looks up /VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk.
I have noticed that you try to strip it off in BareosFdPluginVMware.py at line 366, but it is not working.

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s/%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )

My configuration for the folder is precisely what is suggested in the official document: folder=/


The workaround I have found for now is to edit the BareosFdPluginVMware.py script as follows, but it needs a proper fix:

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )
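
For reference, a variant that would handle both folder=/ and real subfolders is to normalize the assembled path rather than change the format string; this is only an illustrative sketch (assuming posixpath is imported at module level), not the official fix announced below:

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            # posixpath.normpath() collapses the "//" produced when folder
            # is "/" (strip("/") yields ""), but leaves real subfolders intact.
            self.vadp.backup_path = posixpath.normpath(
                "/VMS/%s/%s/%s"
                % (
                    self.options["dc"],
                    self.options["folder"].strip("/"),
                    self.options["vmname"],
                )
            )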
(0005065)
stephand   
2023-06-06 17:58   
Thanks for reporting this; I confirm that it doesn't work properly when using folder=/ for backups.
The root cause is the double slash in the backup path, as in your example:
/VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk

I will provide a proper fix.

View Issue Details
ID: 1531
Category: [bareos-core] storage daemon
Severity: crash
Reproducibility: always
Date Submitted: 2023-04-25 23:49
Last Update: 2023-06-06 01:13
Reporter: mpflash
Platform: Windows
OS: Windows
OS Version: 10
Priority: high
Status: acknowledged
Product Version: 22.0.3
Resolution: open
Projection: none
ETA: none
Summary: bextractor.exe writes only one zero-byte file if compression is enabled in the FileSet
Description: bextractor.exe on Windows 10 x64 writes only one zero-byte file if any compression (GZIP/LZO/LZ4) is enabled in the FileSet.

Recovery from the WebUI always works, 100%!
Additional Information: FileSet {
  Enable VSS = no
  Name = "All-DOC-XLS"
  Include {
     Options {
        wildFile = "*.jpg"
        wildFile = "*.jpeg"
        wildFile = "*.doc*"
        wildFile = "*.xls*"
        wildFile = "*.csv*"
        wildFile = "*.pdf"
        wildFile = "*.txt"
        wildFile = "*.rtf"
        signature = MD5
        compression = GZIP
        IgnoreCase = yes
        }
     Options {
        RegexFile = ".*"
        exclude = yes
     }
     File = "C:/Users/USER/desktop"
     File = "C:/Users/USER/Documents"
     File = "C:/Users/USER/Pictures"
     File = "D:/desktop"
     File = "D:/Documents"
     File = "D:/Pictures"

  }
}
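
For reference, the failing extraction would be invoked with bextract roughly as follows (a sketch: the config path, volume name, device name, and target directory are placeholders for your setup):

bextract -c C:/ProgramData/Bareos/bareos-sd.conf -V Incremental-0001 FileStorage C:/temp/extract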
Attached Files: dir-sd-configs.zip (18,127 bytes) 2023-04-26 14:10
https://bugs.bareos.org/file_download.php?file_id=547&type=bug
Screenshot_2.png (42,141 bytes) 2023-04-26 14:10
https://bugs.bareos.org/file_download.php?file_id=548&type=bug

ROZN6-fd-Incremental-PoolIncr-2023-04-26-15-05 (414,516 bytes) 2023-04-26 14:11
https://bugs.bareos.org/file_download.php?file_id=549&type=bug
Screenshot_1.png (51,378 bytes) 2023-04-26 14:18
https://bugs.bareos.org/file_download.php?file_id=550&type=bug

Screenshot_3.png (192,167 bytes) 2023-04-28 01:17
https://bugs.bareos.org/file_download.php?file_id=551&type=bug

ROZN6-fd-Incremental-PoolIncr-2023-04-28-02-09 (360,660 bytes) 2023-04-28 01:19
https://bugs.bareos.org/file_download.php?file_id=552&type=bug
Screenshot_4.png (152,577 bytes) 2023-04-28 01:45
https://bugs.bareos.org/file_download.php?file_id=553&type=bug
Notes
(0004983)
bruno-at-bareos   
2023-04-26 09:34   
There's no bextractor.exe executable delivered by Bareos, so it is hard to understand your trouble.
Also, for this to be acceptable as a bug, we need to reproduce the problem, which is only possible if you make the effort of providing all commands and the related configuration.

No one in the community is clairvoyant ;-)
(0004984)
mpflash   
2023-04-26 14:10   
(Last edited: 2023-04-26 14:10)

(0004985)
mpflash   
2023-04-26 14:11   
(0004986)
mpflash   
2023-04-26 14:18   
(0004987)
mpflash   
2023-04-26 14:21   
Will you be able to check whether my archive is recoverable in the Linux builds of Bareos? ROZN6-fd-Incremental-PoolIncr-2023-04-26-15-05
(0004988)
bruno-at-bareos   
2023-04-27 16:12   
Just to be sure we have all the elements: if you retest the same FileSet without compression enabled, does it work and extract folders and files?
That confirmation will help to create a regression test.

It looks like you found a bug, as I can confirm the same result as you, under Windows or Linux, when using bextract. Under Linux, using bscan to import the volume allows restoring its content.
(0004989)
bruno-at-bareos   
2023-04-27 16:15   
While you are retesting, would a compression like LZ4 produce the same error?
Thanks.
(0004990)
mpflash   
2023-04-28 01:17   
(0004991)
mpflash   
2023-04-28 01:19   
Attaching the LZ4 archive file.
(0004992)
mpflash   
2023-04-28 01:45   
If ANY compression is disabled, bextract.exe works correctly.
(0004996)
bruno-at-bareos   
2023-05-03 15:38   
Reproducible
 
(0005064)
irekpias   
2023-06-06 01:13   
Adding "me to": Bareos version: 22.0.4~pre114.45a6341f6 - bextract always write 0 size files, when compresion is enabled in job - i use lz4hc. Tested the same version of bareos with AlmaLinux 8 and AlmaLinux 9.

View Issue Details
ID: 1538
Category: [bareos-core] file daemon
Severity: major
Reproducibility: always
Date Submitted: 2023-05-31 11:52
Last Update: 2023-05-31 11:52
Reporter: hostedpower
Platform: x86
OS: Windows
OS Version: 2016
Priority: high
Status: new
Product Version: 22.0.3
Resolution: open
Projection: none
ETA: none
Summary: bareos-fd detected as a virus
Description: Hi,


We see that the server AV is deleting bareos-fd.exe, marking it as a virus.

We downloaded the installer and extracted both versions 22.0.2 and 22.0.3; these are the results:

https://www.virustotal.com/gui/file/a3c54d9a63d337b473327127cd0ec4162cda1819e403fb1ff0bf440d35d09b6c
https://www.virustotal.com/gui/file/37605d0b5285856c5b150e5924485e8da831d6be792abe97bfe0a832b3f21c6f

Is there anything malicious going on? Is this a known issue?
There are no notes attached to this issue.

View Issue Details
ID: 1534
Category: [bareos-core] storage daemon
Severity: block
Reproducibility: always
Date Submitted: 2023-04-30 20:58
Last Update: 2023-05-26 19:46
Reporter: pavelr
Platform: Linux
OS: RHEL (and clones)
OS Version: 9
Priority: immediate
Status: new
Product Version: 22.0.3
Resolution: open
Projection: none
ETA: none
Summary: s3 backend doesn't work
Description: Integration with AWS S3 does not work. I used https://docs.bareos.org/TasksAndConcepts/StorageBackends.html and no matter what I configure, I get the following error:
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
I do have correct permissions and tokens (confirmed with the AWS CLI; I was able to put a file in the bucket).
Steps To Reproduce: Using this configuration:
host = s3.amazonaws.com # This parameter is only used as baseurl and will be prepended with bucket and location set in device resource to form correct url
use_https = true
access_key = key
secret_key = secret
pricing_dir = "" # If not empty, an droplet.csv file will be created which will record all S3 operations.
backend = s3
aws_auth_sign_version = 4 # Currently, AWS S3 uses version 4. The Ceph S3 gateway uses version 2.
aws_region = us-west-2

throws:
2023-04-30 17:31:32 bareos-sd JobId 512: Warning: stored/label.cc:372 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "s3-longterm-0403" failed: ERR=stored/dev.cc:602 Could not open: AWS S3 Storage/s3-longterm-0403

2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'
Additional Information: Running on CentOS 9.
Versions:

bareos-webui-22.0.3~pre37.3c0624880-27.el9.x86_64
bareos-common-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-database-common-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-database-postgresql-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-database-tools-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-director-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-python-plugins-common-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-python3-plugin-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-bconsole-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-client-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-tools-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-storage-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-storage-droplet-22.0.4~pre78.1134d9896-47.el9.x86_64
bareos-filedaemon-percona-xtrabackup-python-plugin-22.0.4~pre78.1134d9896-47.el9.x86_64
Notes
(0004994)
pavelr   
2023-04-30 21:05   
Sorry, I mixed up the errors:
#option 1:
host = s3.amazonaws.com:443
#result
2023-04-30 17:31:32 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/profile.c:251: conf_cb_func: invalid value 'use_https'

#option 2:
host = s3.amazonaws.com
#error:
2023-04-30 19:01:16 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/addrlist.c:600: dpl_addrlist_add: cannot lookup host s3.amazonaws.com
(0004998)
bruno-at-bareos   
2023-05-03 15:44   
Did you try to remove the comments?

host = s3.amazonaws.com # This parameter is only used as baseurl and will be prepended with bucket and location set in device resource to form correct url
use_https = true
access_key = key
secret_key = secret
pricing_dir = "" # If not empty, an droplet.csv file will be created which will record all S3 operations.
backend = s3
aws_auth_sign_version = 4 # Currently, AWS S3 uses version 4. The Ceph S3 gateway uses version 2.
aws_region = us-west-2

to

host = s3.amazonaws.com
use_https = true
access_key = key
secret_key = secret
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4
aws_region = us-west-2

It looks like the parser was confused by the previous lines?
(0005004)
pavelr   
2023-05-04 09:48   
I tried every possible option during the 2 days I've spent on it (including what you've suggested) - it just fails with the errors above.
A question not directly related to this issue: I did manage to work around it using another option, but I'm not sure it's correct.
What I did is mount a Backblaze B2 bucket locally using s3fs and configure the Bareos storage to work with a local folder. It does work; however, I saw strange behavior while briefly testing it: I did a backup which saved 4 files, but when I try to restore it, it warns that it expected 4 files but restored only 3. Is this a known issue? Is this considered a proper way of working with buckets? This happened with a specific backup where one part (full) was on an actual local folder and the other part (incremental) was on the locally mounted B2 bucket. When I did a full backup and restore to and from the bucket, it worked without issues. Again, I didn't test it thoroughly, so I wanted to know whether it's OK or not.
(0005041)
arogge   
2023-05-10 14:52   
It's a long shot, but are you using a file with Unix line endings?
Because if you have DOS/Windows line endings, then the value is "true<CR>", which is neither "true" nor "false" and thus an invalid value.
Maybe running "dos2unix" on your profile configuration file fixes the problem.
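
For example (the profile path is illustrative; use whichever file your Device resource's profile option points at):

dos2unix /etc/bareos/bareos-sd.d/device/droplet/aws.profile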
(0005061)
pavelr   
2023-05-16 08:45   
Well, it looks like that was indeed the issue :) However, now I get a different error:
2023-05-16 06:43:07 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:376: init_ssl_conn: SSL certificate verification status: 0: ok
2023-05-16 06:43:07 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:373: init_ssl_conn: SSL connect error: -1 (error:00000001:lib(0)::reason(1))
2023-05-16 06:43:07 bareos-sd: ERROR in backends/droplet_device.cc:109 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:372: init_ssl_conn: SSL_connect failed with 40F6FF4F617F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:ssl/record/rec_layer_s3.c:320:
(0005062)
glani   
2023-05-26 19:06   
I can confirm the following:
The AWS backend does not work on 22.0.3 / CentOS 9 either.
The docs are misleading and slightly incorrect; the region is not taken into consideration.
I have tried many ways; the only one for which the storage health check worked was: <bucket-name>.s3.amazonaws.com:443

*status
Status available for:
1: Director
2: Storage
3: Client
4: Scheduler
5: All
Select daemon type for status (1-5): 2
The defined Storage resources are:
1: File
2: S3_ObjectStorage_General_smblob_10.0.0.100
3: S3_ObjectStorage_smblob_10.0.0.3
4: S3_ObjectStorage_smblob_10.0.0.4
Select Storage resource (1-4): 2
Connecting to Storage daemon S3_ObjectStorage_General_smblob_10.0.0.100 at 10.0.0.100:9103
 Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3

bareos-sd Version: 22.0.3 (24 March 2023) CentOS Stream release 9
Daemon started 26-May-23 16:21. Jobs: run=1, running=0, self-compiled binary
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 bwlimit=0kB/s

There are WARNINGS for this storagedaemon's configuration!
See output of 'bareos-sd -t' for details.

Running Jobs:
No Jobs running.
====

Jobs waiting to reserve a drive:
====

Device status:

Device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1) is not open.
Backend connection is working.
No pending IO flush requests.
==

Device "S3_ObjectDevice_General_smblob_10.0.0.100_2" (S3_ObjectDevice_General_smblob_2) is not open.
Backend connection is working.
No pending IO flush requests.
==
====

Used Volume status:
====

====


But when I run a backup, I get:
26-May 16:36 bareos-dir JobId 22: No prior Full backup Job record found.
26-May 16:36 bareos-dir JobId 22: No prior or suitable Full backup found in catalog. Doing FULL backup.
26-May 16:36 bareos-dir JobId 22: Start Backup JobId 22, Job=bareos-dir-conf.2023-05-26_16.36.22_07
26-May 16:36 bareos-dir JobId 22: Connected Storage daemon at 10.0.0.100:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Connected Client: client-fd-smblob_10-0-0-100 at 10.0.0.100:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Handshake: Immediate TLS
26-May 16:36 bareos-dir JobId 22: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 bareos-dir JobId 22: Using Device "S3_ObjectDevice_General_smblob_10.0.0.100_1" to write.
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: Connected Storage daemon at 10.0.0.100:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: Extended attribute support is enabled
26-May 16:36 client-fd-smblob_10-0-0-100 JobId 22: ACL support is enabled
26-May 16:36 bareos-sd JobId 22: Warning: Volume "Full-0010" not on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Marking Volume "Full-0010" in Error in Catalog.
26-May 16:36 bareos-sd JobId 22: Warning: Volume "Full-0010" not on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Marking Volume "Full-0010" in Error in Catalog.
26-May 16:36 bareos-dir JobId 22: Created new Volume "Full-0011" in catalog.
26-May 16:36 bareos-sd JobId 22: Labeled new Volume "Full-0011" on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Wrote label to prelabeled Volume "Full-0011" on device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1)
26-May 16:36 bareos-sd JobId 22: Releasing device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Fatal error: Failed to flush device "S3_ObjectDevice_General_smblob_10.0.0.100_1" (S3_ObjectDevice_General_smblob_1).
26-May 16:36 bareos-sd JobId 22: Elapsed time=00:00:30, Transfer rate=1.433 K Bytes/second
26-May 16:36 bareos-dir JobId 22: Insert of attributes batch table with 102 entries start
26-May 16:36 bareos-dir JobId 22: Insert of attributes batch table done
26-May 16:37 bareos-dir JobId 22: Error: Bareos bareos-dir 22.0.3 (24Mar23):

I googled a little bit; it looks like this issue has existed since the fork from Bacula.

I have spent two days on it as well. I cannot mount s3fs because the plan is to back up a PostgreSQL cluster with 3 nodes, and only the file daemon is sitting on the replica.
(0005063)
glani   
2023-05-26 19:46   
Looks like I found the issue.

The working config for the device profile is:

host = s3.amazonaws.com:443
use_https = true
backend = s3
aws_region = {{ aws_region }}
aws_auth_sign_version = 4
access_key = "{{ aws_access_id }}"
secret_key = "{{ aws_secret_key }}"
pricing_dir = ""

The AWS bucket should have an ACL that allows requests from the API. The default bucket creation denies it (Bareos hides this info because there is no implementation for it in https://github.com/scality/Droplet). It is better to test the bucket and AWS creds on something independent; I tested via the Amazon Ansible module, with the same bucket and creds.

The bucket is specified in the device itself. NO COMMENTS! After removing all comments from the profile, it started working.
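
For completeness, a Device resource referencing such a profile typically looks like the following sketch (names, bucket, and paths are illustrative; see the Bareos storage backends documentation for the authoritative directives):

Device {
  Name = AWS_S3_1-00
  Media Type = S3_Object1
  Archive Device = AWS S3 Storage
  Device Type = droplet
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.profile,bucket=bareos-bucket,chunksize=100M"
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}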

View Issue Details
ID: 1488
Category: [bareos-core] file daemon
Severity: block
Reproducibility: always
Date Submitted: 2022-10-22 19:29
Last Update: 2023-05-15 21:40
Reporter: tuxmaster
Platform: x86_64
OS: RHEL
OS Version: 9
Priority: urgent
Status: acknowledged
Product Version: 21.1.4
Resolution: open
Projection: none
ETA: none
Summary: PostgreSQL Plugin fails on PostgreSQL 15
Description: Because the backup functions were renamed, the plugin can't back up anything.
See:
https://www.postgresql.org/docs/15/functions-admin.html
https://www.postgresql.org/docs/current/release-15.html

Functions pg_start_backup()/pg_stop_backup() have been renamed to pg_backup_start()/pg_backup_stop(),
and the functions pg_backup_start_time() and pg_is_in_backup() have been removed.
Notes
(0004818)
tuxmaster   
2022-10-22 21:12   
Also, the file backup_label is not created any more.
It looks like the backup process has changed:
https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-BASE-BACKUP
See section 26.3.3.
pg_backup_stop returns fields that have to be saved into files by the backup tool (field 2 -> backup_label, field 3 -> tablespace_map).
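
To illustrate the new API, a non-exclusive base backup on PostgreSQL 15 follows this sequence (a minimal sketch assuming psycopg2 and a role with backup privileges; the actual file copy is elided):

CODE:
import psycopg2

# The session that calls pg_backup_start() must stay open until
# pg_backup_stop(); use a single connection, outside explicit transactions.
conn = psycopg2.connect("dbname=postgres")
conn.autocommit = True
cur = conn.cursor()

# PostgreSQL >= 15: pg_start_backup() was renamed to pg_backup_start().
cur.execute("SELECT pg_backup_start(%s, true)", ("bareos-backup",))
start_lsn = cur.fetchone()[0]

# ... copy the data directory here ...

# pg_backup_stop() returns (lsn, labelfile, spcmapfile); the backup tool
# must save fields 2 and 3 itself, since PostgreSQL 15 no longer writes
# backup_label to disk.
cur.execute("SELECT lsn, labelfile, spcmapfile FROM pg_backup_stop(true)")
lsn, labelfile, spcmapfile = cur.fetchone()
with open("backup_label", "w") as f:
    f.write(labelfile)
if spcmapfile:
    with open("tablespace_map", "w") as f:
        f.write(spcmapfile)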
(0004822)
bruno-at-bareos   
2022-11-03 10:19   
Seen by a developer, but an external PR is welcome too ;-)
(0005042)
hostedpower   
2023-05-11 16:50   
We have the same problem. We recently upgraded to PostgreSQL 15.2 and now we cannot back up anything any more. What can we do now?
(0005043)
bruno-at-bareos   
2023-05-11 16:54   
Using PostgreSQL 15 as the database for Bareos is possible but needs the new grant script, which is proposed here:
future new grant script: https://github.com/bareos/bareos/pull/1449/files
full file: https://github.com/bareos/bareos/blob/3660d4c4b59fd17052d663bd22927fa85b2398e6/core/src/cats/ddl/grants/postgresql.sql

Backing up & restoring a PostgreSQL 15 database with the postgresql plugin is not possible due to the complete change of the backup API by the PGDG group starting with 15.
Contributions for improving the code are still awaited...
(0005044)
hostedpower   
2023-05-11 18:31   
I checked, but fortunately we still use PostgreSQL 14 for Bareos :)

However, our servers with PostgreSQL 15 fail to back up. I checked, and probably the easiest way is to switch to non-exclusive backups for PostgreSQL 15 as well. Most likely it could then function in the same way.

The commands are renamed (easy), and the biggest difference beyond that is that the backup_label is no longer written to the file system but is returned when "backup stop" is called.
(0005045)
hostedpower   
2023-05-12 05:26   
https://www.enterprisedb.com/blog/exclusive-backup-mode-finally-removed-postgres-15
https://www.cybertec-postgresql.com/en/exclusive-backup-deprecated-what-now/

The exclusive backup mode had been deprecated since PostgreSQL 9.6, when non-exclusive backups were introduced.

No idea why a brand new plugin was created with a method that has been deprecated for more than 5 years!
(0005060)
hostedpower   
2023-05-15 21:40   
Hi, let us know if there is anything we can do to get the required changes made. I think it must be pretty doable for the creator of the original plugin. As far as I know, the older PostgreSQL versions even support the same backup procedures (and even the same names: pg_backup_start etc.).

Maybe we can fund it a bit together with the original requester of the functionality?

View Issue Details
ID: 690
Category: [bareos-core] General
Severity: crash
Reproducibility: always
Date Submitted: 2016-08-23 18:19
Last Update: 2023-05-09 16:58
Reporter: tigerfoot
Assigned To: bruno-at-bareos
Platform: x86_64
OS: openSUSE
OS Version: Leap 42.1
Priority: low
Status: resolved
Product Version: 15.2.5
Resolution: fixed
Projection: none
ETA: none
Summary: bcopy segfault
Description: With a small volume Full001, I'm using a bsr (Catalog job) to try to copy this job to another medium (and device).

bcopy starts to work but segfaults at the end.
Steps To Reproduce: Pick a bsr of a job in a multi-job volume and run bcopy with a new volume on a different destination (different media).
Additional Information: gdb /usr/sbin/bcopy
GNU gdb (GDB; openSUSE Leap 42.1) 7.9.1
Copyright (C) 2015 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://bugs.opensuse.org/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/bcopy...Reading symbols from /usr/lib/debug/usr/sbin/bcopy.debug...done.
done.
(gdb) run -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Starting program: /usr/sbin/bcopy -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Missing separate debuginfos, use: zypper install glibc-debuginfo-2.19-22.1.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Detaching after fork from child process 3016.
bcopy (90): stored_conf.c:837-0 Inserting Device res: Default
bcopy (90): stored_conf.c:837-0 Inserting Director res: earth-dir
Detaching after fork from child process 3019.
bcopy (50): plugins.c:222-0 load_plugins
bcopy (50): plugins.c:302-0 Found plugin: name=autoxflate-sd.so len=16
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:299-0 Rejected plugin: want=-sd.so name=bpipe-fd.so len=11
bcopy (50): plugins.c:302-0 Found plugin: name=scsicrypto-sd.so len=16
bcopy (200): scsicrypto-sd.c:137-0 scsicrypto-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:302-0 Found plugin: name=scsitapealert-sd.so len=19
bcopy (200): scsitapealert-sd.c:99-0 scsitapealert-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (8): crypto_cache.c:55-0 Could not open crypto cache file. /var/lib/bareos/bareos-sd.9103.cryptoc ERR=No such file or directory
bcopy (100): bcopy.c:194-0 About to setup input jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:271-0 Using device: "Default" for reading.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/default
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/default dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=61fc28 size=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:171-0 add read_vol=Full-0001 JobId=0
bcopy (100): butil.c:164-0 Acquire device for read
bcopy (100): acquire.c:63-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:64-0 MediaType dcr= dev=default
bcopy (100): acquire.c:92-0 Want Vol=Full-0001 Slot=0
bcopy (100): acquire.c:106-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:174-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:193-0 dir_get_volume_info vol=Full-0001
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (100): mount.c:600-0 Must load "Default" (/var/lib/bareos/storage/default)
bcopy (100): autochanger.c:99-0 Device "Default" (/var/lib/bareos/storage/default) is not an autochanger
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (100): acquire.c:235-0 stored: open vol=Full-0001
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="Default" (/var/lib/bareos/storage/default) vol=Full-0001 mode=OPEN_READ_ONLY
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_ONLY
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_ONLY open(/var/lib/bareos/storage/default/Full-0001, 0x0, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=3 opened
bcopy (100): dev.c:580-0 preserve=0xffffde60 fd=3
bcopy (100): acquire.c:243-0 opened dev "Default" (/var/lib/bareos/storage/default) OK
bcopy (100): acquire.c:257-0 calling read-vol-label
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="Default" (/var/lib/bareos/storage/default) vol=Full-0001 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:213-0 Compare Vol names: VolName=Full-0001 hdr=Full-0001

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=Full-0001
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=Full-0001 drive="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=Full-0001 at 625718 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): dev.c:432-0 Device "Default" (/var/lib/bareos/storage/default) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): acquire.c:263-0 Got correct volume.
23-aoû 18:16 bcopy JobId 0: Ready to read from volume "Full-0001" on device "Default" (/var/lib/bareos/storage/default).
bcopy (100): acquire.c:370-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:371-0 MediaType dcr=default dev=default
bcopy (100): bcopy.c:212-0 About to setup output jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:274-0 Using device: "FileStorage" for writing.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/file/
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/file/ dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=6260d8 size=0 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:169-0 read_vol=Full-0001 JobId=0 already in list.
bcopy (120): device.c:266-0 start open_output_device()
bcopy (129): device.c:275-0 Device is file, deferring open.
bcopy (100): bcopy.c:225-0 About to acquire device for writing
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 mode=OPEN_READ_WRITE
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_WRITE
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_WRITE open(/var/lib/bareos/storage/file/catalog01, 0x2, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=4 opened
bcopy (100): dev.c:580-0 preserve=0xffffe0b0 fd=4
bcopy (100): acquire.c:400-0 acquire_append device is disk
bcopy (190): acquire.c:435-0 jid=0 Do mount_next_write_vol
bcopy (150): mount.c:71-0 Enter mount_next_volume(release=0) dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:84-0 mount_next_vol retry=0
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (200): mount.c:390-0 Before dir_find_next_appendable_volume.
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (150): mount.c:124-0 After find_next_append. Vol=catalog01 Slot=0
bcopy (100): autochanger.c:99-0 Device "FileStorage" (/var/lib/bareos/storage/file/) is not an autochanger
bcopy (150): mount.c:173-0 autoload_dev returns 0
bcopy (150): mount.c:209-0 want vol=catalog01 devvol= dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:213-0 Compare Vol names: VolName=catalog01 hdr=catalog01

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=catalog01
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=catalog01 drive="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List begin reserve_volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=catalog01 at 6258c8 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=catalog01 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:638-0 Inc walk_next use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List end new volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:438-0 Want dirVol=catalog01 dirStat=
bcopy (150): mount.c:446-0 Vol OK name=catalog01
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): mount.c:289-0 applying vol block sizes to device "FileStorage" (/var/lib/bareos/storage/file/): dcr->VolMinBlocksize set to 0, dcr->VolMaxBlocksize set to 0
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): mount.c:323-0 Device previously written, moving to end of data. Expect 0 bytes
23-aoû 18:16 bcopy JobId 0: Volume "catalog01" previously written, moving to end of data.
bcopy (100): dev.c:749-0 Enter eod
bcopy (200): dev.c:761-0 ====== Seek to 14465367
23-aoû 18:16 bcopy JobId 0: Warning: For Volume "catalog01":
The sizes do not match! Volume=14465367 Catalog=0
Correcting Catalog
bcopy (150): mount.c:341-0 update volinfo mounts=1
bcopy (150): mount.c:351-0 set APPEND, normal return from mount_next_write_volume. dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (190): acquire.c:448-0 Output pos=0:14465367
bcopy (100): acquire.c:459-0 === nwriters=1 nres=0 vcatjob=1 dev="FileStorage" (/var/lib/bareos/storage/file/)
23-aoû 18:16 bcopy JobId 0: Forward spacing Volume "Full-0001" to file:block 0:1922821345.
bcopy (100): dev.c:892-0 ===== lseek to 1922821345
bcopy (10): bcopy.c:384-0 Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1335
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2407
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3479
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4551
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5623
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6695
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7767
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8839
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9911
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10983
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12055
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=13127
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=14199
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=15271
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=16343
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=17415
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=18487
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=19559
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=20631
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=21703
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=22775
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=23847
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=24919
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=25991
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=27063
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=28135
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=29207
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=30279
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=31351
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=32423
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=33495
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=34567
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=35639
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=36711
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=37783
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=38855
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=39927
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=40999
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=42071
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=43143
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=44215
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=45287
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=46359
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=47431
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=48503
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=49575
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=50647
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=51719
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=52791
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=53863
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=54935
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=56007
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=57079
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=58151
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=59223
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=60295
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=61367
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=62439
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=63511
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=64583
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=107
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1179
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2251
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3323
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4395
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5467
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6539
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7611
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8683
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9755
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10827
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=11899
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12971
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=600 rem=200
bcopy (10): bcopy.c:384-0 End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
bcopy (200): read_record.c:243-0 End of file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
23-aoû 18:16 bcopy JobId 0: End of Volume at file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
bcopy (150): vol_mgr.c:713-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:732-0 === set not reserved vol=Full-0001 num_writers=0 dev_reserved=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:763-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:777-0 === remove volume Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List free_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:928-0 NumReadVolumes=1 CurReadVolume=1
bcopy (150): vol_mgr.c:705-0 vol_unused: no vol on "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List null vol cannot unreserve_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:947-0 End of Device reached.
23-aoû 18:16 bcopy JobId 0: End of all volumes.
bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 377 records copied.
bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
[New Thread 0x7ffff53e7700 (LWP 3015)]

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
319 lockmgr.c: No such file or directory.
Missing separate debuginfos, use: zypper install libcap2-debuginfo-2.22-16.1.x86_64 libgcc_s1-debuginfo-5.3.1+r233831-6.1.x86_64 libjansson4-debuginfo-2.7-3.2.x86_64 liblzo2-2-debuginfo-2.08-4.1.x86_64 libopenssl1_0_0-debuginfo-1.0.1i-15.1.x86_64 libstdc++6-debuginfo-5.3.1+r233831-6.1.x86_64 libwrap0-debuginfo-7.6-885.4.x86_64 libz1-debuginfo-1.2.8-6.4.x86_64
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
0000001 0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x61fc58, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
0000002 0x00007ffff7b9e72f in free_dcr (dcr=0x61fc28) at acquire.c:839
0000003 0x00007ffff7baa43f in my_free_jcr (jcr=0x622bc8) at butil.c:215
0000004 0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x622bc8) at jcr.c:641
0000005 0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 3015) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
Notes
(0002413)
tigerfoot   
2016-10-27 12:07   
End of the trace with full debuginfo packages installed.

27-oct-2016 12:05:29.103637 bcopy (90): mount.c:947-0 End of Device reached.
27-oct 12:05 bcopy JobId 0: End of all volumes.
27-oct-2016 12:05:29.103642 bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
27-oct-2016 12:05:29.103650 bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 309113 records copied.
27-oct-2016 12:05:29.103656 bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
27-oct-2016 12:05:29.103703 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103707 bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
27-oct-2016 12:05:29.103713 bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file)
27-oct-2016 12:05:29.103716 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103719 bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
27-oct-2016 12:05:29.103726 bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
319 lockmgr.c: No such file or directory.

Thread 1 "bcopy" received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
0000001 0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x623008, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
0000002 0x00007ffff7b9e72f in free_dcr (dcr=0x622fd8) at acquire.c:839
0000003 0x00007ffff7baa43f in my_free_jcr (jcr=0x623558) at butil.c:215
0000004 0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x623558) at jcr.c:641
0000005 0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 6975) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.


(full stack will be attached)
(0002414)
tigerfoot   
2016-10-27 12:18   
The trace can't be attached (log.xz is 24 MB); you can get it here:
https://dav.ioda.net/index.php/s/V7RPrq6M3KtbFc0/download
(0005031)
bruno-at-bareos   
2023-05-09 16:58   
This has been fixed in the recent version 21.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
587 [bareos-core] director minor always 2015-12-22 16:48 2023-05-09 16:55
Reporter: joergs Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: low OS Version: 3  
Status: resolved Product Version: 15.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: joblog has "Backup Error", but jobstatus is set to successful ('T') if writing the bootstrap file fails
Description: If the director can't write the bootstrap file, the joblog says:

...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

however, the jobstatus is 'T':

+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| JobId | Name | Client | StartTime | Type | Level | JobFiles | JobBytes | JobStatus |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| 225 | BackupClient1 | ting.dass-it-fd | 2015-12-22 16:32:13 | B | I | 2 | 44 | T |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
Tags:
Steps To Reproduce: configure a job with

  Write Bootstrap = "/NONEXISTINGPATH/%c.bsr"

and run the job.

Compare status from "list joblog" with "list jobs".
Additional Information: list joblog jobid=...

will show something like:

...
 2015-12-22 16:46:12 ting.dass-it-dir JobId 226: Error: Could not open WriteBootstrap file:
/NONEXISTINGPATH/ting.dass-it-fd.bsr: ERR=No such file or directory
...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

However "list jobs" will show 'T'.
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0005029)
bruno-at-bareos   
2023-05-09 16:55   
Fixed in a recent version.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
460 [bareos-core] file daemon minor N/A 2015-04-27 15:01 2023-05-09 16:50
Reporter: qwerty Platform: Slackware Linux  
Assigned To: pstorz OS: any  
Priority: normal OS Version: 3  
Status: feedback Product Version: 14.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: SHA1 checksum sometimes wrong
Description: For some files the checksum is wrong when doing an incremental backup in accurate mode.
Note that it is just for some files, and always the same files.
These files are not touched in any way.
They get backed up on every backup, full and incremental.
The problem is that this eats up backup media.

The FileSet setup look like this:
FileSet {
  Name = "Server1-data_opt_u"
  Include {
    Options {
      signature = SHA1
      compression = GZIP
      accurate = pins1u
      basejob = pins1u
      verify = pins1
      sparse = yes
      noatime = yes
      checkfilechanges = no
      exclude = yes

      @/opt/bareos/etc/clients.d/srv1.files/data_opt_u.wild_exclude
    }

    @/opt/bareos/etc/clients.d/srv1.files/data_opt_u.include
  }

  Exclude {
    @/opt/bareos/etc/clients.d/srv1.files/data_opt_u.exclude
  }
}

Shortened debug output from the file daemon, with one file as an example.
.
.
.
srv1-fd: accurate_htable.c:97-0 add fname=</opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso> lstat=P0E Jxf IHk B H0 H0 A DpqbgA BAA dPIV BUCE69 BMKTCp BU2xiJ A A g delta_seq=0 chksum=qCVteboqSK23Btz+1XrFVVb+KpA
.
.
.
srv1-fd: verify.c:322-0 === read_digest
srv1-fd: accurate.c:55-0 lookup </opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso> ok
srv1-fd: verify.c:262-0 === digest_file
srv1-fd: bfile.c:1101-0 bopen: fname /opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso, flags 262144, mode 0, rdev 0
srv1-fd: verify.c:322-0 === read_digest
srv1-fd: verify.c:437-0 /opt/u/Slackware/13.0/slackware-13.0-install-dvd.iso SHA1 chksum diff. Cat: qCVteboqSK23Btz+1XrFVVb+KpA File: WNmTLFTpUj5hZwRbIWmuqsx6n50
.
.
.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0001741)
pstorz   
2015-05-26 12:00   
Hello,

we fixed some problems with the digest code for 0000462, and I think that this should also solve this problem.
(see
https://github.com/bareos/bareos/commit/b75dbb84551b93a4f2359db71dea7527edfd0541 )

Please check with the newest master code if your problem is now fixed.

Thank you
(0001765)
qwerty   
2015-06-02 21:00   
No. The problem is still there.
(0001822)
backelj   
2015-09-09 10:49   
Hello, I'm experiencing the same thing with these settings:

FileSet {
  Name = test-data-fileset
  Ignore FileSet Changes = yes
  Include {
    Options {
      Compression = GZIP
      Signature = MD5
      BaseJob = pmugcs5
      Accurate = mcs5
      Verify = pins5
      sparse = yes
      aclsupport = yes
      xattrsupport = yes
      hfsplussupport = yes
      checkfilechanges = yes
    }

The initial checksum is different from all subsequent checksums, which are all the same.
That is, given the output above, the checksum that is shown in the debug output for accurate_htable.c is different from the one in the debug output of verify.c. In subsequent backups, the checksum in the debug output of verify.c does not change.

I'm wondering how these checksums are computed (are they computed on the original file or on the gzipped file?) and why the initial checksum is different...
(0001828)
backelj   
2015-09-10 11:12   
Hello again,

I've found an old thread on the bacula mailing lists that is about a similar issue:

http://sourceforge.net/p/bacula/mailman/message/20282084/

I followed the thread and did the following:
- Create a full backup.
- Run a VolumeToCatalog verify job = OK.
- Run a DiskToCatalog verify job = OK.
- Run a incremental backup = files are still copied.

At the end they talk about the "sparse = yes" setting.
If I remove this setting, all is fine again!
Moreover, the checksums in the initial full backup are different from the ones taken with the setting "sparse=yes".
In this thread, they mention that "the Verify code that creates the hash
does not take account of the Sparse flag". Maybe this is the case for the bareos code as well?

So qwerty, would you be able and willing to test this as well (i.e., without the "sparse=yes" setting)?

Many thanks,

Franky
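
For what it's worth, the suspected cause is easy to demonstrate outside Bareos. A minimal sketch (an illustration, not Bareos code) that hashes a sparse file twice on Linux: once reading every byte, and once skipping holes with SEEK_DATA/SEEK_HOLE the way a sparse-aware reader would. On a filesystem that stores the file sparsely, the two digests differ, which matches the symptom described above:

import hashlib, os

path = "/tmp/sparse-demo"
with open(path, "wb") as f:
    f.write(b"head")
    f.seek(1 << 20)   # leave a 1 MiB hole (needs a filesystem with sparse support)
    f.write(b"tail")

def sha1_full(p):
    # plain read: the hole is delivered as zero bytes and enters the digest
    h = hashlib.sha1()
    with open(p, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sha1_data_only(p):
    # sparse-aware read: hash only the data extents, skipping holes
    h = hashlib.sha1()
    fd = os.open(p, os.O_RDONLY)
    try:
        pos, end = 0, os.fstat(fd).st_size
        while pos < end:
            try:
                data = os.lseek(fd, pos, os.SEEK_DATA)
            except OSError:            # no data extent after pos
                break
            hole = os.lseek(fd, data, os.SEEK_HOLE)
            os.lseek(fd, data, os.SEEK_SET)
            h.update(os.read(fd, hole - data))
            pos = hole
    finally:
        os.close(fd)
    return h.hexdigest()

print(sha1_full(path), sha1_data_only(path))  # two different digests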
(0001830)
qwerty   
2015-09-11 14:01   
Hi

Yes, with "Sparse = no" it works.
I missed that bacula thread.
It looks like that old bug is still there.
Thanks Franky.
(0002025)
backelj   
2015-12-03 11:34   
I'm wondering: will there be some action taken to resolve this issue, or is using "sparse=no" the only solution to this problem?
(0005027)
bruno-at-bareos   
2023-05-09 16:50   
Is this still reproducible with a newer version like 22.x?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
458 [bareos-core] director major random 2015-04-17 10:30 2023-05-09 16:43
Reporter: Dmitriy Platform: x64  
Assigned To: pstorz OS: CentOS  
Priority: high OS Version: release 6.6  
Status: feedback Product Version: 14.2.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: During job execution, a volume in Used status is mistakenly recycled.
Description: During job execution, a volume in Used status is mistakenly recycled.
No errors are detected in bconsole, the web interface or the database; the job finishes with status OK. The error can only be seen in the logs or when restoring data from an affected job.

In the log file, bareos-dir says it recycled volume "krus-fs-vol-0176":
15-Apr 00:00 backup-is.enforta.net-dir JobId 8476: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0482" as Used.
16-Apr 00:00 backup-is.enforta.net-dir JobId 8530: Recycled volume "krus-fs-vol-0176"

but bareos-sd reports that a different volume, "krus-fs-vol-0436", was recycled (data lost):
16-Apr 00:00 backup-is-sd JobId 8530: Recycled volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.
Tags:
Steps To Reproduce:
Additional Information: 15.04.2015 00:00: job 8476 runs, return status OK.

15-Apr 00:00 backup-is.enforta.net-dir JobId 8476: Using Device "krus-fd-dev" to write.
15-Apr 00:00 siebel-oracle-fd JobId 8476: shell command: run ClientBeforeJob "/opt/krus/bin/bacula_pg_backup.sh Incremental start 8476"
15-Apr 00:00 backup-is.enforta.net-dir JobId 8476: Sending Accurate information.
15-Apr 00:00 backup-is.enforta.net-dir JobId 8476: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0482" as Used.
15-Apr 00:00 backup-is.enforta.net-dir JobId 8476: Recycled volume "krus-fs-vol-0436"
15-Apr 00:00 backup-is-sd JobId 8476: Recycled volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.
15-Apr 00:01 backup-is-sd JobId 8476: Elapsed time=00:01:05, Transfer rate=85.88 M Bytes/second
  Volume name(s): krus-fs-vol-0436
  Termination: Backup OK

16.04.2015 00:00: job 8530 runs, return status OK, but in fact it destroyed the data of job 8476 on volume krus-fs-vol-0436:
16-Apr 00:00 backup-is.enforta.net-dir JobId 8530: Using Device "krus-fd-dev" to write.
16-Apr 00:00 backup-is.enforta.net-dir JobId 8530: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0436" as Used.
16-Apr 00:00 backup-is.enforta.net-dir JobId 8530: Recycled volume "krus-fs-vol-0176"
16-Apr 00:00 backup-is-sd JobId 8530: Recycled volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.
16-Apr 00:00 backup-is.enforta.net-dir JobId 8530: Max configured use duration=86,400 sec. exceeded. Marking Volume "krus-fs-vol-0436" as Used.
16-Apr 00:01 siebel-oracle-fd JobId 8530: shell command: run ClientAfterJob "/opt/krus/bin/bacula_pg_backup.sh Incremental finish 8530"
16-Apr 00:01 backup-is-sd JobId 8530: Elapsed time=00:01:08, Transfer rate=88.62 M Bytes/second
  Volume name(s): krus-fs-vol-0176|krus-fs-vol-0436
  Termination: Backup OK

in database bareos job 8476 exist on volume krus-fs-vol-0436
bareos=# select m.mediaid, m.volumename from jobmedia jm, media m where jobid=8476 and m.mediaid=jm.mediaid group by m.mediaid, m.volumename;
 mediaid | volumename
---------+------------------
     436 | krus-fs-vol-0436

bareos=# select m.mediaid, m.volumename from jobmedia jm, media m where jobid=8530 and m.mediaid=jm.mediaid group by m.mediaid, m.volumename;
 mediaid | volumename
---------+------------------
     176 | krus-fs-vol-0176
     436 | krus-fs-vol-0436

But reading the volume shows that it contains only one job, 8530 (job 8476 is absent; restoring job 8476's data from volume krus-fs-vol-0436 fails):
bls -c /home/cruel/bareos-sd.conf -V krus-fs-vol-0436 krus-fd-dev -j
bls: butil.c:298-0 Using device: "krus-fd-dev" for reading.
17-Apr 10:41 bls JobId 0: Ready to read from volume "krus-fs-vol-0436" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/).
Volume Record: File:blk=0:226 SessId=409 SessTime=1428425778 JobId=0 DataLen=191
Begin Job Session Record: File:blk=0:64738 SessId=409 SessTime=1428425778 JobId=8530
   Job=siebel-oracle-krus-fs-log-job.2015-04-16_00.00.01_38 Date=16-Apr-2015 00:00:21 Level=I Type=B
End Job Session Record: File:blk=1:1735869214 SessId=409 SessTime=1428425778 JobId=8530
   Date=16-Apr-2015 00:01:29 Level=I Type=B Files=383 Bytes=6,026,360,124 Errors=0 Status=T
17-Apr 10:43 bls JobId 0: End of Volume at file 1 on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), Volume "krus-fs-vol-0436"
17-Apr 10:43 bls JobId 0: End of all volumes.

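For anyone who wants to automate the cross-check above, a hedged sketch that compares the JobIds the catalog places on a volume with the JobIds bls actually finds on it (catalog credentials, config path and device name are taken from this report and may need adjusting):

import re, subprocess
import psycopg2

vol = "krus-fs-vol-0436"

cur = psycopg2.connect(dbname="bareos", user="bareos").cursor()
cur.execute("""
    SELECT DISTINCT jm.jobid
    FROM jobmedia jm JOIN media m ON m.mediaid = jm.mediaid
    WHERE m.volumename = %s
""", (vol,))
catalog_jobs = {row[0] for row in cur.fetchall()}

out = subprocess.run(
    ["bls", "-c", "/home/cruel/bareos-sd.conf", "-V", vol, "krus-fd-dev", "-j"],
    capture_output=True, text=True).stdout
on_volume = {int(j) for j in re.findall(r"JobId=(\d+)", out)} - {0}  # 0 = volume label

print("in catalog but missing on volume:", catalog_jobs - on_volume)
print("on volume but missing in catalog:", on_volume - catalog_jobs)
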
Volume settings:
llist volume=krus-fs-vol-0176
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
          mediaid: 176
       volumename: krus-fs-vol-0176
             slot: 0
           poolid: 13
        mediatype: File
     firstwritten: 2015-04-16 13:20:06
      lastwritten: 2015-04-17 00:00:39
        labeldate: 2015-04-17 00:00:03
          voljobs: 2
         volfiles: 4
        volblocks: 324,295
        volmounts: 6
         volbytes: 20,920,886,821
        volerrors: 0
        volwrites: 1,525,891
volcapacitybytes: 0
        volstatus: Append
          enabled: 1
          recycle: 1
     volretention: 1,209,600
   voluseduration: 86,400
       maxvoljobs: 0
      maxvolfiles: 0
      maxvolbytes: 53,687,091,200
       inchanger: 0
          endfile: 4
         endblock: 3,741,017,636
        labeltype: 0
        storageid: 11
         deviceid: 0
       locationid: 0
     recyclecount: 3
     initialwrite:
    scratchpoolid: 0
    recyclepoolid: 0
          comment:

llist volume=krus-fs-vol-0436
          mediaid: 436
       volumename: krus-fs-vol-0436
             slot: 0
           poolid: 13
        mediatype: File
     firstwritten: 2015-04-15 00:00:19
      lastwritten: 2015-04-16 00:01:29
       labeldate: 2015-04-16 13:14:27
          voljobs: 1
         volfiles: 1
        volblocks: 93,484
        volmounts: 5
         volbytes: 6,030,836,511
        volerrors: 0
        volwrites: 1,295,080
volcapacitybytes: 0
        volstatus: Used
          enabled: 1
          recycle: 1
     volretention: 1,209,600
   voluseduration: 86,400
       maxvoljobs: 0
      maxvolfiles: 0
      maxvolbytes: 53,687,091,200
        inchanger: 0
          endfile: 1
         endblock: 1,735,869,214
        labeltype: 0
        storageid: 11
         deviceid: 0
       locationid: 0
     recyclecount: 1
     initialwrite:
    scratchpoolid: 0
    recyclepoolid: 0
          comment:
System Description
Attached Files:
Notes
(0001708)
Dmitriy   
2015-05-05 08:16   
The problem recurs; it is observed on storage located on a NAS connected via iSCSI.

There is a suspicion that the storage goes to sleep from inactivity. At the start of a job the NAS wakes up after 30-60 seconds, and during this time the failure occurs.
(0001742)
pstorz   
2015-05-26 12:06   
Hello,

your last comment seems to show that the problem is caused by your NAS system.

If your NAS system does not behave correctly, this is not a bug in bareos.

Do you have more info? Otherwise we will close this bug.

Best regards,

Philipp
(0001743)
Dmitriy   
2015-05-26 13:19   
Hello,
I turned off drive spin-down on the NAS, so there is no timeout.
The problem persists.
Do you need more information to diagnose the problem?
(0001827)
pstorz   
2015-09-10 09:22   
Hello,

can you send your configuration please?

Especially the configuration of the Pools would be interesting.

Also, you can run the sd with high debug level (e.g. 500), then it will tell you why it thinks it can recycle a volume.
(0001835)
Dmitriy   
2015-09-16 13:58   
Hello,

I launched the bareos-sd daemon with debug level 500.
The problem repeats every 1-2 weeks.
I am waiting for the failure and will then send the configuration and debug info.
(0001848)
Dmitriy   
2015-09-22 10:54   
Hello,

The problem occurs only on volumes in the pool krus-fs-pool, when the job siebel-oracle-krus-fs-log-job (Type=Incremental) starts.

By mistake it cleared krus-fs-vol-0187, which contained the previous incremental backup.

jobid 18190
22-Sep 00:00 backup-is.enforta.net-dir JobId 18190: Recycled volume "krus-fs-vol-0189"
22-Sep 00:00 backup-is-sd JobId 18190: Recycled volume "krus-fs-vol-0187" on device "krus-fd-dev" (/extra3/bacula_backup/krus-fs-pool/), all previous data lost.

Job log
https://drive.google.com/file/d/0BwXK64WOr_77NkcxRXNzNHF6cVk/view?usp=sharing

Full debug bareos-sd level 500
https://drive.google.com/file/d/0BwXK64WOr_77U3JNbV96dmpiMGc/view?usp=sharing

grep 'krus-fs-vol-0187\|krus-fs-vol-0187\|=18190' bareos-sd1.debug >1.log
https://drive.google.com/file/d/0BwXK64WOr_77Y2tFMlZmdWZpQms/view?usp=sharing

Config file
https://drive.google.com/file/d/0BwXK64WOr_77UERqSEl5NzhFV28/view?usp=sharing
(0005024)
bruno-at-bareos   
2023-05-09 16:43   
Is this still reproducible with the recent version 22.x?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
301 [bareos-core] director tweak always 2014-05-27 17:02 2023-05-09 16:41
Reporter: alexbrueckel Platform:  
Assigned To: bruno-at-bareos OS: Debian 7  
Priority: low OS Version:  
Status: resolved Product Version: 13.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Inconsistency when configuring bandwith limitation
Description: Hi,
while configuring different jobs for a client, some with bandwidth limitation, I noticed that every configuration item can be placed in quotation marks except the desired maximum bandwidth.

It's a bit inconsistent this way, so it would be great if this could be fixed.

Thank you very much
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0000889)
mvwieringen   
2014-05-31 22:04   
An example and the exact error would be handy. It's probably some missing parsing, as all config code uses the same config parser. But without a clear example and the exact error, it's not something we can act on.
(0000899)
alexbrueckel   
2014-06-04 17:37   
Hi,

here's the example that works:
Job {
  Name = "myhost-backupjob"
  Client = "myhost.mydomain.tld"
  JobDefs = "default"
  FileSet = "myfileset"
  Maximum Bandwidth = 10Mb/s
}

Note that the bandwidth value has no quotation marks.

Here is an example that doesn't work:
Job {
  [same as above]
  Maximum Bandwidth = "10Mb/s"
}

The error message i get in this case is:
ERROR in parse_conf.c:764 Config error: expected a speed, got: 10Mb/s

Hope that helps and thanks for your work.
Alex
(0000900)
mvwieringen   
2014-06-06 15:39   
It seems that the config engine only allows quoting for strings; numbers are not allowed to have quotation marks. As the speed gets parsed by the same function as a number, it currently doesn't allow you to use quotes. You can indeed argue that it's inconsistent, but it seems to have been envisioned this way by the original creator of the config engine. We might change this one day, but for now I wouldn't hold my breath for it to happen any time soon. There are just more important things to do.
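
For illustration, the requested change amounts to stripping optional surrounding quotes before the value reaches the number/speed parser. A minimal sketch in Python (the real parser lives in the C++ config engine; the unit handling here is simplified):

import re

UNITS = {"k": 10**3, "kb": 10**3, "m": 10**6, "mb": 10**6, "g": 10**9, "gb": 10**9}

def parse_speed(value):
    """Parse a speed like 10Mb/s into bytes per second, quoted or not."""
    value = value.strip()
    if len(value) >= 2 and value[0] == value[-1] == '"':
        value = value[1:-1]            # the requested behaviour: strip the quotes
    m = re.fullmatch(r"(\d+)\s*([A-Za-z]*)/s", value)
    if not m:
        raise ValueError(f"expected a speed, got: {value}")
    number, unit = m.groups()
    return int(number) * UNITS.get(unit.lower(), 1)

assert parse_speed("10Mb/s") == parse_speed('"10Mb/s"') == 10_000_000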
(0001086)
joergs   
2014-12-01 16:13   
I added some notes about this to the documentation.
(0005023)
bruno-at-bareos   
2023-05-09 16:41   
Documentation updated.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1532 [bareos-core] file daemon major always 2023-04-27 10:22 2023-05-03 15:39
Reporter: hostedpower Platform: x86  
Assigned To: OS: Windows  
Priority: normal OS Version: 2016  
Status: new Product Version: 22.0.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Windows backups fails on a lot of files, seems vss is not used properly
Description: Using Windows 2022

mdf": ERR=The process cannot access the file because it is being used by another process.

Cannot open "E:/SQLLogs/xxxxx/xxxx_Cbro_log.ldf": ERR=The process cannot access the file because it is being used by another proces

But we see lots of these errors (easily hundreds at times, which is quite weird).

I cannot remember seeing so many errors with Windows backups as we experience now on Windows 2022 and Bareos 22.x (not sure which one causes it).

Could you provide any troubleshooting steps so I can report back?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004993)
hostedpower   
2023-04-28 16:21   
Maybe a similar issue here: https://groups.google.com/g/bareos-users/c/F46rRPh7Hf8
(0004997)
bruno-at-bareos   
2023-05-03 15:39   
Are the "E:/SQLLogs" directory pick into account by the VSS SQL Writer ?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1517 [bareos-core] director block always 2023-03-01 17:24 2023-04-20 18:24
Reporter: mschiff Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: assigned Product Version: 21.1.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: compilation fails with GCC-13: cats.h:491:50: error: expected class-name before { token
Description: Reading the changelog from 21.1.5 to .6, there seems to be no related fix, so I assume this is still relevant.

compiling bareos on a system with gcc-13 fails with many errors like:

core/src/cats/cats.h:491:50: error: expected class-name before ‘{’ token

Please see this link for a complete log:

https://895192.bugs.gentoo.org/attachment.cgi?id=852366
Tags: broken, cmake compilation, director
Steps To Reproduce: Try to compile bareos 21.1.6 using gcc-13
Additional Information: Gentoo bug: https://bugs.gentoo.org/895192
Attached Files: bareos-build-fail (139,450 bytes) 2023-03-01 17:24
https://bugs.bareos.org/file_download.php?file_id=543&type=bug
Notes
(0004954)
arogge   
2023-03-24 11:34   
Can you do a PR to fix that problem against master? We could then backport it to 22 and 21.
(0004955)
arogge   
2023-03-24 11:38   
AFAICT there is just an "#include <stdexcept>" missing in the include-block at the top of cats.h
(0004958)
bruno-at-bareos   
2023-03-27 13:27   
PR in progress https://github.com/bareos/bareos/pull/1424
(0004973)
mschiff   
2023-04-20 00:46   
I tested with PR 1424 applied and using gcc-13.0.1_pre20230409 I now get this error:

[53/362] /usr/bin/x86_64-pc-linux-gnu-g++ -D_FILE_OFFSET_BITS=64 -Dbareossd_EXPORTS -I/usr/include/tirpc -I/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src -O2 -pipe -Wsuggest-override -Wformat -Wformat-security -fdebug-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -fmacro-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -Wno-unknown-pragmas -Wall -std=gnu++17 -fPIC -Wno-deprecated-declarations -MD -MT core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -MF core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o.d -o core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -c /var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc
FAILED: core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o
/usr/bin/x86_64-pc-linux-gnu-g++ -D_FILE_OFFSET_BITS=64 -Dbareossd_EXPORTS -I/usr/include/tirpc -I/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src -O2 -pipe -Wsuggest-override -Wformat -Wformat-security -fdebug-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -fmacro-prefix-map=/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core=. -Wno-unknown-pragmas -Wall -std=gnu++17 -fPIC -Wno-deprecated-declarations -MD -MT core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -MF core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o.d -o core/src/stored/CMakeFiles/bareossd.dir/dev.cc.o -c /var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc: In function ‘storagedaemon::Device* storagedaemon::FactoryCreateDevice(JobControlRecord*, DeviceResource*)’:
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:41: error: expected unqualified-id before ‘&’ token
  243 | } catch (const std::out_of_range&) {
      | ^
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:41: error: expected ‘)’ before ‘&’ token
  243 | } catch (const std::out_of_range&) {
      | ~ ^
      | )
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:41: error: expected ‘{’ before ‘&’ token
/var/tmp/portage/app-backup/bareos-21.1.7/work/bareos-Release-21.1.7/core/src/stored/dev.cc:243:42: error: expected primary-expression before ‘)’ token
  243 | } catch (const std::out_of_range&) {
      |
(0004974)
bruno-at-bareos   
2023-04-20 09:44   
Sorry no backport nor tests have been nor will be done for bareos-21 which is no more current code.

The current code/version is 22x
(0004976)
mschiff   
2023-04-20 10:04   
OK, is there published information somewhere about which branches of bareos are maintained, which are EOL, and when?

Ideally also with a plan for how long each branch will be maintained in the future.

My last information was that you actively support more than one branch and provide at least bugfix releases for the last two branches before the current one (so 20 and 21 currently).
(0004977)
bruno-at-bareos   
2023-04-20 10:32   
Your information is correct for the subscription version (paying customers). As none of the supported operating systems has gcc-13 present by default, the fixes for 21 are postponed until there's a paying request (or an external contribution, of course).

Otherwise, the changes for the repositories and versions were published here:
https://www.bareos.com/bareos-release-policy/

Hope this helps you.
(0004978)
mschiff   
2023-04-20 13:23   
Thanks for the clarification. As I am packaging for Gentoo, which is source-based, we don't need the binaries at all, so I am really only interested in the sources at this point.

So you are saying that you would accept PRs on github for things like gcc-13 support for the maintained branches (20-22 currently).

But is there some roadmap somewhere where you describe how long each of the active branches will be supported?
Or is there some policy somewhere stating how many months a branch is supported until it's EOL after a new major branch has been released?

Thanks in advance!
(0004979)
bruno-at-bareos   
2023-04-20 13:58   
I'm not aware of anything published other than the official documentation and the published policy.

So, on the community side (download.bareos.org):

Basically there will be, as usual, one major version per year; the next one will be 23. Globally we would like to increase the pace of this, but for the moment it will stay as it is.
The objective is to get the major version out around the end of October.

When 23 is out, it will replace 22 as current, and people will have to move to the new version as soon as it is out.
Development of 24 will start immediately (in next) at the same time.

So basically 22 is maintained until 23 is released.
This has been decided to get better and more recent feedback, instead of having people run very old code.

For you and anybody building from sources, you are of course free to do what you want and copy the policy we have for subscription customers, meaning we currently support bareos-20, 21 and 22, with 20 being dropped once 23 is released.

For PRs: normally they are made against master, then backported when a legitimate interest is there, but they can also address a specific version, as is maybe the case here.
(0004980)
mschiff   
2023-04-20 15:31   
Thanks again for clarification!

For the specific error in this bug, I just found out that it's just a missing

#include <stdexcept>

at the head of
  core/src/stored/dev.cc

After adding this, the source compiles fine.
(0004981)
mschiff   
2023-04-20 18:05   
https://github.com/bareos/bareos/pull/1451
(0004982)
mschiff   
2023-04-20 18:24   
And a final comment (sorry): PR 1451 and PR 1424 both need to be applied to the bareos-20 and bareos-22 branches as well, and I guess master. Will you do it, or should I create a separate PR for each branch?

TIA!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1529 [bareos-core] General trivial have not tried 2023-03-24 14:06 2023-03-24 14:06
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 22.0.4
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 22.0.4
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1528 [bareos-core] General trivial have not tried 2023-03-24 11:32 2023-03-24 11:32
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 21.1.8
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 21.1.8
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
782 [bareos-core] director minor have not tried 2017-02-09 16:12 2023-03-23 16:20
Reporter: hostedpower Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Disk full not adequately detected
Description: Hi,


Once the disk holding the volumes is full, it keeps creating ghost volumes over and over instead of stopping when writing doesn't work. Is there some setting to prevent this? Is this a bug, or is it by design somehow?

For example, the disk was full and it created several volumes which could never actually be written because of this. I have the feeling there should be a better method.


09-Feb 09:20 bareos-sd JobId 809: End of medium on Volume "vol-cons-0768" Bytes=158,430,174,241 Blocks=2,455,825 at 09-Feb-2017 09:20.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0773" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0773" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0773" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0773" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0774" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0774" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0774" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0775" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0775" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0775" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0776" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0776" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0776" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0777" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0777" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0777" in Error in Catalog.
09-Feb 09:20 bareos-sd JobId 809: Please mount append Volume "vol-cons-0777" or label a new one for:
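
Until this is handled better, the ghost volumes can at least be found for manual cleanup. A hedged sketch, assuming a PostgreSQL catalog and the standard media table (volumes marked Error with little or no data are the candidates this report describes):

import psycopg2

cur = psycopg2.connect(dbname="bareos", user="bareos").cursor()
# volumes the director marked in Error before any job data could be written
cur.execute("""
    SELECT volumename, volstatus, volbytes
    FROM media
    WHERE volstatus = 'Error'
    ORDER BY volumename
""")
for name, status, volbytes in cur.fetchall():
    # near-zero volbytes suggests a ghost volume; review them, then
    # remove with 'delete volume=...' in bconsole
    print(name, status, volbytes)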
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1516 [bareos-core] director major always 2023-02-28 16:01 2023-03-21 17:36
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 22.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Problem with failing queries in postgresql logs
Description: Hi,


We use PostgreSQL 14 together with Bareos 22.x and we find some worrisome errors in the logs. We recently upgraded to Bareos 22; as far as I can see, the same problems existed in 21.x.

I'm attaching the error logs
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: sql_errors.txt (111,845 bytes) 2023-02-28 16:01
https://bugs.bareos.org/file_download.php?file_id=542&type=bug
bareos-logs.txt (10,486 bytes) 2023-03-02 15:06
https://bugs.bareos.org/file_download.php?file_id=545&type=bug
volume.png (24,199 bytes) 2023-03-08 09:12
https://bugs.bareos.org/file_download.php?file_id=546&type=bug
Notes
(0004885)
bruno-at-bareos   
2023-03-02 09:51   
By any chance do you still have the joblog of running jobs at that time ?
(0004886)
hostedpower   
2023-03-02 10:22   
I can make a new export; just let me know the best way to retrieve the logs for a specific time. Are they just in /var/log/bareos, or can I issue a message command or similar?
(0004889)
bruno-at-bareos   
2023-03-02 14:11   
Usually you should find the joblog in bareos.log. Also, if you check your `list jobs` output, you will have the start time of the job; then you can extract the joblog with:

list joblog jobid=XXX
(0004893)
hostedpower   
2023-03-02 15:06   
OK, I checked; it's most likely happening with (virtually) all jobs. I attach matching logs here. I just anonymised the server names.
(0004894)
bruno-at-bareos   
2023-03-02 16:18   
If it were happening with almost all jobs, we would have those errors in the postgresql logs when running the system tests, which is unfortunately not the case.
For example, we are not able to reproduce the case of jobid in () ... We really wonder what is triggering this.
Do you have any admin job routine or similar scripts doing purge/prune, or deleting volumes, jobs or clients?

Thanks.
(0004895)
hostedpower   
2023-03-02 16:30   
The job is using these jobdefs


JobDefs {
        Name = "FullRJob"
        Type = Backup

        Level = Full

        Client = xxxx.bla.com
        Write Bootstrap = "/var/lib/bareos/%c.bsr"

        Accurate = No
        Messages = Standard

        # Be careful not to start double jobs (since ran every hour by default)
        Allow Duplicate Jobs = No

        Allow Higher Duplicates = No
        Cancel Queued Duplicates = No
        Cancel Running Duplicates = No

        Allow Mixed Priority = True
        Priority = 10

        Pool = R-Full

        Incremental Backup Pool = R-Incremental
        Differential Backup Pool = R-Differential
        Full Backup Pool = R-Full

        Run Script {
                RunsWhen = After
                RunsOnFailure = No
                FailJobOnError = No
                RunsOnClient = No
                console = ".bvfs_update jobid=%i"
        }
}
(0004896)
bruno-at-bareos   
2023-03-06 12:03   
Thanks, the jobdef is completely normal.

The interesting part in the job log is the run of autoprune/autopurge on the client.
I have to mention that this part of the code is deprecated (in 22) and will normally not trigger anything, but still some queries seem to be run.
 
02-Mar 15:03 backup4-dir JobId 1309: Begin pruning Jobs older than 6 months .
02-Mar 15:03 backup4-dir JobId 1309: Purging the following 0 JobIds:
02-Mar 15:03 backup4-dir JobId 1309: No Jobs found to prune.
02-Mar 15:03 backup4-dir JobId 1309: Begin pruning Files.
02-Mar 15:03 backup4-dir JobId 1309: No Files found to prune.
02-Mar 15:03 backup4-dir JobId 1309: End auto prune.
(0004899)
hostedpower   
2023-03-06 22:28   
Well I saw similar errors in 21. But what part is deprecated? Can we no longer truncate?
(0004900)
bruno-at-bareos   
2023-03-07 09:59   
The part that is deprecated is the autoprune parameter (Files and Job) normally attached to the Client definition.
Too many users are hit by the purge done here and then don't understand why they can't restore (or at least can't see jobs and files anymore, while the volume is still valid).
The vast majority want to see in the database what they have on volumes, so soon pruning will occur on volumes only.
(0004901)
hostedpower   
2023-03-07 10:14   
So I should remove this in the client def to resolve it?


Client {
    Name = "xxx"
....
    AutoPrune = yes

}

What should I do instead to make sure old data is properly deleted from disk and database in a timely manner? Maybe I'm "overly" pruning atm :)
(0004903)
bruno-at-bareos   
2023-03-07 12:12   
So you can move Autoprune from yes to no (the new default in 22; the parameter will be removed afterwards)
and check how your database is behaving.

Check your volume retention, as that will drive the pruning of jobs and files in the DB.

Of course the errors will not disappear completely (not all cases belong to that part of the code), but they should happen much less often.
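
To see which volumes this retention-driven pruning would consider expired, a small sketch (assuming a PostgreSQL catalog; volretention is stored in seconds, lastwritten as a timestamp):

import psycopg2

cur = psycopg2.connect(dbname="bareos", user="bareos").cursor()
# volumes whose retention period has elapsed since they were last written
cur.execute("""
    SELECT volumename, volstatus, lastwritten, volretention
    FROM media
    WHERE lastwritten IS NOT NULL
      AND lastwritten + volretention * interval '1 second' < now()
      AND volstatus IN ('Used', 'Full')
""")
for row in cur.fetchall():
    print(*row)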
(0004905)
hostedpower   
2023-03-07 14:01   
Ok, I'll try to rely on the volumes, for example:

Pool {
    Name = R-Incremental
    Storage = R-Incremental
    LabelFormat = "vol-rincr-"

    Pool Type = Backup
    Recycle = yes # Bareos can automatically recycle Volumes
    Auto Prune = yes # Prune expired volumes
    Volume Retention = 2 days # How long should jobs be kept?
    Maximum Volume Bytes = 100G # Limit Volume size to something reasonable -> Keep it small so it can be pruned fast
    Volume Use Duration = 23h

    Action On Purge = Truncate
}


---> This would remove any volumes AND related jobs after 2 days maximum? Also remove from disk?



I'm also using the always incremental scheme for certain jobs; should I "rely" on the always incremental system to do the pruning? I have these kinds of settings there:

Pool {
    Name = AI-Incremental
    Storage = AI-Incremental
    LabelFormat = "vol-incr-"
    Next Pool = AI-Consolidated

    Pool Type = Backup
    Recycle = yes # Bareos can automatically recycle Volumes
    Auto Prune = yes # Prune expired volumes
    Volume Retention = 120 days # How long should jobs be kept? <----- Keeping this as a safety measure for the moment, I expect alwaysincremental consolidate to cleanup properly ... ?
    Maximum Volume Bytes = 200G # Limit Volume size to something reasonable
    Volume Use Duration = 23h

    Action On Purge = Truncate
}

Pool {
    Name = AI-Consolidated
    Storage = AI-Consolidated
    LabelFormat = "vol-cons-"

    Pool Type = Backup
    Recycle = yes # Bareos can automatically recycle Volumes
    Auto Prune = yes # Prune expired volumes
    Volume Retention = 120 days # How long should jobs be kept? <----- Keeping this as a safety measure for the moment, I expect alwaysincremental consolidate to cleanup properly ... ?
    Maximum Volume Bytes = 200G # Limit Volume size to something reasonable
    Volume Use Duration = 23h

    Action On Purge = Truncate
}

Is this right? Because I have the feeling that not everything is always removed. Sometimes I find really old volumes, while the always incremental scheme only keeps 20 days:

JobDefs {
    Name = "AIJob"
    Type = Backup
    Level = Incremental
    Client = xxx
    Write Bootstrap = "/var/lib/bareos/%c.bsr"
    Accurate=yes

    Messages = Standard
    Always Incremental = yes
    Always Incremental Job Retention = 20 days # The resulting interval between full consolidations when running daily backups and daily consolidations is Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job.
    Always Incremental Max Full Age = 30 days # should NEVER be less then Always Incremental Job Retention -> Every 15 days the full backup is also consolidated ( Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir )
    Always Incremental Keep Number = 6 # Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than "Always Incremental Job Retention". Must be less than Always Incremental Job Retention
    # Maximum Concurrent Jobs = 10 # Let up to 10 jobs run
    # Max Virtual Full Interval = 20 days # Convert regular jobs to virtual full when this limit is passed

    Allow Mixed Priority = True
    Priority = 10

    Pool = AI-Incremental
    Full Backup Pool = AI-Consolidated

    Run Script {
        RunsWhen = After
        RunsOnFailure = No
        FailJobOnError = No
        RunsOnClient = No
        console = ".bvfs_update jobid=%i"
    }
}

Job {
        Name = "Consolidate"
        Type = "Consolidate"
        Accurate = "yes"

        JobDefs = "DefaultJob"

        Schedule = "ConsolidateCycle"
        Max Full Consolidations = 500 # Don't limit on this, run as needed

        Maximum Concurrent Jobs = 1
        Prune Volumes = yes

# Priority = 11
### Fix: Set same priority as other jobs so they can be mixed together
     Priority = 10
}


How is it possible that I still stumble on such old data in volumes from time to time?
(0004906)
bruno-at-bareos   
2023-03-07 15:17   
point a: yes (and no, because if the volume doesn't need to be pruned, for example, it will be kept intact until it gets recycled for writing a new job).
So, starting after 2 days, this volume may be pruned/recycled/truncated and its jobs/files cleaned up, if no other appendable volumes exist.

point b: yes, in the AI scheme the consolidation will prune the jobs/files once they get consolidated, and you get that information at the end of the job.

point c: volumes in Error or Archive status and jobs in Error or Cancel status will not be touched/pruned automatically. So a volume containing a failed incremental will not be purged, because it still contains a job.

Personally, I've organised myself to do some admin cleanup from time to time to avoid such situations.
(0004907)
hostedpower   
2023-03-08 09:12   
OK, one final question regarding this truncating: is there anything that could be done about the example in the attachment? This was a volume from always incremental.

The retention was, for "safety", set to 8 months, but apparently nothing is left on it. Why is it keeping over 200 GB of disk space on our storage? It's also not being reused or anything. How can this be solved, since at the moment we need to clean up a lot manually in order not to fill up our disks?
(0004920)
bruno-at-bareos   
2023-03-21 17:36   
You should first check which jobs still exist on this volume (bconsole query, choice 13 or 14) to see if you want to prune/purge them.
You can then purge and truncate the volume.

I've not seen disks filling up out of control with good planning and estimation.
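
A sketch of that admin cleanup driven through bconsole from Python (the volume name is a placeholder; purge ignores retention, so review the candidates first, and verify the command syntax against your version with "help purge" and "help truncate"):

import subprocess

def bconsole(cmd):
    # feed a single command to bconsole and return its output
    return subprocess.run(["bconsole"], input=cmd + "\nquit\n",
                          capture_output=True, text=True).stdout

volume = "vol-cons-0123"                   # placeholder volume name
print(bconsole(f"purge volume={volume}"))  # drop remaining jobs/files from the catalog
print(bconsole(f"truncate volstatus=Purged volume={volume} yes"))  # free the disk space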

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1524 [bareos-core] file daemon major always 2023-03-14 16:11 2023-03-14 16:11
Reporter: Petya Genius Platform: Linux  
Assigned To: OS: RHEL (and clones)  
Priority: high OS Version: 8  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bpipe does not track the status of the running program
Description: Hello!
When using the plugin bpipe, I found that this plugin does not track the execution status of the running program. If a running program in bpipe fails, the backup task itself completes successfully without checking the status of the running program.

For example:
Plugin = "bpipe:file=/var/backup:reader=bash -c 'pg_basebackup --host=testhost --username=postgres --wal-method=none --format=tar -D -':writer=bash -c 'tar xf - -C /var/backup'"

If pg_basebackup fails with an error (the host is unavailable, the host shut down during the backup, etc.), the backup job itself will still succeed.
Anything can be written in bash -c; the execution status will not be tracked. If the program fails, nothing is written to stderr.

Thank you in advance!
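
As a stopgap until the plugin checks exit codes, the reader can be wrapped so a failure at least becomes visible on stderr; a minimal sketch (a custom wrapper, not part of bpipe, and whether the job is then marked failed still depends on the plugin):

#!/usr/bin/env python3
# bpipe reader wrapper: run the real reader, stream its data to stdout,
# and surface failures on stderr with a non-zero exit code
import subprocess, sys

cmd = ["pg_basebackup", "--host=testhost", "--username=postgres",
       "--wal-method=none", "--format=tar", "-D", "-"]

proc = subprocess.Popen(cmd, stdout=sys.stdout.buffer, stderr=subprocess.PIPE)
_, err = proc.communicate()
if proc.returncode != 0:
    sys.stderr.write("reader failed (rc=%d): %s\n"
                     % (proc.returncode, err.decode(errors="replace")))
sys.exit(proc.returncode)

The reader= part of the Plugin line would then point at this script instead of the bash -c invocation (the script path is up to you; this wrapper is only an illustration).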
Tags:
Steps To Reproduce: File Set {
  Name = "test-fileset"
  Description = "FileSet for tests"
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "bpipe:file=/var/backup:reader=bash -c 'pg_basebackup --host=testhost --username=postgres --wal-method=none --format=tar -D -':writer=bash -c 'tar xf - -C /var/backup'"
    }
}
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1523 [bareos-core] file daemon crash always 2023-03-13 13:09 2023-03-13 13:09
Reporter: mp Platform:  
Assigned To: OS:  
Priority: urgent OS Version:  
Status: new Product Version: 22.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: fatal error bareos-fd for full backup postgresql 11
Description: A full backup of PostgreSQL 11 crashes with this error:
Error: python3-fd-mod: Could net get stat-info for file /var/lib/postgresql/11/main/base/964374/t4_384322129: "[Errno 2] No such file or directory: '/var/lib/postgresql/11/main/base/964374/t4_384322129'"

1c-pg11-fd JobId 238: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 61, in start_backup_file
return bareos_fd_plugin_object.start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 396, in start_backup_file
return super(BareosFdPluginPostgres, self).start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", line 118, in start_backup_file
mystatp.st_mode = statp.st_mode
UnboundLocalError: local variable 'statp' referenced before assignment
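
The traceback suggests os.stat() raised because the file vanished between the directory scan and the backup (t4_* files under base/ appear to be PostgreSQL temporary files, so this is expected), the exception was swallowed, and execution fell through to line 118 with statp still unbound. A hedged sketch of the kind of guard that avoids this (illustrative only, not the upstream patch):

import os

def stat_for_backup(fname):
    """Return os.stat() for fname, or None if the file vanished meanwhile."""
    try:
        return os.stat(fname)
    except OSError as e:
        print(f"Could not get stat-info for file {fname}: {e}")
        return None

statp = stat_for_backup("/var/lib/postgresql/11/main/base/964374/t4_384322129")
if statp is None:
    pass   # skip the file instead of touching statp.st_mode
else:
    print(oct(statp.st_mode))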
Tags:
Steps To Reproduce:
Additional Information: The backup fails every time when trying to back up PostgreSQL 11. At the same time, the backup of pg14 finishes without any problem.
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1521 [bareos-core] file daemon minor always 2023-03-06 15:59 2023-03-07 13:19
Reporter: Tomasz_Filipek Platform: Linux  
Assigned To: OS: RHEL (and clones)  
Priority: normal OS Version: 8  
Status: new Product Version: 22.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: mariadb - problem with plugin
Description: Hi,

I have a problem with a backup using the mariadb plugin.

Log:
06-Mar 15:46 ala-fd JobId 22: Fatal error: filed/fd_plugins.cc:664 PluginSave: Command plugin "python:module_name=bareos-fd-mariabackup:mycnf=/root/.my.cnf" requested, but is not loaded.

 /etc/bareos/bareos-fd.d/client/myself.conf:

Client {
  Name = ala-fd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all filedaemon plugins (*-fd.so) from the "Plugin Directory".
  #
  Plugin Directory = "/usr/lib64/bareos/plugins"
  Plugin Names = "python"
  #Plugin Directory = /usr/lib64/bareos/plugins
  #Plugin Names = "python"
}

Fileset:
FileSet {
    Name = "mysql-plugin"
    Include {
        Options {
            signature = MD5
        }
        #...
        Plugin = "python"
                 ":module_name=bareos-fd-mariabackup"
                 ":mycnf=/root/.my.cnf"
    }
}

Installed RPMs:
bareos-filedaemon-mariabackup-python-plugin-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-client-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-director-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-database-common-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-storage-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-filedaemon-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-webui-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-database-tools-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-filedaemon-python-plugins-common-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-common-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-tools-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-bconsole-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-database-postgresql-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-contrib-filedaemon-python-plugins-22.0.3~pre33.543c368c6-26.el8.x86_64
bareos-filedaemon-python3-plugin-22.0.3~pre33.543c368c6-26.el8.x86_64

Plugins directory:
drwxr-xr-x. 5 root root 4096 Mar 6 15:45 .
drwxr-xr-x. 4 root root 4096 Mar 6 14:04 ..
drwxr-xr-x. 2 root root 4096 Mar 6 15:45 bareos_mysql_dump
drwxr-xr-x. 6 root root 4096 Mar 6 15:45 bareos_tasks
drwxr-xr-x. 2 root root 4096 Mar 6 15:45 openvz7
-rwxr-xr-x. 1 root root 20648 Mar 6 10:54 autoxflate-sd.so
-rw-r--r--. 1 root root 1928 Mar 6 10:42 bareos-fd-local-fileset.py
-rw-r--r--. 1 root root 1544 Mar 6 10:42 bareos-fd-mariabackup.py
-rw-r--r--. 1 root root 15425 Mar 6 10:42 BareosFdPluginBaseclass.py
-rw-r--r--. 1 root root 11749 Mar 6 10:42 BareosFdPluginLocalFilesBaseclass.py
-rw-r--r--. 1 root root 8020 Mar 6 10:42 BareosFdPluginLocalFileset.py
-rw-r--r--. 1 root root 20092 Mar 6 10:42 BareosFdPluginMariabackup.py
-rw-r--r--. 1 root root 3113 Mar 6 10:42 BareosFdWrapper.py
-rwxr-xr-x. 1 root root 20984 Mar 6 10:54 bpipe-fd.so
-rwxr-xr-x. 1 root root 25496 Mar 6 10:54 python3-fd.so
Tags: mariadb
Steps To Reproduce: Run mariadb backup
Additional Information:
System Description
Attached Files:
Notes
(0004897)
bruno-at-bareos   
2023-03-06 16:36   
If the file /root/.my.cnf doesn't exist on the FD you are saving with the plugin, then it will fail. Also be sure the bareos-fd daemon is allowed to read that file.
Not a bug.
(0004898)
Tomasz_Filipek   
2023-03-06 18:53   
(Last edited: 2023-03-06 19:00)
File exist:

[root@ala schedule]# cat /root/.my.cnf
[client]
user=mariabackup
password=XXX
host=127.0.0.1

FD can read this file:

[root@ala schedule]# ps aux|grep bare
root 808396 0.1 0.0 511696 10428 ? Ssl 18:13 0:02 /usr/sbin/bareos-fd -f
bareos 808400 0.1 0.0 510584 11180 ? Ssl 18:13 0:04 /usr/sbin/bareos-sd -f
bareos 808404 0.1 0.0 1161836 15704 ? Ssl 18:13 0:02 /usr/sbin/bareos-dir -f


This configuration worked in version 21.
Today I freshly installed version 22 and it doesn't work.
(0004904)
bruno-at-bareos   
2023-03-07 13:19   
It seems you are using python3, so Plugin Names = "python3" should be used instead of "python".
"status client" should show whether the plugins are loaded.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1447 [bareos-core] file daemon tweak always 2022-04-06 14:12 2023-03-07 12:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restore of unencrypted files on an encrypted fd throws an error, but works.
Description: When restoring files from a client that stores its files unencrypted onto a client that normally only runs encrypted backups, the restore will work, but an error is thrown.
Tags:
Steps To Reproduce: Sample config:
Client A:
Client {
...
PKI Signatures = Yes
PKI Encryption = Yes
PKI Cipher = aes256
PKI Master Key = ".../master.key"
PKI Keypair = ".../all.keys"
}
Client B:
Client {
...
# without the cryptor config
}

Both can do their own backup and restore to the storage. But when a restore is done with files from client B on client A, the files are restored as requested, but for every file an error is logged:
clienta JobId 72: Error: filed/crypto.cc:168 Missing cryptographic signature for /var/tmp/bareos/var/log/journal/e882cedd07af40b386b29cfa9c88466f/user-70255@bdb4fa2d506c45ba8f8163f7e4ee7dac-0000000000b6f8c1-0005d99dd2d23d5a.journal
and the whole job is marked as failed.
Additional Information: Because the restore itself works, I think the job should only be marked as "OK with warnings" and the "Missing cryptographic signature ..." message logged as a warning instead of an error.
System Description
Attached Files:
Notes
(0004902)
bruno-at-bareos   
2023-03-07 12:09   
Thank you for your report. In a bug triage session, we came to the following conclusion for this case.
We understand the case completely, and agree it should be handled better by the code.

The workaround is to change your configuration: with the parameter PKI Signatures = Yes you are requesting that the signature of all data be checked, so the job gets its failing status. If you need to restore unencrypted data to that client, comment out that parameter for the duration of the restore (see the sketch below).
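A sketch of that restore-time change on client A, based on the sample config above (remember to restart the FD after editing; this is an illustration, not tested config):

```
Client {
...
# PKI Signatures = Yes       # commented out only for the duration of the restore
PKI Encryption = Yes
PKI Cipher = aes256
PKI Master Key = ".../master.key"
PKI Keypair = ".../all.keys"
}
```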

On our side, nobody will work on that improvement, but feel free to propose a fix in a PR on github.
Thanks

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1522 [bareos-core] General minor sometimes 2023-03-06 16:56 2023-03-06 16:56
Reporter: mschiff Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 22.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: implicit function declarations in configure logs
Description:  * QA Notice: Found the following implicit function declarations in configure logs:
 * build/CMakeFiles/CMakeError.log:27 - pthread_attr_get_np
 * Check that no features were accidentally disabled.
 * See https://wiki.gentoo.org/wiki/Modern_C_porting.

This also affects 21.0.6
Tags: cmake compilation
Steps To Reproduce:
Additional Information: Gentoo bug: https://bugs.gentoo.org/898656
Complete build log: https://bugs.gentoo.org/attachment.cgi?id=855506
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1520 [bareos-core] file daemon minor sometimes 2023-03-04 16:14 2023-03-04 16:14
Reporter: vsa16 Platform:  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 22.04  
Status: new Product Version: 22.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: PostgreSQL-PITR plugin bug
Description: The plugin sometimes fails to perform a full backup job. As far as I have figured out, the problem is exception handling in
bareos/core/src/plugins/filed/python/pyfiles/BareosFdPluginLocalFilesBaseclass.py line 104
Tags:
Steps To Reproduce: During full backup, force Postgres to delete files from its data directory
Additional Information: Suspected job logs:
vm-postgres-14 JobId 56: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 42, in start_backup_file
return bareos_fd_plugin_object.start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 396, in start_backup_file
return super(BareosFdPluginPostgres, self).start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", line 118, in start_backup_file
mystatp.st_mode = statp.st_mode
UnboundLocalError: local variable 'statp' referenced before assignment
vm-postgres-14 JobId 56: Error: python3-fd-mod: Could net get stat-info for file /var/lib/postgresql/14/main/base/16394/1682658_fsm: "[Errno 2] No such file or directory: '/var/lib/postgresql/14/main/base/16394/1682658_fsm'"
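For context, the traceback shows the classic Python pattern where a name bound inside a try block is read after the exception was swallowed. A minimal sketch of the suspected pattern and a defensive variant (function and variable names are illustrative, not the actual plugin code):

```
import os

# Suspected pattern behind the traceback: "statp" is only bound when
# os.stat() succeeds, so the later attribute copy raises UnboundLocalError
# for files PostgreSQL deleted mid-backup.
def stat_file_buggy(fname, mystatp):
    try:
        statp = os.stat(fname)
    except OSError as err:
        print(f"Could not get stat-info for file {fname}: {err}")
    mystatp.st_mode = statp.st_mode  # crashes if os.stat() raised

# Defensive variant: skip the vanished file instead of crashing the job.
def stat_file_fixed(fname, mystatp):
    try:
        statp = os.stat(fname)
    except OSError as err:
        print(f"Could not get stat-info for file {fname}: {err}")
        return False  # caller should skip this file
    mystatp.st_mode = statp.st_mode
    return True
```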
 
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1519 [bareos-core] General minor always 2023-03-02 15:02 2023-03-02 15:02
Reporter: colttt Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 21.1.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bls exits with wrong exit code if file damaged
Description: I would expect bls to exit with a non-zero exit code if a file is not valid/contains errors, but it always exits with 0:

bls -j /backupdata/0004HD ; echo $?
bls: stored/butil.cc:304-0 Using device: "/backupdata" for reading.
02-Mar 14:56 bls JobId 0: Ready to read from volume "0004HD" on device "FileStorage" (/backupdata).
Volume Record: File:blk=0:196 SessId=373 SessTime=1632137493 JobId=35 DataLen=161
02-Mar 14:56 bls JobId 0: Error: stored/block.cc:335 Volume data error at 3:1530691745!
Block checksum mismatch in block=28070848 len=64512: calc=c8a85150 blk=8ed1c2cb
bls: stored/block.cc:96-0 Dump block with checksum error 5c36fed8: size=64512 BlkNum=28070848
               Hdrcksum=8ed1c2cb cksum=c8a85150
bls: stored/block.cc:110-0 Rec: VId=373 VT=1632137493 FI=359968 Strm=contCOMPRESSED len=35511 p=5c375130
bls: stored/block.cc:110-0 Rec: VId=373 VT=1632137493 FI=359968 Strm=COMPRESSED len=65580 p=5c37dbf3
02-Mar 14:56 bls JobId 0: Releasing device "FileStorage" (/backupdata).
0
Tags:
Steps To Reproduce: - manipulate a backup container file
- run bls -j <BACKUPFILE>
- exit code is zero
- it should be non-zero (1 = program error, 2 = (could be) file error); see the workaround sketch below
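Until bls grows proper exit codes, a possible workaround (a sketch, assuming damaged volumes always produce lines containing "Error:") is to derive an exit code from the output:

```
#!/bin/sh
# Sketch: emulate a file-error exit code for bls by scanning its output.
out=$(bls -j /backupdata/0004HD 2>&1)
printf '%s\n' "$out"
printf '%s\n' "$out" | grep -q 'Error:' && exit 2
exit 0
```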
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1512 [bareos-core] installer / packages major always 2023-02-07 04:48 2023-02-07 13:37
Reporter: MarceloRuiz Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Updating Bareos pollutes/invalidates user settings
Description: Updating Bareos recreates all the sample configuration files inside '/etc/bareos/' even if the folder exists and contains a working configuration.
Tags:
Steps To Reproduce: Install Bareos, configure it using custom filenames, update Bareos.
Additional Information: Bareos should not touch an existing '/etc/bareos/' folder. An existing folder means that a user spent a considerable amount of time configuring the program, and a simple system update can make the whole configuration invalid so that Bareos won't even start.
If there is a need to provide a sample configuration, do it in a separate folder, like '/etc/bareos-sample-config', so it won't break a working configuration. The installer/updater could even delete that folder before the install/update and recreate it, to provide an up-to-date example of the configuration for the current version without risking breaking anything.
Attached Files:
Notes
(0004874)
bruno-at-bareos   
2023-02-07 13:37   
What OS are you using?

We state in the documentation not to remove any files installed by your package manager, as they will be reinstalled if you delete them.
rpm will create .rpmnew or .rpmold files for you when packaged files have been changed.
make install creates .new/.old files if existing files are already there or have been changed.

One of the best ways is to simply comment out the contents of the sample files or keep them empty, so no changes will happen on update.

This is how it currently is, as we haven't found a better way to make the product as easy as possible for newcomers while proposing a ready-to-use configuration.

On the expert side, you can also simply create your personal /etc/bareos-production structure and create a systemd override so the service points to that location (see the sketch below).
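A sketch of such a systemd override (the unit name, binary path, and config directory are assumptions; mirror your distribution's original ExecStart line):

```
# create with: systemctl edit bareos-director
# -> /etc/systemd/system/bareos-director.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/bareos-dir -c /etc/bareos-production
```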
(0004875)
bruno-at-bareos   
2023-02-07 13:37   
No changes will occur

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1503 [bareos-core] storage daemon major always 2022-12-25 04:13 2023-01-26 10:36
Reporter: cmlara Platform: Linux  
Assigned To: OS: Ubuntu  
Priority: normal OS Version: 20.04.4  
Status: new Product Version: 22.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos-sd with libdroplet(s3) storage leaves sockets in CLOSE_WAIT state (FD exhaustion/resource leak)
Description: bareos-dir Version: 22.0.1~pre (21 December 2022) Ubuntu 20.04.4 LTS
s3-us-east-2-sd Version: 22.0.1~pre (21 December 2022) Ubuntu 20.04.4 LTS

Director and SD are on different hosts.

On a system with the bareos-sd configured for s3 storage via libdroplet, it appears that each individual job causes at least one FD to remain open in CLOSE_WAIT state. I've been tracking this on 21.0.0 for a while. Now that 22.0.1-pre is published as the current release, I upgraded and tested against it to check that it was not fixed in the past year.

---- Logs from 21.0.1-pre

$ ps afux |grep -i bareos-sd
ubuntu 556449 0.0 0.0 8164 660 pts/0 S+ 02:49 0:00 \_ grep --color=auto -i bareos-sd
bareos 555014 0.0 2.4 307808 24432 ? Ssl Dec24 0:00 /usr/sbin/bareos-sd -f

$ sudo lsof -p 555014
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
bareos-sd 555014 bareos 3u IPv4 75608807 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 4u IPv6 75608808 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 6u IPv4 75609055 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:40150->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 7u IPv4 75608950 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:39234->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 8u REG 202,1 30374841 512018 /var/lib/bareos/s3-us-east-2-sd.trace
bareos-sd 555014 bareos 9u IPv4 75611304 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:45236->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)

<ran a small backup>


$ sudo lsof -p 555014
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
bareos-sd 555014 bareos 3u IPv4 75608807 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 4u IPv6 75608808 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 6u IPv4 75609055 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:40150->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 7u IPv4 75608950 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:39234->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 8u REG 202,1 30438986 512018 /var/lib/bareos/s3-us-east-2-sd.trace
bareos-sd 555014 bareos 9u IPv4 75611304 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:45236->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 11u IPv4 75624999 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:42052->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 12u IPv4 75625002 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:33668->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 13u IPv4 75625007 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:59050->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 14u IPv4 75625012 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:42476->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
Tags:
Steps To Reproduce: Run a backup where storage is backed by S3.
Additional Information:
Attached Files:
Notes
(0004857)
bruno-at-bareos   
2023-01-12 16:08   
Hello,

TCP/IP state is not handled by Bareos but by your own operating system.
It looks like a FIN is sent but no FIN-ACK is received back, and thus the CLOSE_WAIT state remains.

We have several test setups and running OSes, but we are not able to reproduce this.
Could you check with your network admin whether something is blocking the normal TCP/IP workflow.
(0004861)
cmlara   
2023-01-12 22:04   
(Last edited: 2023-01-12 22:07)
"TCP/IP state is not handled by Bareos but by your own operating system."
True, however on Linux CLOSE_WAIT generally means that the remote side has closed the socket and the local system is waiting on the user software to either close() or shutdown() the socket it has open for the connection.

I'm running these connections inside the same region of AWS services with a Lightstail instance hosting the Bareos-SD and the VPC Gateway configured with S3 Target (the S3 connections are routed by AWS internally on their fabric at gateway/router). No firewall blocking enabled.

I performed an unencrypted network capture to get a better idea of what is going on (contains credentials and real data so I can't post the file)

What I observed for a sequence of operations was:

1) SD: HTTP HEAD of /S3-AI-Incremental-0159/0000 (this is the current active object chunk for this particular media)
2) AWS_S3: HTTP 200 Response with head results
3) SD: HTTP PUT /S3-AI-Incremental-0159/0000 (this is the backup data being uploaded)
4) AWS_S3: HTTP OK Response
5) SD: HEAD of /S3-AI-Incremental-0159/0000 (I presume validating the upload was truly successful, or getting latest data about the file to avoid possible write collisions)
6) AWS_S3: HTTP 200 with Head results
7) SD: HTTP HEAD of /S3-AI-Incremental-0159/0001 --- this would be the next chunk file if it crosses the chunksize threshold, not sure why this occurred at this point. This file shouldn't exist yet.
8) AWS_S3: HTTP 404 to HEAD request (expected)
9) OS of server hosting the SD sends a TCP ACK to the packet in step 8 (note: these have been sent for other packets as well, this is just the first relevant packet for discussion)

Approximately 23 seconds later (a timeout has likely occurred at the S3 bucket web server related to keeping the connection open):

10) AWS_S3: Sends FIN+ACK acknowledging the ACK from step 9 and requesting to close the connection.
11) OS of server with SD: Sends an ACK to the packet in step 10 allowing the S3 bucket to close the connection. Locally the connection move to CLOSE_WAIT.

Now that the connection has been closed on the bucket the OS is waiting on Bareos/libdroplet to read the last of the buffers (if any data is in them) and close() or shutdown() the socket which will generate another FIN/ACK cycle for the two-way shutdown. This does not appear to ever occur and as such the FD is left open and the connection remains in CLOSE_WAIT until the Bareos SD is restarted.

I will note that by the time step 9 occurs, but before step 10, the Bareos console already indicates the backup was a success, which makes sense as the data is fully stored in the bucket at this time. This makes me suspect that there should likely be some sort of socket shutdown by the SD/libdroplet after step 9 that isn't occurring, and instead the connection is timed out by the S3 bucket. Alternately, whatever task should occur after step 11 isn't occurring, and the socket remains consumed.

If there are any config files or additional logs that could be useful please let me know.
(0004862)
cmlara   
2023-01-14 03:37   
Looking at the raw HTTP helped me get a better understanding of what was going on, coupled with some trace logging I had done in the past.

Some context on the steps above:
Steps 3-4 are likely DropletDevice::d_write->ChunkedDevice::WriteChunked->ChunkedDevice::FlushChunk->DropletDevice::FlushRemoteChunk() which eventually leads to dpl_s3_put().
Steps 5-8 are likely DropletDevice::d_flush->ChunkedDevice::WaitUntilChunksWritten()->ChunkedDevice::is_written()->DropletDevice::RemoteVolumeSize() being called which leads eventually to dpl_s3_head_raw(). It is expected behavior that it keeps going until it can't find a chunk to know that all chunks are accounted for.

What I'm observing on a preliminary review is that in src/droplet/libdroplet/src/backend/s3/backend/*.c connections are opened by calling `dpl_try_connect(ctx, req, &conn);` where &conn is returned as the connection to use for communicating with the S3 bucket.

At a quick glance, all the source files tend to include this piece of code:
```
  if (NULL != conn) {
    if (1 == connection_close)
      dpl_conn_terminate(conn);
    else
      dpl_conn_release(conn);
  }
```
The value `connection_close` appears to only be set to `1` in times of unexpected errors, so in the case of a successful backup and volume size validation at step 8, dpl_conn_release(conn) is again used, which returns the connection back to the pool for future use, where the connection dies from non-use.

I'm suspecting that DropletDevice::d_close() may be missing a step that would lead to calling dpl_conn_terminate() to close the connection and cleanup the sockets.
(0004870)
arogge   
2023-01-26 09:19   
You're probably right.
However, AFAICT droplet is supposed to handle this on its own which it doesn't seem to do (correctly). At least not in your use-case.
(0004871)
arogge   
2023-01-26 09:44   
After having yet another look, the connection pooling in droplet/Libdroplet/src/conn.c is simply not able to handle that.
We would have to have some kind of job that looks at all connections in the connection pool and shuts them down if the remote end closed the connection.

If I understand the current code correctly, it will only look at a connection again if it tries to re-use it, and then it should clean it up. So while you're seeing some sockets in CLOSE_WAIT, the number of sockets in that state should not steadily increase, but max out at some arbitrary limit (probably number of devices times number of threads).
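For illustration only, the sweep described above amounts to probing each pooled socket for a peer-initiated close; a minimal Python sketch of the technique (libdroplet itself is C, so this is just the idea, not a patch):

```
import socket

def peer_closed(sock: socket.socket) -> bool:
    """True if the remote end performed an orderly close (socket in CLOSE_WAIT)."""
    try:
        data = sock.recv(1, socket.MSG_PEEK | socket.MSG_DONTWAIT)
    except BlockingIOError:
        return False   # nothing pending, peer still connected
    except OSError:
        return True    # reset or otherwise dead
    # recv() returning b"" means EOF from the peer; non-empty means data
    # is still queued, so the connection is not reaped until drained.
    return data == b""

def reap(pool: list) -> None:
    """Close and drop pooled connections whose remote end already hung up."""
    for sock in list(pool):
        if peer_closed(sock):
            sock.close()
            pool.remove(sock)
```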
(0004872)
cmlara   
2023-01-26 10:36   
I currently have 3 'S3 devices' configured (each one limited to a single job at a time due to the s3 interleaving restrictions).

Just did a restart of the SD (last restart was on the 12th) where I was at 1009 FD's, 6 of them being necessary for bareos-sd and the remaining 1003 are close-wait sockets. If I don't proactively restart the service I always hit the 1024 configured FD ulimit and backups will fail.

If there is an upper limit it appears to be quite high.

On my initial look I did wonder why I wasn't just seeing a reuse of the socket that would lead to an error and closing it. My only theory is that 'ctx' as passed to dpl_try_connect() is reset somewhere (being freed or lost) and that each 'session' (whatever a session is in this case) gets a new context with no knowledge of the previous sockets, and as such never tries to use them and clean them up. However I haven't been able to fully follow the code flow to confirm that is accurate.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1372 [bareos-core] General major always 2021-07-20 09:27 2023-01-18 16:55
Reporter: Int Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: assigned Product Version: 19.2.10  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restore/Migrate of job distributed on two tapes fails with ERR=Cannot allocate memory
Description: When I try to Restore/Migrate a specific job distributed on two tapes, the first tape is read successfully but when mounting the second tape the job fails with error

```
20-Jul 08:02 bareos-sd JobId 21833: Ready to read from volume "NIX417L6" on device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst).
20-Jul 08:02 bareos-sd JobId 21833: Forward spacing Volume "NIX417L6" to file:block 0:1.
20-Jul 08:02 bareos-sd JobId 21833: Error: stored/block.cc:1057 Read error on fd=7 at file:blk 0:1 on device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst). ERR=Cannot allocate memory.
```

This only happens on this specific tape. Other jobs distributed on multiple tapes work correctly.

I did configure the Bareos Tape device /etc/bareos/bareos-sd.d/device/tapedrive-0.conf with "Maximum Block Size = 1048576" from the very beginning and all tapes were written using this configuration.
I started using Bareos after Version 14.2.0 so the new block size handling (label block 64k, data blocks 1M) was used on all tapes.

But the error suggests that there is a problem with the block size.
This is seconded by /var/log/messages
```
Jul 20 08:02:14 igms07 kernel: st 3:0:0:0: [st0] Failed to read 1048576 byte block with 64512 byte transfer.
```

So it seems that Bareos is trying to read with 64k block size although it is configured otherwise?


Further tests showed that I can read the tape label correctly with btape:

```
]$ btape /dev/tape/by-id/scsi-350050763121a063c-nst
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:290-0 Using device: "/dev/tape/by-id/scsi-350050763121a063c-nst" for writing.
btape: stored/btape.cc:485-0 open device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst): OK
*status
 Bareos status: file=0 block=0
 Device status: BOT ONLINE IM_REP_EN file=0 block=0
Device status: TAPE BOT ONLINE IMMREPORT. ERR=
*readlabel
btape: stored/btape.cc:532-0 Volume label read correctly.

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : NIX417L6
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 167
PoolName : Scratch
MediaType : LTO
PoolType : Backup
HostName : igms07.vision.local
Date label written: 05-Nov-2018 11:42
```


I can read the tape with dd with a blocksize of 1M:
dd if=/dev/tape/by-id/scsi-350050763121a063c-nst bs=1M


btape scan fails with the same error as Bareos:

```
]$ btape /dev/tape/by-id/scsi-350050763121a063c-nst
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:290-0 Using device: "/dev/tape/by-id/scsi-350050763121a063c-nst" for writing.
btape: stored/btape.cc:485-0 open device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst): OK
*rewind
btape: stored/btape.cc:581-0 Rewound "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst)
*scan
btape: stored/btape.cc:1901-0 Starting scan at file 0
btape: stored/btape.cc:1909-0 Bad status from read -1. ERR=stored/btape.cc:1907 read error on /dev/tape/by-id/scsi-350050763121a063c-nst. ERR=Cannot allocate memory.
```

but btape scanblocks works fine:

```
]$ btape /dev/tape/by-id/scsi-350050763121a063c-nst
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:290-0 Using device: "/dev/tape/by-id/scsi-350050763121a063c-nst" for writing.
btape: stored/btape.cc:485-0 open device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst): OK
*rewind
btape: stored/btape.cc:581-0 Rewound "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst)
*scanblocks
1 block of 203 bytes in file 0
2213 blocks of 1048576 bytes in file 0
1 block of 1048566 bytes in file 0
1072 blocks of 1048576 bytes in file 0
1 block of 1048568 bytes in file 0
```


The question is, what is the issue here?
And how do I fix it?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Screen Shot 2022-08-04 at 1.27.33 PM.png (256,578 bytes) 2022-08-04 22:28
https://bugs.bareos.org/file_download.php?file_id=531&type=bug
png

joblog_21833.txt (4,077 bytes) 2022-09-21 16:12
https://bugs.bareos.org/file_download.php?file_id=532&type=bug
joblog_5822.txt (6,372 bytes) 2022-09-21 16:22
https://bugs.bareos.org/file_download.php?file_id=533&type=bug
bareos-sd.trace (57,480 bytes) 2022-09-26 15:42
https://bugs.bareos.org/file_download.php?file_id=535&type=bug
Notes
(0004183)
Int   
2021-07-20 10:02   
PS: my device configuration:

Device {

    Name = "tapedrive-0"
    DeviceType = tape
    Maximum Concurrent Jobs = 1

    # default:0, only required if the autoloader have multiple drives.
    DriveIndex = 0

    ArchiveDevice = /dev/tape/by-id/scsi-350050763121a063c-nst # Quantum LTO-7 standalone drive

    MediaType = LTO

    AutoChanger = no # default: no
    AutomaticMount = yes # default: no

    MaximumFileSize = 40GB
    MaximumBlockSize = 1048576

    Maximum Spool Size = 40GB
    Spool Directory = /var/lib/bareos/spool
}
(0004195)
Int   
2021-07-31 09:13   
PPS:
When I read other jobs that are also on this tape, the restore/migration works just fine.
(0004714)
aleigh   
2022-08-04 22:28   
I am having this same problem on a restore which spans 2 tapes. The file is particularly large, so probably the case here is that a single file spans two tapes.

bareos-dir Version: 21.0.0 (21 December 2021) Rocky Linux release 8.4 (Green Obsidian) redhat Rocky Linux release 8.4 (Green Obsidian)

[358442.558093] st 9:0:40:0: [st0] Failed to read 262144 byte block with 64512 byte transfer.

bareos-sd JobId 81: Error: stored/block.cc:1004 Read error on fd=8 at file:blk 0:1 on device "tapedrive-0" (/dev/nst0). ERR=Cannot allocate memory.

See attached job log. Note that despite everything it says, the correct media (LC0004) was inserted and mounted at the correct time. Shortly after mounting LC0004 the job failed.
(0004721)
bruno-at-bareos   
2022-08-10 11:00   
Hello aleigh.

We are interested in getting more debugging information, to be able to reproduce the problem 100% of the time (if we can, then getting a fix is almost certain).

So if you are able to rerun that restore, would you be able to run it with a higher debug level?
You can consult this page to understand how to do this.
https://servicedesk.bareos.com/help/en-us/2-support/2-how-do-i-set-the-debug-level

Here a debug level of 1000 would be nice.
Debugging should be activated on the director, the storage, and the concerned client.

We would like to have it attached here in text form, not as a picture;
mainly the output of the following command in bconsole:

list joblog jobid=81

(or the new jobid of the restore attempt with debug) and also the initial backup jobid.

The whole configuration and logs are a plus to debug the situation.
Maybe our support-info tool can be used and run on the director;
see instructions here:
https://servicedesk.bareos.com/help/en-us/2-support/3-how-to-use-the-bareos-support-info-sh-tool

You can attach the result, if not too big, as private (all credentials are already removed by the tool).
(0004751)
bruno-at-bareos   
2022-09-21 15:07   
Ping: are none of you able to run with a higher debug level?
(0004756)
Int   
2022-09-21 16:12   
As a starting point I attached the joblog of the original failing job.

My configuration is not the same as back then when I filed this bug a year ago. In the meantime I updated my Bareos to version 21, but the storage device used is still the same.
I created support-info of my current config with your support-info tool but the generated tgz-file is too large (78MB) to be uploaded here.
Let me know how you want to receive the tgz-file instead.

Next I will enable debug level of 1000 and will try to reproduce the error with bareos 21.
(0004757)
Int   
2022-09-21 16:22   
Here the joblog of the initial backup job.
(0004758)
bruno-at-bareos   
2022-09-21 16:26   
You can upload the tgz to https://cloud.dass-it.de/index.php/s/Xf9ZH79737iastj with password mantis1372

Before creating the report, check whether you have any *trace* files in /var/lib/bareos; they are often leftovers from old crashes.
(0004759)
Int   
2022-09-21 16:35   
I uploaded the support-info.
(0004760)
bruno-at-bareos   
2022-09-21 16:58   
Thanks. Did you have other jobs stored on both tapes NIX417L6 and NIX416L6? If yes, are you able to read from them?
(0004762)
Int   
2022-09-22 08:59   
The failing job 5822 is the only job on tape NIX416L6 and fills it completely. The rest of job 5822 is on tape NIX417L6.
There are other jobs on tape NIX417L6 which can be restored successfully.

I tried to reproduce the issue with debug level of 1000 set on director, storage, and client
but since job 5822 is large (4.5 TB of data) the log files get huge and filled up the free disk space on director and client. This caused the director to crash.

I will retry and enable the debugging only shortly before the restore process switches from tape NIX416L6 to tape NIX417L6.
Hopefully this will catch the issue while leaving enough free disk space.

Or do you have other advice on how to proceed?
(0004764)
bruno-at-bareos   
2022-09-22 10:00   
I think you can set the debug level to 500 on the director (none on the fd) but 1000 on the SD (which is the failing part).
I also have to prepare some instructions to extract block information from the database for those 2 volumes, especially around the split.
(0004775)
Int   
2022-09-26 15:42   
The good news is the error is reproducible with Bareos 21.

I enabled the debug level of 1000 on the SD only after the restore of the first tape was done and the restore job was requesting to load the second tape.
I hope this was sufficient since the error happens when bareos tries to read the first sector of the second tape. If not let me know and I will repeat with earlier tracing.
I attached the trace file of the SD.

The trace file for the director is quite large (3 GB uncompressed). Shall I upload it to https://cloud.dass-it.de/index.php/s/Xf9ZH79737iastj with password mantis1372 again?
(0004776)
bruno-at-bareos   
2022-09-26 16:01   
Yes, please upload it there. It is of less interest than the SD trace, but still good to have it somewhere until we can reproduce the problem automatically.
I just don't know when I will be able to parse them this week.
(0004777)
Int   
2022-09-26 17:41   
I uploaded the director trace file.
(0004787)
bruno-at-bareos   
2022-09-29 15:48   
Just to exclude a documented case: was the tape labelled with 1M or 64kB?

If you check here
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#direct-access-to-volumes-with-with-non-default-block-sizes

You will see the memory allocation error.

And then maybe you need to adapt the Label Block Size for those tapes?
Thanks for your confirmation.
(0004788)
Int   
2022-09-29 16:06   
I know about that.

My device configuration has set
MaximumBlockSize = 1048576
see my post https://bugs.bareos.org/view.php?id=1372#c4183
and it has been configured like this from the beginning, when I started using Bareos.
(0004846)
Int   
2022-12-28 10:12   
Hi,

is there any progress on this issue?

In the meantime I wanted to migrate other older backups of mine from LTO6 tapes to LTO7 tapes, but all migrate jobs fail with the error
Error: stored/block.cc:1004 Read error on fd=8 at file:blk 0:1 on device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst). ERR=Cannot allocate memory.
when starting to read from the second tape.

One time reading from the first tape of the backup set failed with "Error: stored/block.cc:1004 Read error ..."
but on this occasion I had mounted a wrong tape first, so the failing tape was again the second tape accessed in the migrate job.
This looks like the migrate process has a fundamental problem with the second tape in its progress.

All jobs have in common that they were created with Bareos 17.2.4

At the moment practically all my old backup jobs are useless.
(0004847)
Int   
2022-12-28 10:16   
PS:
If needed I can also send you the physical tapes for further investigation.

Best regards and a happy new year!
(0004858)
bruno-at-bareos   
2023-01-12 16:17   
Hello, sorry, we don't have a fix yet.

Could you check the following data, for example on both failing tapes (the first and the second):

Using sql module of bconsole or directly psql
``` select mediaid,volumename,minblocksize,maxblocksize from media where mediaid in (); ```

Normally both of them should have maxblocksize set to 1M; is that the case?
If not, what happens if you fix the database record of the second one to be 1M? (See the illustrative update below.)
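(For illustration, if one of the volumes showed a wrong value, the suggested fix would be a one-line update; the volume name below is only an example, and a catalog backup beforehand is advisable.)

```
UPDATE media SET maxblocksize = 1048576 WHERE volumename = 'NIX417L6';
```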

Regards
(0004863)
Int   
2023-01-18 16:55   
Hi,
I checked the data on both failing tapes and both have maxblocksize set to 1M:

bareos=# select mediaid,volumename,minblocksize,maxblocksize from media where volumename
in ('NIX416L6','NIX417L6');
 mediaid | volumename | minblocksize | maxblocksize
---------+------------+--------------+--------------
      38 | NIX416L6 | 0 | 1048576
      39 | NIX417L6 | 0 | 1048576
(2 rows)

Best regards and thanks for investigating this further.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1504 [bareos-core] General block always 2022-12-27 02:36 2022-12-27 02:36
Reporter: p-linnane Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 22.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Compiling bareos-client on Linux for Homebrew fails
Description: Hello. I am a maintainer for Homebrew (https://brew.sh). The last few releases of bareos-client have failed to compile for us on Linux without a few workarounds. The 2 specific issues are:
lmdb/mdb.c:2282:13: error: variable 'rc' set but not used [-Werror=unused-but-set-variable]
fastlzlib.c:512:63: error: unused parameter ‘output_length’ [-Werror=unused-parameter]

The workarounds we have implemented are appending the following to CFLAGS:
"-Wno-unused-but-set-variable"
"-Wno-unused-parameter"

Obviously, this is not ideal. I was hoping someone here could weigh in on what we are missing in order to compile without needing to append CFLAGS.

Ref: https://github.com/Homebrew/homebrew-core/pull/118783
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1502 [bareos-core] General trivial have not tried 2022-12-21 13:42 2022-12-21 13:42
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 23.0.0
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 23.0.0
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1498 [bareos-core] webui minor random 2022-12-15 15:00 2022-12-21 11:47
Reporter: alexanderbazhenov Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Failed to send result as json. Maybe the result message is too long?
Description: Got something like this again in 21.0 version: https://bugs.bareos.org/view.php?id=719

Failed to retrieve data from Bareos director
Error message received from director:

Failed to send result as json. Maybe the result message is too long?
Tags: director, job, postgresql, ubuntu20.04, webui
Steps To Reproduce: Don't know the exact steps, but as far as I can tell it happens when more volumes or more output are involved, especially when you run a script on a client, e.g. a gitlab dump:

sudo mkdir /var/opt/gitlab/backups
sudo chown git /var/opt/gitlab/backups
sudo gitlab-rake gitlab:backup:create SKIP=artifacts

If you get this once, you'll not be able to open any job details (the same message all the time) until all backup jobs finish.
Additional Information: I don't know whether it is a webui bug or just the director, but there is no error in the director logs.

Additional info:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep bareos
ii bareos 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - metapackage
ii bareos-bconsole 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-director 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-storage 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common tools
ii bareos-traymonitor 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - tray monitor
ii bareos-webui 21.0.0-4 all Backup Archiving Recovery Open Sourced - webui

Postgre installed with defaults:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep postgresql
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii pgdg-keyring 2018.2 all keyring for apt.postgresql.org
ii postgresql-14 14.1-2.pgdg20.04+1 amd64 The World's Most Advanced Open Source Relational Database
ii postgresql-client-14 14.1-2.pgdg20.04+1 amd64 front-end programs for PostgreSQL 14
ii postgresql-client-common 232.pgdg20.04+1 all manager for multiple PostgreSQL client versions
ii postgresql-common 232.pgdg20.04+1 all PostgreSQL database-cluster manager

root@bareos:/etc/bareos/bareos-dir.d/catalog# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

Any ideas? Or what other info I should provide.
Attached Files: joblog_jobid4230_json.log (282,928 bytes) 2022-12-15 19:15
https://bugs.bareos.org/file_download.php?file_id=539&type=bug
Notes
(0004839)
bruno-at-bareos   
2022-12-15 16:54   
To help debugging, it would be nice to have at least one of the offending joblogs, which can be extracted with bconsole.
Please do so and attach the output here (if < 2MB) or on an accessible share.

Developers may also be interested in the same output in JSON; to do so you can switch the bconsole output to ".api 2":

bconsole <<<"@output /var/tmp/joblog_jobidXXXX_json.log
.api 2
list joblog jobid=XXXX
"

where XXXX is the problematic jobid.
(0004840)
alexanderbazhenov   
2022-12-15 19:15   
Here is one of them.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1500 [bareos-core] General block always 2022-12-19 13:35 2022-12-19 13:35
Reporter: mdc Platform: x86_64  
Assigned To: OS: Windows  
Priority: normal OS Version: Win10  
Status: new Product Version: 21.1.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Build on Windows using the MSVC fails, because of invalid compiler flags
Description: The cmake config part runs fine, but the compile part fails because of invalid compiler options.
On the summary page of the cmake run, the invalid compiler flags are shown:
C Compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe 19.34.31937.0
C++ Compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe 19.34.31937.0
C Compiler flags: /DWIN32 /D_WINDOWS /W3 -Werror -Wall
C++ Compiler flags: /DWIN32 /D_WINDOWS /W3 /GR /EHsc -Werror -Wall -m64 -mwin32 -mthreads
Linker flags: /machine:x64 /machine:x64 /machine:x64 /machine:x64

The build itself fails with:
[ 1%] Building CXX object core/src/lib/CMakeFiles/version-obj.dir/version.cc.obj
cl : command line error D8021 : invalid numeric argument '/Werror'
NMAKE : fatal error U1077: ""C:\Program Files\CMake\bin\cmake.exe"": return code "0x2"
Stop.
NMAKE : fatal error U1077: ""C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\bin\HostX64\x64\nmake.exe"": return code "0x2"
Stop.
NMAKE : fatal error U1077: ""C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\bin\HostX64\x64\nmake.exe"": return code "0x2"
Stop.


Tags:
Steps To Reproduce: Call:
cmake -B bauen -DENABLE_PYTHON=OFF -DENABLE_JANSSON=OFF -DENABLE_BCONSOLE=OFF -Dclient-only=ON -DCMAKE_PREFIX_PATH="c:\zlib" -DOPENSSL_ROOT_DIR="c:\openssl"
cmake --build bauen
Additional Information: Using Visual Studio Community 2022.

As I understand it, the Microsoft compiler doesn't know the gcc "-Wxxx" options.
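A plausible fix direction (a sketch, not the actual Bareos CMake code) is to guard the GCC-style flags by compiler:

```
# Only pass GCC/Clang-style warning flags to non-MSVC compilers;
# MSVC spells "warnings as errors" as /WX.
if(MSVC)
  add_compile_options(/W3 /WX)
else()
  add_compile_options(-Wall -Werror)
endif()
```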
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
67 [bareos-core] regression testing feature have not tried 2013-02-13 17:14 2022-11-29 17:22
Reporter: pstorz Platform: Linux  
Assigned To: OS: any  
Priority: normal OS Version: 3  
Status: acknowledged Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: create regression test for scsi crypto LTO encryption
Description: After we have the mhvtl support on the regression machines, we need to create a regression test for the scsicrypto option.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0000153)
pstorz   
2013-02-25 10:39   
basic check for scsi crypto is done, but we also need a test for disaster recovery.


Marco: I think you should also show that you cannot do a bls of the tape after you clear the encryption key.
(0000508)
maik   
2013-07-05 16:57   
partly done, test for bextract missing

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1450 [bareos-core] documentation tweak always 2022-04-20 10:12 2022-11-10 16:53
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Wrong link to git hub
Description: The GH link in
https://docs.bareos.org/TasksAndConcepts/Plugins.html#python-fd-plugin
points to:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/options-plugin-sample
But correct will be:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/bareos_option_example
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004572)
bruno-at-bareos   
2022-04-20 11:44   
Thanks for your report.
We have a fix in progress for that in https://github.com/bareos/bareos/pull/1165
(0004573)
bruno-at-bareos   
2022-04-21 10:21   
PR1165 merged (master), PR1167 Bareos-21 in progress
(0004576)
bruno-at-bareos   
2022-04-21 15:16   
Fix for bareos-21 (default) documentation has been merged too.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1445 [bareos-core] bconsole minor always 2022-03-31 08:35 2022-11-10 16:52
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.1.3  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact: yes
bareos-19.2: action: none
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Quotes are missing at the director name on export
Description: When calling configure export client="Foo" on the console, the quotes for the director name are missing in the exported file.
Instead of:
Director {
  Name = "Bareos Director"
this will be exported:
Director {
  Name = Bareos Director

As written in the documentation, quotes must be used when the string contains a space.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004562)
bruno-at-bareos   
2022-03-31 10:06   
Hello, I've just confirmed the missing quotes on export.
But even if spaces are allowed in such resource names, we really advise you to avoid them; they will hurt you in a lot of situations.
Spaces in names, for example, also don't work well with autocompletion in bconsole, etc.

It is safer to treat a resource Name like an FQDN, using only ASCII alphanumerics and .-_ as special characters.


Regards
(0004577)
bruno-at-bareos   
2022-04-25 16:49   
PR1171 in progress.
(0004590)
bruno-at-bareos   
2022-05-04 17:10   
PR-1171 merged + backport for 21 1173 merged
will appear in next 21.1.3

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1429 [bareos-core] documentation major have not tried 2022-02-14 16:29 2022-11-10 16:52
Reporter: abaguinski Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Mysql to Postgres migration howto doesn't explain how to initialise the postgres database
Description: I'm trying to figure out how to migrate the catalog from mysql to postgres but I think I'm missing something. The howto (https://docs.bareos.org/Appendix/Howtos.html#prepare-the-new-database) suggests: "Firstly, create a new PostgreSQL database as described in Prepare Bareos database" and links to this document: "https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#prepare-bareos-database", which in turn instructs to run a series of commands that would initialize the database (https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#id9):

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
su postgres -c /usr/lib/bareos/scripts/make_bareos_tables
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges

However, these commands assume that I mean the currently configured Mysql catalog and fail because the Mysql catalog is deprecated:

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
Creating mysql database
The MySQL database backend is deprecated. Please use PostgreSQL instead.
Creating of bareos database failed.

Does that mean I first have to "Add the new PostgreSQL database to the current Bareos Director configuration" (second sentence of the Howto section) and only then go back to the first sentence? Shouldn't the sentences be swapped then (except for "Firstly, ")? And will the create_bareos_database understand which catalog I mean when I configure two catalogs at the same time?

Tags:
Steps To Reproduce: 1. Install bareos 19 with mysql catalog
2. upgrade to bareos 20
3. try to follow the howto exactly as it is written
Additional Information:
Attached Files:
Notes
(0004527)
bruno-at-bareos   
2022-02-24 15:56   
I've been able to reproduce the problem, which is due to missing keywords in the documentation (passing the dbdriver to the scripts).

At the postgresql create database stage, could you retry using these commands:

  su - postgres /usr/lib/bareos/scripts/create_bareos_database postgresql
  su - postgres /usr/lib/bareos/scripts/make_bareos_tables postgresql
  su - postgres /usr/lib/bareos/scripts/grant_bareos_privileges postgresql

After that you should be able to use bareos-dbcopy as documented.
Please confirm this works for you; I will then propose an update to the documentation.
(0004528)
abaguinski   
2022-02-25 08:51   
Hi

Thanks for your reaction!

In the meantime we were able to migrate to postgres with a slight difference in the order of steps: (1) we added the new catalog resource to the director configuration, (2) we created and initialized the postgres database using these scripts. Indeed, we found that the 'postgresql' argument was necessary.

Since we have done it already in this order I unfortunately cannot confirm if only adding the argument was enough (i.e. would the scripts with extra argument work without the catalog resource)

Greetings,
Artem
(0004529)
bruno-at-bareos   
2022-02-28 09:29   
Thanks for your feedback,
Yes, the scripts would have worked without the second catalog resource when you give them the dbtype.

I will update the documentation to be more precise in that sense.
(0004530)
bruno-at-bareos   
2022-03-01 15:33   
PR#1093 and PR#1094 are currently in review.
(0004543)
bruno-at-bareos   
2022-03-21 10:56   
PR1094 for updating documentation has been merged.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1480 [bareos-core] documentation minor always 2022-08-30 12:33 2022-11-10 16:51
Reporter: crameleon Platform: Bareos 21.1.3  
Assigned To: frank OS: SUSE Linux Enterprise Server  
Priority: low OS Version: 15 SP4  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: password string length limitation
Description: Hi,

if I try to log into the web console with the following configuration snippet active:

Console {
  Name = "mygreatusername"
  Password = "SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq"
  Profile = "mygreatwebuiprofile"
  TLS Enable = No
}

The web UI prints the following message:

"Please provide a director, username and password."

If I change the password line to something more simple:

Console {
  Name = "suse-superuser"
  Password = "12345"
  Profile = "webui-superadmin"
  TLS Enable = No
}

Login works as expected.

Since the system does not seem to print any error messages about invalid passwords in its configuration, it would be nice if the allowed characters and lengths (and possibly a sample `pwgen -r <forbidden characters> <length> 1` command) were documented.

Best,
Georg
Tags:
Steps To Reproduce: 1. Configure a web UI user with a complex password such as SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq
2. Copy paste username and password into the browser
3. Try to log in
Additional Information:
Attached Files:
Notes
(0004737)
bruno-at-bareos   
2022-08-31 11:16   
Thanks for your report. The title is a bit misleading, as the problem seems to be present only with the webui.
Having a strong password like the one described works perfectly with dir<->bconsole, for example.

We are now checking where the problem really occurs.
(0004738)
bruno-at-bareos   
2022-08-31 11:17   
Long or complicated passwords are truncated during the POST operation of the login form.
Those passwords work well with bconsole, for example.
(0004739)
crameleon   
2022-08-31 11:28   
Apologies, I did not consider it to be specific to the webui. Thanks for looking into this! Maybe the POST truncation could be adjusted in my Apache web server?
(0004740)
bruno-at-bareos   
2022-08-31 11:38   
Research so far has shown that the length is what matters: the password for a webui console should be no longer than 64 chars.
Maybe you can confirm this on your installation too, so that when our devs check this the symptoms will be more precise.
(0004741)
crameleon   
2022-09-02 19:00   
Can confirm, with 64 characters it works fine!
(0004742)
crameleon   
2022-09-02 19:02   
And I can also confirm, with one more character, so 65 in total, it returns the "Please provide a director, username and password." message.
(0004744)
frank   
2022-09-08 15:23   
(Last edited: 2022-09-08 16:33)
The form data input filter for the password input is set to validate for a password length between 1 and 64. We can simply remove the max value from the filter to avoid problems like this, or set it to a value corresponding to what is allowed in configuration files.
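Until the filter is relaxed, a practical workaround consistent with the findings above is to generate passwords at the confirmed 64-character maximum, e.g. with pwgen:

```
pwgen -s 64 1    # one fully random ("-s") 64-character password
```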
(0004747)
frank   
2022-09-13 18:11   
Fix committed to bareos master branch with changesetid 16581.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1489 [bareos-core] webui minor always 2022-11-02 06:23 2022-11-09 14:11
Reporter: dimmko Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 22.04  
Status: resolved Product Version: 21.1.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: broken storage pool link
Description: Hello!
Sorry for my very bad English!

I get an error when I go to see the details at
bareos-webui/pool/details/Diff

Tags:
Steps To Reproduce: 1) log in to the webui
2) click on a jobid
3) click on "+"
4) click on a pool - Full (for example).
Additional Information: Error:

An error occurred
An error occurred during execution; please try again later.
Additional information:
Exception
File:
/usr/share/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php:94
Message:
Missing argument.
Stack trace:
#0 /usr/share/bareos-webui/module/Pool/src/Pool/Controller/PoolController.php(137): Pool\Model\PoolModel->getPool()
#1 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Pool\Controller\PoolController->detailsAction()
#2 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch()
#3 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
#4 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
#5 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger()
#6 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch()
#7 [internal function]: Zend\Mvc\DispatchListener->onDispatch()
#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
#9 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger()
#11 /usr/share/bareos-webui/public/index.php(46): Zend\Mvc\Application->run()
#12 {main}
Attached Files: bareos_webui_error.png (63,510 bytes) 2022-11-02 06:23
https://bugs.bareos.org/file_download.php?file_id=538&type=bug
Notes
(0004821)
bruno-at-bareos   
2022-11-03 10:18   
To understand the error, we need your pool configuration; you could also use your browser console to log the POST and GET responses and headers.
If possible, please also check the php-fpm log (if used) and the Apache logs (access and error) when the problem occurs.

Thanks.
(0004824)
dimmko   
2022-11-07 09:01   
(Last edited: 2022-11-07 09:05)
bruno-at-bareos, thanks for your comment.

1) my pool - Diff
Pool {
  Name = Diff
  Pool Type = Backup
  RecyclePool = Diff
  Purge Oldest Volume = yes
  Recycle = no
  Recycle Oldest Volume = no
  AutoPrune = no
  Volume Retention = 21 days
  ActionOnPurge = Truncate
  Maximum Volume Jobs = 1
  Label Format = "${Client}_${Level}_${Pool}.${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}-${Minute:p/2/0/r}_${JobId}"
}

apache2 access.log
[07/Nov/2022:10:40:58 +0300] "GET /pool/details/Diff HTTP/1.1" 500 3225 "http://192.168.5.16/job/?period=1&status=Success" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"

apache error.log
[Mon Nov 07 10:50:09.844798 2022] [php:warn] [pid 1340] [client 192.168.1.13:61800] PHP Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /usr/share/bareos-webui/vendor/zendframework/zend-stdlib/src/ArrayObject.php on line 426, referer: http://192.168.5.16/job/?period=1&status=Success


In Chrome (103):
General:
Request URL: http://192.168.5.16/pool/details/Diff
Request Method: GET
Status Code: 500 Internal Server Error
Remote Address: 192.168.5.16:80
Referrer Policy: strict-origin-when-cross-origin

Response Headers:
HTTP/1.1 500 Internal Server Error
Date: Mon, 07 Nov 2022 07:59:54 GMT
Server: Apache/2.4.52 (Ubuntu)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Length: 2927
Connection: close
Content-Type: text/html; charset=UTF-8

Request Headers:
GET /pool/details/Diff HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7
Cache-Control: no-cache
Connection: keep-alive
Cookie: bareos=o87i7ftkdsf2r160k2j0g5vic2
DNT: 1
Host: 192.168.5.16
Pragma: no-cache
Referer: http://192.168.5.16/job/?period=1&status=Success
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
(0004825)
dimmko   
2022-11-07 09:18   
After enabling display_errors in PHP:

[Mon Nov 07 11:17:57.573002 2022] [php:error] [pid 1545] [client 192.168.1.13:63174] PHP Fatal error: Uncaught Zend\\Session\\Exception\\InvalidArgumentException: 'session.name' is not a valid sessions-related ini setting. in /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/SessionConfig.php:90\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(266): Zend\\Session\\Config\\SessionConfig->setStorageOption()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(114): Zend\\Session\\Config\\StandardConfig->setName()\n#2 /usr/share/bareos-webui/module/Application/Module.php(154): Zend\\Session\\Config\\StandardConfig->setOptions()\n#3 [internal function]: Application\\Module->Application\\{closure}()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(939): call_user_func()\n#5 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#9 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#10 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#11 [internal function]: Application\\Module->onBootstrap()\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#14 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#15 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#16 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#17 {main}\n\nNext Zend\\ServiceManager\\Exception\\ServiceNotCreatedException: An exception was raised while creating "Zend\\Session\\SessionManager"; no instance returned in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php:946\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#2 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#3 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#4 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#5 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#6 [internal function]: 
Application\\Module->onBootstrap()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#11 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#12 {main}\n thrown in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php on line 946, referer: http://192.168.5.16/job/?period=1&status=Success
(0004826)
bruno-at-bareos   
2022-11-07 09:55   
I was able to reproduce this. Funnily enough, if you go to storage -> pool tab -> pool name, it works.
We will forward this to the developers.
(0004827)
bruno-at-bareos   
2022-11-07 09:57   
There's a subtle difference in the URL being called:

via storage -> pool tab -> pool name the URL is bareos-webui/pool/details/?pool=Full
via jobid -> "+" -> details -> pool it is bareos-webui/pool/details/Full
-> which creates the "missing parameter" error
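The difference can also be confirmed from the command line. A minimal sketch, assuming the host from the logs above and a valid webui session cookie (placeholder here):

curl -s -o /dev/null -w '%{http_code}\n' -b 'bareos=<session id>' 'http://192.168.5.16/pool/details/?pool=Diff'   # 200
curl -s -o /dev/null -w '%{http_code}\n' -b 'bareos=<session id>' 'http://192.168.5.16/pool/details/Diff'         # 500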
(0004828)
dimmko   
2022-11-07 10:34   
bruno-at-bareos, your method works, thanks.
(0004830)
frank   
2022-11-08 15:11   
Fix committed to bareos master branch with changesetid 16853.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1357 [bareos-core] director crash have not tried 2021-05-18 10:53 2022-11-09 14:09
Reporter: harm Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-dir: ERROR in lib/mem_pool.cc:215 Failed ASSERT: obuf
Description: Hello folks,

When I try to make a long-term copy of an always incremental backup, the Bareos director crashes.

Version: 20.0.1 (02 March 2021) Ubuntu 20.04.1 LTS

Please let me know what more information you need.

Best regards
Harm
Tags:
Steps To Reproduce: Follow the instructions of https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html
Additional Information:
Attached Files:
Notes
(0004130)
harm   
2021-05-19 15:15   
The problem seems to occur when a client is selected. I don't seem to have quite grasped the concept yet, but the error should still be handled, shouldn't it?
(0004149)
arogge   
2021-06-09 17:48   
We need a meaningful backtrace to debug that. Please install a debugger and the debug packages (or tell me what system your director runs on so I can provide you with the commands) and reproduce the issue, so we can see what goes wrong.
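For reference, collecting such a backtrace from the running director usually looks like the following. A minimal sketch, assuming gdb and the matching Bareos debug symbols are already installed:

gdb -p "$(pidof bareos-dir)"
(gdb) thread apply all bt full
(gdb) detach
(gdb) quit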

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1492 [bareos-core] General trivial have not tried 2022-11-09 12:01 2022-11-09 12:01
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 20.0.9
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 20.0.9
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1487 [bareos-core] webui text have not tried 2022-10-13 14:21 2022-11-04 11:54
Reporter: fcolista Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: feedback Product Version: 21.1.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: webui support for PHP 8.1
Description: Hello. I'm the maintainer of BareOS for Alpine Linux.
We are almost ready to go ahead with the new release of Alpine 3.17, where we are going to drop PHP 8.0 support in favor of PHP 8.1.
Can you please confirm that in BareOS webui 21.1.4 PHP 8.1 is fully supported?
Thank you.

.: Francesco
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004820)
fcolista   
2022-10-31 14:57   
Any update on this, please?
(0004823)
frank   
2022-11-04 11:54   
> Can you please confirm that in BareOS webui 21.1.4 PHP 8.1 is fully supported?

Yes, currently there are no known issues that break functionality.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
854 [bareos-core] director tweak have not tried 2017-09-21 10:21 2022-10-11 09:43
Reporter: hostedpower Platform: Linux  
Assigned To: stephand OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version: 16.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Problem with always incremental virtual full
Description: Hi,


All of a sudden we have issues with virtual full (for consolidate) no longer working.

We have 2 pools for each customer. One is for the full (consolidate) and the other for the incremental.

We used to have the option to limit a single job to a single volume; we removed that a while ago, so maybe there is a relation.

We also had to downgrade from 16.2.6 to 16.2.5 because of the MySQL slowness issues; that happened recently, so that's also a possibility.

We have the feeling this software is not very reliable, or at least very complex to get somewhat stable.

PS: The volume it asks for is there, but it doesn't want to use it :(

We used this config for a long time without this particular issue; the only recent changes were adding

        Action On Purge = Truncate
and commenting out:
# Maximum Volume Jobs = 1 # a new file for each backup that is done
Tags:
Steps To Reproduce: Config:



Storage {
    Name = customerx-incr
    Device = customerx-incr # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

Storage {
    Name = customerx-cons
    Device = customerx-cons # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

cat /etc/bareos/pool-defaults.conf
#pool defaults used for always incremental scheme
        Pool Type = Backup
# Maximum Volume Jobs = 1 # a new file for each backup that is done
        Recycle = yes # Bareos can automatically recycle Volumes
        Auto Prune = yes # Prune expired volumes
        Volume Use Duration = 23h
        Action On Purge = Truncate
#end


Pool {
    Name = customerx-incr
    Storage = customerx-incr
    LabelFormat = "vol-incr-"
    Next Pool = customerx-cons
    @/etc/bareos/pool-defaults.conf
}

Pool {
    Name = customerx-cons
    Storage = customerx-cons
    LabelFormat = "vol-cons-"
    @/etc/bareos/pool-defaults.conf
}



# lb1.customerx.com
Job {
    Name = "backup-lb1.customerx.com"
    Client = lb1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb1.customerx.com
    Address = lb1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# lb2.customerx.com
Job {
    Name = "backup-lb2.customerx.com"
    Client = lb2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb2.customerx.com
    Address = lb2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app1.customerx.com
Job {
    Name = "backup-app1.customerx.com"
    Client = app1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app1.customerx.com
    Address = app1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app2.customerx.com
Job {
    Name = "backup-app2.customerx.com"
    Client = app2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app2.customerx.com
    Address = app2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# app3.customerx.com
Job {
    Name = "backup-app3.customerx.com"
    Client = app3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app3.customerx.com
    Address = app3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# db1.customerx.com
Job {
    Name = "backup-db1.customerx.com"
    Client = db1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db1.customerx.com
    Address = db1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# db2.customerx.com
Job {
    Name = "backup-db2.customerx.com"
    Client = db2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db2.customerx.com
    Address = db2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# db3.customerx.com
Job {
    Name = "backup-db3.customerx.com"
    Client = db3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db3.customerx.com
    Address = db3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# Backup for customerx
Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

Device {
  Name = customerx-cons
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}
Additional Information: 2017-09-21 10:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 10:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:57:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:52:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:47:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:42:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:37:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:32:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:27:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:22:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:17:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:12:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.107.bsr
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-incr" to read.
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-cons" to write.
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0378 on "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Start Virtual Backup JobId 2467, Job=backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Consolidating JobIds 2392,971
System Description
Attached Files:
Notes
(0002744)
joergs   
2017-09-21 16:08   
I see some problems in this configuration.

You should check section http://doc.bareos.org/master/html/bareos-manual-main-reference.html#ConcurrentDiskJobs from the manual.

Each device can only read/write one volume at a time. VirtualFull requires multiple volumes.

Basically, you need multiple devices pointing to the same storage directory, each with "Maximum Concurrent Jobs = 1", to make it work.

Your setting

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

will use interleaving instead, which could cause performance issues on restore.
(0002745)
hostedpower   
2017-09-21 16:14   
So you suggest just configuring the device like this:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Would this fix all issues?

Before, we had Maximum Volume Jobs = 1 and I think that worked as well, but it seems discouraged...
(0002746)
joergs   
2017-09-21 16:26   
No, this will not fix the issue, but it prevents interleaving, which might cause other problems.

By pointing to the documentation, I am suggesting that you set up multiple Devices, all pointing to the same Archive Device. Then attach them all to one Director Storage, like

Storage {
    Name = customerx
    Device = customerx-dev1, customerx-dev2, ... # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}
(0002747)
joergs   
2017-09-21 16:26   
With Maximum Concurrent Jobs = 8 you should use 8 Devices.
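For illustration, the suggested layout could look like the following. This is only a sketch (two devices shown for brevity; the device names are made up), not a tested configuration:

Device {
  Name = customerx-dev1
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Device {
  Name = customerx-dev2
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Storage {
    Name = customerx
    Device = customerx-dev1, customerx-dev2
    Media Type = customerx
    Address = xxx
    Password = "xxx"
    Maximum Concurrent Jobs = 2 # one per device; with 8 devices you could keep 8 here
}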
(0002748)
hostedpower   
2017-09-21 16:37   
Hi Joergs,

Thanks a lot, that documentation makes sense and seems to have improved since I last read it (or I missed it somehow).

Will test it like that, but it looks very promising :)
(0002754)
hostedpower   
2017-09-21 22:57   
I've read it, but does that mean I need to create multiple storage devices in this case? That's a lot of extra definitions just to back up one customer. It would be nice if you could simply declare the device object once and state that there are 8 instances of it, for example, all in just one definition. That shouldn't be too hard, I suppose?

Something like:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8 --> this automatically creates customerx-incr1 --> customerx-incr8, probably with some extra setting to allow it.
}



For now, would it be a solution to set

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # just use one at the same time
}

Since we have so many clients, it would still run multiple backups at the same time for different clients, I suppose? (Each client has its own media, device and storage.)


PS: We want to keep about 20 days of data; is the following config OK together with the above scenario?


JobDefs {
    Name = "HPJobInc"
    Type = Backup
    Level = Full
    Write Bootstrap = "/var/lib/bareos/%c.bsr"
    Accurate=yes
    Level=Full
    Messages = Standard
    Always Incremental = yes
    Always Incremental Job Retention = 20 days
    # The resulting interval between full consolidations when running daily backups and daily consolidations is Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job.
    Always Incremental Max Full Age = 35 days # should NEVER be less than Always Incremental Job Retention -> Every 15 days the full backup is also consolidated ( Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job )
    Always Incremental Keep Number = 5 # Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than "Always Incremental Job Retention".
    Maximum Concurrent Jobs = 5 # Let up to 5 jobs run
}
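For clarity, plugging the values above into the formula quoted in the comments: the interval between full consolidations is Always Incremental Max Full Age - Always Incremental Job Retention = 35 days - 20 days = 15 days, which matches the "every 15 days" remark.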
(0002759)
hostedpower   
2017-09-25 09:50   
We have now set:

  Maximum Concurrent Jobs = 1 # just use one at the same time

But some jobs also have the same error:

2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 
2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)


It looks almost as if multiple jobs can't exist together in one volume (well, they can, but then issues like this start to occur).

Before, probably thanks to "Maximum Volume Jobs = 1", we never encountered these problems.
(0002760)
joergs   
2017-09-25 11:17   
Only one access per volume is possible at a time.
So reading and writing to the same volume is not possible.

I thought you covered this with "Maximum Use Duration = 23h". Have you disabled it, or did you run multiple jobs during one day?

However, this is a bug tracker. I think further questions about Always Incrementals are best handled using the bareos-users mailing list or a bareos.com support ticket.
(0002761)
hostedpower   
2017-09-25 11:37   
Yes, I was wondering why we encountered it now and never before.

It wants to swap a consolidate-pool volume onto the incremental device (or vice versa). I don't understand why.
(0002763)
joergs   
2017-09-25 12:15   
Please attach the output of

list joblog jobid=2668
(0002764)
hostedpower   
2017-09-25 12:22   
Enter a period to cancel a command.
*list joblog jobid=2668
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Start Virtual Backup JobId 2668, Job=backup-web.hosted-power.com.2017-09-24_09.00.03_27
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Consolidating JobIds 2593,1173
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.87.bsr
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-incr" to read.
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-cons" to write.
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0344 on "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 09:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 10:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 12:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 13:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 14:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
   [identical warning repeated every 5 minutes, 19 further occurrences, through 2017-09-24 15:55:18]
 2017-09-24 16:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 16:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
   [identical warning repeated every 5 minutes, 94 further occurrences, through 2017-09-24 23:55:18]
 2017-09-25 00:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-25 00:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [the same vol_mgr.c:557 warning repeated every 5 minutes]
 2017-09-25 09:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:18:14 bareos-sd JobId 2668: acquire.c:221 Job 2668 canceled.
 2017-09-25 09:18:14 bareos-sd JobId 2668: Elapsed time=418423:18:14, Transfer rate=0 Bytes/second
 2017-09-25 09:18:14 hostedpower-dir JobId 2668: Bareos hostedpower-dir 16.2.5 (03Mar17):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
  JobId: 2668
  Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
  Backup Level: Virtual Full
  Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
  FileSet: "linux-all-mysql" 2017-08-09 00:15:00
  Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
  Scheduled time: 24-Sep-2017 09:00:03
  Start time: 04-Sep-2017 02:15:02
  End time: 04-Sep-2017 02:42:30
  Elapsed time: 27 mins 28 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 194
  Volume Session Time: 1506027627
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status: Canceled
  Accurate: yes
  Termination: Backup Canceled

You have messages.
*
(0002765)
hostedpower   
2017-09-27 11:57   
Hi,

Now jobs seem to succeed for the moment.

They also always seem to be set to Incremental, while before they were set to Full after consolidation.

An example of such a job:

2017-09-27 09:02:07 hostedpower-dir JobId 2892: Joblevel was set to joblevel of first consolidated job: Incremental
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2892
 Job: backup-web.hosted-power.com.2017-09-27_09.00.02_47
 Backup Level: Virtual Full
 Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
 FileSet: "linux-all-mysql" 2017-08-09 00:15:00
 Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 27-Sep-2017 09:00:02
 Start time: 07-Sep-2017 02:15:03
 End time: 07-Sep-2017 02:42:52
 Elapsed time: 27 mins 49 secs
 Priority: 10
 SD Files Written: 2,803
 SD Bytes Written: 8,487,235,164 (8.487 GB)
 Rate: 5085.2 KB/s
 Volume name(s): vol-cons-0010
 Volume Session Id: 121
 Volume Session Time: 1506368550
 Last Volume Bytes: 8,495,713,067 (8.495 GB)
 SD Errors: 0
 SD termination status: OK
 Accurate: yes
 Termination: Backup OK

 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: purged JobIds 2817,1399 as they were consolidated into Job 2892
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Jobs older than 6 months .
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Jobs found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Files.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Files found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: End auto prune.

 
2017-09-27 09:02:03 bareos-sd JobId 2892: Elapsed time=00:01:46, Transfer rate=80.06 M Bytes/second
 
2017-09-27 09:00:32 bareos-sd JobId 2892: End of Volume at file 1 on device "hostedpower-incr" (/home/hostedpower/bareos), Volume "vol-cons-0344"
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Ready to read from volume "vol-incr-0097" on device "hostedpower-incr" (/home/hostedpower/bareos).
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Forward spacing Volume "vol-incr-0097" to file:block 0:4709591.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Recycled volume "vol-cons-0010" on device "hostedpower-cons" (/home/hostedpower/bareos), all previous data lost.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Forward spacing Volume "vol-cons-0344" to file:block 0:215.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Start Virtual Backup JobId 2892, Job=backup-web.hosted-power.com.2017-09-27_09.00.02_47
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Consolidating JobIds 2817,1399
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.48.bsr
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-incr" to read.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-0344" as Used.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: There are no more Jobs associated with Volume "vol-cons-0010". Marking it purged.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: All records pruned from Volume "vol-cons-0010"; marking it "Purged"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Recycled volume "vol-cons-0010"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-cons" to write.
 
2017-09-27 09:00:14 bareos-sd JobId 2892: Ready to read from volume "vol-cons-0344" on device "hostedpower-incr" (/home/hostedpower/bareos).
(0002766)
hostedpower   
2017-09-28 12:39   
Very strange: most jobs work for the moment, it seems (but for how long?).

Some show Full, others Incremental.


hostedpower-dir JobId 2956: Joblevel was set to joblevel of first consolidated job: Full
 
2017-09-28 09:05:03 hostedpower-dir JobId 2956: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2956
 Job: backup-vps53404.2017-09-28_09.00.02_20
 Backup Level: Virtual Full
 Client: "vps53404" 16.2.4 (01Jul16) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
 FileSet: "windows-all" 2017-08-08 22:15:00
 Pool: "vps53404-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "vps53404-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 28-Sep-2017 09:00:02

Before they always all showed full
(0002802)
hostedpower   
2017-10-19 09:43   
Well, I downgraded to 16.2.5 and guess what: the issue was gone for a few weeks.

Now I have tried 16.2.7 and the issue is back again ...

2017-10-19 09:40:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
[the same warning repeated every 5 minutes]
 
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)

Coincidence? I'm starting to doubt it.
(0002830)
hostedpower   
2017-12-02 12:19   
Hi,


I'm still on 17.2.7 and have to say it's an intermittent error. Things go fine for days and then, all of a sudden, one or more jobs suffer from it.

We never had it in the past until a certain version; I'm pretty sure of that.
(0002949)
hostedpower   
2018-03-26 20:22   
The problem was gone for a long time, but now it's back in full force. Any idea what the cause could be?
(0002991)
stephand   
2018-05-04 10:57   
With larger MySQL databases and Bareos 17.2, for incremental jobs with accurate=yes, it seems to help to add the following index:

CREATE INDEX jobtdate_idx ON Job (JobTDate);
ANALYZE TABLE Job;

Could you please check if that works for you?
(0002992)
hostedpower   
2018-05-04 11:16   
OK, thanks. We added the index, but it took only 0.5 seconds to create. Usually that means there wasn't an issue :)

Creating an index that is slow to build usually means there is a (serious) performance gain to be had.
(0002994)
stephand   
2018-05-04 17:56   
Certainly it depends on the size of the Job table. I measured a 25% speedup with this index on a Job table with 10,000 records.

However, looking at the logs like
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
that's a different problem; it has nothing to do with the index.
As Joerg already suggested, use multiple storage devices; I'd propose increasing their number. This is meanwhile documented at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
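
For reference, a minimal sketch of the Director side of such a multiple-device setup (not from this thread; the names and secrets are illustrative). Each additional SD Device gets its own Storage resource in the Director, so jobs can be scheduled onto the devices independently:

Storage {
  Name = FileStorage1
  Address = bareos-sd.example.com
  Password = "<sd-secret>"
  Device = FileStorage1        # must match the Device name in the SD
  Media Type = File
  Maximum Concurrent Jobs = 1
}

Storage {
  Name = FileStorage2
  Address = bareos-sd.example.com
  Password = "<sd-secret>"
  Device = FileStorage2
  Media Type = File
  Maximum Concurrent Jobs = 1
}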
(0003047)
stephand   
2018-06-21 07:59   
Were you able to solve the problem by using multiple storage devices?
(0003053)
hostedpower   
2018-06-27 23:56   
While this might work, we use at least 50-60 devices at the moment, so adding extra storage devices would be a lot of extra work.

Why not create a device of type 'disk volume' that automatically gets the right properties? It would make things SO MUCH EASIER and bring Bareos into 2018 ...

Is there anything that can be done to get this supported? We would be willing to sponsor this if needed...
(0003145)
stephand   
2018-10-22 15:54   
Hi,

When you say "a lot of extra work to add extra storage devices" are you talking about the Storage Daemon configuration mentioned at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
which is a always

Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

where only the number in Name = FileStorage1 is increased for each Device?

Are you already using configuration management tools like Ansible, Puppet, etc.? Then it shouldn't be too hard to get done.

Or what exactly do you mean by "Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring bareos into the 2018 ..."?

Let me guess: let's imagine we had a MultiDevice resource in the SD configuration. Then the example config from http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices could be written like this:

The Bareos Director config:

Director {
  Name = bareos-dir.example.com
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "<secret>"
}
 
Storage {
  Name = File
  Address = bareos-sd.bareos.com
  Password = "<sd-secret>"
  Device = MultiFileStorage
  # number of devices = Maximum Concurrent Jobs
  Maximum Concurrent Jobs = 4
  Media Type = File
}

The Bareos Storagedaemon config:

Storage {
  Name = bareos-sd.example.com
  # any number >= 4
  Maximum Concurrent Jobs = 20
}
 
Director {
  Name = bareos-dir.example.com
  Password = "<sd-secret>"
}
 
MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}
 
Is that what you mean?

If not, please give an example of how the config should look to make things easier for you.
(0003146)
hostedpower   
2018-10-22 16:25   
We're indeed looking into Ansible to automate this, but still, something like:

MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}

would be more than fantastic!!

Just a single device supporting concurrent access in an easy fashion!

Probably we could then also set "Maximum Concurrent Jobs = 4" pretty safely?


I can imagine that if you're used to Bareos (and tapes), this may seem a strange way of working.

However, if you're used to (most) other backup software that supports hard drives by original design, the way it's designed now for disks is just way too complicated :(
(0003768)
hostedpower   
2020-02-10 21:08   
Hi,


I think this feature has been implemented: https://docs.bareos.org/Configuration/StorageDaemon.html#storageresourcemultiplieddevice

Does it require an autochanger to work, as discussed in this thread? Or would simply having more devices, thanks to the Count parameter, be sufficient?

I ask because lately we again see a lot of the errors reported here :| :)
(0003774)
arogge   
2020-02-11 09:34   
The autochanger is optional, but the feature won't help you much if you don't configure one.
With the autochanger you only need to configure one storage device in your Director. Otherwise you'd need to configure each of the multiplied devices in your Director separately.

This is, of course, not a physically existing autochanger; it is just an autochanger configuration in the storage daemon that groups the different storage devices together.
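
To illustrate, a minimal sketch of a multiplied device grouped by such a pseudo-autochanger (assuming a Bareos version that supports the Count directive; all names here are illustrative):

# bareos-sd: one Device resource multiplied into 4 devices
Device {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1   # per multiplied device
  Count = 4                     # yields MultiFileStorage0001 .. MultiFileStorage0004
}

# bareos-sd: pseudo-autochanger grouping the multiplied devices
Autochanger {
  Name = FileAutochanger
  Device = MultiFileStorage
  Changer Device = /dev/null    # no physical changer exists
  Changer Command = ""
}

# bareos-dir: a single Storage resource is then enough
Storage {
  Name = File
  Address = bareos-sd.example.com
  Password = "<sd-secret>"
  Device = FileAutochanger
  Media Type = File
  Autochanger = yes
  Maximum Concurrent Jobs = 4
}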
(0003776)
hostedpower   
2020-02-11 10:02   
OK, but in our case we have more than 100 storages configured with different names.

Do we need multiple autochangers as well, or just one autochanger for all these devices? I'm afraid we need one per storage, so we'd have to add tons of autochangers too, right? :)
(0003777)
arogge   
2020-02-11 10:30   
If you have more than 100 storages, I hope you're generating your configuration with something like Puppet or Ansible.
Why exactly do you need such a large number of individual storages?
Usually, if you're using only file-based storage, a single storage (or file autochanger) per RAID array is enough. Everything else usually just overcomplicates things in the end.
(0003798)
hostedpower   
2020-02-12 10:04   
Well, it's because we allocate storage space for each customer, so each customer pays for their own storage. Putting everything into one large storage would no longer show us who is using what, exactly.

Is there a better way to allocate storage to individual customers while at the same time using one large storage, as you suggest?

PS: Yes, we generate the config, but updating it now to include an autochanger would still be quite some work, since we generate this config only once.

Just adding a device count is easy, since we use an include file. So having to add autochangers now isn't really what we hoped for :)
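
One conceivable way to keep per-customer accounting while sharing one large storage (a sketch, not something suggested in this thread, and assuming per-Pool volume usage is good enough for billing) would be one Pool per customer, each with its own label prefix and volume cap:

Pool {
  Name = customer-acme          # illustrative customer name
  Pool Type = Backup
  Storage = File                # the single shared storage
  Label Format = "acme-"        # volumes become acme-0001, acme-0002, ...
  Maximum Volume Bytes = 10G
  Maximum Volumes = 100         # caps how much storage this customer can consume
}

The bconsole command "list media pool=customer-acme" would then show exactly which volumes, and how many bytes, that customer occupies.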
(0004060)
hostedpower   
2020-12-01 12:43   
Hi,

We still get this: Need volume from other drive, but swap not possible

The strange thing is that it works 99% of the time, but then we have periods where we see this error a lot. I don't understand why it mostly works so well, only to work so badly at other times.

It's one of the primary reasons we're now looking at other backup solutions. On top of that, we have many storage servers, and Bareos currently has no way to let "x number of tasks" run on a per-storage-server basis.
(0004217)
hostedpower   
2021-08-25 11:27   
(Last edited: 2021-08-26 23:41)
OK, we finally re-architected our whole backup infrastructure, only to find this problem/bug hitting us hard again.

We use the latest Bareos 20.2 version.

We now use one large folder for all backups with at most 10 concurrent consolidations. We use PostgreSQL as our database engine (so it cannot be because of MySQL). We tried to follow all best practices; I don't understand what is wrong :(


Storage {
        Name = AI-Incremental
        Device = AI-Incremental-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Storage {
        Name = AI-Consolidated
        Device = AI-Consolidated-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Device {
  Name = AI-Incremental
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Incremental-Autochanger"
  Device = AI-Incremental

  Changer Device = /dev/null
  Changer Command = ""
}


Device {
  Name = AI-Consolidated
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Consolidated-Autochanger"
  Device = AI-Consolidated

  Changer Device = /dev/null
  Changer Command = ""
}


I suppose the error must be easy to spot? Otherwise everyone would have this problem :(

(0004218)
hostedpower   
2021-08-25 11:32   
3838 machine.example.com-files machine.example.com Backup VirtualFull 0 0.00 B 0 Running
52 2021-08-25 11:25:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 51-48: the same warning repeated every 5 minutes]
47 2021-08-25 11:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
46 2021-08-25 10:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 45-25: the same warning repeated every 5 minutes]
24 2021-08-25 09:05:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
23 2021-08-25 09:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
22 2021-08-25 08:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 21-13: the same warning repeated every 5 minutes]
12 2021-08-25 08:05:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
11 2021-08-25 08:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
10 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0007" (/var/lib/bareos/storage)
9 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
8 2021-08-25 08:00:24 backup1-sd JobId 3838: Ready to append to end of Volume "vol-cons-0287" size=12609080131
7 2021-08-25 08:00:24 backup1-sd JobId 3838: Volume "vol-cons-0287" previously written, moving to end of data.
6 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Consolidated0007" to write.
5 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Incremental0007" to read.
4 2021-08-25 08:00:23 backup1-dir JobId 3838: Connected Storage daemon at backupx.xxxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-25 08:00:23 backup1-dir JobId 3838: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.379.bsr
(0004219)
hostedpower   
2021-08-25 11:48   
I now see that it tries to mount consolidated volumes on the incremental devices; you can see it in the sample above, but also below:
25-Aug 08:02 backup1-dir JobId 3860: Start Virtual Backup JobId 3860, Job=machine.example.com-files.2021-08-25_08.00.31_02
25-Aug 08:02 backup1-dir JobId 3860: Consolidating JobIds 3563,724
25-Aug 08:02 backup1-dir JobId 3860: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.394.bsr
25-Aug 08:02 backup1-dir JobId 3860: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Incremental0005" to read.
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Consolidated0005" to write.
25-Aug 08:02 backup1-sd JobId 3860: Volume "vol-cons-0292" previously written, moving to end of data.
25-Aug 08:02 backup1-sd JobId 3860: Ready to append to end of Volume "vol-cons-0292" size=26118365623
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Please mount read Volume "vol-cons-0287" for:
    Job: machine.example.com-files.2021-08-25_08.00.31_02
    Storage: "AI-Incremental0005" (/var/lib/bareos/storage)
    Pool: AI-Incremental
    Media type: AI

Might this be the cause? What could be causing it?
(0004220)
hostedpower   
2021-08-25 11:51   
This is the first job today with these messages, but it succeeded anyway; maybe you can see what is going wrong here?

2021-08-24 15:53:54 backup1-dir JobId 3549: console command: run AfterJob ".bvfs_update JobId=3549"
30 2021-08-24 15:53:54 backup1-dir JobId 3549: End auto prune.

29 2021-08-24 15:53:54 backup1-dir JobId 3549: No Files found to prune.
28 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Files.
27 2021-08-24 15:53:54 backup1-dir JobId 3549: No Jobs found to prune.
26 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Jobs older than 6 months .
25 2021-08-24 15:53:54 backup1-dir JobId 3549: purged JobIds 3237,648 as they were consolidated into Job 3549
24 2021-08-24 15:53:54 backup1-dir JobId 3549: Bareos backup1-dir 20.0.2 (10Jun21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 3549
Job: another.xxx-files.2021-08-24_15.48.51_46
Backup Level: Virtual Full
Client: "another.xxx" 20.0.2 (10Jun21) Debian GNU/Linux 10 (buster),debian
FileSet: "linux-files" 2021-07-20 16:03:24
Pool: "AI-Consolidated" (From Job Pool's NextPool resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "AI-Consolidated" (From Storage from Pool's NextPool resource)
Scheduled time: 24-Aug-2021 15:48:51
Start time: 03-Aug-2021 23:08:50
End time: 03-Aug-2021 23:09:30
Elapsed time: 40 secs
Priority: 10
SD Files Written: 653
SD Bytes Written: 55,510,558 (55.51 MB)
Rate: 1387.8 KB/s
Volume name(s): vol-cons-0288
Volume Session Id: 2056
Volume Session Time: 1628888564
Last Volume Bytes: 55,596,662 (55.59 MB)
SD Errors: 0
SD termination status: OK
Accurate: yes
Bareos binary info: official Bareos subscription
Job triggered by: User
Termination: Backup OK

23 2021-08-24 15:53:54 backup1-dir JobId 3549: Joblevel was set to joblevel of first consolidated job: Incremental
22 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table done
21 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table with 653 entries start
20 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Incremental0008" (/var/lib/bareos/storage).
19 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Consolidated0008" (/var/lib/bareos/storage).
18 2021-08-24 15:53:54 backup1-sd JobId 3549: Elapsed time=00:00:01, Transfer rate=55.51 M Bytes/second
17 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-incr-0135" to file:block 0:2909195921.
16 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-incr-0135" on device "AI-Incremental0008" (/var/lib/bareos/storage).
15 2021-08-24 15:53:54 backup1-sd JobId 3549: End of Volume at file 0 on device "AI-Incremental0008" (/var/lib/bareos/storage), Volume "vol-cons-0284"
14 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-cons-0284" to file:block 0:307710024.
13 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-cons-0284" on device "AI-Incremental0008" (/var/lib/bareos/storage).
12 2021-08-24 15:48:54 backup1-sd JobId 3549: Please mount read Volume "vol-cons-0284" for:
Job: another.xxx-files.2021-08-24_15.48.51_46
Storage: "AI-Incremental0008" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
11 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0284 on "AI-Incremental0008" (/var/lib/bareos/storage)
10 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-cons-0284 from dev="AI-Incremental0005" (/var/lib/bareos/storage) to "AI-Incremental0008" (/var/lib/bareos/storage)
9 2021-08-24 15:48:54 backup1-sd JobId 3549: Wrote label to prelabeled Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage)
8 2021-08-24 15:48:54 backup1-sd JobId 3549: Labeled new Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage).
7 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Consolidated0008" to write.
6 2021-08-24 15:48:53 backup1-dir JobId 3549: Created new Volume "vol-cons-0288" in catalog.
5 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Incremental0008" to read.
4 2021-08-24 15:48:52 backup1-dir JobId 3549: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-24 15:48:52 backup1-dir JobId 3549: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.331.bsr
2 2021-08-24 15:48:52 backup1-dir JobId 3549: Consolidating JobIds 3237,648
1 2021-08-24 15:48:52 backup1-dir JobId 3549: Start Virtual Backup JobId 3549, Job=another.xxx-files.2021-08-24_15.48.51_46
(0004221)
hostedpower   
2021-08-25 11:54   
I just saw this "swap not possible" error also happen when the same device/storage/pool was used:

5 2021-08-24 15:54:03 backup1-sd JobId 3553: Ready to read from volume "vol-incr-0136" on device "AI-Incremental0002" (/var/lib/bareos/storage).
14 2021-08-24 15:49:03 backup1-sd JobId 3553: Please mount read Volume "vol-incr-0136" for:
Job: xxx.xxx.bxxe-files.2021-08-24_15.48.52_50
Storage: "AI-Incremental0002" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
13 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-incr-0136 on "AI-Incremental0002" (/var/lib/bareos/storage)
12 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-incr-0136 from dev="AI-Incremental0006" (/var/lib/bareos/storage) to "AI-Incremental0002" (/var/lib/bareos/storage)
11 2021-08-24 15:49:03 backup1-sd JobId 3553: End of Volume at file 0 on device "AI-Incremental0002" (/var/lib/bareos/storage), Volume "vol-cons-0285"
(0004222)
hostedpower   
2021-08-25 12:20   
PS: The Consolidate job was missing from the config posted above:

Job {
        Name = "Consolidate"
        Type = "Consolidate"
        Accurate = "yes"

        JobDefs = "DefaultJob"

        Schedule = "ConsolidateCycle"
        Max Full Consolidations = 200

        Maximum Concurrent Jobs = 1
        Prune Volumes = yes

        Priority = 11
}
(0004227)
hostedpower   
2021-08-26 13:11   
2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0282 on "AI-Incremental0004" (/var/lib/bareos/storage) <-------
10 2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0282 from dev="AI-Consolidated0010" (/var/lib/bareos/storage) to "AI-Incremental0004" (/var/lib/bareos/storage)
9 2021-08-26 11:34:12 backup1-sd JobId 4151: Wrote label to prelabeled Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage)
8 2021-08-26 11:34:12 backup1-sd JobId 4151: Labeled new Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage).
7 2021-08-26 11:34:12 backup1-dir JobId 4151: Using Device "AI-Incremental0001" to write.
6 2021-08-26 11:34:12 backup1-dir JobId 4151: Created new Volume "vol-cons-0298" in catalog.


Jobs don't even continue after this "event"...
(0004494)
hostedpower   
2022-01-31 08:33   
This still happens, even after using separate devices and labels, etc.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1474 [bareos-core] storage daemon crash always 2022-07-27 16:12 2022-10-04 10:28
Reporter: jens Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.12  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-sd crashing on VirtualFull with SIGSEGV ../src/lib/serial.cc file not found
Description: When running 'always incremental' backup scheme the storage daemon crashes with segmentation fault
on VirtualFull backup triggered by consolidation.

Job error:
bareos-dir JobId 1267: Fatal error: Director's comm line to SD dropped.

GDB debug:
bareos-sd (200): stored/mac.cc:159-154 joblevel from SOS_LABEL is now F
bareos-sd (130): stored/label.cc:672-154 session_label record=ec015288
bareos-sd (150): stored/label.cc:718-154 Write sesson_label record JobId=154 FI=SOS_LABEL SessId=1 Strm=154 len=165 remainder=0
bareos-sd (150): stored/label.cc:722-154 Leave WriteSessionLabel Block=1351364161d File=0d
bareos-sd (200): stored/mac.cc:221-154 before write JobId=154 FI=1 SessId=1 Strm=UNIX-Attributes-EX len=123
Thread 4 "bareos-sd" received signal SIGSEGV, Segmentation fault.

[Switching to Thread 0x7ffff4c5b700 (LWP 2271)]
serial_uint32 (ptr=ptr@entry=0x7ffff4c5aa70, v=<optimized out>) at ../../../src/lib/serial.cc:76
76 ../../../src/lib/serial.cc: No such file or directory.


I am running daily incrementals into the 'File' pool, consolidating every 4 days into the 'FileCons' pool, a virtual full every 1st Monday of the month into the 'LongTerm-Disk' pool,
and finally a migration to tape every 2nd Monday of the month from the 'LongTerm-Disk' pool into the 'LongTerm-Tape' pool.

BareOS version: 19.2.7
BareOS director and storage daemon on the same machine.
Disk storage on CEPH mount
Tape storage on Fujitsu Eternus LT2 tape library with 1 LTO-7 drive

---------------------------------------------------------------------------------------------------
Storage Device config:

FileStorage with 10 devices, all pointing to the same first folder:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /storage/backup/bareos_Incremental # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

FileStorageCons with 10 devices, all pointing to the same second folder:

Device {
  Name = FileStorageCons
  Media Type = FileCons
  Archive Device = /storage/backup/bareos_Consolidate # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
...

FileStorageVault with 10 devices, all pointing to the same third folder:

Device {
  Name = FileStorageVault
  Media Type = FileVLT
  Archive Device = /storage/backup/bareos_LongTermDisk # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

Tape Device:

Device {
  Name = IBM-ULTRIUM-HH7
  Device Type = Tape
  DriveIndex = 0
  ArchiveDevice = /dev/nst0
  Media Type = IBM-LTO-7
  AutoChanger = yes
  AutomaticMount = yes
  LabelMedia = yes
  RemovableMedia = yes
  Autoselect = yes
  MaximumFileSize = 10GB
  Spool Directory = /storage/scratch
  Maximum Spool Size = 2199023255552 # maximum total spool size in bytes (2Tbyte)
}

---------------------------------------------------------------------------------------------------
Pool Config:

Pool {
  Name = AI-Incremental # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # Bareos can automatically recycle volumes from this pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 72 days
  Storage = File # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 500 # maximum allowed total number of volumes in pool
  Label Format = "AI-Incremental_" # volumes will be labeled "AI-Incremental_-<volume-id>"
  Volume Use Duration = 36 days # a volume will be used no longer than this
  Next Pool = AI-Consolidate # next pool for consolidation
  Job Retention = 72 days
  File Retention = 36 days
}

Pool {
  Name = AI-Consolidate # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # Bareos can automatically recycle volumes from this pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 366 days
  Job Retention = 180 days
  File Retention = 93 days
  Storage = FileCons # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "AI-Consolidate_" # volumes will be labeled "AI-Consolidate_-<volume-id>"
  Volume Use Duration = 2 days # a volume will be used no longer than this
  Next Pool = LongTerm-Disk # next pool for long term backups to disk
}

Pool {
  Name = LongTerm-Disk # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # Bareos can automatically recycle volumes from this pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 732 days
  Job Retention = 732 days
  File Retention = 366 days
  Storage = FileVLT # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "LongTerm-Disk_" # volumes will be labeled "LongTerm-Disk_<volume-id>"
  Volume Use Duration = 2 days # a volume will be used no longer than this
  Next Pool = LongTerm-Tape # next pool for long term backups to tape
  Migration Time = 2 days # Jobs older than 2 days in this pool will be migrated to 'Next Pool'
}

Pool {
  Name = LongTerm-Tape
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 732 days # How long should the backups be kept?
  Job Retention = 732 days
  File Retention = 366 days
  Storage = TapeLibrary # Physical Media
  Maximum Block Size = 1048576
  Recycle Pool = Scratch
  Cleaning Prefix = "CLN"
}

---------------------------------------------------------------------------------------------------
JobDefs:

JobDefs {
  Name = AI-Incremental
  Type = Backup
  Level = Incremental
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Accurate = yes
  Allow Mixed Priority = yes
  Always Incremental = yes
  Always Incremental Job Retention = 36 days
  Always Incremental Keep Number = 14
  Always Incremental Max Full Age = 31 days
}

JobDefs {
  Name = AI-Consolidate
  Type = Consolidate
  Storage = File
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 25
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Max Full Consolidations = 1
  Prune Volumes = yes
  Accurate = yes
}

JobDefs {
  Name = LongTermDisk
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 30
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Accurate = yes
  Run Script {
    console = "update jobid=%1 jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
}

JobDefs {
  Name = "LongTermTape"
  Pool = LongTerm-Disk
  Messages = Standard
  Type = Migrate
}


---------------------------------------------------------------------------------------------------
Job Config ( per client )

Job {
  Name = "Incr-<client>"
  Description = "<client> always incremental 36d retention"
  Client = <client>
  Jobdefs = AI-Incremental
  FileSet="fileset-<client>"
  Schedule = "daily_incremental_<client>"
  # Write Bootstrap file for disaster recovery.
  Write Bootstrap = "/storage/bootstrap/%j.bsr"
  # The higher the number the lower the job priority
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "AI-Consolidate"
  Description = "consolidation of 'always incremental' jobs"
  Client = backup.mgmt.drs
  FileSet = SelfTest
  Jobdefs = AI-Consolidate
  Schedule = consolidate

  # The higher the number the lower the job priority
  Priority = 25
}

Job {
  Name = "VFull-<client>"
  Description = "<client> monthly virtual full"
  Messages = Standard
  Client = <client>
  Type = Backup
  Level = VirtualFull
  Jobdefs = LongTermDisk
  FileSet=fileset-<client>
  Pool = AI-Consolidate
  Schedule = virtual-full_<client>
  Priority = 30
  Run Script {
    Console = ".bvfs_update"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "migrate-2-tape"
  Description = "monthly migration of virtual full backups from LongTerm-Disk to LongTerm-Tape pool"
  Jobdefs = LongTermTape
  Selection Type = PoolTime
  Schedule = "migrate-2-tape"
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

---------------------------------------------------------------------------------------------------
Schedule config:

Schedule {
  Name = "daily_incremental_<client>"
  Run = daily at 02:00
}

Schedule {
  Name = "consolidate"
  Run = Incremental 3/4 at 00:00
}

Schedule {
  Name = "virtual-full_<client>"
  Run = 1st monday at 10:00
}

Schedule {
  Name = "migrate-2-tape"
  Run = 2nd monday at 8:00
}
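
A quick way to verify that the director parses these schedules as intended (a sketch; the schedule names are the ones defined above) is to ask bconsole for the computed run times:

*status scheduler days=30
*show schedule=consolidate

status scheduler lists the upcoming runs the director actually derived from the Run directives, which makes a mis-typed schedule easy to spot.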

---------------------------------------------------------------------------------------------------
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos_sd_debug.zip (3,771 bytes) 2022-07-27 16:59
https://bugs.bareos.org/file_download.php?file_id=530&type=bug
Notes
(0004688)
bruno-at-bareos   
2022-07-27 16:43   
Could you check the SD's working directory (/var/lib/bareos) for any other trace, backtrace, or debug files?
If you have them, please attach them (possibly compressed).
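For example (a sketch, assuming the default working directory; file names vary by daemon and host):

ls -l /var/lib/bareos/*.trace /var/lib/bareos/*.bactrace /var/lib/bareos/core* 2>/dev/null
tar czf sd-debug.tar.gz /var/lib/bareos/*.trace /var/lib/bareos/*.bactrace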
(0004690)
jens   
2022-07-27 17:00   
debug files attached in private note
(0004697)
bruno-at-bareos   
2022-07-28 09:34   
What is the reason behind running 19.2 instead upgrading to 21 ?
(0004699)
jens   
2022-07-28 13:06   
1. missing comprehensive and easy to follow step-by-step guide on how to upgrade
2. being unconfident about a flawless upgrade procedure without rendering backup data unusable
3. lack of experience and skilled personnel resulting in major effort to roll out new version
4. limited access to online repositories to update local mirrors -> very long lead time to get new versions
(0004700)
jens   
2022-07-28 13:09   
(Last edited: 2022-07-28 13:12)
For the above reasons I am a little hesitant to take the effort of upgrading.
Currently I am considering an update only if this is the only chance to get the issue resolved.
I need confirmation from your end first.
My hope is that there is just something wrong in my configuration, or that I am running an adverse setup, and that changing either one might resolve the issue?
(0004701)
bruno-at-bareos   
2022-08-01 11:59   
Hi Jens,

Thanks for the additional information.

Does this crash happen each time a consolidation VirtualFull is created?
(0004702)
bruno-at-bareos   
2022-08-01 12:04   
Maybe related to fixed in 19.2.9 (available with subscription)
 - fix a memory corruption when autolabeling with increased maximum block size
https://docs.bareos.org/bareos-19.2/Appendix/ReleaseNotes.html#id12
(0004703)
jens   
2022-08-01 12:05   
Hi Bruno,

So far, yes, that is my experience.
It is always failing.
It also fails when repeating or manually rescheduling the failed job through the web-ui during idle hours when nothing else is running on the director.
(0004704)
jens   
2022-08-01 12:14   
The "- fix a memory corruption when autolabeling with increased maximum block size" indeed could be a lead
as I see the following in the job logs ?

Warning: For Volume "AI-Consolidate_0118": The sizes do not match!
Volume=64574484 Catalog=32964717
Correcting Catalog
(0004705)
bruno-at-bareos   
2022-08-02 13:42   
Hi Jens, a quick note about the size mismatch: this is unrelated. Aborted or failed jobs can have this effect.

This fix was introduced with commit https://github.com/bareos/bareos/commit/0086b852d and 19.2.9 has the fix.
(0004800)
bruno-at-bareos   
2022-10-04 10:27   
Closing as a fix already exists
(0004801)
bruno-at-bareos   
2022-10-04 10:28   
Fix is present in source code and published subscription binaries.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1465 [bareos-core] file daemon feature always 2022-05-23 13:31 2022-09-21 16:02
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Use the virtual file path + prefix when no writer is set
Description: Creating the backup works fine, but on restore there is often the problem that the restore must be done to a normal file, as in the PostgreSQL or MariaDB add-ons.
But when using dd or the demo code from the documentation, the file can only be created at an absolute path.
I think it would be better, when no writer command is set (or :writer=none), for the file to be restored to the file prefix (the Where setting of the restore job) + the path of the virtual file, as is done in both Python add-ons.

Sample:
bpipe:file=/_mongobackups_/foo_db.archive:reader=sh -c 'cat /foo/backup_pw_file | mongodump <OPT ... > --archive':writer=none
Then the file would be written to <where>/_mongobackups_/foo_db.archive
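
For context, such a bpipe line is configured as a Plugin entry of a FileSet Include; a minimal sketch (the FileSet name and the dd writer are hypothetical, the mongodump options stay elided as in the sample above):

FileSet {
  Name = "mongo-bpipe-fs"
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "bpipe:file=/_mongobackups_/foo_db.archive:reader=sh -c 'cat /foo/backup_pw_file | mongodump <OPT ... > --archive':writer=dd of=/tmp/foo_db.archive"
  }
}

Today a writer command like the dd above is needed to place the restored stream somewhere; with the proposed behaviour, writer=none would instead restore to <where>/_mongobackups_/foo_db.archive.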
Tags: bpipe
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004755)
bruno-at-bareos   
2022-09-21 16:02   
As it looks more like a feature request, why not propose a PR? ;-)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1476 [bareos-core] file daemon major always 2022-08-03 16:01 2022-08-23 12:08
Reporter: support@ingenium.trading Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backups for Full and Incremental are approx 10 times bigger than the server used
Description: Whenever a backup job is running, it takes very long without errors, and the backup size is about 10 times bigger than the data the server actually uses.

Earlier we had connectivity issues, so I enabled the heartbeat in client/myself.conf => Heartbeat Interval = 1 min
Tags:
Steps To Reproduce: Manually start the job via webui or bconsole.
Additional Information: Backup Server:
OS: Fedora 35
Bareos Version: 22.0.0~pre613.d7109f123

Client Server:
OS: Alma Linux 9 / CentOS7
Bareos Version: 22.0.0~pre613.d7109f123

Backup job:
03-Aug 09:48 bareos-dir JobId 565: No prior Full backup Job record found.
03-Aug 09:48 bareos-dir JobId 565: No prior or suitable Full backup found in catalog. Doing FULL backup.
03-Aug 09:48 bareos-dir JobId 565: Start Backup JobId 565, Job=td02.example.com.2022-08-03_09.48.28_03
03-Aug 09:48 bareos-dir JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Max configured use duration=82,800 sec. exceeded. Marking Volume "AI-Example-Consolidated-0490" as Used.
03-Aug 09:48 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0584" in catalog.
03-Aug 09:48 bareos-dir JobId 565: Using Device "FileStorage01" to write.
03-Aug 09:48 bareos-dir JobId 565: Probing client protocol... (result will be saved until config reload)
03-Aug 09:48 bareos-dir JobId 565: Connected Client: td02.example.com at td02.example.com:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Handshake: Immediate TLS
03-Aug 09:48 bareos-dir JobId 565: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Extended attribute support is enabled
03-Aug 09:48 trade02-fd JobId 565: ACL support is enabled
03-Aug 09:48 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 09:48 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 09:48 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /dev
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /run
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /sys
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
03-Aug 10:27 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0584" Bytes=107,374,159,911 Blocks=1,664,406 at 03-Aug-2022 10:27.
03-Aug 10:27 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0585" in catalog.
03-Aug 10:27 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 10:27 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0585" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 10:27.
03-Aug 11:07 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0585" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:07.
03-Aug 11:07 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0586" in catalog.
03-Aug 11:07 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:07 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0586" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:07.
03-Aug 11:46 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0586" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:46.
03-Aug 11:46 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0587" in catalog.
03-Aug 11:46 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:46 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0587" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:46.
03-Aug 12:25 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0587" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 12:25.
03-Aug 12:25 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0588" in catalog.
03-Aug 12:25 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 12:25 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0588" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 12:25.
03-Aug 12:56 bareos-sd JobId 565: Releasing device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:56 bareos-sd JobId 565: Elapsed time=03:08:04, Transfer rate=45.57 M Bytes/second
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table with 188627 entries start
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table done
03-Aug 12:56 bareos-dir JobId 565: Bareos bareos-dir 22.0.0~pre613.d7109f123 (01Aug22):
  Build OS: Fedora release 35 (Thirty Five)
  JobId: 565
  Job: td02.example.com.2022-08-03_09.48.28_03
  Backup Level: Full (upgraded from Incremental)
  Client: "td02.example.com" 22.0.0~pre553.6a41db3f7 (07Jul22) CentOS Stream release 9,redhat
  FileSet: "ExampleLinux" 2022-08-03 09:48:28
  Pool: "AI-Example-Consolidated" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Pool resource)
  Scheduled time: 03-Aug-2022 09:48:27
  Start time: 03-Aug-2022 09:48:31
  End time: 03-Aug-2022 12:56:50
  Elapsed time: 3 hours 8 mins 19 secs
  Priority: 10
  FD Files Written: 188,627
  SD Files Written: 188,627
  FD Bytes Written: 514,227,307,623 (514.2 GB)
  SD Bytes Written: 514,258,382,470 (514.2 GB)
  Rate: 45510.9 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: yes
  Volume name(s): AI-Example-Consolidated-0584|AI-Example-Consolidated-0585|AI-Example-Consolidated-0586|AI-Example-Consolidated-0587|AI-Example-Consolidated-0588
  Volume Session Id: 4
  Volume Session Time: 1659428963
  Last Volume Bytes: 85,150,808,401 (85.15 GB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: pre-release version: Get official binaries and vendor support on bareos.com
  Job triggered by: User
  Termination: Backup OK

03-Aug 12:56 bareos-dir JobId 565: shell command: run AfterJob "echo '.bvfs_update jobid=565' | bconsole"
03-Aug 12:56 bareos-dir JobId 565: AfterJob: .bvfs_update jobid=565 | bconsole

Client Alma Linux 9:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 196K 47G 1% /dev/shm
tmpfs 19G 2.3M 19G 1% /run
/dev/mapper/almalinux-root 12G 3.5G 8.6G 29% /
/dev/sda1 2.0G 237M 1.6G 13% /boot
/dev/mapper/almalinux-opt 8.0G 90M 7.9G 2% /opt
/dev/mapper/almalinux-home 12G 543M 12G 5% /home
/dev/mapper/almalinux-var 8.0G 309M 7.7G 4% /var
/dev/mapper/almalinux-opt_ExampleAd 8.0G 373M 7.7G 5% /opt/ExampleAd
/dev/mapper/almalinux-opt_ExampleEn 32G 7.5G 25G 24% /opt/ExampleEn
/dev/mapper/almalinux-var_log 20G 8.1G 12G 41% /var/log
/dev/mapper/almalinux-var_lib 12G 259M 12G 3% /var/lib
tmpfs 9.3G 0 9.3G 0% /run/user/1703000011
tmpfs 9.3G 0 9.3G 0% /run/user/1703000004



Server JobDefs:
JobDefs {
  Name = "ExampleLinux"
  Type = Backup
  Client = bareos-fd
  FileSet = "ExampleLinux"
  Storage = File
  Messages = Standard
  Schedule = "BasicBackup"
  Pool = AI-Example-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Full Backup Pool = AI-Example-Consolidated # write Full Backups into "Full" Pool
  Incremental Backup Pool = AI-Example-Incremental # write Incr Backups into "Incremental" Pool
}
System Description
Attached Files:
Notes
(0004710)
bruno-at-bareos   
2022-08-03 18:15   
With bconsole, show FileSet="ExampleLinux" will let us better understand what you've tried to do.

bconsole
estimate job=td02.example.com listing
will show you all the files included.
(0004731)
bruno-at-bareos   
2022-08-23 12:08   
No information given to go further

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
835 [bareos-core] director major always 2017-07-18 02:34 2022-08-08 16:32
Reporter: divanikus Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: acknowledged Product Version: 16.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action: will care
bareos-16.2: impact: yes
bareos-16.2: action: will care
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restore job picks wrong storage
Description: I'm using version 16.2.6 compiled from source package (https://packages.debian.org/buster/bareos)

When I'm trying to restore fileset I get the following:

*restore client=zabbix-int-fd select current all done yes
Using Catalog "Metahouse"
Automatically selected FileSet: ZabbixServer
+-------+-------+----------+---------------+---------------------+------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+----------+---------------+---------------------+------------------+
| 4,949 | F | 557 | 5,332,479,991 | 2017-07-02 01:01:08 | Full-0138 |
| 4,949 | F | 557 | 5,332,479,991 | 2017-07-02 01:01:08 | Full-0140 |
| 5,178 | I | 536 | 5,359,064,324 | 2017-07-03 01:01:15 | Incremental-0139 |
| 5,200 | I | 536 | 5,387,606,276 | 2017-07-04 01:01:13 | Incremental-0007 |
| 5,240 | I | 536 | 5,715,316,484 | 2017-07-17 18:10:47 | Incremental-0012 |
| 5,240 | I | 536 | 5,715,316,484 | 2017-07-17 18:10:47 | Incremental-0014 |
| 5,248 | I | 536 | 5,716,501,252 | 2017-07-18 01:01:00 | Incremental-0012 |
+-------+-------+----------+---------------+---------------------+------------------+
You have selected the following JobIds: 4949,5178,5200,5240,5248

Building directory tree for JobId(s) 4949,5178,5200,5240,5248 ... +++++++++++++++++++++++++++++++++++++++++++++
543 files inserted into the tree and marked for extraction.
Bootstrap records written to /var/lib/bareos/bareos-dir.restore.1.bsr

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================

    Full-0140 bs-00-full FullFileStorage
    Incremental-0012 bs-00-inc IncFileStorage

Volumes marked with "*" are online.


557 files selected to be restored.

Using Catalog "Metahouse"
Job queued. JobId=5270
You have messages.
*m
18-Jul 02:53 bareos-dir JobId 5269: Start Restore Job RestoreFiles.2017-07-18_02.53.36_33
18-Jul 02:53 bareos-dir JobId 5269: Using Device "FullFileStorage" to read.
18-Jul 02:53 bs-00 JobId 5269: Ready to read from volume "Full-0140" on device "FullFileStorage" (/opt/storage/full/).
18-Jul 02:53 bs-00 JobId 5269: Forward spacing Volume "Full-0140" to file:block 0:2999808192.
18-Jul 02:53 bs-00 JobId 5269: End of Volume at file 0 on device "FullFileStorage" (/opt/storage/full/), Volume "Full-0140"
18-Jul 02:53 bs-00 JobId 5269: Warning: acquire.c:239 Read open device "FullFileStorage" (/opt/storage/full/) Volume "Incremental-0012" failed: ERR=dev.c:661 Could not open: /opt/storage/full/Incremental-0012, ERR=No such file or directory

18-Jul 02:53 bs-00 JobId 5269: Please mount read Volume "Incremental-0012" for:
    Job: RestoreFiles.2017-07-18_02.53.36_33
    Storage: "FullFileStorage" (/opt/storage/full/)
    Pool: Incremental
    Media type: File

It just tries to read Incremental-0012 from wrong storage (FullFileStorage instead of IncFileStorage).
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002872)
divanikus   
2018-01-12 14:22   
Problem is still here, Bareos 17.2 from official repo.

bs-00 JobId 11502: Warning: acquire.c:241 Read open device "FullFileStorage" (/opt/storage/full/) Volume "Incremental-0043" failed: ERR=dev.c:664 Could not open: /opt/storage/full/Incremental-0043, ERR=No such file or directory
(0003056)
stephand   
2018-06-28 17:14   
The Documentation for Media Type points out:

"...
If you are writing to disk Volumes, you must make doubly sure that each Device resource defined in the Storage daemon (and hence in the Director’s conf file) has a unique media type. Otherwise Bareos may assume, these Volumes can be mounted and read by any Storage daemon File device.
..."

See http://doc.bareos.org/master/html/bareos-manual-main-reference.html#directiveDirStorageMedia%20Type

It is admittedly not very obvious why this is necessary. However, the documentation contains some more detailed explanations.

We should decide if we really want to consider this a bug, as it can be fixed by configuring different media types.
(0003988)
embareossed   
2020-05-15 06:18   
On the other hand, the documentation for the storage daemon says re device:

"...
If a directory is specified, it is used as file storage. The directory must be existing and be specified as absolute path. Bareos will write to file storage in the specified directory and the filename used will be the Volume name as specified in the Catalog. If you want to write into more than one directory (i.e. to spread the load to different disk drives), you will need to define two Device resources, each containing an Archive Device with a different directory.
..."

I am using file storage, and I have my "tape" files in separate directories. I have performed many restores under this configuration, but today when I tried a restore, I get the type of error reported in this bug.

What are you referring to when you say there are "more detailed explanations" so I can understand how to correct my configuration? Thank you for your response.
(0003989)
embareossed   
2020-05-18 00:58   
(Last edited: 2020-05-23 03:21)
For the time being -- in particular, to perform an urgent restore at this moment -- I have decided to combine the tape (type Disk, in bareos parlance) directories into one, create one storage daemon Device, and just have all my director Storage's point to the one Storage I defined for the storage daemon.

It works now. I can both backup and restore as before. I still feel that this is either a bug, or maybe there needs to be additional documentation explaining how to use multiple directories in the storage daemon for Disk type Storage's.

(0004717)
sedlmeier   
2022-08-08 16:32   
Always use a different Media Type if the Device writes/reads from a different directory. Bareos uses the Media Type as an indicator of which volumes can be mounted in which device.

If you are using more than one Catalog, some devices get mixed up in the databases, even if they have different Media Types.
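
A minimal sketch of the suggested fix for this report (the type names are hypothetical, and the incremental directory is assumed since only /opt/storage/full appears above): give each directory-backed device its own Media Type and mirror it in the Director's Storage resources.

# Storage daemon
Device {
  Name = FullFileStorage
  Media Type = FileFull
  Archive Device = /opt/storage/full
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
}
Device {
  Name = IncFileStorage
  Media Type = FileInc
  Archive Device = /opt/storage/inc
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
}

# Director
Storage {
  Name = bs-00-full
  Device = FullFileStorage
  Media Type = FileFull
  # Address, Password as before
}
Storage {
  Name = bs-00-inc
  Device = IncFileStorage
  Media Type = FileInc
  # Address, Password as before
}

With distinct media types, a restore can no longer try to mount Incremental-* volumes in the full-backup device.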

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1471 [bareos-core] installer / packages tweak N/A 2022-07-13 11:39 2022-07-28 09:27
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Shell example script for Bareos installation on Debian / Ubuntu uses deprecated "apt-key"
Description: Using the script raises the warning: "apt-key is deprecated". In order to correct this, it is suggested to change
---
# add package key
wget -q $URL/Release.key -O- | apt-key add -
---
to
+++
# add package key
wget -q $URL/Release.key -O- | gpg --dearmor -o /usr/share/keyrings/bareos.gpg
sed -i -e 's#deb #deb [signed-by=/usr/share/keyrings/bareos.gpg] #' /etc/apt/sources.list.d/bareos.list
+++
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004665)
bruno-at-bareos   
2022-07-14 10:09   
Would this be valid for any version of Debian/Ubuntu in use (Debian 9 and Ubuntu 18.04)?
(0004666)
bruno-at-bareos   
2022-07-14 10:44   
We appreciate any effort made to make our software better.
This would be a nice improvement.
Testing on old systems seems OK; we are checking how much effort it takes to change the code and handle the update/upgrade process on user installations, plus documentation changes.
(0004667)
bruno-at-bareos   
2022-07-14 11:01   
Adding a public reference on why apt-key should be changed and how:
https://askubuntu.com/questions/1286545/what-commands-exactly-should-replace-the-deprecated-apt-key/1307181#1307181

Maybe changing to Deb822 .sources files is the way to go.
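
A Deb822-style /etc/apt/sources.list.d/bareos.sources could look roughly like this (a sketch; the URI stands in for the script's $URL, here Bareos 21 on Debian 11, and since Bareos publishes a flat repository the suite is "/"):

Types: deb
URIs: https://download.bareos.org/bareos/release/21/Debian_11
Suites: /
Signed-By: /usr/share/keyrings/bareos.gpg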
(0004669)
amodia   
2022-07-14 13:06   
I ran into this issue on the update from Bareos 20 to 21. So I can't comment on earlier versions.
My "solution" was the first that worked. Any solution that is better, more compatible and/or requires less effort is appreciated.
(0004695)
bruno-at-bareos   
2022-07-28 09:26   
Changes applied to future documentation
commit c08b56c1a
PR1203
(0004696)
bruno-at-bareos   
2022-07-28 09:27   
Follow status in PR1203 https://github.com/bareos/bareos/pull/1203

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1253 [bareos-core] webui major always 2020-06-17 09:58 2022-07-20 14:09
Reporter: tagort214 Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: acknowledged Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Can't restore files from Webui
Description: When I try to restore files from Webui it returns this error:

There was an error while loading data for this tree.

Error: ajax

Plugin: core

Reason: Could not load node

Data:

{"id":"#","xhr":{"readyState":4,"responseText":"\n\n\n \n \n \n \n\n \n \n\n\n \n \n\n\n\n\n \n\n \n\n
\n \n\n
Произошла ошибка
\n
An error occurred during execution; please try again later.
\n\n\n\n
Дополнительная информация:
\n
Zend\\Json\\Exception\\RuntimeException
\n

\n
Файл:
\n
    \n

    /usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68

    \n \n
Сообщение:
\n
    \n

    Decoding failed: Syntax error

    \n \n
Трассировки стека:
\n
    \n

    #0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\\Json\\Json::decode('', 1)\n#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\\Model\\RestoreModel->getDirectories(Object(Bareos\\BSock\\BareosBSock), '207685', '#')\n#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\\Controller\\RestoreController->getDirectories()\n#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\\Controller\\RestoreController->buildSubtree()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\\Controller\\RestoreController->filebrowserAction()\n#5 [internal function]: Zend\\Mvc\\Controller\\AbstractActionController->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\\Mvc\\Controller\\AbstractController->dispatch(Object(Zend\\Http\\PhpEnvironment\\Request), Object(Zend\\Http\\PhpEnvironment\\Response))\n#10 [internal function]: Zend\\Mvc\\DispatchListener->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#14 /usr/share/bareos-webui/public/index.php(24): Zend\\Mvc\\Application->run()\n#15 {main}

    \n \n

\n\n\n
\n\n \n \n\n\n","status":500,"statusText":"Internal Server Error"}}

Also, in the apache2 error logs I see these strings:
[:error] [pid 13597] [client 172.32.1.51:56276] PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91, referer: http://bareos.ivt.lan/bareos-webui/client/details/clientname
 [:error] [pid 14367] [client 172.32.1.51:56278] PHP Warning: unpack(): Type N: not enough input, need 4, have 0 in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 172, referer: http://bareos.ivt.lan/bareos-webui/restore/?mergefilesets=1&mergejobs=1&client=clientname&jobid=207728


Tags:
Steps To Reproduce: 1) Login to webui
2) Select job and click show files (or select client from restore tab)
Additional Information:
System Description
Attached Files: Снимок экрана_2020-06-17_10-57-42.png (37,854 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=443&type=bug
png

Снимок экрана_2020-06-17_10-57-24.png (47,279 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=444&type=bug
png
Notes
(0004242)
frank   
2021-08-31 16:14   
tagort214:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue, 142 in this example. Replace the jobid from the example below with your specific jobid(s).

*.bvfs_get_jobids jobid=142 all
1,55,142
*.bvfs_lsdirs path= jobid=1,55,142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes problems by using pathid; pathids will differ on your system.

*.bvfs_lsdirs pathid=37 jobid=1,55,142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=1,55,142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=1,55,142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we cannot list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=1,55,142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=1,55,142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
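
For example (assuming jq is installed):

cat out.txt | jq .

jq pretty-prints valid JSON and exits non-zero with the position of the first syntax error otherwise.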
(0004681)
khvalera   
2022-07-20 14:09   
Try increasing this in configuration.ini:
[restore]
; Restore filetree refresh timeout after n milliseconds
; Default: 120000 milliseconds
filetree_refresh_timeout=220000

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1472 [bareos-core] General tweak have not tried 2022-07-13 11:53 2022-07-19 14:55
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: No explanation on "delcandidates"
Description: After an upgrade to Bareos v21 the following message appeared in the status of bareos-director:

HINWEIS: Tabelle »delcandidates« existiert nicht, wird übersprungen
(engl.: NOTE: Table »delcandidates« does not exist, will be skipped)

Searching the Bareos website for "delcandidates" does not give any matching page!

It would be nice to give a hint to update the tables in the database by running:

su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: log_dbcheck_2022-07-18.log (35,702 bytes) 2022-07-18 15:42
https://bugs.bareos.org/file_download.php?file_id=528&type=bug
bareos_dbcheck_debian11.log.xz (30,660 bytes) 2022-07-19 14:55
https://bugs.bareos.org/file_download.php?file_id=529&type=bug
Notes
(0004664)
bruno-at-bareos   
2022-07-14 10:08   
From which version did you do the update?
It is clearly stated in the documentation to run update_table and grant on any update (especially major versions).
(0004668)
amodia   
2022-07-14 12:51   
The update was from 20 to 21.
I missed the "run update_table" statement in the documentation.
The documentation regarding "run grant" is misleading:

"Warning:
When using the PostgreSQL backend and updating to Bareos < 14.2.3, it is necessary to manually grant database permissions (grant_bareos_privileges), normally by

su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
"
Because I wondered who might want to upgrade "to Bareos < 14.2.3" when version 21 is available, I thought what was meant was "updating from Bareos < 14.2.3 to a later version". So I skipped the "run grant" for my update, and it worked.
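
For reference, the manual sequence discussed in this thread boils down to (assuming the PostgreSQL backend and the default script location):

su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges

followed by a configuration check with bareos-dir -t before restarting the director.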
(0004670)
bruno-at-bareos   
2022-07-14 14:06   
I don't know which documentation part you are talking about.

The update bareos chapter as the following for database update
https://docs.bareos.org/bareos-21/IntroductionAndTutorial/UpdatingBareos.html#other-platforms which talk about update & grant.

Maybe you can share a link here ?
(0004671)
amodia   
2022-07-14 14:15   
https://docs.bareos.org/TasksAndConcepts/CatalogMaintenance.html

Firstwarning just before the "Manual Configuration"
(0004672)
bruno-at-bareos   
2022-07-14 14:25   
Ha ok I understand, that's related to dbconfig.
Are you using dbconfig for your installation (for Bareos 20 and 21)?
(0004673)
amodia   
2022-07-14 16:34   
Well ...
During the update from Bareos 16 to 20 I selected "Yes" for the dbconfig-common option. Unfortunately the database got lost.
This time (Bareos 20 to 21) I selected "No", hoping that a manual update would be more successful. So I have a backup of the database just before the update, but unfortunately I had no success with the manual update. So the "old" data is lost, and the 'bareos' database (bareos-db) gets filled with "new" data since the update.

In the meantime I am able to get some commands working from the command line, at least for user 'bareos':
- bareos-dbcheck *)
- bareos-dir -t -f -d 500

*): selecting test no. 12 "Check for orphaned storage records" crashes bareos-dbcheck with a "memory access error".

The next experiment is to
- create a new database (bareos2-db) from the backup before the update
- run update_table & grant & bareos-dbcheck on this db
- change the MyCatalog.conf accordingly (dbname = bareos2)
- test, if everything is working again

The hope is to "merge" this bareos2-db (data before the update) with the bareos-db (v. above), which collects the data since the update.
Is this possible?
(0004674)
bruno-at-bareos   
2022-07-14 17:34   
Not sure what happened for you; the upgrade process is quite well tested here, manual and dbconfig. (Maybe the switch from MySQL to PostgreSQL?)

Did you run bareos-dbcheck or bareos in a container? (Beware: by default they have a low memory limit, which is often not enough.)

As you have the dump, I would have simply restored it, run the manual update & grant, and logically bareos-dir -t should work with all the previous data preserved.
(To restore, of course, you first create the database.)

Then run dbcheck against it (advice: next time run dbcheck before the dump, so you save time and space by not dumping orphan records).
If it fails again, we would be interested in having a copy of the storage definition and the output of
bareos-dbcheck -v -dt -d1000
(0004675)
amodia   
2022-07-15 09:22   
Here Bareos runs on a virtual machine (KVM, no container) with limited resources (total memory: 473MiB, swap: 703MiB, storage: 374GiB). The files are stored on an external NAS (6TB) mounted with autofs. This seemed to be enough for "normal" operations.

Appendix "Hardware sizing" has no recommendation on memory. What do you recommend?
(0004676)
bruno-at-bareos   
2022-07-18 10:14   
The Hardware sizing chapter has quite a number of recommendations for the database (which is what the director uses); it highly depends, of course, on the number of files backed up. PostgreSQL should have 1/4 of the RAM, and/or at least enough to hold the file index. Then, if the FD also runs here with Accurate, it needs enough memory to keep track of the Accurate files saved.
(0004677)
amodia   
2022-07-18 12:33   
Update:
bareos-dbcheck (Interactive mode) runs only with the following command:
su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ...'

Every test runs smoothly EXCEPT test no.12: "Check for orphaned storage records".
Test no. 12 fails regardless of the memory size (orig: 473MiB, increased: 1.9GiB).
Failure ("Memory Access Error") occurs immediately. (No filling of memory and then failure.)
The database to check is only a few days old, so there seems to be an issue other than the DB size.

All tests but no. 12 run even with low memory setup.
Here the Director and both Daemons (Storage and File) are on the same virtual machine.
(0004678)
bruno-at-bareos   
2022-07-18 13:36   
Without the requested log, we won't be able to check what happened.
(0004679)
amodia   
2022-07-18 15:42   
Please find the log file attached of

su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ... -v -dt -d1000' 2>&1 |tee log_dbcheck_2022-07-18.log
(0004680)
bruno-at-bareos   
2022-07-19 14:55   
Unfortunately the problem you are seeing on your installation can't be reproduced on several installations here. Tested: RHEL 8, Xubuntu 22.04, Debian 11.

See full log attached.
Maybe you have some extra tools restricting the normal workflow too much (AppArmor, SELinux, whatever).

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1464 [bareos-core] director major always 2022-05-23 10:52 2022-07-05 14:53
Reporter: meilihao Platform: linux  
Assigned To: bruno-at-bareos OS: oracle linux  
Priority: urgent OS Version: 7.9  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: director can't connect filedaemon
Description: director can't connect filedaemon, got ssl error.
Tags:
Steps To Reproduce: env:
- filedaemon: v21.0.0 on win10, x64
- director: v21.1.2, x64

bconsole run: `status client=xxx`, get error:
```bash
# tail -f /var/log/bareos.log
Network error during CRAM MD5 with 192.168.0.130
Unable to authenticate with File daemon at "192.168.0.130:9102"
```

filedaemon error: `TLS negotiation failed` and `error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac`
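
One way to narrow such a failure down (a sketch, not part of the original report) is to raise the debug level on both sides from bconsole and watch the handshake in the trace files:

*setdebug client=xxx level=200 trace=1
*setdebug director level=200 trace=1

The resulting *.trace files in the daemons' working directories usually show which side aborts the TLS negotiation.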
Additional Information:
Attached Files:
Notes
(0004630)
meilihao   
2022-05-31 04:12   
Has anyone encountered this?
(0004656)
bruno-at-bareos   
2022-07-05 14:53   
After restarting both director and client, do you still get any trouble?
I'm not able to reproduce here with Win10 64-bit and CentOS 8 Bareos binaries from download.bareos.org.

Where does your director come from then?
- director: v21.1.2, x64
(0004657)
bruno-at-bareos   
2022-07-05 14:53   
Can't be reproduced

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
874 [bareos-core] director minor always 2017-11-07 12:12 2022-07-04 17:12
Reporter: chaos_prevails Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04 amd64  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: VirtualFull fails when read and write devices are on different storage daemons (different machines)
Description: The VirtualFull backup fails with

...
2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269, Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
 2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds 254,251,252,255,256,257
 2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
 2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
 2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4 (01Jul16):
...

When changing the storage daemon for the VirtualFull Backup to the same machine as always-incremental and consolidate backups, the Virtualfull Backup works:
...
 2017-11-07 10:43:20 pavlov-dir JobId 320: Start Virtual Backup JobId 320, Job=pavlov_sys_ai_vf.2017-11-07_10.43.18_05
 2017-11-07 10:43:20 pavlov-dir JobId 320: Consolidating JobIds 317,314,315
 2017-11-07 10:43:20 pavlov-dir JobId 320: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file-consolidate" to read.
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file" to write.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to read from volume "ai_consolidate-0023" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:21 pavlov-sd JobId 320: Volume "Full-0032" previously written, moving to end of data.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to append to end of Volume "Full-0032" size=97835302
 2017-11-07 10:43:21 pavlov-sd JobId 320: Forward spacing Volume "ai_consolidate-0023" to file:block 0:7046364.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_consolidate-0023"
 2017-11-07 10:43:22 pavlov-sd JobId 320: Ready to read from volume "ai_inc-0033" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:22 pavlov-sd JobId 320: Forward spacing Volume "ai_inc-0033" to file:block 0:1148147.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_inc-0033"
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of all volumes.
 2017-11-07 10:43:22 pavlov-sd JobId 320: Elapsed time=00:00:01, Transfer rate=7.029 M Bytes/second
 2017-11-07 10:43:22 pavlov-dir JobId 320: Joblevel was set to joblevel of first consolidated job: Full
 2017-11-07 10:43:23 pavlov-dir JobId 320: Bareos pavlov-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 320
  Job: pavlov_sys_ai_vf.2017-11-07_10.43.18_05
  Backup Level: Virtual Full
  Client: "pavlov-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
  FileSet: "linux_system" 2017-10-19 16:11:21
  Pool: "Full" (From Job's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "pavlov-file" (From Storage from Job's NextPool resource)
  Scheduled time: 07-Nov-2017 10:43:18
  Start time: 07-Nov-2017 10:29:38
  End time: 07-Nov-2017 10:29:39
  Elapsed time: 1 sec
  Priority: 13
  SD Files Written: 148
  SD Bytes Written: 7,029,430 (7.029 MB)
  Rate: 7029.4 KB/s
  Volume name(s): Full-0032
  Volume Session Id: 1
  Volume Session Time: 1510047788
  Last Volume Bytes: 104,883,188 (104.8 MB)
  SD Errors: 0
  SD termination status: OK
  Accurate: yes
  Termination: Backup OK

 2017-11-07 10:43:23 pavlov-dir JobId 320: console command: run AfterJob "update jobid=320 jobtype=A"
Tags:
Steps To Reproduce: 1. create always incremental, consolidate jobs, pools, and make sure they are working. Use storage daemon A (pavlov in my example)
2. create VirtualFull Level backup with Storage attribute pointing to a device on a different storage daemon B (delaunay in my example)
3. start always incremental and consolidate job and verify that they are working as expected
4. start VirtualFull Level backup
5. fails with error message:
...
delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
...
Additional Information: A) configuration with working always incremental and consolidate jobs, but failing virtualFull level backup:

A) director pavlov (to disk storage daemon + director)
1) template for always incremental jobs
JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = pavlov-fd
  Storage = pavlov-file
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #always incremental config
  Pool = disk_ai
  Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 20 seconds # 7 days
  Always Incremental Keep Number = 2 # 7
  Always Incremental Max Full Age = 1 minutes # 14 days
}


2) template for virtual full jobs, should run on read storage pavlov and write storage delaunay:
JobDefs {
  Name = "default_ai_vf"
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Priority = 13
  Accurate = yes
 
  Storage = delaunay_HP_G2_Autochanger
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
  

  # run after Consolidate
  Run Script {
   console = "update jobid=%i jobtype=A"
   Runs When = After
   Runs On Client = No
   Runs On Failure = No
  }

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
}

3) consolidate job
Job {
  Name = ai_consolidate
  Type = Consolidate
  Accurate = yes
  Max Full Consolidations = 1
  Client = pavlov-fd #value which should be ignored by Consolidate job
  FileSet = "none" #value which should be ignored by Consolidate job
  Pool = disk_ai_consolidate #value which should be ignored by Consolidate job
  Incremental Backup Pool = disk_ai_consolidate
  Full Backup Pool = disk_ai_consolidate
# JobDefs = DefaultJob
# Level = Incremental
  Schedule = "ai_consolidate"
  # Storage = pavlov-file-consolidate #commented out for VirtualFull-Tape testing
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

4) always incremental job for client pavlov (works)
Job {
  Name = "pavlov_sys_ai"
  JobDefs = "default_ai"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
}


5) virtualfull job for pavlov (doesn't work)
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Storage = delaunay_HP_G2_Autochanger
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

6) pool always incremental
Pool {
  Name = disk_ai
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_inc-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file
  Next Pool = disk_ai_consolidate
}

7) pool always incremental consolidate
Pool {
  Name = disk_ai_consolidate
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_consolidate-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file-consolidate
  Next Pool = tape_automated
}

8) pool tape_automated (for virtualfull jobs to tape)
Pool {
  Name = tape_automated
  Pool Type = Backup
  Storage = delaunay_HP_G2_Autochanger
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
}

9) 1st storage device for disk backup (writes always incremental jobs + other normal jobs)
Storage {
  Name = pavlov-file
  Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = X
  TLS Require = X
  TLS Verify Peer = X
  TLS Allowed CN = pavlov.X
}

10) 2nd storage device for disk backup (consolidates AI jobs)
Storage {
  Name = pavlov-file-consolidate
  Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file-consolidate
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}

11) 3rd storage device for tape backup
Storage {
  Name = delaunay_HP_G2_Autochanger
  Address = "delaunay.XX"
  Password = "X"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
  Autochanger = yes
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = delaunay.X
}


B) storage daemon pavlov (to disk)
1) to disk storage daemon

Storage {
  Name = pavlov-sd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
  TLS Allowed CN = delaunay.X
}

2) to disk device (AI + others)
Device {
  Name = pavlov-file
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

3) consolidate to disk
Device {
  Name = pavlov-file-consolidate
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

C) to tape storage daemon (different server)
1) allowed director
Director {
  Name = pavlov-dir
  Password = "[md5]X"
  Description = "Director, who is permitted to contact this storage daemon."
  TLS Certificate = X
  TLS Key = /X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}


2) storage daemon config
Storage {
  Name = delaunay-sd
  Maximum Concurrent Jobs = 20
  Maximum Network Buffer Size = 32768
# Maximum Network Buffer Size = 65536

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS DH File = X
  TLS CA Certificate File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
}


3) autochanger config
Autochanger {
  Name = "HP_G2_Autochanger"
  Device = Ultrium920
  Changer Device = /dev/sg5
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

4) device config
Device {
  Name = "Ultrium920"
  Media Type = LTO
  Archive Device = /dev/st2
  Autochanger = yes
  LabelMedia = no
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Maximum Spool Size = 50G
  Spool Directory = /var/lib/bareos/spool
  Maximum Block Size = 2097152
# Maximum Block Size = 4194304
  Maximum Network Buffer Size = 32768
  Maximum File Size = 50G
}


D) changes to make VirtualFull level backups work (using a device on the same storage daemon as the always incremental and consolidate jobs), in both the Job and Pool definitions.

1) change virtualfull job's storage
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

2) change virtualfull pool's storage
Pool {
  Name = tape_automated
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
  Label Format = "Full-" # <-- !! add this because now we write to disk, not to tape
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
}
Attached Files:
Notes
(0002815)
chaos_prevails   
2017-11-15 11:08   
Thanks to a comment on the bareos-users google-group I found out that this is a feature not implemented, not a bug.

I think it would be important to mention this in the documentation. VirtualFull would be a good solution for offsite backup (e.g. in another building or another server room), which involves another storage daemon.

I looked at different ways to export the tape drive on the offsite-backup machine to the local machine (e.g. iSCSI, ...). However, this adds extra complexity and might cause shoe-shining (the connection to the offsite-backup machine has to be really fast, because spooling would happen on the local machine). In my case (~10 MB/s) the tape and drive would definitely suffer from shoe-shining. So currently, besides always incremental, I do another full backup to the offsite-backup machine (see the sketch below).
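For comparison, a minimal sketch of that fallback: an independent Full job written straight to the offsite tape SD. The job name is illustrative; the client, fileset, pool, and storage names are taken from the configuration above, so treat this as an assumption about how the pieces fit together, not as the reporter's actual config.

Job {
  Name = "pavlov_sys_full_offsite"       # illustrative name
  Type = Backup
  Level = Full
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Pool = tape_automated                  # tape pool defined above
  Storage = delaunay_HP_G2_Autochanger   # the tape SD on the offsite machine
}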
(0004651)
sven.compositiv   
2022-07-04 16:48   
> Thanks to a comment on the bareos-users google-group I found out that this is a feature not implemented, not a bug.

If it is an unimplemented feature, I'd expect that no backups are chosen from other storages. We have the problem that we copy jobs from AI-Consolidated to tape. After doing that, all VirtualFull jobs fail once backups from our tape storage have been selected.
(0004652)
bruno-at-bareos   
2022-07-04 17:02   
Could you explain a bit more (configuration example maybe?)

Having an Always Incremental rotation using one storage like File and then creating a VirtualFull archive on another storage resource (same SD daemon) works very well, as documented.
Maybe you forgot to update your VirtualFull to be an archive job. Then yes, the next AI will use the most recent VF.
But this is also documented.
(0004655)
bruno-at-bareos   
2022-07-04 17:12   
Not implemented.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1459 [bareos-core] installer / packages major always 2022-05-09 16:37 2022-07-04 17:11
Reporter: khvalera Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fails to build ceph plugin on Archlinux
Description: Ceph plugin cannot be built on Archlinux with ceph 15.2.14

Build report:

[ 73%] Building CXX object core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o
In file included from /data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.cc:33:
/data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.h:31:10: fatal error: cephfs/libcephfs.h: No such file or directory
    31 | #include <cephfs/libcephfs.h>
       | ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/build.make:76: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3157: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: 009-fix-timer_thread.patch (551 bytes) 2022-05-27 23:58
https://bugs.bareos.org/file_download.php?file_id=518&type=bug
Notes
(0004605)
bruno-at-bareos   
2022-05-10 13:03   
Maybe you can describe a bit more your setup, from where come cephfs
maybe the result of find libcephfs.h can be useful
(0004606)
khvalera   
2022-05-10 15:12   
You can fix this error by installing ceph-libs. But the assembly does not happen:

[ 97%] Building CXX object core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc: In function "bRC filedaemon::get_next_file_to_backup(PluginContext*)":
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:421:33: error: cannot convert "stat*" to "ceph_statx*"
  421 | &p_ctx->statp, &stmask);
      | ^~~~~~~~~~~~~
      | |
      | stat*
In file included from /data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:35:
/usr/include/cephfs/libcephfs.h:564:43: note: initializing argument 4 of "int ceph_readdirplus_r(ceph_mount_info*, ceph_dir_result*, dirent*, ceph_statx*, unsigned int, unsigned int, Inode**)"
  564 | struct ceph_statx *stx, unsigned want, unsigned flags, struct Inode **out);
      | ~~~~~~~~~~~~~~~~~~^~~
make[2]: *** [core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/build.make:76: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3908: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
(0004610)
bruno-at-bareos   
2022-05-10 17:31   
When we ask for a bit more information about your setup, please make the effort to give useful information like the compiler used, the cmake output, etc.
Otherwise we can close this here, noting that it builds fine with newer cephfs versions like 15.2.15 or 16.2.7 ...
(0004628)
khvalera   
2022-05-27 23:58   
After updating the system and applying the attached patch, bareos builds again.
(0004653)
bruno-at-bareos   
2022-07-04 17:10   
I will mark this as closed, fixed by

commit ce3339d28
Author: Andreas Rogge <andreas.rogge@bareos.com>
Date: Wed Feb 2 19:41:25 2022 +0100

    lib: fix use-after-free in timer_thread

diff --git a/core/src/lib/timer_thread.cc b/core/src/lib/timer_thread.cc
index 7ec802198..1624ddd4f 100644
--- a/core/src/lib/timer_thread.cc
+++ b/core/src/lib/timer_thread.cc
@@ -2,7 +2,7 @@
    BAREOS® - Backup Archiving REcovery Open Sourced

    Copyright (C) 2002-2011 Free Software Foundation Europe e.V.
- Copyright (C) 2019-2019 Bareos GmbH & Co. KG
+ Copyright (C) 2019-2022 Bareos GmbH & Co. KG

    This program is Free Software; you can redistribute it and/or
    modify it under the terms of version three of the GNU Affero General Public
@@ -204,6 +204,7 @@ static bool RunOneItem(TimerThread::Timer* p,
       = std::chrono::steady_clock::now();

   bool remove_from_list = false;
+ next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   if (p->is_active && last_timer_run_timepoint > p->scheduled_run_timepoint) {
     LogMessage(p);
     p->user_callback(p);
@@ -215,7 +216,6 @@ static bool RunOneItem(TimerThread::Timer* p,
       p->scheduled_run_timepoint = last_timer_run_timepoint + p->interval;
     }
   }
- next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   return remove_from_list;
 }
(0004654)
bruno-at-bareos   
2022-07-04 17:11   
Fixed with https://github.com/bareos/bareos/pull/1060

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1470 [bareos-core] webui minor always 2022-06-28 09:16 2022-06-30 13:41
Reporter: ffrants Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: low OS Version: 20.04  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Update information could not be retrieved
Description: Update information could not be retrieved and also unknown update status on clients
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: Снимок экрана 2022-06-28 в 10.11.01.png (22,345 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=524&type=bug
png

Снимок экрана 2022-06-28 в 10.15.12.png (28,921 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=525&type=bug
png

Снимок экрана 2022-06-30 в 14.04.09.png (14,330 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=526&type=bug
png

Снимок экрана 2022-06-30 в 14.06.27.png (21,387 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=527&type=bug
png
Notes
(0004648)
bruno-at-bareos   
2022-06-29 17:03   
Works here (maybe a transient certificate error); could you recheck, please?
(0004649)
ffrants   
2022-06-30 13:07   
Here's what I found out:
My IP is blocked by bareos.com (I can't open www.bareos.com). If I open the webui via VPN, it doesn't show the red exclamation mark near the version.
But the problem on the "Clients" tab persists, though not for all versions (see attachment).
(0004650)
bruno-at-bareos   
2022-06-30 13:41   
Only the Russian authorities can create a fix for this; only then will the blacklisting be dropped.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1460 [bareos-core] storage daemon block always 2022-05-10 17:46 2022-05-11 13:08
Reporter: alistair Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 21.10  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to Install bareos-storage-droplet
Description: Apt returns the following

The following packages have unmet dependencies:
bareos-storage-droplet : Depends: libjson-c4 (>= 0.13.1) but it is not installable

libjson-c4 seems to have been superseded by libjson-c5 in newer versions of Ubuntu.
Tags: droplet, s3;droplet;aws;storage, storage
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004615)
bruno-at-bareos   
2022-05-11 13:07   
Don't know what you are expecting here; Ubuntu 21.10 is not a supported build distribution.
As such we don't know which package you are trying to install.

The subscription channel will soon offer Ubuntu 22.04; you can contact sales if you want more information.
(0004616)
bruno-at-bareos   
2022-05-11 13:08   
Not a supported distribution.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1458 [bareos-core] webui major always 2022-05-09 13:32 2022-05-10 13:01
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to view the pool details.
Description: With the last update the pool page is completely broken.
When the pool name contains a space, a 404 error is returned.
On a pool without a space in the name, the error shown in the screenshot happens.
Before 21.1.3 only pools with a space in the name were broken.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Screenshot 2022-05-09 at 13-29-52 Bareos.png (152,604 bytes) 2022-05-09 13:32
https://bugs.bareos.org/file_download.php?file_id=512&type=bug
png
Notes
(0004600)
mdc   
2022-05-09 13:40   
It looks like a caching problem. Open the webui in a private session, then it works.
A relogin or a new tab does not help.
(0004601)
bruno-at-bareos   
2022-05-09 14:30   
Did you restart the webserver (and/or php-fpm, if used)? Browsers recently have a tendency to not clean up their disk cache correctly; it may be necessary to manually clear the cached content for the webui site.
(0004603)
mdc   
2022-05-10 11:43   
Yes, that was my first idea: restarting the web server and the backend PHP service.
Now, after approximately 48 hours, the correct page loads.
(0004604)
bruno-at-bareos   
2022-05-10 13:01   
The personal browser cache needs to be cleaned up.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1454 [bareos-core] director feature always 2022-05-03 07:02 2022-05-05 14:22
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Add a config option to let scripts run only on a specific job level
Description: Until now, a script can only run before or after a job. But for some jobs, like backing up a PostgreSQL database using your add-on, a third situation occurs:
a script is needed that runs only before a full backup job and removes the old WAL archive files.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004586)
bruno-at-bareos   
2022-05-03 10:42   
Maybe you want to use the character substitution flags and use that to determine which kind of job you are running:
%l = Job Level

https://docs.bareos.org/bareos-21/Configuration/Director.html?highlight=run%20script#config-Dir_Job_RunScript
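A minimal sketch of that substitution-based workaround (the wrapper path and the cleanup script are illustrative, not from this ticket): the job passes %l to a wrapper on the client, and the wrapper only acts on Full backups:

Job {
  # ... rest of the job definition ...
  RunScript {
    RunsWhen = Before
    RunsOnClient = yes
    FailJobOnError = yes
    Command = "/usr/local/bin/pre-backup.sh %l"
  }
}

#!/bin/sh
# /usr/local/bin/pre-backup.sh
# $1 is the job level substituted via %l (e.g. Full, Incremental)
if [ "$1" = "Full" ]; then
    # hypothetical cleanup script that removes old WAL archive files
    /usr/local/bin/remove-old-wal.sh
fi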
(0004588)
mdc   
2022-05-04 07:12   
This is what I have done as a workaround, but it needs a wrapper script to call the "real" script/application.
(0004596)
bruno-at-bareos   
2022-05-05 14:00   
Maybe you are inclined to propose a PR for that option?
(0004598)
mdc   
2022-05-05 14:22   
I think it will need deeper internal changes, and I don't have deep enough knowledge of the code.
So my idea is, because a workaround exists for me, that the ticket can be put with a low priority on the wish list.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1453 [bareos-core] director major always 2022-05-02 15:56 2022-05-02 15:56
Reporter: inazo Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 20.0.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Rate issue when prune and truncate volume
Description: Hi,

Every day of the month I run a job with a pool that is limited to 31 volumes; each volume is used once and the maximum retention is 30 days. Every day since the pool reached its 31 volumes, the rate has decreased from 14150 KB/s to 141 KB/s... So my backup, which initially took 5 minutes to run, now takes 30 minutes... I think it happens when the job truncates / recycles / prunes the volume.

On the first day of the month I run another pool in full mode, and it is not affected by the rate decrease because, for the moment, that job does not have to recycle/prune/truncate a volume.

One more piece of information: I use S3 storage.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos-dir.conf (893 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=503&type=bug
client.conf (79 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=504&type=bug
fileset.conf (333 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=505&type=bug
job.conf (435 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=506&type=bug
jobdefs.conf (655 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=507&type=bug
pool.conf (2,697 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=508&type=bug
schedule.conf (167 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=509&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1446 [bareos-core] bconsole crash always 2022-04-01 13:24 2022-04-01 13:24
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bconsole will crash when the unneeded password is missing
Description: When using pam authentication, only the console password is needed for the connection.
But when the configuration is written that way, bconsole crashes with:
 bconsole -c /etc/bareos/bconsole-tui.conf
bconsole: ABORTING due to ERROR in console/console_conf.cc:181
Password item is required in Director resource, but not found.
BAREOS interrupted by signal 6: IOT trap
bconsole, bconsole got signal 6 - IOT trap. Attempting traceback.

So an empty and unneeded Password entry must be added as a workaround.
Tags:
Steps To Reproduce:
Additional Information: Sample crash config:
Director {
  Name = bareos-dir
  Address = localhost
  }
Console {
  Name = "PamConsole"
  @/etc/bareos/pam.pw
}
Sample working config:
Director {
  Name = bareos-dir
  Address = localhost
  Password = ""
}
Console {
  Name = "PamConsole"
  @/etc/bareos/pam.pw
}

System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1421 [bareos-core] storage daemon minor always 2022-01-17 17:06 2022-03-30 12:14
Reporter: DemoFreak Platform: x86_64  
Assigned To: bruno-at-bareos OS: Opensuse  
Priority: normal OS Version: Leap 15.3  
Status: new Product Version: 21.0.0  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: MTEOM on LTO-3 fails with Bareos 21, but works on older Bacula
Description: After migrating a file server, the backup was switched from Bacula 5.2 to Bareos 21.0. Transferring the configuration worked flawlessly; everything works as desired except for the tape drive.

Appending another backup on an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found
the tape is marked as "Error" in the catalog.

The btape test consistently shows a problem with EOD (MTEOM). After extending the storage configuration with
Hardware End of Medium = no
Fast Forward Space File = no
appending works, but is extremely slow, as also mentioned in the documentation (see the device sketch at the end of this description).

Hardware:
- Fibre Channel: QLogic Corp. ISP2312-based 2Gb Fibre Channel to PCI-X HBA
- Drive 'HP Ultrium 3-SCSI Rev. L63S'

The drive and HBA were transferred from the old system to the new system without any changes.

How can I further isolate the problem?
Does Bareos work differently than Bacula 5.2 regarding EOD?
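For context, a minimal sketch of the Device resource with those two directives, reconstructed from details scattered through this report (device name and archive device from the job logs below, Maximum File Size from note 0004478; the reporter's actual config is not shown in this ticket):

Device {
  Name = "TapeStorageLTO3"
  Media Type = LTO
  Archive Device = /dev/tape/by-id/scsi-350060b00002e85de-nst
  Maximum File Size = 5G
  Hardware End of Medium = no    # do not trust the drive's MTEOM positioning
  Fast Forward Space File = no   # position by reading instead of MTFSF; slow but robust
}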
Tags: storage MTEOM
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004478)
DemoFreak   
2022-01-18 18:38   
(Last edited: 2022-01-18 23:15)
It seems that even the slow (software) method sometimes fails. Here is the corresponding excerpt from the log.

First job on the tape:
17-Jan 11:00 bareos-sd JobId 81: Wrote label to prelabeled Volume "Band4" on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst)
...
17-Jan 13:51 bareos-sd JobId 81: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 344,018,607,484 (344.0 GB)

Second job:
17-Jan 13:54 bareos-sd JobId 83: Volume "Band4" previously written, moving to end of data.
17-Jan 14:39 bareos-sd JobId 83: Ready to append to end of Volume "Band4" at file=65.
...
17-Jan 14:39 bareos-sd JobId 83: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 140,473,627 (140.4 MB)

Third job:
17-Jan 14:42 bareos-sd JobId 85: Volume "Band4" previously written, moving to end of data.
17-Jan 15:27 bareos-sd JobId 85: Ready to append to end of Volume "Band4" at file=66.
...
17-Jan 15:32 bareos-sd JobId 84: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 9,954,169,360 (9.954 GB)

Fourth job:
17-Jan 15:33 bareos-sd JobId 87: Volume "Band4" previously written, moving to end of data.
17-Jan 16:20 bareos-sd JobId 87: Ready to append to end of Volume "Band4" at file=68.
...
17-Jan 16:20 bareos-sd JobId 87: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 141,727,215 (141.7 MB)

Everything works fine up to this point.
The file size on the tape is 5 GB (Maximum File Size = 5G), so the next job should be appended at file number 69.

Fifth job:
18-Jan 11:00 bareos-sd JobId 92: Volume "Band4" previously written, moving to end of data.
18-Jan 12:03 bareos-sd JobId 92: Error: Unable to position to end of data on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): ERR=backends/generic_tape_device.cc:496 read error on "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst). ERR=Eingabe-/Ausgabefehler.
18-Jan 12:03 bareos-sd JobId 92: Marking Volume "Band4" in Error in Catalog.

This fails with an input/output error. Possibly no EOD marker was written during the fourth job.

Neither "mtst -f /dev/nst0 eod" nor "echo eod | btape" find EOD, they abort with error and the tape is read to the physical end.
Complete reading of the tape with "echo scanblocks | btape" works absolutely correct up to file number 68, different groups of blocks and one EOF marker each are read. In file number 69 no EOF is read, instead the drive keeps reading until the end of the medium.

...
1 block of 64508 bytes in file 66
2 blocks of 64512 bytes in file 66
1 block of 64508 bytes in file 66
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
(At this point, nothing more happens until the end of the tape. Please note that in the log of btape for whatever reason apparently the first line of a new file and the EOF marker of the previous file are swapped, so the last EOF marker here belongs to file number 68).

Any ideas?

(0004479)
DemoFreak   
2022-01-18 19:25   
(Last edited: 2022-01-18 23:14)
As an attempt to narrow down the problem, I wrote an EOF marker to file number 69 with mtst:

miraculix:~ # mtst -f /dev/nst0 rewind
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (41010000):
 BOT ONLINE IM_REP_EN
miraculix:~ # time mtst -f /dev/nst0 fsf 69

real 0m29.927s
user 0m0.002s
sys 0m0.001s
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=69, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 weof
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=70, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 rewind

Note the extreme difference in required time for spacing forward to file number 69:

miraculix:~ # time echo -e "status\nfsf 69\nstatus\n" | btape TapeStorageLTO3
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:306-0 Using device: "TapeStorageLTO3" for writing.
btape: stored/btape.cc:490-0 open device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): OK
* Bareos status: file=0 block=0
 Device status: BOT ONLINE IM_REP_EN file=0 block=0
Device status: TAPE BOT ONLINE IMMREPORT. ERR=
*btape: stored/btape.cc:1774-0 Forward spaced 69 files.
* EOF Bareos status: file=69 block=0
 Device status: EOF ONLINE IM_REP_EN file=69 block=0
Device status: TAPE EOF ONLINE IMMREPORT. ERR=
**
real 48m8.811s
user 0m0.006s
sys 0m0.014s
miraculix:~ #

After writing the EOF marker, btape "scanblocks" works as expected:
...
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
Total files=69, blocks=5495758, bytes = 354,542,114,821

btape "eod" works as well:

*eod
btape: stored/btape.cc:619-0 Moved to end of medium.



All in all, it seems to me that under circumstances that are not yet clear to me, sometimes no EOF is written on the tape.

Where am I wrong here?

(0004480)
DemoFreak   
2022-01-19 01:35   
Starting a migration job on this "repaired" tape triggers two migration worker jobs; the first of them works well, the second fails, and I don't understand why.

First job:
18-Jan 23:29 bareos-sd JobId 98: Volume "Band4" previously written, moving to end of data.
19-Jan 00:17 bareos-sd JobId 98: Ready to append to end of Volume "Band4" at file=69.
19-Jan 00:17 bareos-sd JobId 97: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 247,515,896 (247.5 MB)


Second job:
19-Jan 00:18 bareos-sd JobId 100: Volume "Band4" previously written, moving to end of data.
19-Jan 01:06 bareos-sd JobId 100: Error: Bareos cannot write on tape Volume "Band4" because:
The number of files mismatch! Volume=69 Catalog=70
19-Jan 01:06 bareos-sd JobId 100: Marking Volume "Band4" in Error in Catalog.

Why does the second job still find the end of the tape at file number 69, although this file was already written in the first job? EOD should be at file number 70, as it is also noted in the catalog.

Where is my error?
(0004481)
bruno-at-bareos   
2022-01-20 17:14   
Just a quick note: having

Appending another backup on an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found

means hardware trouble, be it the medium (tape), the drive, or some other component in the SCSI chain.
They are never fun to debug.
(0004482)
DemoFreak   
2022-01-20 17:48   
The hardware is completely unchanged. HBA, drive and tapes are the same. They are even still in the same place, only the HBA is now in a different computer.

To be on the safe side, I will rebuild everything and run a test in the old system. That setup worked for several years, completely without problems, until a week ago, but with Bacula.

I am surprised about the lack of an EOF marker after some migration jobs.
(0004519)
DemoFreak   
2022-02-19 04:21   
(Last edited: 2022-02-19 04:23)
Sorry, I was unfortunately busy in the meantime, therefore the long response time.

I have just done the test and rebuilt everything in the old system; there it runs as expected, completely without problems.

After moving back to the new system, it now runs perfectly here as well.

So it was probably really a problem with the LC cabling.

So can be closed, thanks for the help.
(0004520)
bruno-at-bareos   
2022-02-21 09:40   
Hardware problem.
(0004556)
DemoFreak   
2022-03-30 12:14   
I think I have found the real cause.

I use an after-job script which shuts down the tape drive after the migration. The script waits 30s, then checks whether there are more jobs in the queue, and only if there are no more waiting or running jobs is the drive switched off.

echo "Checking for pending bacula jobs..."

sleep 30

if echo "status dir" | /usr/sbin/bconsole | /usr/bin/grep "^ " | /usr/bin/egrep -q "(is waiting|is running)"; then
        echo "Pending bacula jobs found, leaving tape device alone!"
else
        echo "Switching off tape device..."
        $DEBUG $SISPMCTLBIN -qf 1
fi

Apparently the processing of jobs is more concurrent with Bareos than with Bacula: since I temporarily suspended the shutdown of the drive, no more MTEOM errors have occurred. So I suspect that sometimes the drive was already powered off while the storage daemon was still writing the last data to it. Of course, this also meant that no EOF was written.

Is it possible that the Director reports jobs as finished while the SD is still writing?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1441 [bareos-core] webui minor always 2022-03-22 13:59 2022-03-29 14:13
Reporter: mdc Platform: x86_64  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to view the pool details when the pool name contains a space character.
Description: The resulting URL will be:
"https://XXXX/pool/details/Bareos database", for example, when the pool is named "Bareos database".
And the call fails with:

A 404 error occurred
Page not found.

The requested URL could not be matched by routing.
No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004555)
frank   
2022-03-29 14:13   
Fix committed to bareos master branch with changesetid 16093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1440 [bareos-core] director minor always 2022-03-22 13:42 2022-03-23 15:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Only 127.0.0.1 is logged in the audit log when the access comes from the webui
Description: Instead of the real IP of the user's device, only 127.0.0.1 is logged.
22-Mar 13:31 Bareos Director: Console [foo] from [127.0.0.1] cmdline list jobtotals

I think the director sees only the source IP of the webui server; the real IP is not forwarded to the director.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004549)
bruno-at-bareos   
2022-03-23 15:08   
The audit log is used to log the remote (here: local) IP of the initiator of the command.
Think of remote bconsole access, etc.
So here localhost is the correct agent.

You're welcome to propose an enhanced version of the code by making a PR on our GitHub project.
(0004550)
bruno-at-bareos   
2022-03-23 15:09   
Won't be fixed without an external code proposal.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1134 [bareos-core] vmware plugin feature always 2019-11-06 15:32 2022-03-14 15:42
Reporter: ratacorbo Platform: Linux  
Assigned To: stephand OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Installing bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 python-pyvmomi error
Description: When trying to install bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 on CentOS 7, yum reports an error that python-pyvmomi does not exist.
Tags:
Steps To Reproduce: yum install bareos-vmware-plugin
Additional Information: Error: Package: bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 (bareos_bareos-18.2)
           Requires: python-pyvmomi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
System Description
Attached Files:
Notes
(0003856)
stephand   
2020-02-26 00:23   
The python-pyvmomi package for CentOS 7/RHEL 7 is available in EPEL.
On CentOS 7, the EPEL repo can be added by running

yum install epel-release

For RHEL 7 see https://fedoraproject.org/wiki/EPEL

Does it work once the EPEL repo has been added to your system?
(0003997)
Rotnam   
2020-06-02 18:05   
I installed a fresh RedHat 8.1 to test the bareos vmware plugin. I ran into the same issue running
yum install bareos-vmware-plugin
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides python-pyvmomi needed by bareos-vmware-plugin-19.2.7-2.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

So far I tried to install
python-pyvmomi with pip3.6 install pyvmomi -> installed successfully, no luck
Downloaded the github package and did a python3.6 setup.py install; this installs version 7.0, no luck
Adding epel -> yum install python3-pyvmomi; this installs version 6.7.3-3, no luck with yum

Downloading the rpm (19.2.7-2) and trying it manually, these satisfied the requirements:
yum install python2
yum install bareos-filedaemon-python-plugin
yum install bareos-vadp-dumper
Did a pip2 install pyvmomi, still no luck
python2 setup.py install, installed a bunch of files under python2.7, still no luck for the rpm

At this point, I will just do a --nodeps install and see if it works; hope this helps resolve the package issue.
(0004039)
stephand   
2020-09-16 13:10   
You are right, we have a problem here for RHEL/CentOS 8 because EPEL 8 does not provide a python2-pyvmomi package.
It's also not easily possible to build a python2-pyvmomi package for el8 due to its missing python2 package dependencies.

Currently indeed the only way is to ignore dependencies for the package installation and use pip2 install pyvmomi.
Does that work for you?

I think we should remove the dependency on python-pyvmomi and add a hint in the documentation to use pip2 install pyvmomi.

For the upcoming Bareos version 20, we are already working on Python3 plugins, this will also fix the dependency problem.
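A minimal sketch of that interim workaround on el8 (the package file name is the one mentioned in this thread; treat the exact steps as an assumption, not official install instructions):

# install the plugin while ignoring the unresolvable python-pyvmomi dependency
rpm -ivh --nodeps bareos-vmware-plugin-19.2.7-2.el8.x86_64.rpm
# then provide pyVmomi via pip instead of a distribution package
pip2 install pyvmomi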
(0004040)
Rotnam   
2020-09-16 15:22   
For the test I did, it worked fine, so I assume you can do it that way with --nodeps. I ended up putting this on hold; backing up just the disks and not the VM was a bit strange. Restoring locally worked, not directly on vCenter (can't remember which one I tried). Will revisit this solution later.
(0004536)
stephand   
2022-03-14 15:42   
Since Bareos Version >= 21.0.0 the package bareos-vmware-plugin no longer includes a dependency on a pyVmomi package, because some Linux distributions don’t provide current versions. Consequently, pyVmomi must be either installed by using pip install pyvmomi or by manually installing a distribution provided pyVmomi package.
See https://docs.bareos.org/TasksAndConcepts/Plugins.html#vmware-plugin

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1431 [bareos-core] General major always 2022-03-08 20:37 2022-03-11 03:32
Reporter: backup1 Platform: Linux  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Newline characters stripped from configuration strings
Description:  Hi,

I'm trying to set a config value that includes newline characters (a.k.a. \n). This worked in Bareos 19.2, but the same config is not working in 21. It seems that the newlines are stripped when loading the config. I note that the docs say that strings can now be entered using a multi-line quoted format (for Bareos 20+).

The actual config setting is for NDMP files and specifying the NDMP environment MULTI_SUBTREE_NAMES.

This is what the config looks like:

FileSet {
  Name = "user_01"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA
userB
userC
userD"
    }
    File = "/vol0/user"
  }
}

The correctly formatted value will have newlines between the "userA", "userB", "userC" subdir names.

In bconsole "show filesets" has the names all concatenated together and the (NetApp) filer rejects the job saying "no directory userAuserBUserCUserD".
Tags:
Steps To Reproduce: Configure fileset with options string including newlines.

Load configuration.

Review configuration using "show filesets" and observe that newlines have been stripped.

I've also reviewed NDMP commands sent to NetApp and (with wireshark) and observe that the newlines are missing.
Additional Information: I believe the use case of config-file strings that include newlines was not considered in the parser changes for the multi-line quoted format. I'm no longer able to use MULTI_SUBTREE_NAMES for NDMP and have reverted to just doing full volume backups, which limits flexibility but is working reliably.

Thanks,
Tom Rockwell
Attached Files:
Notes
(0004533)
bruno-at-bareos   
2022-03-09 11:40   
Inconsistencies between documentation / expectation / behaviour;
loss of functionality between versions.

The documentation at https://docs.bareos.org/master/Configuration/CustomizingTheConfiguration.html?highlight=multiline#quotes shows multi-line strings in an example, leading to the expectation that they are kept as multiple lines.

Having a configured fileset with new multiline syntax

FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA"
             "userB"
             "userC"
             "userD"
    }
    File = "/vol0/user"
  }
}

when displayed in bconsole
*show fileset=NDMP_test
FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userAuserBuserCuserD"
    }
    File = "/vol0/user"
  }
}
(0004534)
backup1   
2022-03-11 03:32   
Hi,

Thanks for looking at this. For reference, the newlines are needed to use the MULTI_SUBTREE_NAMES functionality on NetApp. https://library.netapp.com/ecmdocs/ECMP1196992/html/GUID-DE8BF53F-706A-48CA-A6FD-ACFDC2D0FE8A.html

From the linked doc, "Multiple subtrees are specified in the string which is a newline-separated, null-terminated list of subtree names."

I looked for other use-cases to put newlines into strings in Bareos config, but didn't find any, so I realize this is a bit of a corner-case. Still, NDMP is useful for NetApp, and it would be unfortunate to lose this functionality.

Thanks again,
Tom Rockwell

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1430 [bareos-core] webui major always 2022-02-23 20:19 2022-03-03 15:11
Reporter: jason.agilitypr Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: 20.04  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Version of Jquery is old and vulnerable
Description: The version of jQuery that bareos webui is running is old and out of date and has known security vulnerabilities (XSS attacks).

/*! jQuery v3.2.0 | (c) JS Foundation and other contributors | jquery.org/license */
v3.2.0 was release on March 16, 2017

https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/
"The HTML parser in jQuery <=3.4.1 usually did the right thing, but there were edge cases where parsing would have unintended consequences. "

the current version of jquery is 3.6.0


Tags:
Steps To Reproduce: check version of jquery loaded in bareos webui via browser right click -> view source
Additional Information: The related libraries, including moment and excanvas, may also need updating.
Attached Files:
Notes
(0004531)
frank   
2022-03-03 11:11   
Fix committed to bareos master branch with changesetid 15977.
(0004532)
frank   
2022-03-03 15:11   
Fix committed to bareos bareos-19.2 branch with changesetid 15981.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1426 [bareos-core] director minor always 2022-02-07 10:37 2022-02-24 11:46
Reporter: mschiff Platform: Linux  
Assigned To: stephand OS: any  
Priority: normal OS Version: 3  
Status: acknowledged Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos sends useless operator "mount" messages
Description: The default configuration has messages/Standard.conf which contains:

operator = <email> = mount

which should send an email if an operator is required for a job to continue.

But these mails will also be triggered on a busy bareos-sd with multiple virtual drives and multiple jobs running, when a job just needs to wait a bit for a volume to become available.
Every month, when our systems are doing virtual full backups at night, we get lots of mails like:

06-Feb 23:37 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0034" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File

But in the morning, all jobs have finished successfully.

So when one job is reading a volume and another job is waiting for the same volume, an email is triggered. But after waiting a couple of minutes, this "issue" solves itself.

It should be possible to set some timeout after which such messages are sent, so that they are only sent for jobs that are really hanging.

This is part of the joblog:

 2022-02-06 23:25:38 kilo-dir JobId 58793: Start Virtual Backup JobId 58793, Job=BackupIndia.2022-02-06_23.15.01_31
 2022-02-06 23:25:38 kilo-dir JobId 58793: Consolidating JobIds 58147,58164,58182,58200,58218,58236,58254,58272,58290,58308,58326,58344,58362,58380,58398,58416,58434,58452,58470,58488,58506,58524,58542,58560,58578,58596,58614,58632,58650,58668,58686,58704,58722,58740,58758,58764
 2022-02-06 23:25:40 kilo-dir JobId 58793: Bootstrap records written to /var/lib/bareos/kilo-dir.restore.16.bsr
 2022-02-06 23:25:40 kilo-dir JobId 58793: Connected Storage daemon at kilo.sys4.de:9103, encryption: TLS_AES_256_GCM_SHA384 TLSv1.3
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0001" to read.
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0002" to write.
 2022-02-06 23:26:42 kilo-sd JobId 58793: Ready to read from volume "VolFull-0165" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:42 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0165" to file:block 0:3367481982.
 2022-02-06 23:26:53 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0165"
 2022-02-06 23:26:53 kilo-sd JobId 58793: Ready to read from volume "VolFull-0168" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:53 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0168" to file:block 2:1033779909.
 2022-02-06 23:26:54 kilo-sd JobId 58793: End of Volume at file 2 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0168"
 2022-02-06 23:26:54 kilo-sd JobId 58793: Ready to read from volume "VolFull-0169" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:54 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0169" to file:block 0:64702.
 2022-02-06 23:27:03 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0169"
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/vol_mgr.cc:542 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=VolIncr-0022 from dev="MultiFileStorage0004" (/srv/backup/bareos) to "MultiFileStorage0001" (/srv/backup/bareos)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:268 Could not reserve volume VolIncr-0022 on "MultiFileStorage0001" (/srv/backup/bareos)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0022" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0022" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0022" to file:block 0:3331542115.
 2022-02-06 23:32:03 kilo-sd JobId 58793: End of Volume at file 0 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolIncr-0022"
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0023" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0023" to file:block 0:750086502.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004526)
stephand   
2022-02-24 11:46   
Thanks for reporting this issue. I also already noticed that problem.
It will be very hard to fix this properly without a complete redesign of the whole reservation logic, which would be a huge effort.
But meanwhile we could think about a workaround to mitigate this somehow.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1349 [bareos-core] file daemon major always 2021-05-07 18:29 2022-02-02 10:47
Reporter: oskarsr Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: urgent OS Version: 9  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error: bareos-fd on the following backup after the successful backup of postgresql database using the PostgreSQL Plugin.
Description: Fatal error: bareos-fd on the following backup after the successful backup of postgresql database using the PostgreSQL Plugin.
When the client daemon is restarted, the backup of the PostgreSQL database runs without the error, but just once; on the second attempt, the error is back.

it-fd JobId 118: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in
import BareosFdPluginPostgres
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception
Tags:
Steps To Reproduce: When the backup is executed right after the client daemon restart, the debug log is following:

it-fd (100): filed/fileset.cc:271-150 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-150 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-150 plugin_ctx=7f3964015250 JobId=150
it-fd (150): filed/fd_plugins.cc:229-150 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-150 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-150 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1006-150 python3-fd: Successfully loaded module with name bareos-fd-postgres
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginPostgres with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginLocalFilesBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 2
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=2
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 4
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=4
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 16
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=16
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 19
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=19
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 3
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=3
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 5
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=5


But when the backup is started again for the same client, the log contains the following:

it-fd (100): filed/fileset.cc:271-151 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-151 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-151 plugin_ctx=7f39641d1b60 JobId=151
it-fd (150): filed/fd_plugins.cc:229-151 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-151 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-151 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1000-151 python3-fd: Failed to load module with name bareos-fd-postgres
it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

it-fd (150): filed/fd_plugins.cc:480-151 Cancel return from GeneratePluginEvent
it-fd (100): filed/fileset.cc:271-151 N
it-fd (100): filed/dir_cmd.cc:462-151 <dird: getSecureEraseCmd
Additional Information:
System Description
Attached Files:
Notes
(0004129)
oskarsr   
2021-05-12 17:33   
(Last edited: 2021-05-12 17:34)
Has anybody tried to back up a PostgreSQL DB using the bareos-fd-postgres Python plugin?

(0004263)
perkons   
2021-09-13 15:38   
We are experiencing exactly the same issue on Ubuntu 18.04.
(0004297)
bruno-at-bareos   
2021-10-11 13:31   
To both of you: could you share the installed Bareos packages (and confirm they come from bareos.org), the python3 version used,
and also the related Python packages (and where they come from), i.e. the main core plus psycopg?
(0004298)
perkons   
2021-10-11 14:52   
We installed the bareos-filedaemon from https://download.bareos.org
The python modules are installed from the Ubuntu repositories. The reason we use both the python and python3 modules is that the backups fail if either one is missing. This seems pretty wrong to me, but as I understand it there is active work to migrate to python3.
We also have both of these python modules (python2 and python3) on our RHEL-based hosts and have not had any problems with the PostgreSQL Plugin.

# dpkg -l | grep psycopg
ii python-psycopg2 2.8.4-1~pgdg18.04+1 amd64 Python module for PostgreSQL
ii python3-psycopg2 2.8.6-2~pgdg18.04+1 amd64 Python 3 module for PostgreSQL
# dpkg -l | grep dateutil
ii python-dateutil 2.6.1-1 all powerful extensions to the standard Python datetime module
ii python3-dateutil 2.6.1-1 all powerful extensions to the standard Python 3 datetime module
# dpkg -l | grep bareos
ii bareos-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-filedaemon 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-filedaemon-postgresql-python-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon PostgreSQL plugin
ii bareos-filedaemon-python-plugins-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin common files
ii bareos-filedaemon-python3-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin
# dpkg -s bareos-filedaemon
Package: bareos-filedaemon
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 384
Maintainer: Joerg Steffens <joerg.steffens@bareos.com>
Architecture: amd64
Source: bareos
Version: 20.0.1-3
Replaces: bacula-fd
Depends: bareos-common (= 20.0.1-3), lsb-base (>= 3.2-13), lsof, libc6 (>= 2.14), libgcc1 (>= 1:3.0), libjansson4 (>= 2.0.1), libstdc++6 (>= 5.2), zlib1g (>= 1:1.1.4)
Pre-Depends: debconf (>= 1.4.30) | debconf-2.0, adduser
Conflicts: bacula-fd
Conffiles:
 /etc/init.d/bareos-fd bcc61ad57fde8a771a5002365130c3ec
Description: Backup Archiving Recovery Open Sourced - file daemon
 Bareos is a set of programs to manage backup, recovery and verification of
 data across a network of computers of different kinds.
 .
 The file daemon has to be installed on the machine to be backed up. It is
 responsible for providing the file attributes and data when requested by
 the Director, and also for the file system-dependent part of restoration.
 .
 This package contains the Bareos File daemon.
Homepage: http://www.bareos.org/
# cat /etc/apt/sources.list.d/bareos-20.list
deb https://download.bareos.org/bareos/release/20/xUbuntu_18.04 /
#
(0004299)
bruno-at-bareos   
2021-10-11 15:48   
Thanks for your report. As you stated, the python/python3 situation is far from ideal, but PRs are progressing and the end of the tunnel is near.
Also, as you mentioned, there is no trouble on RHEL systems; I am aware of that too.

On such a version I would have tried to use only the python2 code.
I have made a note about testing this with the future new code on Ubuntu 18... but I just can't say when.
(0004497)
bruno-at-bareos   
2022-02-02 10:46   
For the issue reported there is something that looks wrong:

File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

Here psycopg2 comes from /usr/local/lib/python3.5.

And then

it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

And here it comes from /usr/lib/python3.

So it seems you have a mixed Python environment, which creates strange behaviour because the module loaded is not always the same.
Our best advice would be to clean up the global environment and make sure only one consistent version is used for Bareos.
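A quick way to verify which installation is actually picked up; a minimal sketch, assuming the file daemon runs under the system python3:

```
# Print the interpreter and the psycopg2 module path actually imported:
python3 -c "import sys, psycopg2; print(sys.executable); print(psycopg2.__file__)"
# Look for duplicate installations that can shadow each other:
ls -d /usr/lib/python3/dist-packages/psycopg2 /usr/local/lib/python3.*/dist-packages/psycopg2 2>/dev/null
```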

Also, python3 support has been greatly improved in Bareos 21.
I will close this, as we are not able to reproduce such an environment.

By the way, the PostgreSQL plugin is tested each time the code is updated.
(0004498)
bruno-at-bareos   
2022-02-02 10:47   
Mixed Python versions used with different psycopg2 installations: /usr/local/lib/python3.5 and /usr/lib/python3.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1418 [bareos-core] storage daemon major always 2022-01-04 14:23 2022-01-31 09:34
Reporter: Scorpionking83 Platform: Linux  
Assigned To: bruno-at-bareos OS: RHEL  
Priority: immediate OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Autoprune and recycle still not working in Bareos 19.2.7
Description: Dear developers,

I still have a problem with autoprune and recycling of tapes:
1. Everything works, but when Bareos reaches the maximum number of volume tapes with a retention of 90 days, it cannot create any backups any more. I then update the Incremental pool:
update --> Option 2 "Pool from resource" --> Option 3 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool --> Option 1 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool --> Option 2 Full
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool --> Option 3 Incremental

2. I get the following error:
Volume "Incrementail-0001" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned

The maximum number of volume tapes is set to 400.

But why do autoprune and recycle not work once the maximum number of volume tapes has been reached and the retention period has not yet expired?
Is it also possible to delete old tapes from disk and from the database? (See the bconsole sketch below.)

I need an answer soon.
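On the question of deleting old tapes, a hedged bconsole sketch (the volume name is taken from the error above; "delete" removes only the catalog record, the file in the storage directory must be removed separately):

```
# Prune expired job/file records from the volume (honours retention):
prune volume=Incrementail-0001 yes
# Remove all records for the volume regardless of retention:
purge volume=Incrementail-0001
# Remove the volume from the catalog entirely:
delete volume=Incrementail-0001 yes
```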
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004451)
Scorpionking83   
2022-01-04 14:36   
Why close this? The issue is not resolved.
(0004452)
bruno-at-bareos   
2022-01-04 14:39   
This issue is the same as report 0001318, made by the same user.
This is clearly a duplicate case.
(0004493)
Scorpionking83   
2022-01-29 17:14   
Can someone please check my other bug report 0001318?
I am still looking for a solution.
(0004496)
bruno-at-bareos   
2022-01-31 09:34   
duplicate of 0001318

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1422 [bareos-core] General major always 2022-01-20 11:58 2022-01-27 11:49
Reporter: niklas.skog Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 11  
Status: confirmed Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos Libcloud Plugin incompatible
Description: The goal is to back up S3 buckets using Bareos.

Situation:

Installed Bareos 21.0.0-4 and the "bareos-filedaemon-libcloud-python-plugin" on Debian 11 from "https://download.bareos.org/bareos/release/21/Debian_11".

Installed the "python3-libcloud" package on which the Plugin "bareos-filedaemon-libcloud-python-plugin" depends.

Configured the plugin according https://docs.bareos.org/TasksAndConcepts/Plugins.html

Trying to start a job that should back up the data from S3, I get the following error in the bconsole output:
---
20-Jan 08:27 bareos-dir JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Using Device "FileStorage" to write.
20-Jan 08:27 bareos-dir JobId 13: Connected Client: backup-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Handshake: Immediate TLS
20-Jan 08:27 backup-fd JobId 13: Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)
20-Jan 08:27 backup-fd JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 backup-fd JobId 13: Fatal error: TwoWayAuthenticate failed, because job was canceled.
20-Jan 08:27 backup-fd JobId 13: Fatal error: Failed to authenticate Storage daemon.
20-Jan 08:27 bareos-dir JobId 13: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage
[...]
---

and the job fails.

Thus, the main message is:

"Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)"

which is understandable, because Debian 11 brings python3.9.*:

---
root@backup:/etc/bareos/bareos-dir.d/fileset# apt-cache policy python3
python3:
  Installed: 3.9.2-3
  Candidate: 3.9.2-3
  Version table:
 *** 3.9.2-3 500
        500 http://cdn-aws.deb.debian.org/debian bullseye/main amd64 Packages
        100 /var/lib/dpkg/status
root@backup:/etc/bareos/bareos-dir.d/fileset#
---


Accordingly, the plugin is incompatible with the current Debian version.
Tags: libcloud, plugin, s3
Steps To Reproduce: * install stock debian 11
* install & configure bareos 21, "python3-libcloud" and "bareos-filedaemon-libcloud-python-plugin"
* configure the plugin according https://docs.bareos.org/TasksAndConcepts/Plugins.html
* try to run a job that is backing up an S3-bucket
* this will fail
Additional Information:
Attached Files:
Notes
(0004487)
arogge   
2022-01-27 11:49   
You cannot use Python 3.9 or newer with the Python libcloud plugin, due to a limitation involving Python 3.9.
We're looking into this, but it isn't that easy to work around.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1409 [bareos-core] director tweak always 2021-12-19 00:33 2022-01-13 14:22
Reporter: jalseos Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: low OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: DB error on restore with ExitOnFatal=true
Description: I was trying to use ExitOnFatal=true in the director and noticed a persistent error when trying to initiate a restore:

bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error

The error does not happen with unset/default ExitOnFatal=false

The postgresql (11) log reveals:
STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist
STATEMENT: DROP TABLE temp1
ERROR: table "temp1" does not exist

I found the SQL statements in these files in the code:
/core/src/cats/dml/0018_uar_del_temp
/core/src/cats/dml/0019_uar_del_temp1

I am wondering if something like this might be in order (like 0012_drop_deltabs.postgresql):
/core/src/cats/dml/0018_uar_del_temp.postgres
DROP TABLE IF EXISTS temp
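Sketched out for both temp tables, the reporter's proposal would look like this (illustrative only; the change actually merged is in the PR referenced in the notes below):

```
-- 0018_uar_del_temp (PostgreSQL variant)
DROP TABLE IF EXISTS temp;

-- 0019_uar_del_temp1 (PostgreSQL variant)
DROP TABLE IF EXISTS temp1;
```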
Tags:
Steps To Reproduce: $ bconsole
* restore
9
bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error
Additional Information:
System Description
Attached Files:
Notes
(0004400)
bruno-at-bareos   
2021-12-21 15:58   
The behavior is to exit in case of an error when ExitOnFatal = true.

STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist

This is an error, and the product obeys the Exit On Fatal parameter strictly.

In a future version, where only PostgreSQL will be kept as the database backend (and old PostgreSQL releases will no longer be installed), the code can be reviewed to chase down every DROP TABLE lacking an IF EXISTS.

Files to change

```
core/src/cats/dml/0018_uar_del_temp:DROP TABLE temp
core/src/cats/dml/0019_uar_del_temp1:DROP TABLE temp1
core/src/cats/mysql_queries.inc:"DROP TABLE temp "
core/src/cats/mysql_queries.inc:"DROP TABLE temp1 "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp1 "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp1 "
core/src/dird/query.sql:!DROP TABLE temp;
core/src/dird/query.sql:!DROP TABLE temp2;
```
Do you want to propose a PR for this?
(0004405)
bruno-at-bareos   
2021-12-21 16:50   
PR proposed
https://github.com/bareos/bareos/pull/1035

Once the PR has been built, there will be some testing packages available. Would you like to test them?
(0004443)
jalseos   
2022-01-02 16:52   
Hi, thank you for looking into this issue! I will try to test the built package (deb preferred), provided that a subsequent code/package "downgrade" (i.e. no Catalog DB changes, ...) to a published Community Edition release remains possible afterwards.
(0004473)
bruno-at-bareos   
2022-01-13 14:22   
Fix committed to bareos master branch with changesetid 15753.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1416 [bareos-core] General minor have not tried 2021-12-30 11:43 2022-01-11 21:50
Reporter: hostedpower Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: low OS Version: 10  
Status: assigned Product Version: 21.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos python3 contrib plugin filedaemon
Description: Hi,


We used a version of the Bareos contrib MySQL plugin which seems to support Python 3; however, in recent builds the file seems to have regressed to being only compatible with Python 2 again.

Tags:
Steps To Reproduce:
Additional Information: Attached you can find the Python 3 compatible version, which was previously found on git under the "dev/joergs/master/contrib-build" branch; however, since that branch was updated, the older Python 2 version is back in there.
System Description
Attached Files: MySQL-Python3.zip (3,594 bytes) 2021-12-30 11:43
https://bugs.bareos.org/file_download.php?file_id=488&type=bug
Notes
(0004469)
joergs   
2022-01-11 21:32   
I just verified this. In my environment, the module is working fine with Python3.
I even added a systemtest to verify this: https://github.com/bareos/bareos/tree/dev/joergs/master/contrib-build/systemtests/tests/py3plug-fd-contrib-mysql_dump

However, I guess you have already noted that the path and the initialisation of the module have changed to the bareos_mysql_dump directory. Maybe this is not reflected in your environment?

Please be aware that we are currently in the process of finding a reasonable file and directory structure for these plugins.

Without further information, I'd judge this bug entry as invalid.
(0004470)
hostedpower   
2022-01-11 21:39   
I think you could be right; I tried the v21 one: https://github.com/bareos/bareos/blob/bareos-21/contrib/fd-plugins/mysql-python/BareosFdMySQLclass.py

So master is working, but not v21?
(0004471)
joergs   
2022-01-11 21:50   
Correct. v21 should be identical to v20, and both versions only work with Python 2.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1405 [bareos-core] storage daemon major sometimes 2021-12-02 12:01 2022-01-09 23:29
Reporter: gusevmk Platform: Linux  
Assigned To: OS: RHEL  
Priority: urgent OS Version: 8.2  
Status: new Product Version: 19.2.11  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Storage daemon does not answer when there are no available devices
Description: UPD: The version is 19.2.7.
I have trouble when there are no more available devices on the storage daemon.
Here is how the storage pgbkps01.mgmt is configured:
Storage {
  Name = "pgbkps01_loop"
  Address = "10.*.*.192"
  Password = "***"
  Device = "local-storage-0","local-storage-1","local-storage-2","local-storage-3"
  MediaType = "File"
  MaximumConcurrentJobs = 4
  SddPort = 9103
}

There are 4 jobs running at the moment.
When starting a new job, it does not queue but ends with an error:

02-Dec 13:13 bareos-dir JobId 1903: Fatal error: Authorization key rejected bareos-dir.
02-Dec 13:13 bareos-dir JobId 1903: Fatal error: Director unable to authenticate with Storage daemon at "10.*.*.192:9103". Possible causes:
Passwords or names not the same or
TLS negotiation problem or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).

*status storage=pgbkps01_loop
Connecting to Storage daemon pgbkps01_loop at 10.*.*.192:9103
Failed to connect to Storage daemon pgbkps01_loop

When one of the 4 jobs ends, there is no problem.

I checked strace on the bareos-sd threads:
ps -eLo pid,tid,ppid,user:11,comm,state,wchan | grep bareos-sd
3256361 3256361 1 bareos bareos-sd S x64_sys_poll
3256361 3256363 1 bareos bareos-sd S -
3256361 1332308 1 bareos bareos-sd S x64_sys_poll
3256361 2252738 1 bareos bareos-sd S x64_sys_poll
3256361 2407428 1 bareos bareos-sd S x64_sys_poll
========================================================================
WHEN THE PROBLEM OCCURS:
strace -p 3256361:
accept(3, {sa_family=AF_INET, sin_port=htons(48954), sin_addr=inet_addr("10.76.74.192")}, [128->16]) = 6
setsockopt(6, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
clone(child_stack=0x7ff13d75feb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7ff13d7609d0, tls=0x7ff13d760700, child_tidptr=0x7ff13d7609d0) = 2458575
futex(0x2487490, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442869, tv_nsec=440823368}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x2487418, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x24874c0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x2487440, FUTEX_WAKE_PRIVATE, 1) = 1

strace -p 3256363
strace: Process 3256363 attached
restart_syscall(<... resuming interrupted futex ...>
========================================================================
WHEN THERE IS NO PROBLEM:
strace -p 3256361:
accept(3, {sa_family=AF_INET, sin_port=htons(48954), sin_addr=inet_addr("10.76.74.192")}, [128->16]) = 6
setsockopt(6, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
clone(child_stack=0x7ff13d75feb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7ff13d7609d0, tls=0x7ff13d760700, child_tidptr=0x7ff13d7609d0) = 2458575
futex(0x2487490, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442869, tv_nsec=440823368}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x2487418, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x24874c0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x2487440, FUTEX_WAKE_PRIVATE, 1) = 1

strace -p 3256363
strace: Process 3256363 attached
restart_syscall(<... resuming interrupted futex ...>
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce328, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442749, tv_nsec=490329000}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce32c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=745975000}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce328, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=747064000}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce32c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=792002000}, FUTEX_BITSET_MATCH_ANY) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce328, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=792408000}, FUTEX_BITSET_MATCH_ANY
=============================================

I think there is some inter-process communication issue here.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004379)
gusevmk   
2021-12-04 06:52   
This problem happens when the director configuration is reloaded. Steps to reproduce:
1. Set maximum concurrent jobs on the storage to 4
2. Run 4 simultaneous jobs
3. Run one more job - all OK, this job is queued
4. Reload the director configuration (add a new job/schedule)
5. Start one more job - it fails, because no more devices are available / the maximum concurrent jobs limit is exceeded on the storage daemon
(0004380)
bruno-at-bareos   
2021-12-06 14:17   
Could you provide your bareos-sd configuration, bareos-sd.d/storage/bareos-sd.conf (normally), and also the bareos-dir.d/director/bareos-dir.conf (usually)?

If you have changed the default values, you have to check whether they are high enough (here are the defaults):

Maximum Concurrent Jobs (Sd->Storage) = PINT32 20
Maximum Connections (Sd->Storage) = PINT32 42

As you can see, Maximum Connections is higher, so that commands (like status etc.) can still reach the SD.
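Spelled out as a hedged sketch of the SD's Storage resource with both limits made explicit (the values shown are the defaults, not a recommendation):

```
# bareos-sd.d/storage/bareos-sd.conf (illustrative)
Storage {
  Name = bareos-sd
  Maximum Concurrent Jobs = 20
  Maximum Connections = 42   # keep this above Maximum Concurrent Jobs
}
```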
(0004408)
bruno-at-bareos   
2021-12-22 16:35   
Any news here?
(0004446)
embareossed   
2022-01-03 22:35   
(Last edited: 2022-01-09 23:29)
[EDIT]: Sorry, I saw the strace output, and it appeared to be the same as my own situation. Trouble is, I am now seeing something similar to the OP's situation, but not quite the same.

I will start a separate ticket for mine. Please disregard my original post here.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1389 [bareos-core] installer / packages minor always 2021-09-20 12:23 2022-01-05 13:23
Reporter: colttt Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: no repository for debian 11
Description: Debian 11 (bullseye) was released on 14th august 2021 but there is no bareos repository yet.
I would appreciate if debian 11 would be supported as well.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004276)
bruno-at-bareos   
2021-09-27 13:37   
Thanks for your report.

Starting September 14th, Debian 11 is available for all customers with a subscription contract.
Nightly builds are also made for Debian 11, and it will be part of the Bareos 21 release.
(0004292)
brechsteiner   
2021-10-02 22:51   
What about the Community Repository? https://download.bareos.org/bareos/release/20/
(0004293)
bruno-at-bareos   
2021-10-04 09:30   
Sorry if it wasn't clear in my previous statement: Debian 11 will be available for the next release, Bareos 21.
(0004455)
bruno-at-bareos   
2022-01-05 13:23   
Community repository published: https://download.bareos.org/bareos/release/21/Debian_11/
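For completeness, the matching apt source entry, following the same convention as the xUbuntu_18.04 file quoted in an earlier note (a sketch; adjust the release path as needed):

```
# /etc/apt/sources.list.d/bareos-21.list
deb https://download.bareos.org/bareos/release/21/Debian_11 /
```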

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1408 [bareos-core] director minor have not tried 2021-12-18 20:32 2021-12-28 09:44
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: "Backup OK" email message subject line no longer displays the job name
Description: In bareos 18, backups which concluded successfully would be followed up by an email with a subject line indicating the name of the specific job that ran. However, in bareos 20, the subject line now only indicates the name of the client for which the job ran.

This is a minor nuisance, but I found the more distinguishing subject line more useful. In a case where a single client has multiple backup jobs and one but not all of them fails, it is not immediately obvious -- as it was in bareos 18 -- which job for that client failed.
Tags:
Steps To Reproduce: Run two jobs on a host which has more than 1 backup job associated with it.
The email subject lines will be identical even though they are for 2 different jobs.
Additional Information:
System Description
Attached Files:
Notes
(0004401)
bruno-at-bareos   
2021-12-21 16:05   
Maybe an example of the configuration files used would help.

From the code we can see that the line has not changed since 2016:
67ad14188a src/defaultconfigs/bareos-dir.d/messages/Standard.conf.in (Joerg Steffens 2016-08-01 14:03:06 +0200 5) mailcommand = "@bindir@/bsmtp -h @smtp_host@ -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
(0004415)
embareossed   
2021-12-24 17:58   
Here is what my configs look like:
# grep mailcommand *
Daemon.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
Standard.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"

All references to message resources are for Standard, except for the director which uses Daemon. I copied most of my config files from the old director (bareos 18) to the setup for the new director (bareos 20); I did not make any changes to messages, afair. I'll take a deeper look at this and see what I can figure out. Maybe bsmtp semantics have changed?
(0004416)
embareossed   
2021-12-24 18:12   
OK, it appears that in bareos 20, as per doc, the %c stands for the client, not the jobname (which should be %n). However, in bareos 18 and prior, this same setup seems to be generating the jobname, not the clientname. So it appears that the semantics have changed to properly implement the documented purpose of the %c macro (and perhaps others; I haven't tested those).

Changing the macro to %n works as desired.
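For reference, a minimal sketch of the adjusted line in the Standard messages resource quoted above, with %c (client name) swapped for %n (job name):

```
mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %n %l\" %r"
```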
(0004428)
bruno-at-bareos   
2021-12-28 09:44   
Resolved by adapting the configuration to follow the documentation.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1407 [bareos-core] General minor always 2021-12-18 20:26 2021-12-28 09:43
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Run before script hangs unless debug tracing enabled in script
Description: A "run before job script" which has been working since bareos 18 no longer works in bareos 20 (20.0.3 or 20.0.4). This is a simple bash shell script that performs some chores before the backup. It only sends output to stdout, not stderr (I've checked this). The script works properly in bareos 18, but causes the job to hang in bareos 20.

The script is actually being run on a remote file daemon. This may be a clue to the behavior. But again, this has been working in bareos 18.

Interestingly, when I enabled bash tracing (the -xv options) inside the script itself to see what was causing the hang, the hang actually went away!
Tags:
Steps To Reproduce: Create a bash shell script on a remote bareos 20 client.
Create a job in a bareos 20 director on a local system that calls a "run before job script" on the remote client.
Run the job.
If this is reproducible, the job will hang when it reaches the call to the remote script.

If it is reproducible, try setting traces in the bash script.

Additional Information: I built the 20.0.3 executables from the git source code on a devuan beowulf host and distributed the packages to the bareos director server and the bareos file daemon client, both of which are also devuan beowulf.
System Description
Attached Files:
Notes
(0004403)
bruno-at-bareos   
2021-12-21 16:10   
Would you mind sharing the job definition so we can try to reproduce this?
The script would be nice too, but perhaps it does something secret.
(0004404)
bruno-at-bareos   
2021-12-21 16:17   
I can't reproduce it; it works here. Below is the job log, followed by the RunScript definition used:

```
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Start Backup JobId 8204, Job=yoda.2021-12-21_16.14.10_06
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Using Device "admin" to write.
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Probing client protocol... (result will be saved until config reload)
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Client: yoda-fd at yoda.labaroche.ioda.net:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Handshake: Immediate TLS 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: shell command: run ClientBeforeJob "sh -c 'snapper list && snapper -c ioda list'"
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+----------------------------------+------+------------+----------+-----------------------+--------------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 1* | single | | Sun 21 Jun 2020 05:17:47 PM CEST | root | 92.00 KiB | | first root filesystem |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 4803 | single | | Fri 01 Jan 2021 12:00:23 AM CET | root | 13.97 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 10849 | single | | Wed 01 Sep 2021 12:00:02 AM CEST | root | 12.58 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 11582 | single | | Fri 01 Oct 2021 12:00:01 AM CEST | root | 7.90 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 12342 | single | | Mon 01 Nov 2021 12:00:08 AM CET | root | 8.07 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Wed 01 Dec 2021 12:00:07 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13272 | pre | | Wed 08 Dec 2021 06:23:04 PM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13273 | post | 13272 | Wed 08 Dec 2021 06:46:13 PM CET | root | 3.28 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13278 | pre | | Wed 08 Dec 2021 10:11:11 PM CET | root | 304.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13279 | post | 13278 | Wed 08 Dec 2021 10:11:26 PM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13447 | pre | | Wed 15 Dec 2021 09:57:35 PM CET | root | 48.00 KiB | number | zypp(zypper) | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13448 | post | 13447 | Wed 15 Dec 2021 09:57:42 PM CET | root | 48.00 KiB | number | | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13499 | single | | Sat 18 Dec 2021 12:00:06 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13523 | single | | Sun 19 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13547 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13557 | pre | | Mon 20 Dec 2021 09:27:21 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13559 | pre | | Mon 20 Dec 2021 10:30:43 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13560 | post | 13559 | Mon 20 Dec 2021 10:52:01 AM CET | root | 1.76 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13562 | pre | | Mon 20 Dec 2021 11:53:40 AM CET | root | 352.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13563 | post | 13562 | Mon 20 Dec 2021 11:53:56 AM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13576 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13585 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13586 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13587 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13588 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13589 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13590 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13591 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13592 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | 92.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+---------------------------------+------+----------+-------------+---------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13050 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13061 | single | | Mon 20 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13062 | single | | Mon 20 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13063 | single | | Mon 20 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13064 | single | | Mon 20 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13065 | single | | Mon 20 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13066 | single | | Mon 20 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13067 | single | | Mon 20 Dec 2021 05:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13068 | single | | Mon 20 Dec 2021 06:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13069 | single | | Mon 20 Dec 2021 07:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13070 | single | | Mon 20 Dec 2021 08:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13071 | single | | Mon 20 Dec 2021 09:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13072 | single | | Mon 20 Dec 2021 10:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13073 | single | | Mon 20 Dec 2021 11:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13074 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13075 | single | | Tue 21 Dec 2021 01:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13076 | single | | Tue 21 Dec 2021 02:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13077 | single | | Tue 21 Dec 2021 03:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13078 | single | | Tue 21 Dec 2021 04:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13079 | single | | Tue 21 Dec 2021 05:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13080 | single | | Tue 21 Dec 2021 06:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13081 | single | | Tue 21 Dec 2021 07:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13082 | single | | Tue 21 Dec 2021 08:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13083 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13084 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13086 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13087 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13088 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13089 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13090 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: Extended attribute support is enabled
 2021-12-21 16:14:36 yoda-fd JobId 8204: ACL support is enabled
 
 RunScript {
    RunsWhen = Before
    RunsOnClient = Yes
    FailJobOnError = No
    Command = "sh -c 'snapper list && snapper -c ioda list'"
  }

```
(0004414)
embareossed   
2021-12-24 17:45   
Nothing secret really. It's just a script that runs "estimate" and parses the output for the size of the backup. It then decides, based on a value in a config file for that backup name, whether to proceed. This way, estimates can be used to decide whether a backup should run at all. This was my workaround for my request in https://bugs.bareos.org/view.php?id=1135.

I did some upgrades recently and the problem has disappeared. So you can close this.
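A hedged reconstruction of the workaround described above; the job name, the size limit, and the parsing are all invented for illustration, since the original script was not posted:

```
#!/bin/sh
# Pre-job check: run an estimate, extract the byte count, and exit
# non-zero so the job is cancelled when the backup would be too large.
JOB="mybackup-job"
MAX_BYTES=10000000000   # arbitrary 10 GB limit for the example

BYTES=$(echo "estimate job=$JOB" | bconsole |
        sed -n 's/.*2000 OK estimate.*bytes=\([0-9,]*\).*/\1/p' | tr -d ,)

if [ -n "$BYTES" ] && [ "$BYTES" -gt "$MAX_BYTES" ]; then
    echo "estimated $BYTES bytes exceeds $MAX_BYTES" >&2
    exit 1
fi
exit 0
```

Paired with FailJobOnError = Yes in the job's RunScript, a non-zero exit from the script cancels the backup.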
(0004427)
bruno-at-bareos   
2021-12-28 09:43   
The upgrade solved this.
estimate can take time, and from the bconsole point of view it can look stalled or blocked; when you use the "listing" instruction you'll see the file-by-file progress output.
Closing.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1413 [bareos-core] bconsole major always 2021-12-27 15:29 2021-12-28 09:38
Reporter: jcottin Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: high OS Version: 10  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.
Description: I configured the Always Incremental scheme using 2 different storages (FILE), as advised in the documentation.
-----------------------------------
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html?highlight=job#storages-and-pools

While restoring a directory using the always incremental scheme, Bareos looks for a volume in the wrong storage.

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

It looks for AI-Incremental-vm-aiqi-linux-test-backup0012 in FileStorage-AI-Consolidated.
It should look for it in FileStorage-AI-Incremental.

Is there a problem with my setup?
Tags: always incremental, storage
Steps To Reproduce: Using bconsole I target a backup before: 2021-12-27 19:00:00
I can find 3 backups (1 Full, 2 Incremental):
=======================================================
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| 24 | F | 108,199 | 13,145,763,765 | 2021-12-25 08:06:41 | AI-Consolidated-vm-aiqi-linux-test-backup-0006 |
| 27 | I | 95 | 68,530 | 2021-12-25 20:00:04 | AI-Incremental-vm-aiqi-linux-test-backup0008 |
| 32 | I | 40 | 1,322,314 | 2021-12-26 20:00:09 | AI-Incremental-vm-aiqi-linux-test-backup0012 |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
-----------------------------------
$ cd /var/lib/mysql.dumps/wordpressdb/
cwd is: /var/lib/mysql.dumps/wordpressdb/
-----------------------------------
$ dir
-rw-r--r-- 1 0 (root) 112 (bareos) 1830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%create.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 149 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%tables
-rw-r--r-- 1 0 (root) 112 (bareos) 783 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_commentmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1161 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_comments.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 869 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_links.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 235966 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_options.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_postmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 3470 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_posts.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 770 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_relationships.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 838 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_taxonomy.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 780 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_termmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 814 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_terms.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1404 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_usermeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 983 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_users.sql.gz
-----------------------------------
$ cd ..
cwd is: /var/lib/mysql.dumps/
-----------------------------------
I mark the folder:
$ mark /var/lib/mysql.dumps/wordpressdb
15 files marked.
$ done
-----------------------------------
The job will require the following
   Volume(s) Storage(s) SD Device(s)
============================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

Volumes marked with "*" are online.
18 files selected to be restored.

Using Catalog "MyCatalog"
Run Restore job
JobName: RestoreFiles
Bootstrap: /var/lib/bareos/bareos-dir.restore.2.bsr
Where: /tmp/bareos-restores
Replace: Always
FileSet: LinuxAll
Backup Client: vm-aiqi-linux-test-backup-fd
Restore Client: vm-aiqi-linux-test-backup-fd
Format: Native
Storage: FileStorage-AI-Consolidated
When: 2021-12-27 22:10:13
Catalog: MyCatalog
Priority: 10
Plugin Options: *None*
OK to run? (yes/mod/no): yes

I get these two messages.
============================================
27-Dec 22:15 bareos-sd JobId 43: Warning: stored/acquire.cc:286 Read open device "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated) Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" failed: ERR=stored/dev.cc:716 Could not open: /var/lib/bareos/storage-AI-Consolidated/AI-Incremental-vm-aiqi-linux-test-backup0012, ERR=No such file or directory

27-Dec 22:15 bareos-sd JobId 43: Please mount read Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" for:
    Job: RestoreFiles.2021-12-27_22.15.29_31
    Storage: "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated)
    Pool: Incremental-BareOS
    Media type: File
============================================

Bareos tries to find AI-Incremental-vm-aiqi-linux-test-backup0012 in the wrong storage.
Additional Information:
===========================================
Job {
  Name = vm-aiqi-linux-test-backup-job
  Client = vm-aiqi-linux-test-backup-fd

  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 30 days
  Always Incremental Keep Number = 15
  Always Incremental Max Full Age = 60 days

  Level = Incremental
  Type = Backup
  FileSet = "LinuxAll-vm-aiqi-linux-test-backup" # LinuxAll fileset (0000013)
  Schedule = "WeeklyCycleCustomers"
  Storage = FileStorage-AI-Incremental
  Messages = Standard
  Pool = AI-Incremental-vm-aiqi-linux-test-backup
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = AI-Consolidated-vm-aiqi-linux-test-backup # write Full Backups into "Full" Pool (0000005)
  Incremental Backup Pool = AI-Incremental-vm-aiqi-linux-test-backup # write Incr Backups into "Incremental" Pool (0000011)

  Enabled = yes

  RunScript {
    FailJobOnError = Yes
    RunsOnClient = Yes
    RunsWhen = Before
    Command = "sh /SCRIPTS/mysql/pre.mysql.sh"
  }

  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}
===========================================
Pool {
  Name = AI-Consolidated-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Full Backups be kept? (0000006)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Consolidated-vm-aiqi-linux-test-backup-" # Volumes will be labeled "Full-<volume-id>"
  Storage = FileStorage-AI-Consolidated
}
===========================================
Pool {
  Name = AI-Incremental-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Incremental Backups be kept? (0000012)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Incremental-vm-aiqi-linux-test-backup" # Volumes will be labeled "Incremental-<volume-id>"
  Volume Use Duration = 23h
  Storage = FileStorage-AI-Incremental
  Next Pool = AI-Consolidated-vm-aiqi-linux-test-backup
}

Both volumes are available in their respective storage directories:

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006
-rw-r----- 1 bareos bareos 26349467738 Dec 25 08:09 /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
-rw-r----- 1 bareos bareos 1329612 Dec 26 20:00 /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
System Description
Attached Files: Bareos-always-incremental-restore-fail.txt (7,259 bytes) 2021-12-27 15:53
https://bugs.bareos.org/file_download.php?file_id=487&type=bug
Notes
(0004421)
jcottin   
2021-12-27 15:53   
The output in the attached TXT might be easier to read.
(0004422)
jcottin   
2021-12-27 16:32   
Device {
  Name = FileStorage-AI-Consolidated
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Consolidated
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Device {
  Name = FileStorage-AI-Incremental
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Incremental
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Consolidated
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Incremental
  Media Type = File
}
(0004423)
jcottin   
2021-12-27 16:43   
The documentation says 2 storages.
But I created 2 devices.

1 storage => 1 device.

I moved the data from one device (FILE: Directory) to the other.
2 storages => 1 device.

Problem solved.
(0004425)
bruno-at-bareos   
2021-12-28 09:37   
Thanks for sharing. Yes, when the documentation talks about 2 storages, it means two Storage resources in the Director configuration, not a bareos-storage daemon with 2 devices.
I am closing the issue.
(0004426)
bruno-at-bareos   
2021-12-28 09:38   
Always Incremental needs 2 storages on the director, but one device able to read/write both Incremental and Full volumes.
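
Spelled out, the working layout is one storage daemon device shared by two Director Storage resources. A minimal sketch of that layout (resource names follow this report; the address, password and archive path are placeholders):

# bareos-sd: a single device
Device {
  Name = FileStorage-AI
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI
  LabelMedia = yes;
  Random Access = yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}

# bareos-dir: two Storage resources, both pointing at the same device
Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server
  Password = "..."
  Device = FileStorage-AI
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server
  Password = "..."
  Device = FileStorage-AI
  Media Type = File
}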

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1339 [bareos-core] webui minor always 2021-04-19 11:49 2021-12-23 08:39
Reporter: khvalera Platform:  
Assigned To: frank OS:  
Priority: normal OS Version: archlinux  
Status: assigned Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: When going to the Run jobs tab I get an error
Description: when going to the Run jobs https://127.0.0.1/bareos-webui/job/run/ tab I get an error:
Notice: Undefined index: value in /usr/share/webapps/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php on line 152
Tags: webui
Steps To Reproduce: https://127.0.0.1/bareos-webui/job/run/
Additional Information:
Attached Files: Снимок экрана_2021-04-19_12-52-56.png (110,528 bytes) 2021-04-19 11:54
https://bugs.bareos.org/file_download.php?file_id=464&type=bug
png
Notes
(0004112)
khvalera   
2021-04-19 11:54   
I am attaching a screenshot:
(0004156)
khvalera   
2021-06-11 22:36   
You need to correct the expression: preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches);
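
A quick way to sanity-check the corrected expression (a sketch; the sample $result mimics a typical line of director output):

  $result = 'Next Pool = "AI-Consolidated-vm-aiqi-linux-test-backup"';
  if (preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches)) {
      echo $matches['value']; // prints the pool name without the surrounding quotes
  }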
(0004157)
khvalera   
2021-06-11 22:39   
I temporarily corrected myself so that the error does not appear: preg_match('/\s*Pool\s*=?(?<value>.*)(?(1)\1|)/i', $result, $matches);
But most likely this is not the right solution.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1404 [bareos-core] director major have not tried 2021-11-25 11:24 2021-12-22 16:32
Reporter: Int Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7  
Status: new Product Version: 19.2.11  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: prune deletes virtualfull job from volume although the retention period is not expired
Description: executing the command
  prune volume=Bilddaten-0408 yes
caused the virtualfull job, which was stored on this volume, to be deleted, although the retention period of "9 months 6 days" had not expired. The volume was last written 1 day ago.

The bconsole output was:

Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
The current Volume retention period is: 9 months 6 days
There are no more Jobs associated with Volume "Bilddaten-0408". Marking it purged.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004361)
Int   
2021-11-25 11:32   
how can I recover the deleted virtualfull job from the now purged volume?
(0004362)
Int   
2021-11-25 11:41   
The bareos daemon sent this message:

25-Nov 11:00 bareos-dir JobId 0: Volume "Bilddaten-0408" has Volume Retention of 23846400 sec. and has 0 jobs that will be pruned
25-Nov 11:00 bareos-dir JobId 0: Pruning volume Bilddaten-0408: 0 Jobs have expired and can be pruned.
25-Nov 11:00 bareos-dir JobId 0: Volume "" contains no jobs after pruning.
(0004363)
bruno-at-bareos   
2021-11-25 18:48   
What was the purpose of running prune volume=Bilddaten-0408 yes ?
If the media has not been used or truncated (depending on your configuration), you can still use bscan to reimport it into the database.
(0004371)
Int   
2021-11-29 10:52   
The pool contains other volumes that had expired. My goal was to truncate the expired volumes to regain free disk space.
I ran the shell script

#!/bin/bash
for f in `echo "list volumes pool=Bilddaten" | bconsole | grep Bilddaten- | cut -d '|' -f3`; do
  echo "prune volume=$f yes" | bconsole;
done

to prune each volume in the pool.
But the prune command pruned not only the expired volumes but also all volumes that contained the virtualfull job.

My second step would have been to truncate the pruned volumes.
But since I did not execute the truncate, I will try to restore the virtualfull job by using bscan as you suggested.
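
The second step (truncating the pruned volumes) can be restricted to volumes that are already marked Purged. A hedged sketch of the bconsole command, assuming the pool name from this report (verify the exact syntax for your Bareos version with "help truncate"):

  truncate volstatus=Purged pool=Bilddaten yes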
(0004406)
bruno-at-bareos   
2021-12-22 16:32   
Sorry, I was busy with other tasks.

In fact, what you expected is neither what is configured nor how virtualfull works.
The date used for the virtualfull job is the one of the first job composing the virtualfull (say the first job was done on 1st January):
even if the volume was written yesterday, the date of the job on it is 1st January.

That's why you got the result you had, and not the one you expected.

For Always Incremental, the documentation clearly states that volume/file/client retentions should be set to high values.
It is the job of consolidate to do the cleanup, and nothing else should be used.

I hope this helps you understand better how to set up and use the product.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1397 [bareos-core] documentation minor always 2021-11-01 16:45 2021-12-21 16:07
Reporter: Norst Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: "Tapespeed and blocksizes" chapter location is wrong
Description: "Tapespeed and blocksizes" chapter is a general topic. Therefore, it must be moved away from "Autochanger Support" page/category.
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#setblocksizes
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004360)
bruno-at-bareos   
2021-11-25 10:35   
Would you like to propose a PR changing the place? It would be really appreciated.
Are you doing backups to tape on a single drive? (Most of the use cases we actually see use an autochanger; that is why the chapter is currently there.)
(0004376)
Norst   
2021-11-30 21:01   
(Last edited: 2021-11-30 21:03)
Yes, I use standalone tape drive, but for infrequent, long-term archiving rather than regular backup.

PR to move "Tapespeed and blocksizes" one level up, to "Tasks and Concepts": https://github.com/bareos/bareos/pull/1009

(0004383)
bruno-at-bareos   
2021-12-09 09:42   
Did you see the comment in the PR ?
(0004402)
bruno-at-bareos   
2021-12-21 16:07   
PR#1009 merged last week.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1369 [bareos-core] webui crash always 2021-07-12 11:54 2021-12-21 13:58
Reporter: jarek_herisz Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: webui tries to load a nonexistent file
Description: When the Polish language is chosen at the login stage, the webui tries to load the file:
bareos-webui/js/locale/pl_PL/LC_MESSAGES/pl_PL.po

Such a file does not exist, which results in an error:
i_gettext.js:413 iJS-gettext:'try_load_lang_po': failed. Unable to exec XMLHttpRequest for link

The remaining JavaScript is terminated and the interface becomes inoperable.
Tags: webui
Steps To Reproduce: With version 20.0.1
On the webui login page, select Polish.
Additional Information:
System Description
Attached Files: Przechwytywanie.PNG (78,772 bytes) 2021-07-19 10:36
https://bugs.bareos.org/file_download.php?file_id=472&type=bug
png
Notes
(0004182)
jarek_herisz   
2021-07-19 10:36   
System:
root@backup:~# cat /etc/debian_version
10.10
(0004206)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1324 [bareos-core] webui major always 2021-03-02 10:26 2021-12-21 13:57
Reporter: Emmanuel Garette Platform: Linux Ubuntu  
Assigned To: frank OS: 20.04  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Infinite loop when trying to log with invalid account
Description: I'm using this community version of Webui: http://download.bareos.org/bareos/release/20/xUbuntu_20.04/

When I try to log in with an invalid account, the webui returns nothing and Apache seems to run in an infinite loop. The log file grows rapidly.

I think the problem is in these two lines:

          $send = fwrite($this->socket, $msg, $str_length);
          if($send === 0) {

The fwrite function returns false when an error occurs (see https://www.php.net/manual/en/function.fwrite.php).

If I replace 0 with false, everything is OK.

Attached is a patch to solve this issue.
In attachement a patch to solve this issues.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: webui.patch (483 bytes) 2021-03-02 10:26
https://bugs.bareos.org/file_download.php?file_id=458&type=bug
Notes
(0004163)
frank   
2021-06-28 15:22   
Fix committed to bareos master branch with changesetid 15006.
(0004165)
frank   
2021-06-29 14:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15017.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1316 [bareos-core] storage daemon major always 2021-01-30 10:01 2021-12-21 13:57
Reporter: kardel Platform:  
Assigned To: franku OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: storage daemon loses a configured device instance causing major confusion in device handling
Description: After startup, "status storage=<name>" shows the device as not open, or with its parameters - that is expected.

After the first backup with spooling, "status storage=<name>" shows the device as "not open or does not exist" - that is a hint
=> the configured "device_resource->dev" value is nullptr.

The follow-up effect is that the reservation code is unable to match the same active device and the same volume in all cases.
When the match fails (the log shows "<name> (/dev/<tapename>)" and "<name> (/dev/<tapename>)" with no differences), it attempts to allocate new volumes, possibly with operator intervention, even though the expected volume is available in the drive.

The root cause is a temporary device created in spool.cc:295 => auto rdev(std::make_unique<SpoolDevice>());
Line 302 sets the device resource: rdev->device_resource = dcr->dev->device_resource;
When rdev leaves scope, the Device::~Device() destructor is called, which happily sets this.device_resource->dev = nullptr in
dev.cc:1281: if (device_resource) { device_resource->dev = nullptr; } (=> potential memory leak)

At this point the configured device_resource is lost (even though it might still be known by active volume reservations).
After that, the reservation code is completely confused due to new default allocations of devices (see additional information).

A fix is provided as a patch against 20.0.0. It only clears this.device_resource->dev when
this.device_resource->dev references this instance.
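
In code, the guard described above looks roughly like this (a sketch of the idea, not the literal attached patch; member names are the ones used in this report):

  // In Device::~Device(): only detach the device resource if it still points
  // at this instance, so that a short-lived SpoolDevice going out of scope
  // cannot null out the configured device.
  if (device_resource && device_resource->dev == this) {
    device_resource->dev = nullptr;
  }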
Tags:
Steps To Reproduce: Start the bareos system.
Observe "status storage=...".
Run a spooling job.
Observe "status storage=..." again.

Seeing the resulting confusion requires a more elaborate test setup with multiple jobs, where a spooling job finishes before
another job for the same volume and device begins to run.
Additional Information: It might be worthwhile to check the validity of creating a device in dir_cmd.cc:932. During testing,
a difference in device pointers was seen in vol_mgr.cc:916 although the device parameters were the same.
This is most likely caused by Device::this.device_resource->dev being a nullptr and the device creation
in dir_cmd.cc:932. The normally expected lifetime of a device is from reading the configuration until
program termination. Autochanger support might change that rule, though - I didn't analyze that far.
Attached Files: dev.cc.patch (568 bytes) 2021-01-30 10:01
https://bugs.bareos.org/file_download.php?file_id=455&type=bug
Notes
(0004088)
franku   
2021-02-12 12:15   
Thank you for your deep analysis and the proposed fix which solves the issue.

See github PR https://github.com/bareos/bareos/pull/724/commits for more information on the fix and systemtests (which is draft at the time of adding this note).
(0004089)
franku   
2021-02-15 11:38   
Experimental binaries with the proposed bugfix can be found here: http://download.bareos.org/bareos/experimental/CD/PR-724/
(0004091)
franku   
2021-02-24 13:22   
Fix committed to bareos master branch with changesetid 14543.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1300 [bareos-core] webui minor always 2021-01-11 16:27 2021-12-21 13:57
Reporter: fapg Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: some job status are not categorized properly
Description: in the dashboard, when we click in waiting jobs, the url is:

https://bareos-server/bareos-webui/job//?period=1&status=Waiting
but it should be:
https://bareos-server/bareos-webui/job//?period=1&status=Queued

Best Regards,
Fernando Gomes
Tags:
Steps To Reproduce:
Additional Information: affects table column filter
System Description
Attached Files:
Notes
(0004168)
frank   
2021-06-29 18:45   
It's not a query parameter issue. WebUI categorizes all the different job status flags into groups. I had a look into it, and some job statuses are not categorized properly, so the column filter on the table does not work as expected in those cases. A fix will follow.
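
Purely for illustration, the grouping the note describes might look like the sketch below; the status codes and group names here are examples, not the WebUI's actual mapping:

  // Illustrative only: map single-character job status codes onto the
  // filter groups the webui exposes.
  function jobStatusGroup($code)
  {
      if ($code === 'R') return 'Running';
      if (in_array($code, array('C', 'F', 'S', 'm', 'M', 'q'), true)) return 'Queued'; // created / waiting states
      if ($code === 'T') return 'Success';
      if (in_array($code, array('E', 'f'), true)) return 'Failed';
      return 'Other';
  }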
(0004175)
frank   
2021-07-06 11:22   
Fix committed to bareos master branch with changesetid 15053.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1251 [bareos-core] webui tweak always 2020-06-11 09:13 2021-12-21 13:57
Reporter: juanpebalsa Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: low OS Version: 7  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Error when displaying pool detail
Description: When I try to see the details of a pool under Storage -> Pools -> 15-Days (one of my pools), I get an error message because the page cannot be found.

http://xxxxxxxxx.com/bareos-webui/pool/details/15-Days:
|A 404 error occurred
|Page not found.
|The requested URL could not be matched by routing.
|
|No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Captura de pantalla 2020-06-11 a las 9.13.02.png (20,870 bytes) 2020-06-11 09:13
https://bugs.bareos.org/file_download.php?file_id=442&type=bug
png
Notes
(0004207)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15094.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1232 [bareos-core] installer / packages minor always 2020-04-21 09:26 2021-12-21 13:57
Reporter: rogern Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 8  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact: yes
bareos-19.2: action: fixed
bareos-18.2: impact: yes
bareos-18.2: action: fixed
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact: yes
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos logrotate errors
Description: The problem with logrotate seems to be back (previously addressed and fixed in 0000417), due to the missing

su bareos bareos

in /etc/logrotate.d/bareos-dir

Logrotate gives "error: skipping "/var/log/bareos/bareos.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation."
Also the same for bareos-audit.log
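
For reference, a sketch of /etc/logrotate.d/bareos-dir with the missing directive added; the rotation options shown are illustrative, not the packaged defaults:

  /var/log/bareos/bareos.log /var/log/bareos/bareos-audit.log {
          su bareos bareos
          weekly
          rotate 5
          compress
          missingok
          notifempty
  }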
Tags:
Steps To Reproduce: Two fresh installs of 19.2.7 with same error from logrotate and lacking "su bareos bareos" in /etc/logrotate.d/bareos-dir
Additional Information:
Attached Files:
Notes
(0004256)
bruno-at-bareos   
2021-09-08 13:46   
A PR has now been proposed, with backports to the supported versions:
https://github.com/bareos/bareos/pull/918
(0004259)
bruno-at-bareos   
2021-09-09 15:07   
PR#918 has been merged; the backports to 20, 19 and 18 will be made on Monday the 13th and will be available in the next minor release.
(0004260)
bruno-at-bareos   
2021-09-09 15:22   
Fix committed to bareos master branch with changesetid 15139.
(0004261)
bruno-at-bareos   
2021-09-09 16:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15141.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1205 [bareos-core] webui minor always 2020-02-28 09:42 2021-12-21 13:57
Reporter: Ryushin Platform: Linux  
Assigned To: frank OS: Devuan (Debian)  
Priority: normal OS Version: 10  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: HeadLink.php error with PHP 7.3
Description: I received this error when trying to connect to the webui:
Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403

Seems to be related to this issue:
https://github.com/zendframework/zend-view/issues/172#issue-388080603
Though the line numbers for their fix are not the same.
Tags:
Steps To Reproduce:
Additional Information: I solved the issue by replacing the HeadLink.php file with an updated version from here:
https://raw.githubusercontent.com/zendframework/zend-view/f7242f7d5ccec2b8c319634b4098595382ef651c/src/Helper/HeadLink.php
Attached Files:
Notes
(0004144)
frank   
2021-06-08 12:22   
Fix committed to bareos bareos-19.2 branch with changesetid 14922.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1191 [bareos-core] webui crash always 2020-02-12 15:40 2021-12-21 13:57
Reporter: khvalera Platform: Linux  
Assigned To: frank OS: Arch Linux  
Priority: high OS Version: x64  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: The web interface runs under any login and password
Description: Logging in to the web interface succeeds with any arbitrary username and password.
How can this be fixed?
Tags: webui
Steps To Reproduce: /etc/bareos/bareos-dir.d/console/web-admin.conf

Console {
  Name = web-admin
  Password = "123"
  Profile = "webui-admin"
}

/etc/bareos/bareos-dir.d/profile/webui-admin.conf

Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

/etc/bareos-webui/directors.ini

[bareos_dir]
enabled = "yes"
diraddress = "localhost"
dirport = 9101
;UsePamAuthentication = yes
pam_console_name = "web-admin"
pam_console_password = "123"
Additional Information:
Attached Files:
Notes
(0003936)
khvalera   
2020-04-10 00:10   
UsePamAuthentication = yes
#pam_console_name = "web-admin"
#pam_console_password = "123"
(0004289)
frank   
2021-09-29 18:22   
Fix committed to bareos master branch with changesetid 15252.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1020 [bareos-core] webui major always 2018-10-10 09:37 2021-12-21 13:56
Reporter: linkstat Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 18.2.4-rc1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Can not restore a client with spaces in its name
Description: All my clients have spaces in their names, like "client-fd using Catalog-XXX". Correctly handled (i.e., enclosing the name in quotation marks, or escaping the space with \), this has not been a problem... until now. Webui can even perform backup jobs (previously defined in the configuration files) without problems with the spaces. But when it came time to restore something... it just does not seem to handle strings that contain spaces properly. Apparently it cuts the string at the first space found (as you can see in the attached image).
Tags:
Steps To Reproduce: Define a client whose name contains spaces inside, such as "hostname-fd Testing Client".
Try to restore a backup from Webui to that client (it does not matter that the backup was originally made in that client or that the newly defined client is a new destination for the restoration of a backup previously made in another client).
Webui will fail, saying "invalid client argument: hostname-fd". As you can see, Webui "cuts" the client's name at the first space, and since there is no client named hostname-fd, the job will fail; or worse, if there happens to be a client whose name matches the part before the first space, Webui will restore to the wrong client.
Additional Information: bconsole does not present any problem when the clients contain spaces in their names (this of course, when the spaces are correctly handled by the human operator who writes the commands, either by enclosing the name with quotation marks, or escaping spaces with a backslash).
System Description
Attached Files: Bareos - Can not restore when a client name has spaces in their name.jpg (139,884 bytes) 2018-10-10 09:37
https://bugs.bareos.org/file_download.php?file_id=311&type=bug
jpg
Notes
(0003546)
linkstat   
2019-07-31 18:03   
Hello!

Any news regarding this problem? (Or any ideas about how to patch it temporarily so that webui can be used for the case described?)
Sometimes it is tedious to use bconsole all the time instead of webui...

Regards!
(0004185)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15068.
(0004188)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15079.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
971 [bareos-core] webui major always 2018-06-25 11:54 2021-12-21 13:56
Reporter: Masanetz Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 17.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Error building tree for filenames with backslashes
Description: WebUI restore fails building the tree if a directory contains filenames with backslashes.

Some time ago the Adobe Reader plugin created a file named "C:\nppdf32Log\debuglog.txt" in the working dir.
Building the restore tree in WebUI fails with the popup "Oops, something went wrong, probably too many files.".

Filename handling should be adapted for backslashes (e.g. like https://github.com/bareos/bareos-webui/commit/ee232a6f04eaf2a7c1084fee981f011ede000e8a).
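
The referenced commit escapes backslashes before the filename is passed on; a purely illustrative sketch of the idea (not the actual WebUI code):

  // Illustrative only: double the backslashes so a name like
  // "C:\nppdf32Log\debuglog.txt" survives being embedded in a bvfs path.
  $escaped = str_replace('\\', '\\\\', $filename);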
Tags:
Steps To Reproduce: 1. Put an empty file with a filename with backslashes (e.g. C:\nppdf32Log\debuglog.txt) in your home directory
2. Backup
3. Try to restore any file from your home directory from this backup via WebUI
Additional Information: Attached diff of my "workaround"
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: RestoreController.php.diff (1,669 bytes) 2018-06-25 11:54
https://bugs.bareos.org/file_download.php?file_id=299&type=bug
Notes
(0004184)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15067.
(0004189)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15080.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
871 [bareos-core] webui block always 2017-11-04 16:10 2021-12-21 13:56
Reporter: tuxmaster Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.4.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: UI does not load completely
Description: After login, the website does not load completely.
Only the spinner is shown. (See picture.)

The PHP error log is flooded with:
PHP Notice: Undefined index: meta in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 120

The bareos director runs version 16.2.7.
Tags:
Steps To Reproduce:
Additional Information: PHP 7.1 via fpm
System Description
Attached Files: Bildschirmfoto von »2017-11-04 16-06-19«.png (50,705 bytes) 2017-11-04 16:10
https://bugs.bareos.org/file_download.php?file_id=270&type=bug
png
Notes
(0002812)
frank   
2017-11-09 15:35   
DIRD and WebUI currently need to have the same version.

WebUI 17.2.4 is not compatible with a 16.2.7 director yet; this may change in the future.
(0002813)
tuxmaster   
2017-11-09 17:36   
Thanks for the information.
But this should be noted in the release notes or, better, result in an error message about an unsupported version.
(0004169)
frank   
2021-06-30 11:49   
There is a note in the installation chapter, see https://docs.bareos.org/master/IntroductionAndTutorial/InstallingBareosWebui.html#system-requirements .
Nevertheless, I'm going to have a look at whether we can somehow improve the error handling regarding version compatibility.
(0004176)
frank   
2021-07-06 17:22   
Fix committed to bareos master branch with changesetid 15057.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
579 [bareos-core] webui block always 2015-12-06 12:41 2021-12-21 13:56
Reporter: tuxmaster Platform: x86_64  
Assigned To: frank OS: Fedora  
Priority: normal OS Version: 22  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to connect to the director from webui via ipv6
Description: The web ui and the director are running on the same system.
After entering the password, the error message "Error: , director seems to be down or blocking our request." is presented.
Tags:
Steps To Reproduce: Open the website enter the credentials and try to log in.
Additional Information: getsebool httpd_can_network_connect
httpd_can_network_connect --> on

Error from the apache log file:
[Sun Dec 06 12:37:32.658104 2015] [:error] [pid 2642] [client ABC] PHP Warning: stream_socket_client(): unable to connect to tcp://[XXX]:9101 (Unknown error) in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 521, referer: http://CDE/bareos-webui/

XXX=ip6addr of the director.

Connecting from the web server via telnet to the IPv6 address on port 9101 works.
bconsole also works.
Attached Files:
Notes
(0002323)
frank   
2016-07-15 16:07   
Note: When specifying a numerical IPv6 address (e.g. fe80::1), you must enclose the IP in square brackets—for example, tcp://[fe80::1]:80.

http://php.net/manual/en/function.stream-socket-client.php

You could try putting your IPv6 address in your directors.ini into square brackets until we provide a fix; that might already work.
(0002324)
tuxmaster   
2016-07-15 17:04   
I have tried setting it to:
diraddress = "[XXX]"
(XXX is the IPv6 address.)

But the error is the same.
(0002439)
tuxmaster   
2016-11-06 12:09   
Same on Fedora 24 using php 7.0
(0004159)
pete   
2021-06-23 12:41   
(Last edited: 2021-06-23 12:55)
This is still present in version 20 of the Bareos WebUI, on all RHEL variants I tested (CentOS 8, AlmaLinux 8).

It results from a totally unnecessary "bindto" configuration in line 473 of /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php:

      $opts = array(
          'socket' => array(
              'bindto' => '0:0',
          ),
      );

This unnecessarily limits PHP socket binding to IPv4 interfaces as documented in https://www.php.net/manual/en/context.socket.php. The simplest solution is to just comment out the "bindto" line:

      $opts = array(
          'socket' => array(
              // 'bindto' => '0:0',
          ),
      );

Restart php-fpm, and IPv6 works perfectly.

(0004167)
frank   
2021-06-29 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15043.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
565 [bareos-core] file daemon feature N/A 2015-11-16 08:33 2021-12-07 14:24
Reporter: joergs Platform: Linux  
Assigned To: OS: SLES  
Priority: none OS Version: 12  
Status: acknowledged Product