View Issue Details

ID:               0001114
Project:          bareos-core
Category:         file daemon
View Status:      public
Last Update:      2024-03-19 14:12
Reporter:         unki
Assigned To:      bruno-at-bareos
Priority:         normal
Severity:         minor
Reproducibility:  always
Status:           closed
Resolution:       unable to reproduce
Platform:         Linux
OS:               Debian
OS Version:       9
Product Version:  18.2.6
Summary: 0001114: Restoring backups with data-encryption + Percona xtrabackup Plugin fails with "Bareos - filed/crypto.cc:303 Decryption error"
Description: I've started encountering an issue when restoring MariaDB backups that were made with the Percona xtrabackup Plugin for the file-daemon together with data-encryption.

The file-daemon config contains the following lines to enable data-encryption:

  PKI Signatures = yes
  PKI Encryption = yes
  PKI Master Key = "/etc/bareos/bareos.example.com_crt.pem" # ONLY the Public Key
  PKI Keypair = "/etc/bareos/backup4.pem" # Public and Private Keys

The problem only occurs with the combination of a restore + the xtrabackup plugin + a job chain that involves Full + Incremental backups.
Restoring MariaDB from a Full backup that was made via the plugin works without any issue.
Restoring data of the same node that was _NOT_ backed up with the xtrabackup plugin but with ordinary file backups shows no issue either, regardless of whether a Full or an Incremental backup is restored.
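
For reference, the failing restores are plain point-in-time restores spanning the Full backup plus its Incrementals, started from bconsole with something along these lines (client name and restore path are taken from the logs below; the remaining options are a generic sketch, not my exact command):

  * restore client=db-fd where=/data/restore/prod-db-2019-09-18 select all done yes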

When the problem occurs, the restore terminates right after the Full backup has been restored successfully, as soon as it starts handling the involved Incremental backups. The log then contains:

 2019-09-18 14:09:42 backup4-sd JobId 38061: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
 2019-09-18 14:09:42 backup4-sd JobId 38061: Releasing device "FullPool1" (/srv/bareos/pool).
 2019-09-18 14:09:42 backup4-sd JobId 38061: Media Type change. New read device "IncPool8" (/srv/bareos/pool) chosen.
 2019-09-18 14:09:42 backup4-sd JobId 38061: Ready to read from volume "Incremental-Sql-1736" on device "IncPool8" (/srv/bareos/pool).
 2019-09-18 14:09:42 backup4-sd JobId 38061: Forward spacing Volume "Incremental-Sql-1736" to file:block 0:212.
 2019-09-18 14:10:04 backup4-sd JobId 38061: End of Volume at file 0 on device "IncPool8" (/srv/bareos/pool), Volume "Incremental-Sql-1736"
 2019-09-18 14:10:04 backup4-sd JobId 38061: Ready to read from volume "Incremental-Sql-4829" on device "IncPool8" (/srv/bareos/pool).
 2019-09-18 14:10:04 backup4-sd JobId 38061: Forward spacing Volume "Incremental-Sql-4829" to file:block 0:749109650.
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: filed/crypto.cc:303 Decryption error. buf_len=30903 decrypt_len=0 on file /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: Uncompression error on file /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025. ERR=Zlib data error
 2019-09-18 14:10:04 backup4-fd JobId 38061: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-18/_percona/xbstream.0000038025
 2019-09-18 14:10:06 backup4-sd JobId 38061: Error: lib/bsock_tcp.cc:417 Wrote 31 bytes to client:2a01:4f8:212:369f:4100:0:1c00:0:9103, but only 0 accepted.
 2019-09-18 14:10:06 backup4-sd JobId 38061: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
 2019-09-18 14:10:06 backup4-sd JobId 38061: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:2a01:4f8:212:369f:4100:0:1c00:0:9103
 2019-09-18 14:10:06 backup4-sd JobId 38061: Releasing device "IncPool8" (/srv/bareos/pool).

And one more thing to note: bareos-fd actually crashes when this happens:

Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: filed/crypto.cc:303 Decryption error. buf_len=28277 decrypt_len=0 on file /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835
Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: Uncompression error on file /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835. ERR=Zlib data error
Sep 17 12:37:02 backup4 bareos-fd[1046]: backup4-fd JobId 37880: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-17/_percona/xbstream.0000037835
Sep 17 12:37:02 backup4 bareos-fd[1046]: BAREOS interrupted by signal 11: Segmentation violation
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Main process exited, code=exited, status=1/FAILURE
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Unit entered failed state.
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Failed with result 'exit-code'.
Sep 17 12:37:02 backup4 systemd[1]: bareos-filedaemon.service: Service hold-off time over, scheduling restart.
Sep 17 12:37:02 backup4 systemd[1]: Stopped Bareos File Daemon service.


In my search I have only found this bug with related error messages:
https://bugs.bareos.org/view.php?id=192
But that one has obviously been solved.

It all comes down to the xtrabackup plugin - but I don't really understand why a plugin would have an effect on the data-encryption that is done by the file-daemon. Aren't those different layers in Bareos?
Steps To Reproduce: Only on my setup at the moment.
Tags: No tags attached.

Activities

unki   2019-09-20 13:37   reporter   ~0003577

One more thing: as seen in the log lines, data compression with GZIP is enabled on that FileSet:

FileSet {
  Name = "DbClientFileSetSql"
  Ignore File Set Changes = yes
  Include {
     Options {
       compression=GZIP
       signature = MD5
     }
     File = /etc/mysql
     Plugin = "python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/bareos/bareos.my.cnf:extradumpoptions=--galera-info --tmpdir=/data/percona_tmp --ftwrl-wait-timeout=300 --check-privileges --no-backup-locks --no-lock --parallel=2 --use-memory=2GB"
  }
}

Tonight I will try performing the backups without data compression enabled, just to see if it makes any difference.
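
Concretely, the only change for that test will be dropping the compression line from the Options block above, i.e. something like:

     Options {
       signature = MD5
     }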

unki   2019-09-21 12:11   reporter   ~0003578

Last edited: 2019-09-21 12:13

Disabling compression makes no difference here. I've also tried disabling the Percona xtrabackup Plugin during the restore (by commenting out 'Plugin Directory = ' and adding 'Plugin Names = ""').

The decryption error remains the same. Somehow the plugin seems to corrupt the backups in a way that makes them fail later when they are restored.
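
For completeness, the relevant part of the bareos-fd configuration during that restore attempt looked roughly like this (only the two plugin directives matter; the resource name and other directives are just placeholders):

  FileDaemon {
    Name = backup4-fd
    # Plugin Directory = /usr/lib/bareos/plugins   # commented out for this test
    Plugin Names = ""                              # load no plugins
  }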

 2019-09-21 11:48:07 backup4-sd JobId 38585: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-5549"
 2019-09-21 11:48:07 backup4-sd JobId 38585: Ready to read from volume "Full-Sql-5550" on device "FullPool1" (/srv/bareos/pool).
 2019-09-21 11:48:07 backup4-sd JobId 38585: Forward spacing Volume "Full-Sql-5550" to file:block 0:215.
 2019-09-21 11:56:21 backup4-sd JobId 38585: End of Volume at file 9 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-5550"
 2019-09-21 11:56:21 backup4-sd JobId 38585: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
 2019-09-21 11:56:21 backup4-sd JobId 38585: Releasing device "FullPool1" (/srv/bareos/pool).
 2019-09-21 11:56:21 backup4-sd JobId 38585: Media Type change. New read device "IncPool1" (/srv/bareos/pool) chosen.
 2019-09-21 11:56:21 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2164" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:21 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2164" to file:block 0:212.
 2019-09-21 11:56:32 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2164"
 2019-09-21 11:56:32 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2157" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:32 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2157" to file:block 0:621305456.
 2019-09-21 11:56:37 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2157"
 2019-09-21 11:56:37 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-3950" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:37 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-3950" to file:block 0:219.
 2019-09-21 11:56:55 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-3950"
 2019-09-21 11:56:55 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2167" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:55 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2167" to file:block 0:212.
 2019-09-21 11:56:56 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2167"
 2019-09-21 11:56:56 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2169" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:56:56 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2169" to file:block 0:212.
 2019-09-21 11:57:09 backup4-sd JobId 38585: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2169"
 2019-09-21 11:57:09 backup4-sd JobId 38585: Ready to read from volume "Incremental-Sql-2171" on device "IncPool1" (/srv/bareos/pool).
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:303 Decryption error. buf_len=65512 decrypt_len=0 on file /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038576
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038576
 2019-09-21 11:57:09 backup4-fd JobId 38585: Fatal error: TLS read/write failure.: ERR=error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
 2019-09-21 11:57:09 backup4-fd JobId 38585: Error: filed/crypto.cc:172 Missing cryptographic signature for /data/restore/prod-db-2019-09-21/_percona/xbstream.0000038568
 2019-09-21 11:57:09 backup4-sd JobId 38585: Forward spacing Volume "Incremental-Sql-2171" to file:block 0:212.
 2019-09-21 11:57:12 backup4-sd JobId 38585: Error: lib/bsock_tcp.cc:417 Wrote 32 bytes to client:2a01:4f8:212:369f:4100:0:1c00:0:9103, but only 0 accepted.
 2019-09-21 11:57:12 backup4-sd JobId 38585: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
 2019-09-21 11:57:12 backup4-sd JobId 38585: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:2a01:4f8:212:369f:4100:0:1c00:0:9103
 2019-09-21 11:57:12 backup4-sd JobId 38585: Releasing device "IncPool1" (/srv/bareos/pool).

unki   2019-10-10 07:03   reporter   ~0003592

I have now had the chance to perform the backup & restore without PKI encryption of the backup data.
The issue with the Percona xtrabackup Plugin for the file-daemon also occurs without encryption.

10-Oct 06:30 bareos-dir JobId 41723: Start Restore Job RestoreFiles.2019-10-10_06.29.59_27
10-Oct 06:30 bareos-dir JobId 41723: Connected Storage daemon at backup4.example.com:9103, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 bareos-dir JobId 41723: Using Device "FullPool1" to read.
10-Oct 06:30 bareos-dir JobId 41723: Connected Client: db-fd at db.example.com:9102, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 bareos-dir JobId 41723: Handshake: Immediate TLS
10-Oct 06:30 bareos-dir JobId 41723: Encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 db-fd JobId 41723: Connected Storage daemon at backup4.example.com:9103, encryption: DHE-RSA-AES256-GCM-SHA384
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8731775617207 from restore object of job 41695
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8731965444667 from restore object of job 41707
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8732041077071 from restore object of job 41712
10-Oct 06:30 db-fd JobId 41723: python-fd: Got to_lsn 8732111279885 from restore object of job 41717
10-Oct 06:30 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2711" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:30 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2711" to file:block 0:206.
10-Oct 06:48 backup4-sd JobId 41723: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2711"
10-Oct 06:48 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2552" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:48 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2552" to file:block 9:407702037.
10-Oct 06:53 backup4-sd JobId 41723: End of Volume at file 12 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2552"
10-Oct 06:53 backup4-sd JobId 41723: Ready to read from volume "Full-Sql-2795" on device "FullPool1" (/srv/bareos/pool).
10-Oct 06:53 backup4-sd JobId 41723: Forward spacing Volume "Full-Sql-2795" to file:block 0:206.
10-Oct 06:54 backup4-sd JobId 41723: End of Volume at file 0 on device "FullPool1" (/srv/bareos/pool), Volume "Full-Sql-2795"
10-Oct 06:54 backup4-sd JobId 41723: stored/acquire.cc:151 Changing read device. Want Media Type="FileIncPool" have="FileFullPool"
  device="FullPool1" (/srv/bareos/pool)
10-Oct 06:54 backup4-sd JobId 41723: Releasing device "FullPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Media Type change. New read device "IncPool1" (/srv/bareos/pool) chosen.
10-Oct 06:54 backup4-sd JobId 41723: Ready to read from volume "Incremental-Sql-2873" on device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Forward spacing Volume "Incremental-Sql-2873" to file:block 0:883472630.
10-Oct 06:54 backup4-sd JobId 41723: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2873"
10-Oct 06:54 backup4-sd JobId 41723: Ready to read from volume "Incremental-Sql-2875" on device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 backup4-sd JobId 41723: Forward spacing Volume "Incremental-Sql-2875" to file:block 0:590294726.
10-Oct 06:54 db-fd JobId 41723: Fatal error: filed/fd_plugins.cc:1178 Second call to startRestoreFile. plugin=python-fd.so cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/bareos/bareos.my.cnf:log=/var/log/bareos/xtrabackup.log:extradumpoptions=--galera-info --ftwrl-wait-timeout=300 --check-privileges --tmpdir=/data/percona_tmp --no-backup-locks --no-lock --parallel=2 --use-memory=2GB
10-Oct 06:54 backup4-sd JobId 41723: Error: lib/bsock_tcp.cc:417 Wrote 33 bytes to client:::4100:0:1c00:0:9103, but only 0 accepted.
10-Oct 06:54 backup4-sd JobId 41723: Fatal error: stored/read.cc:147 Error sending to File daemon. ERR=Connection reset by peer
10-Oct 06:54 backup4-sd JobId 41723: Error: lib/bsock_tcp.cc:457 Socket has errors=1 on call to client:::4100:0:1c00:0:9103
10-Oct 06:54 backup4-sd JobId 41723: Releasing device "IncPool1" (/srv/bareos/pool).
10-Oct 06:54 bareos-dir JobId 41723: Error: Bareos bareos-dir 18.2.6 (13Feb19):
  Build OS: Linux-4.19.0-6-amd64 debian Debian GNU/Linux 9.11 (stretch)
  JobId: 41723
  Job: RestoreFiles.2019-10-10_06.29.59_27
  Restore Client: db-fd
  Start time: 10-Oct-2019 06:30:01
  End time: 10-Oct-2019 06:54:19
  Elapsed time: 24 mins 18 secs
  Files Expected: 20
  Files Restored: 19
  Bytes Restored: 286,059,460,373
  Rate: 196199.9 KB/s
  FD Errors: 1
  FD termination status: Fatal Error
  SD termination status: Fatal Error
  Bareos binary info:
  Termination: *** Restore Error ***

10-Oct 06:54 bareos-dir JobId 41723: Begin pruning Files.
10-Oct 06:54 bareos-dir JobId 41723: No Files found to prune.
10-Oct 06:54 bareos-dir JobId 41723: End auto prune.

unki   2019-10-10 07:04   reporter   ~0003593

The xtrabackup version used is v2.4.15.

unki   2019-10-10 13:23   reporter   ~0003595

Just to be sure, I have now tried downgrading xtrabackup to v2.4.12 - the same issue occurs:

10-Oct 13:06 backup4-sd JobId 41769: Ready to read from volume "Incremental-Sql-2885" on device "IncPool1" (/srv/bareos/pool).
10-Oct 13:06 backup4-sd JobId 41769: Forward spacing Volume "Incremental-Sql-2885" to file:block 0:212.
10-Oct 13:06 db7-fd JobId 41769: Error: Compressed header size error. comp_len=31653, message_length=36220
10-Oct 13:06 db7-fd JobId 41769: Error: python-fd: Restore command returned non-zero value: 1, message:
10-Oct 13:06 backup4-sd JobId 41769: End of Volume at file 0 on device "IncPool1" (/srv/bareos/pool), Volume "Incremental-Sql-2885"
10-Oct 13:06 backup4-sd JobId 41769: Ready to read from volume "Incremental-Sql-2888" on device "IncPool1" (/srv/bareos/pool).

unki   2019-11-06 17:36   reporter   ~0003615

Last edited: 2019-11-07 07:45

A strange thing: the job has stored (with the help of the percona plugin) the following files in the backup:

$ ls
xbstream.0000046633
xbstream.0000046648
xbstream.0000046665
xbstream.0000046675
xbstream.0000046683
xbstream.0000046691
xbstream.0000046698
xbstream.0000046705
xbstream.0000046717
xbstream.0000046722
xbstream.0000046729


If I restore those xbstream.* files one by one, with individual jobs into individual directories, every restore succeeds.
Restoring them all in one job causes the previously posted errors.
I'm wondering whether this has a similar cause to what was fixed back in https://bugs.bareos.org/view.php?id=192.
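
Restoring a single file boils down to a bconsole call roughly like the following (one job per xbstream file, each into its own directory; the file= selection and exact syntax are from memory and may need adjusting):

  * restore client=db-fd where=/data/restore/single-0000046633 file=/_percona/xbstream.0000046633 yes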

bruno-at-bareos   2024-03-19 14:12   manager   ~0005840

We cannot reproduce any errors with the current version of Bareos and the plugin.

Issue History

Date Modified Username Field Change
2019-09-20 13:24 unki New Issue
2019-09-20 13:37 unki Note Added: 0003577
2019-09-21 12:11 unki Note Added: 0003578
2019-09-21 12:12 unki Note Edited: 0003578
2019-09-21 12:13 unki Note Edited: 0003578
2019-10-10 07:03 unki Note Added: 0003592
2019-10-10 07:04 unki Note Added: 0003593
2019-10-10 13:23 unki Note Added: 0003595
2019-10-16 10:11 arogge Status new => acknowledged
2019-11-06 17:36 unki Note Added: 0003615
2019-11-07 07:45 unki Note Edited: 0003615
2024-03-19 14:12 bruno-at-bareos Assigned To => bruno-at-bareos
2024-03-19 14:12 bruno-at-bareos Status acknowledged => closed
2024-03-19 14:12 bruno-at-bareos Resolution open => unable to reproduce
2024-03-19 14:12 bruno-at-bareos Note Added: 0005840