Bareos Bug Tracker

ID: 0000829
Project: bareos-core
Category: [All Projects] storage daemon
View Status: public
Date Submitted: 2017-06-30 11:07
Last Update: 2017-07-26 14:08
Reporter: PaoloM
Assigned To:
Priority: normal
Severity: trivial
Reproducibility: unable to reproduce
Status: new
Resolution: open
Platform: Linux
OS: Debian
OS Version: 8
Product Version: 16.2.4
Target Version:
Fixed in Version:
Summary: 0000829: Consolidate job fails because of a corrupted volume
Description: For the past two days, my "Consolidate" job has been failing. Worse still, the failing volume is one of the Full backup volumes of my Always Incremental job. It worked for a couple of weeks, and then the problem appeared.
The first error was:

29-Jun 12:20 bareos-sd JobId 3614: Error: block.c:333 Volume data error at 1:2666845125!
Block checksum mismatch in block=657335 len=64512: calc=4814efdd blk=9589a529
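A "Block checksum mismatch" means the checksum stored in the block header no longer matches one recomputed over the block's contents, i.e. the volume file was altered after being written (bit rot, truncation, a filesystem or NAS issue). As a rough illustration only (Bareos has its own CRC32 routine and on-disk framing; this sketch does not reproduce the exact Bareos block layout), the check works like this:

```python
import zlib

def make_block(payload: bytes) -> bytes:
    """Prepend a 4-byte big-endian CRC32 of the payload.
    Illustrative framing only, not the Bareos on-disk block format."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return crc.to_bytes(4, "big") + payload

def verify_block(block: bytes) -> bool:
    """Recompute the checksum and compare it with the stored one,
    as bareos-sd does for every block it reads back."""
    stored = int.from_bytes(block[:4], "big")
    calc = zlib.crc32(block[4:]) & 0xFFFFFFFF
    return stored == calc

good = make_block(b"backup data")
assert verify_block(good)

# Simulate on-disk corruption: flip one payload byte after writing.
corrupt = good[:6] + bytes([good[6] ^ 0xFF]) + good[7:]
assert not verify_block(corrupt)  # stored vs. computed checksums now differ
```

CRC32 detects any single-byte change, which is why the daemon can report both values ("calc=..." vs. "blk=...") but cannot repair the block.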

With the bls utility I reproduced exactly the same error:

29-Jun 18:24 bls JobId 0: Error: block.c:333 Volume data error at 1:2666845125!
Block checksum mismatch in block=657335 len=64512: calc=4814efdd blk=9589a529
bls: block.c:96-0 Dump block with checksum error 101d0e0: size=64512 BlkNum=657335
               Hdrcksum=9589a529 cksum=4814efdd

I tried the "Block Checksum = no" directive in the device configuration file and retried the consolidation; now I get this error:

29-Jun 18:59 bareos-sd JobId 3618: Error: block.c:286 Volume data error at 1:2666909637! Wanted ID: "BB02", got "". Buffer discarded.
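This second failure mode is consistent with the first: past the corrupted block, bareos-sd is no longer aligned on a block boundary, so the 4-byte block ID it expects ("BB02") is not where it looks. A rough Python sketch of that check follows; note that the exact BB02 header layout used here (big-endian, 24 bytes, this field order) is my assumption from reading the Bareos source, not a verified specification:

```python
import struct

# Assumed BB02 block header (big-endian, 24 bytes): checksum, block length,
# block number, 4-byte ID, volume session id, volume session time.
BB02_HEADER = ">III4sII"

def parse_block_header(buf: bytes) -> dict:
    cksum, blen, bnum, ident, vsid, vstime = struct.unpack_from(BB02_HEADER, buf)
    if ident != b"BB02":
        # The condition behind: Wanted ID: "BB02", got "...". Buffer discarded.
        raise ValueError('Wanted ID: "BB02", got %r' % ident)
    return {"checksum": cksum, "block_len": blen, "block_num": bnum,
            "vol_session_id": vsid, "vol_session_time": vstime}

# A well-formed synthetic header parses cleanly...
hdr = struct.pack(BB02_HEADER, 0x4814EFDD, 64512, 657335, b"BB02", 982, 1495786927)
assert parse_block_header(hdr)["block_num"] == 657335

# ...while reading from a misaligned offset, as happens after corruption,
# is rejected because the magic bytes are not where the parser expects them.
try:
    parse_block_header(hdr[2:] + b"\x00\x00")
    raise AssertionError("should have failed")
except ValueError:
    pass
```

So disabling the block checksum only suppresses the first error; it cannot restore the block framing that the corruption destroyed.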

I think this is "expected", because on the corrupted volume, after the corrupted block, bls shows:

bls: block.c:109-0 Rec: VId=982 VT=1495786927 FI=366903 Strm=contGZIP len=16541 p=1024cb8
bls: block.c:109-0 Rec: VId=982 VT=1495786927 FI=0 Strm=0 len=0 p=1028d61
bls: block.c:109-0 Rec: VId=982 VT=1495786927 FI=0 Strm=0 len=0 p=1028d6d
bls: block.c:109-0 Rec: VId=982 VT=1495786927 FI=0 Strm=0 len=0 p=1028d79
....

A good volume, by contrast, ends with:

bls: block.c:109-0 Rec: VId=143 VT=1498492591 FI=340630 Strm=contGZIP len=18710 p=998ca8
bls: block.c:109-0 Rec: VId=143 VT=1498492591 FI=340630 Strm=GZIP len=34704 p=99d5ca
bls: block.c:109-0 Rec: VId=143 VT=1498492591 FI=340630 Strm=GZIP len=37975 p=9a5d66
30-Jun 10:18 bls JobId 0: End of file 2 on device "FileStorage" (/storage/qnap/bareos), Volume "AI-Consolidated-0343"
30-Jun 10:18 bls JobId 0: Got EOM at file 2 on device "FileStorage" (/storage/qnap/bareos), Volume "AI-Consolidated-0343"

So I need a way to solve the problem, if possible without running a new Full backup, because that would mean losing 3 months of backups!

Possible solutions I can imagine:

 - a command to write an EOF mark on a file volume, like btape's "weof", which unfortunately works only for tapes
 - a directive that permits ignoring the EOF on the tape
 - a way to set the last block (EOF) in the catalog or its database (PostgreSQL)
 - a working bcopy version, since the Bareos documentation says: "One of the objectives of this program is to be able to recover as much data as possible from a damaged tape. However, the current version does not yet have this feature"
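As a side note on the "Block Checksum = no" experiment above: that directive belongs in the storage daemon's Device resource. A minimal sketch, assuming a plain file-storage setup (the Name and Archive Device values come from the log lines above; the remaining directives are illustrative defaults for a file device):

```
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /storage/qnap/bareos
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  # Skip per-block checksum verification on read/write.
  # This only suppresses the mismatch error; it does not repair the volume.
  Block Checksum = no
}
```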

Thanks in advance.
Tags: No tags attached.
- Notes
(0002691)
PaoloM (reporter)
2017-07-26 14:08

After almost a month, the problem is still there. All Consolidate jobs fail because one of the Full backup volumes is corrupted. Is there a way to solve this? Is it possible to bypass that volume and consolidate with the other ones?

Thanks in advance.

- Issue History
Date Modified Username Field Change
2017-06-30 11:07 PaoloM New Issue
2017-07-26 14:08 PaoloM Note Added: 0002691

