View Issue Details

ID: 0001217
Project: bareos-core
Category: storage daemon
View Status: public
Last Update: 2024-05-15 15:46
Reporter: igormedo
Assigned To: arogge
Priority: high
Severity: major
Reproducibility: always
Status: closed
Resolution: suspended
Platform: Linux
OS: CentOS
OS Version: 7
Product Version: 18.2.7
Summary: 0001217: SD error during S3 restore
Description: I have a working Bareos installation with an AWS S3 storage backend.
The lifecycle policy moves Full backup chunk files to Glacier Deep Archive.
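
The rule is roughly the following (the prefix and the day count shown here are placeholders, not my exact values):

aws s3api put-bucket-lifecycle-configuration --bucket mybucketname --lifecycle-configuration '{"Rules":[{"ID":"ArchiveFullVolumes","Status":"Enabled","Filter":{"Prefix":"CloudFull-"},"Transitions":[{"Days":30,"StorageClass":"DEEP_ARCHIVE"}]}]}'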

I have a restore request from my client.

I issued restore requests from Glacier for all chunks of the volume with this command:

aws s3 ls s3://mybucketname/CloudFull-0012/ | awk '{print $4}' | xargs -I{} -L 1 aws s3api restore-object --restore-request '{"Days":3,"GlacierJobParameters":{"Tier":"Standard"}}' --bucket mybucketname --key CloudFull-0012/{}

A few hours later, after the restore requests had completed, I could copy any chunk file, so the data is reachable again: aws s3 cp s3://mybucketname/CloudFull-0012/0957 ./
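
The restore state of a single chunk can also be checked directly; once the restore is complete, head-object reports Restore: ongoing-request="false":

aws s3api head-object --bucket mybucketname --key CloudFull-0012/0957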

However, Bareos cannot restore from it and fails with the following error:

bareos-sd JobId 541: Please mount read Volume "CloudFull-0012" for:
Job: RestoreFiles.2020-03-24_18.32.23_12
Storage: "S3_Full" (AWS S3 Storage)
Pool: CloudIncremental
Media type: S3_Object_Full

bareos-sd JobId 541: Warning: stored/acquire.cc:331 Read acquire: Requested Volume "CloudFull-0012" on "S3_Full" (AWS S3 Storage) is not a Bareos labeled Volume, because: ERR=stored/block.cc:1036 Read zero bytes at 0:0 on device "S3_Full" (AWS S3 Storage).

The weird thing is that the message refers to Pool: CloudIncremental, although the CloudFull pool was used during the backup.
Tags: s3; droplet; aws; storage

Activities

igormedo

2020-03-24 20:40

reporter   ~0003914

For testing, I did a restore from the Incremental pool, which is on the STANDARD S3 storage tier.

It succeeded:

24-Mar 20:28 bareos-dir JobId 552: Start Restore Job RestoreFiles.2020-03-24_20.00.35_43
24-Mar 20:28 bareos-dir JobId 552: Connected Storage daemon at example.tld:9103, encryption: PSK-AES256-CBC-SHA
24-Mar 20:28 bareos-dir JobId 552: Using Device "S3_Incremental" to read.
24-Mar 20:28 bareos-dir JobId 552: Connected Client: client.tld-fd at client.tld:9102, encryption: ECDHE-RSA-AES256-GCM-SHA384
24-Mar 20:28 bareos-dir JobId 552: Handshake: Immediate TLS
24-Mar 20:28 bareos-sd JobId 552: Connected File Daemon at client.tld:9102, encryption: ECDHE-RSA-AES256-GCM-SHA384
24-Mar 20:28 bareos-sd JobId 552: Ready to read from volume "CloudIncremental-0023" on device "S3_Incremental" (AWS S3 Storage).
24-Mar 20:28 bareos-sd JobId 552: Forward spacing Volume "CloudIncremental-0023" to file:block 0:259569452.
24-Mar 20:28 bareos-sd JobId 552: Releasing device "S3_Incremental" (AWS S3 Storage).
24-Mar 20:28 bareos-dir JobId 552: Bareos bareos-dir 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 552
  Job: RestoreFiles.2020-03-24_20.00.35_43
  Restore Client: client.tld-fd
  Start time: 24-Mar-2020 20:28:23
  End time: 24-Mar-2020 20:28:49
  Elapsed time: 26 secs
  Files Expected: 1
  Files Restored: 1
  Bytes Restored: 171,943
  Rate: 6.6 KB/s
  FD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: Restore OK

arogge

2020-03-26 10:47

manager   ~0003915

Can you rerun the restore with the SD put into trace mode with at least debuglevel 100 and take a look at the trace? Maybe it cannot find the volume chunks.
Thank you!
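
For reference, trace mode can be turned on from bconsole roughly like this (the storage resource name is taken from your log and may differ in your configuration):

setdebug level=100 trace=1 storage=S3_Full

The resulting trace file should then show up in the storage daemon's working directory.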

igormedo

2020-03-26 11:33

reporter   ~0003918

I had to overwrite the bucket with the restored chunks:
aws s3 cp s3://mybucketname/Volumename/ s3://mybucketname/Volumename/ --storage-class STANDARD_IA --recursive --force-glacier-transfer

Apparently it is not enough that the chunks can be copied with the AWS CLI; they have to be rewritten into a regular storage class first.

This adds extra transfer costs, so I decided to keep the latest Full and the subsequent Differential backups in the STANDARD_IA and ONEZONE_IA tiers and only move them to Glacier with lifecycle rules after a new Full backup is done.

arogge

2020-03-26 14:23

manager   ~0003923

Does that mean you could fix the issue yourself?

TheCritter

2021-01-15 09:50

reporter   ~0004079

Last edited: 2021-01-15 10:15

Same error here. I use Bareos 20.0 on Ubuntu 20.04. Backups work fine, but restores do not.
bareos-sd JobId 9387: Warning: stored/acquire.cc:348 Read acquire: Requested Volume "FullS3-6401" on "RadosStorage" (Object S3 Storage) is not a Bareos labeled Volume, because: ERR=stored/block.cc:1074 Read zero bytes at 0:0 on device "RadosStorage" (Object S3 Storage).

I don't use S3 from AWS; I use S3 from the Ceph RADOS Gateway.
If I don't use SSL, the restore works fine:
2021-01-15 07:43:13 bareos-sd JobId 9383: Ready to read from volume "FullS3-6401" on device "RadosStorage" (Object S3 Storage).
2021-01-15 07:43:13 bareos-sd JobId 9383: Forward spacing Volume "FullS3-6401" to file:block 0:207.

I will open a new ticket for this; I think it is a slightly different problem.
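
For comparison, the SSL switch sits in the droplet profile of the storage device; mine looks roughly like this (host and keys redacted, and the exact set of options may differ between versions):

host = rgw.example.tld:443
use_https = true
access_key = XXXXXXXXXX
secret_key = XXXXXXXXXX
backend = s3
aws_auth_sign_version = 4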

arogge

2024-05-15 15:46

manager   ~0005941

This was reported for an end-of-life (EOL) version of Bareos.

If you can reproduce this bug against a currently maintained version of
Bareos please feel free to open a new issue against that version at
https://github.com/bareos/bareos/issues

Thank you for reporting this bug and we are sorry it could not be fixed.

Issue History

Date Modified Username Field Change
2020-03-24 18:50 igormedo New Issue
2020-03-24 18:50 igormedo Tag Attached: s3;droplet;aws;storage
2020-03-24 20:40 igormedo Note Added: 0003914
2020-03-26 10:47 arogge Assigned To => arogge
2020-03-26 10:47 arogge Status new => feedback
2020-03-26 10:47 arogge Note Added: 0003915
2020-03-26 11:33 igormedo Note Added: 0003918
2020-03-26 11:33 igormedo Status feedback => assigned
2020-03-26 14:23 arogge Status assigned => feedback
2020-03-26 14:23 arogge Note Added: 0003923
2021-01-15 09:50 TheCritter Note Added: 0004079
2021-01-15 10:15 TheCritter Note Edited: 0004079
2024-05-15 15:46 arogge Status feedback => closed
2024-05-15 15:46 arogge Resolution open => suspended
2024-05-15 15:46 arogge Note Added: 0005941