View Issue Details

ID: 0001217
Project: bareos-core
Category: [All Projects] storage daemon
View Status: public
Last Update: 2020-03-26 14:23
Reporter: igormedo
Assigned To: arogge
Priority: high
Severity: major
Reproducibility: always
Status: feedback
Resolution: open
Platform: Linux
OS: CentOS
OS Version: 7
Product Version: 18.2.7
Fixed in Version:
Summary: 0001217: SD error during S3 restore
Description: I have a working Bareos installation with an AWS S3 storage backend.
A lifecycle policy moves the Full backup chunk files to Glacier Deep Archive.

A client of mine requested a restore.

I issued a restore request from Glacier with this command:

aws s3 ls s3://mybucketname/CloudFull-0012/ | awk '{print $4}' | xargs -I{} -L 1 aws s3api restore-object --restore-request '{"Days":3,"GlacierJobParameters":{"Tier":"Standard"}}' --bucket mybucketname --key CloudFull-0012/{}

A few hours later, after the request had completed, I could copy any chunk file, so the objects are now reachable: aws s3 cp s3://mybucketname/CloudFull-0012/0957 ./
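As an aside, a quicker way to confirm restore progress than copying each object is `aws s3api head-object`, which returns the `Restore` and `StorageClass` response fields. This is only a sketch: the bucket and key are the ones from this report, and the `restore_complete` helper for interpreting the header value is hypothetical.

```shell
# Query one chunk (requires AWS credentials):
#   aws s3api head-object --bucket mybucketname --key CloudFull-0012/0957
# The response carries a Restore header such as:
#   "Restore": "ongoing-request=\"false\", expiry-date=\"...\""
# Hypothetical helper to interpret that header value:
restore_complete() {
  case "$1" in
    *ongoing-request=\"false\"*) return 0 ;;  # restore finished, object readable
    *) return 1 ;;                            # still restoring, or never requested
  esac
}

restore_complete 'ongoing-request="true"' && echo ready || echo 'not ready yet'
```

Only when the header reports `ongoing-request="false"` has the object actually been rehydrated from Glacier.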

However, bareos cannot restore from it with the following error:

bareos-sd JobId 541: Please mount read Volume "CloudFull-0012" for:
Job: RestoreFiles.2020-03-24_18.32.23_12
Storage: "S3_Full" (AWS S3 Storage)
Pool: CloudIncremental
Media type: S3_Object_Full

bareos-sd JobId 541: Warning: stored/acquire.cc:331 Read acquire: Requested Volume "CloudFull-0012" on "S3_Full" (AWS S3 Storage) is not a Bareos labeled Volume, because: ERR=stored/block.cc:1036 Read zero bytes at 0:0 on device "S3_Full" (AWS S3 Storage).

The weird thing is that it refers to Pool: CloudIncremental, although the CloudFull pool was used during the backup.
Tags: s3; droplet; aws; storage

Activities

igormedo   2020-03-24 20:40   reporter   ~0003914

For testing, I did a restore from the Incremental pool with the STANDARD S3 storage tier.

It succeeded:

24-Mar 20:28 bareos-dir JobId 552: Start Restore Job RestoreFiles.2020-03-24_20.00.35_43
24-Mar 20:28 bareos-dir JobId 552: Connected Storage daemon at example.tld:9103, encryption: PSK-AES256-CBC-SHA
24-Mar 20:28 bareos-dir JobId 552: Using Device "S3_Incremental" to read.
24-Mar 20:28 bareos-dir JobId 552: Connected Client: client.tld-fd at client.tld:9102, encryption: ECDHE-RSA-AES256-GCM-SHA384
24-Mar 20:28 bareos-dir JobId 552: Handshake: Immediate TLS
24-Mar 20:28 bareos-sd JobId 552: Connected File Daemon at client.tld:9102, encryption: ECDHE-RSA-AES256-GCM-SHA384
24-Mar 20:28 bareos-sd JobId 552: Ready to read from volume "CloudIncremental-0023" on device "S3_Incremental" (AWS S3 Storage).
24-Mar 20:28 bareos-sd JobId 552: Forward spacing Volume "CloudIncremental-0023" to file:block 0:259569452.
24-Mar 20:28 bareos-sd JobId 552: Releasing device "S3_Incremental" (AWS S3 Storage).
24-Mar 20:28 bareos-dir JobId 552: Bareos bareos-dir 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 552
  Job: RestoreFiles.2020-03-24_20.00.35_43
  Restore Client: client.tld-fd
  Start time: 24-Mar-2020 20:28:23
  End time: 24-Mar-2020 20:28:49
  Elapsed time: 26 secs
  Files Expected: 1
  Files Restored: 1
  Bytes Restored: 171,943
  Rate: 6.6 KB/s
  FD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: Restore OK
arogge   2020-03-26 10:47   developer   ~0003915

Can you rerun the restore with the SD put into trace mode with at least debuglevel 100 and take a look at the trace? Maybe it cannot find the volume chunks.
Thank you!
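For reference, putting the SD into trace mode from bconsole looks roughly like this (a sketch; "S3_Full" is the storage resource name taken from the log above, and the exact trace file name depends on the daemon name and its working directory):

```
* setdebug level=100 trace=1 storage=S3_Full
* (rerun the restore job, then turn tracing off again)
* setdebug level=0 trace=0 storage=S3_Full
```

The trace output is then written to a file such as bareos-sd.trace in the storage daemon's working directory.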
igormedo   2020-03-26 11:33   reporter   ~0003918

I had to overwrite the bucket objects with the restored chunks:
aws s3 cp s3://mybucketname/Volumename/ s3://mybucketname/Volumename/ --storage-class STANDARD_IA --recursive --force-glacier-transfer

Apparently it is not enough that the chunks can be copied with the AWS CLI.

This adds extra transfer costs, so I decided to keep the latest Full and subsequent Differential backups in the STANDARD_IA and ONEZONE_IA tiers and move them to Glacier via lifecycle rules only after a new Full backup is done.
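A lifecycle rule along those lines could be installed with `aws s3api put-bucket-lifecycle-configuration`. This is a sketch only: the rule ID, the `CloudFull-` prefix, and the 35-day delay are made-up values chosen so the transition happens comfortably after the next Full run; only the bucket name comes from this report.

```shell
# Hypothetical lifecycle rule: transition Full-backup chunks to
# Deep Archive 35 days after creation.
aws s3api put-bucket-lifecycle-configuration \
  --bucket mybucketname \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "full-to-deep-archive",
      "Status": "Enabled",
      "Filter": {"Prefix": "CloudFull-"},
      "Transitions": [{"Days": 35, "StorageClass": "DEEP_ARCHIVE"}]
    }]
  }'
```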
arogge   2020-03-26 14:23   developer   ~0003923

Does that mean you could fix the issue yourself?

Issue History

Date Modified Username Field Change
2020-03-24 18:50 igormedo New Issue
2020-03-24 18:50 igormedo Tag Attached: s3;droplet;aws;storage
2020-03-24 20:40 igormedo Note Added: 0003914
2020-03-26 10:47 arogge Assigned To => arogge
2020-03-26 10:47 arogge Status new => feedback
2020-03-26 10:47 arogge Note Added: 0003915
2020-03-26 11:33 igormedo Note Added: 0003918
2020-03-26 11:33 igormedo Status feedback => assigned
2020-03-26 14:23 arogge Status assigned => feedback
2020-03-26 14:23 arogge Note Added: 0003923