View Issue Details

ID: 0000874
Project: bareos-core
Category: director
View Status: public
Last Update: 2022-07-04 17:12
Reporter: chaos_prevails
Assigned To: bruno-at-bareos
Priority: normal
Severity: minor
Reproducibility: always
Status: resolved
Resolution: won't fix
Platform: Linux
OS: Ubuntu
OS Version: 16.04 amd64
Product Version: 16.2.4
Summary: 0000874: VirtualFull fails when read and write devices are on different storage daemons (different machines)
Description: The VirtualFull backup fails with

...
2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269, Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
 2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds 254,251,252,255,256,257
 2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
 2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
 2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4 (01Jul16):
...

When the storage daemon for the VirtualFull backup is changed to the same machine as the always-incremental and consolidate backups, the VirtualFull backup works:
...
 2017-11-07 10:43:20 pavlov-dir JobId 320: Start Virtual Backup JobId 320, Job=pavlov_sys_ai_vf.2017-11-07_10.43.18_05
 2017-11-07 10:43:20 pavlov-dir JobId 320: Consolidating JobIds 317,314,315
 2017-11-07 10:43:20 pavlov-dir JobId 320: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file-consolidate" to read.
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file" to write.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to read from volume "ai_consolidate-0023" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:21 pavlov-sd JobId 320: Volume "Full-0032" previously written, moving to end of data.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to append to end of Volume "Full-0032" size=97835302
 2017-11-07 10:43:21 pavlov-sd JobId 320: Forward spacing Volume "ai_consolidate-0023" to file:block 0:7046364.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_consolidate-0023"
 2017-11-07 10:43:22 pavlov-sd JobId 320: Ready to read from volume "ai_inc-0033" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:22 pavlov-sd JobId 320: Forward spacing Volume "ai_inc-0033" to file:block 0:1148147.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_inc-0033"
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of all volumes.
 2017-11-07 10:43:22 pavlov-sd JobId 320: Elapsed time=00:00:01, Transfer rate=7.029 M Bytes/second
 2017-11-07 10:43:22 pavlov-dir JobId 320: Joblevel was set to joblevel of first consolidated job: Full
 2017-11-07 10:43:23 pavlov-dir JobId 320: Bareos pavlov-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 320
  Job: pavlov_sys_ai_vf.2017-11-07_10.43.18_05
  Backup Level: Virtual Full
  Client: "pavlov-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
  FileSet: "linux_system" 2017-10-19 16:11:21
  Pool: "Full" (From Job's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "pavlov-file" (From Storage from Job's NextPool resource)
  Scheduled time: 07-Nov-2017 10:43:18
  Start time: 07-Nov-2017 10:29:38
  End time: 07-Nov-2017 10:29:39
  Elapsed time: 1 sec
  Priority: 13
  SD Files Written: 148
  SD Bytes Written: 7,029,430 (7.029 MB)
  Rate: 7029.4 KB/s
  Volume name(s): Full-0032
  Volume Session Id: 1
  Volume Session Time: 1510047788
  Last Volume Bytes: 104,883,188 (104.8 MB)
  SD Errors: 0
  SD termination status: OK
  Accurate: yes
  Termination: Backup OK

 2017-11-07 10:43:23 pavlov-dir JobId 320: console command: run AfterJob "update jobid=320 jobtype=A"
Steps To Reproduce:
1. Create always-incremental and consolidate jobs and pools, and make sure they are working. Use storage daemon A (pavlov in my example).
2. Create a VirtualFull-level backup job whose Storage attribute points to a device on a different storage daemon B (delaunay in my example).
3. Start the always-incremental and consolidate jobs and verify that they work as expected.
4. Start the VirtualFull-level backup.
5. It fails with the error message:
...
delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
...
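The core of the mismatch, condensed from the full configuration in the Additional Information below: the consolidated volumes are readable only through the device "pavlov-file-consolidate", which exists only on pavlov-sd, while the VirtualFull job's Storage points to a device on delaunay-sd, so delaunay-sd is asked for a read device it does not have. A condensed sketch (all names taken from that configuration):

# Read side: director Storage resource for the consolidated volumes (device lives on pavlov-sd)
Storage {
  Name = pavlov-file-consolidate
  Address = pavlov.X
  Device = pavlov-file-consolidate
  Media Type = File
}

# Write side: the VirtualFull job's Storage points to a device on delaunay-sd
Storage {
  Name = delaunay_HP_G2_Autochanger
  Address = "delaunay.XX"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
}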
Additional Information:
A) Configuration with working always-incremental and consolidate jobs, but failing VirtualFull-level backup:

A.1) Director pavlov (director + to-disk storage daemon)
1) template for always incremental jobs
JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = pavlov-fd
  Storage = pavlov-file
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #always incremental config
  Pool = disk_ai
  Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 20 seconds # 7 days
  Always Incremental Keep Number = 2 # 7
  Always Incremental Max Full Age = 1 minutes # 14 days
}


2) template for VirtualFull jobs; should read from storage on pavlov and write to storage on delaunay:
JobDefs {
  Name = "default_ai_vf"
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Priority = 13
  Accurate = yes
 
  Storage = delaunay_HP_G2_Autochanger
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
  

  # run after Consolidate
  Run Script {
   console = "update jobid=%i jobtype=A"
   Runs When = After
   Runs On Client = No
   Runs On Failure = No
  }

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
}

3) consolidate job
Job {
  Name = ai_consolidate
  Type = Consolidate
  Accurate = yes
  Max Full Consolidations = 1
  Client = pavlov-fd #value which should be ignored by Consolidate job
  FileSet = "none" #value which should be ignored by Consolidate job
  Pool = disk_ai_consolidate #value which should be ignored by Consolidate job
  Incremental Backup Pool = disk_ai_consolidate
  Full Backup Pool = disk_ai_consolidate
# JobDefs = DefaultJob
# Level = Incremental
  Schedule = "ai_consolidate"
  # Storage = pavlov-file-consolidate #commented out for VirtualFull-Tape testing
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

4) always incremental job for client pavlov (works)
Job {
  Name = "pavlov_sys_ai"
  JobDefs = "default_ai"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
}


5) virtualfull job for pavlov (doesn't work)
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Storage = delaunay_HP_G2_Autochanger
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

6) pool always incremental
Pool {
  Name = disk_ai
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_inc-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file
  Next Pool = disk_ai_consolidate
}

7) pool always incremental consolidate
Pool {
  Name = disk_ai_consolidate
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_consolidate-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file-consolidate
  Next Pool = tape_automated
}

8) pool tape_automated (for virtualfull jobs to tape)
Pool {
  Name = tape_automated
  Pool Type = Backup
  Storage = delaunay_HP_G2_Autochanger
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
}

9) 1st storage device for disk backup (writes always incremental jobs + other normal jobs)
Storage {
  Name = pavlov-file
  Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = X
  TLS Require = X
  TLS Verify Peer = X
  TLS Allowed CN = pavlov.X
}

10) 2nd storage device for disk backup (consolidates AI jobs)
Storage {
  Name = pavlov-file-consolidate
  Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file-consolidate
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}

11) 3rd storage device for tape backup
Storage {
  Name = delaunay_HP_G2_Autochanger
  Address = "delaunay.XX"
  Password = "X"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
  Autochanger = yes
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = delaunay.X
}


A.2) Storage daemon pavlov (to disk)
1) to disk storage daemon

Storage {
  Name = pavlov-sd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
  TLS Allowed CN = delaunay.X
}

2) to disk device (AI + others)
Device {
  Name = pavlov-file
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

3) consolidate to disk
Device {
  Name = pavlov-file-consolidate
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

A.3) Storage daemon delaunay (to tape, different server)
1) allowed director
Director {
  Name = pavlov-dir
  Password = "[md5]X"
  Description = "Director, who is permitted to contact this storage daemon."
  TLS Certificate = X
  TLS Key = /X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}


2) storage daemon config
Storage {
  Name = delaunay-sd
  Maximum Concurrent Jobs = 20
  Maximum Network Buffer Size = 32768
# Maximum Network Buffer Size = 65536

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS DH File = X
  TLS CA Certificate File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
}


3) autochanger config
Autochanger {
  Name = "HP_G2_Autochanger"
  Device = Ultrium920
  Changer Device = /dev/sg5
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

4) device config
Device {
  Name = "Ultrium920"
  Media Type = LTO
  Archive Device = /dev/st2
  Autochanger = yes
  LabelMedia = no
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Maximum Spool Size = 50G
  Spool Directory = /var/lib/bareos/spool
  Maximum Block Size = 2097152
# Maximum Block Size = 4194304
  Maximum Network Buffer Size = 32768
  Maximum File Size = 50G
}


B) Changes to make the VirtualFull-level backup work (using a device on the same storage daemon as the always-incremental and consolidate jobs), in both the Job and Pool definitions.

1) Change the VirtualFull job's Storage
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

2) Change the VirtualFull pool's Storage
Pool {
  Name = tape_automated
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
  Label Format = "Full-" # <-- !! add this because now we write to disk, not to tape
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
}
Tags: No tags attached.

Activities

chaos_prevails (reporter), 2017-11-15 11:08, note ~0002815

Thanks to a comment on the bareos-users Google group, I found out that this is an unimplemented feature, not a bug.

I think it would be important to mention this in the documentation. VirtualFull would be a good solution for offsite backup (e.g. in another building or another server room), which typically involves a second storage daemon.

I looked at different ways to export the tape drive on the offsite-backup machine to the local machine (e.g. iSCSI). However, this adds extra complexity and might cause shoe-shining: the connection to the offsite machine has to be very fast, because spooling would happen on the local machine. In my case (~10 MB/s) the tape and drive would definitely suffer from shoe-shining. So currently, besides always incremental, I run a separate full backup to the offsite-backup machine.
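A minimal sketch of what such a separate full backup job might look like, reusing the delaunay_HP_G2_Autochanger storage and tape_automated pool defined above; the job and schedule names here are hypothetical:

Job {
  Name = "pavlov_sys_full_offsite"       # hypothetical name
  Type = Backup
  Level = Full
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = "weekly_full"               # hypothetical schedule
  Storage = delaunay_HP_G2_Autochanger   # offsite tape SD from the configuration above
  Pool = tape_automated
  Messages = Standard
}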
sven.compositiv (reporter), 2022-07-04 16:48, note ~0004651

> Thanks to a comment on the bareos-users Google group, I found out that this is an unimplemented feature, not a bug.

If it is an unimplemented feature, I'd expect that no backups would be chosen from other storages. Our problem is that we copy jobs from AI-Consolidated to tape; after doing that, all VirtualFull jobs fail once backups from our tape storage have been selected.
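For reference, a sketch of what such a copy-to-tape setup might look like, assuming the pool name AI-Consolidated from the note and the tape_automated pool from the configuration above; the job name and the ignored Client/FileSet placeholders are hypothetical:

Job {
  Name = "copy_ai_consolidated_to_tape"  # hypothetical name
  Type = Copy
  Pool = AI-Consolidated                 # source pool named in the note
  Selection Type = PoolUncopiedJobs      # copy jobs that have not yet been copied
  Next Pool = tape_automated             # tape pool from the configuration above
  Client = pavlov-fd                     # value ignored by Copy jobs
  FileSet = "none"                       # value ignored by Copy jobs
  Messages = Standard
}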
bruno-at-bareos (manager), 2022-07-04 17:02, note ~0004652

Could you explain a bit more (a configuration example, maybe)?

Having an always-incremental rotation that uses one storage (e.g. File) and then creating a VirtualFull archive to another storage resource on the same SD works very well, as documented.
Maybe you forgot to update your VirtualFull to be an archive job; then, yes, the next AI will use the most recent VirtualFull.
But this is also documented.
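For completeness, the way to turn the finished VirtualFull into an archive job is the RunScript already present in the reporter's "default_ai_vf" JobDefs above, which changes the job type to Archive after the job completes:

  Run Script {
    Console = "update jobid=%i jobtype=A"   # mark the finished VirtualFull as an Archive (A) job
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }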
bruno-at-bareos (manager), 2022-07-04 17:12, note ~0004655

Not implemented.

Issue History

Date Modified Username Field Change
2017-11-07 12:12 chaos_prevails New Issue
2017-11-15 11:08 chaos_prevails Note Added: 0002815
2021-12-21 13:53 arogge Relationship added related to 0001289
2021-12-21 13:56 arogge Relationship deleted related to 0001289
2022-07-04 16:48 sven.compositiv Note Added: 0004651
2022-07-04 17:02 bruno-at-bareos Note Added: 0004652
2022-07-04 17:12 bruno-at-bareos Assigned To => bruno-at-bareos
2022-07-04 17:12 bruno-at-bareos Status new => resolved
2022-07-04 17:12 bruno-at-bareos Resolution open => won't fix
2022-07-04 17:12 bruno-at-bareos Note Added: 0004655