View Issue Details

ID: 0000878
Project: bareos-core
Category: [All Projects] director
View Status: public
Last Update: 2019-12-18 15:24
Reporter: chaos_prevails
Assigned To: arogge
Priority: normal
Severity: minor
Reproducibility: have not tried
Status: closed
Resolution: not fixable
Platform: Linux
OS: Ubuntu
OS Version: 16.04 amd64
Product Version: 16.2.4
Fixed in Version:
Summary: 0000878: Consolidate takes non-Always Incremental (Full) jobs into consideration when the same FileSet is used

Description: Consolidate takes normal (Full) backup jobs (*without* Always Incremental = yes or any other Always Incremental configuration) into consideration when the same FileSet is used.

This leads to a situation where Consolidate never does anything: the weekly normal Full backup is always too young to consolidate, and the "Always Incremental" incremental jobs therefore never exceed the "Always Incremental Keep Number".

Steps To Reproduce:
1) Backup of client1 with an Always Incremental job using FileSet X (e.g. daily)
2) Backup of client1 with a normal Full job using FileSet X (for offsite backup; cannot use VirtualFull because the tape is connected to a remote storage daemon) (e.g. weekly)
3) set always incremental keep number = 7
4) consolidate daily
5) Consolidate never consolidates anything

Temporary Solution:
A) Change the job type of the tape backup to "archive" (as explained in chapter 23.6.2, Virtual Full Jobs). However, these backups cannot be restored via the web UI.
B) Clone the FileSet and give it another name
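
A minimal sketch of temporary solution B, cloning the FileSet under a new name (linux_common_tape, as used in the job config below) so the tape job no longer shares a FileSet with the Always Incremental jobs. The Include contents here are placeholders; copy them verbatim from the existing linux_common FileSet:

FileSet {
  Name = "linux_common_tape"  # identical to linux_common except for the Name
  Include {
    Options {
      Signature = MD5         # placeholder; copy the Options from linux_common
    }
    File = /                  # placeholder; copy the file list from linux_common
  }
}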

Permanent Solution:
Don't take non-Always Incremental jobs into consideration when consolidating. These are separate backups, not "descendants" of the Always Incremental + Consolidate process (as VirtualFull backups are).
Additional Information:
My configuration:

JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = DIR-fd
  Messages = Standard
  Priority = 40
  Write Bootstrap = "|/usr/local/bin/ \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #Allow Duplicate Jobs = no #doesn't work with virtualFull Job
  #Cancel Lower Level Duplicates = yes
  #Cancel Queued Duplicates = yes

  #always incremental config
  Pool = disk_ai
  #Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Storage = DIR-file
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 7
  Always Incremental Max Full Age = 14 days
}

JobDefs {
  Name = "default_tape"
  Type = Backup
  Level = Full
  Client = DIR-fd
  Messages = Standard
  Maximum Concurrent Jobs = 1
  Spool Data = yes
  Pool = tape_automated
  Priority = 10
  Full Backup Pool = tape_automated
  Incremental Backup Pool = tape_automated
  Storage = XXXX_HP_G2_Autochanger

  #prevent duplicate jobs
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes

  Write Bootstrap = "|/usr/local/bin/ \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

#my always incremental job
Job {
  Name = "client1_sys_ai"
  JobDefs = "default_ai"
  Client = "client1-fd"
  FileSet = linux_common
  Schedule = client1_sys_ai
  RunAfterJob = "/bin/bash -c '/bin/echo \"run client1_sys_ai_v yes\" | bconsole >/dev/null'"
}

Job {
  Name = client1_sys_ai_v
  JobDefs = default_verify
  Verify Job = client1_sys_ai
  Level = VolumeToCatalog
  Client = client1-fd
  FileSet = linux_common
  Schedule = manual
  Pool = disk_ai
  Priority = 41
}

Job {
  Name = client1_sys_tape
  JobDefs = default_tape
  Level = Full
  Client = client1-fd
  FileSet = linux_common
# FileSet = linux_common_tape #<--temporary Solution B)
  Schedule = client1_sys_tape
  RunAfterJob = "/bin/bash -c '/bin/echo \"run client1_sys_tape_v yes\" | bconsole >/dev/null'"
# Run Script {
# console = "update jobid=%i jobtype=A" #<-- temporary solution A)
# Runs When = After
# Runs On Client = No
# Runs On Failure = No
# }
}
Job {
  Name = client1_sys_tape_v
  JobDefs = default_verify
  Verify Job = client1_sys_tape
  Level = VolumeToCatalog
  Client = client1-fd
  FileSet = linux_common
  Schedule = manual
  Pool = tape_automated
  Priority = 41
}
Tags: No tags attached.
bareos-master: impact = yes, action = none
bareos-19.2: impact = (none), action = (none)
bareos-18.2: impact = (none), action = none
bareos-17.2: impact = yes, action = none
bareos-16.2: impact = yes, action = none
bareos-15.2: impact = no, action = (none)
bareos-14.2: impact = no, action = (none)
bareos-13.2: impact = no, action = (none)
bareos-12.4: impact = no, action = (none)




2017-11-30 09:58

joergs (developer)   ~0002827

I think this works as designed. What is the reason for running both Always Incremental and normal Full backup jobs in parallel?


2017-11-30 10:16

chaos_prevails (reporter)   ~0002829

Because I need an offsite (tape) backup besides the Always Incremental disk backup. The tape drive is not connected to the same storage daemon but to a different machine, so I cannot do a VirtualFull backup, which only works within a single storage daemon.
Even if I had better access to the server room (so I could change tapes without problems), Bareos runs in a VM and my physical backup server cannot pass through PCI devices (so I would have to run Bareos on bare metal instead of in a VM).

I was surprised by this behaviour because the second (Full/Incremental) backup doesn't share any job configuration with the Always Incremental jobs. I thought I could treat them as separate.


2019-11-13 15:39

arogge (developer)   ~0003635

I'm setting this to not fixable.
As joergs already mentioned: this works as designed.
In Bareos, all backups using the same FileSet are merged when you restore (or consolidate, or run a VirtualFull). The common element in your two jobs is the FileSet, and that produces the behaviour.

The workaround with two FileSets is the only way to achieve your desired behaviour.

Issue History

Date Modified Username Field Change
2017-11-29 10:19 chaos_prevails New Issue
2017-11-30 09:58 joergs Note Added: 0002827
2017-11-30 09:58 joergs Status new => feedback
2017-11-30 09:58 joergs bareos-master: impact => yes
2017-11-30 09:58 joergs bareos-17.2: impact => yes
2017-11-30 09:58 joergs bareos-16.2: impact => yes
2017-11-30 09:58 joergs bareos-15.2: impact => no
2017-11-30 09:58 joergs bareos-14.2: impact => no
2017-11-30 09:58 joergs bareos-13.2: impact => no
2017-11-30 09:58 joergs bareos-12.4: impact => no
2017-11-30 10:16 chaos_prevails Note Added: 0002829
2017-11-30 10:16 chaos_prevails Status feedback => new
2019-11-13 15:39 arogge Assigned To => arogge
2019-11-13 15:39 arogge Status new => resolved
2019-11-13 15:39 arogge Resolution open => not fixable
2019-11-13 15:39 arogge bareos-master: action => none
2019-11-13 15:39 arogge bareos-18.2: action => none
2019-11-13 15:39 arogge bareos-17.2: action => none
2019-11-13 15:39 arogge bareos-16.2: action => none
2019-11-13 15:39 arogge Note Added: 0003635
2019-12-18 15:24 arogge Status resolved => closed