Bareos Bug Tracker

ID: 0000878
Project: bareos-core
Category: [All Projects] director
View Status: public
Date Submitted: 2017-11-29 10:19
Last Update: 2017-11-30 10:16
Reporter: chaos_prevails
Assigned To:
Priority: normal
Severity: minor
Reproducibility: have not tried
Status: new
Resolution: open
Platform: Linux
OS: Ubuntu
OS Version: 16.04 amd64
Product Version: 16.2.4
Target Version:
Fixed in Version:
Summary: 0000878: Consolidate takes non-Always Incremental (full) jobs into consideration when the same fileset is used
Description:
Consolidate takes normal (Full) backup jobs (*without* Always Incremental = yes or any other Always Incremental configuration) into consideration when the same fileset is used.

This leads to a situation where Consolidate never does anything: the normal weekly full backup is always too young to consolidate, and the "always incremental" incremental jobs therefore never exceed the "Always Incremental Keep Number".


Steps To Reproduce:
1) Back up client1 with an Always Incremental job using fileset X (e.g. daily)
2) Back up client1 with a normal full job using the same fileset X (for offsite backup; VirtualFull cannot be used because the tape drive is attached to a remote storage daemon) (e.g. weekly)
3) Set Always Incremental Keep Number = 7
4) Run Consolidate daily
5) Consolidate never consolidates anything

Temporary Solution:
A) Change the job type of the tape backup to "archive" (as explained in chapter 23.6.2, Virtual Full Jobs). However, these backups cannot be restored via the web UI.
or
B) Clone the fileset and give it another name.
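Temporary solution B) amounts to defining a second fileset that is identical except for its name, so that the tape job no longer shares a fileset with the Always Incremental jobs. A minimal sketch, based on the fileset names used in this report (the include list and options are placeholders, not taken from the reporter's configuration):

FileSet {
  Name = "linux_common_tape"   # clone of linux_common; only the name differs
  Include {
    Options {
      Signature = MD5          # placeholder options block
    }
    File = /                   # same include list as linux_common goes here
  }
}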

Permanent Solution:
Do not take non-Always Incremental jobs into consideration when consolidating. These are separate backups and not "descendants" of the always incremental + consolidate process (unlike VirtualFull backups).
Additional Information:
My configuration:

JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = DIR-fd
  Messages = Standard
  Priority = 40
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #Allow Duplicate Jobs = no   # doesn't work with VirtualFull jobs
  #Cancel Lower Level Duplicates = yes
  #Cancel Queued Duplicates = yes


  #always incremental config
  Pool = disk_ai
  #Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Storage = DIR-file
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 7
  Always Incremental Max Full Age = 14 days
}

JobDefs {
  Name = "default_tape"
  Type = Backup
  Level = Full
  Client = DIR-fd
  Messages = Standard
  Maximum Concurrent Jobs = 1
  Spool Data = yes
  Pool = tape_automated
  Priority = 10
  Full Backup Pool = tape_automated
  Incremental Backup Pool = tape_automated
  Storage = XXXX_HP_G2_Autochanger

  #prevent duplicate jobs
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %j (%d: jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}


#my always incremental job
Job {
  Name = "client1_sys_ai"
  JobDefs = "default_ai"
  Client = "client1-fd"
  FileSet = linux_common
  Schedule = client1_sys_ai
  RunAfterJob = "/bin/bash -c '/bin/echo \"run client1_sys_ai_v yes\" | bconsole >/dev/null'"
}

Job {
  Name = client1_sys_ai_v
  JobDefs = default_verify
  Verify Job = client1_sys_ai
  Level = VolumeToCatalog
  Client = client1-fd
  FileSet = linux_common
  Schedule = manual
  Pool = disk_ai
  Priority = 41
}

Job {
  Name = client1_sys_tape
  JobDefs = default_tape
  Level = Full
  Client = client1-fd
  FileSet = linux_common
# FileSet = linux_common_tape #<--temporary Solution B)
  Schedule = client1_sys_tape
  RunAfterJob = "/bin/bash -c '/bin/echo \"run client1_sys_tape_v yes\" | bconsole >/dev/null'"
# Run Script {
# console = "update jobid=%i jobtype=A" #<-- temporary solution A)
# Runs When = After
# Runs On Client = No
# Runs On Failure = No
# }
}
Job {
  Name = client1_sys_tape_v
  JobDefs = default_verify
  Verify Job = client1_sys_tape
  Level = VolumeToCatalog
  Client = client1-fd
  FileSet = linux_common
  Schedule = manual
  Pool = tape_automated
  Priority = 41
}
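The Consolidate job referred to in the steps above is not shown in the report. A minimal sketch of what such a job looks like, assuming placeholder names ("Consolidate", "daily") that are not taken from the reporter's configuration:

Job {
  Name = "Consolidate"
  Type = Consolidate      # scans for Always Incremental jobs to consolidate
  # Client and FileSet are formally required by the parser but are taken
  # from the jobs being consolidated, not from this resource.
  Client = DIR-fd
  FileSet = linux_common
  Schedule = daily
  Storage = DIR-file
}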
Tags: No tags attached.
bareos-master: impact: yes
bareos-master: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action:
bareos-16.2: impact: yes
bareos-16.2: action:
bareos-15.2: impact: no
bareos-15.2: action:
bareos-14.2: impact: no
bareos-14.2: action:
bareos-13.2: impact: no
bareos-13.2: action:
bareos-12.4: impact: no
bareos-12.4: action:
Attached Files

- Relationships

-  Notes
(0002827)
joergs (administrator)
2017-11-30 09:58

I think this works as designed. What is the reason for running both Always Incremental and normal Full backup jobs in parallel?
(0002829)
chaos_prevails (reporter)
2017-11-30 10:16

Because I need an offsite (tape) backup in addition to the Always Incremental disk backup. The tape drive is not connected to the same storage daemon but to a different machine, so I cannot do a VirtualFull backup, which only works within a single storage daemon.
Even if I had better access to the server room (so that I could change tapes without problems), Bareos runs in a VM and my physical backup server cannot pass through PCI devices (so I would need to run Bareos on bare metal instead of in a VM).

I was surprised by this behaviour because the second (full/incremental) backup doesn't share any job configuration with the Always Incremental jobs. I thought I could treat them as separate.

- Issue History
Date Modified Username Field Change
2017-11-29 10:19 chaos_prevails New Issue
2017-11-30 09:58 joergs Note Added: 0002827
2017-11-30 09:58 joergs Status new => feedback
2017-11-30 09:58 joergs bareos-master: impact => yes
2017-11-30 09:58 joergs bareos-17.2: impact => yes
2017-11-30 09:58 joergs bareos-16.2: impact => yes
2017-11-30 09:58 joergs bareos-15.2: impact => no
2017-11-30 09:58 joergs bareos-14.2: impact => no
2017-11-30 09:58 joergs bareos-13.2: impact => no
2017-11-30 09:58 joergs bareos-12.4: impact => no
2017-11-30 10:16 chaos_prevails Note Added: 0002829
2017-11-30 10:16 chaos_prevails Status feedback => new

