View Issue Details

ID: 0000792
Project: bareos-core
Category: director
View Status: public
Last Update: 2021-04-26 13:22
Reporter: chrisbzc
Assigned To: frank
Priority: normal
Severity: major
Reproducibility: always
Status: resolved
Resolution: fixed
Platform: Linux
OS: Debian
OS Version: 8
Product Version: 16.2.4
Summary: 0000792: Unable to use "AllowDuplicateJobs = no" with AlwaysIncremental / Consolidate jobs
Description:

I have my jobs / jobdefs set up as follows, as output by "show jobdefs" and "show jobs":

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "File"
  Pool = "AI-Incremental"
  FileSet = "LinuxAll"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  AllowDuplicateJobs = no
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
}

Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
}

Job {
  Name = "swift-storage.mycompany.com"
  Pool = "AI-Incremental"
  FullBackupPool = "AI-Full"
  Client = "swift-storage.mycompany.com"
  Schedule = "Nightly"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
  Accurate = yes
  AlwaysIncremental = yes
  AlwaysIncrementalJobRetention = 1 weeks
  AlwaysIncrementalKeepNumber = 7
  AlwaysIncrementalMaxFullAge = 1 months
}


I really need to be able to use "AllowDuplicateJobs = no" to cancel lower-level and queued duplicates, because a full backup of this client takes several days. However, if I have AllowDuplicateJobs turned off, the Consolidate jobs fail because the VirtualFull jobs they create immediately cancel themselves. It looks like the Director incorrectly treats the Consolidate job and the VirtualFull job it creates as duplicates:

01-Mar 10:00 thog-dir JobId 842: Start Consolidate JobId 842, Job=Consolidate.2017-03-01_10.00.00_34
01-Mar 10:00 thog-dir JobId 842: Looking at always incremental job swift-storage.mycompany.com
01-Mar 10:00 thog-dir JobId 842: swift-storage.mycompany.com: considering jobs older than 22-Feb-2017 10:00:03 for consolidation.
01-Mar 10:00 thog-dir JobId 842: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:00 thog-dir JobId 842: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:00:03
01-Mar 10:00 thog-dir JobId 842: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:00 thog-dir JobId 842: after ConsolidateFull: jobids: 819,822
01-Mar 10:00 thog-dir JobId 842: swift-storage.mycompany.com: Start new consolidation
01-Mar 10:00 thog-dir JobId 842: Using Catalog "MyCatalog"
01-Mar 10:00 thog-dir JobId 843: Fatal error: JobId 842 already running. Duplicate job not allowed.
01-Mar 10:00 thog-dir JobId 842: Job queued. JobId=843
01-Mar 10:00 thog-dir JobId 842: Consolidating JobId 843 started.
01-Mar 10:00 thog-dir JobId 842: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:00:03
  JobId: 842
  Job: Consolidate.2017-03-01_10.00.00_34
  Scheduled time: 01-Mar-2017 10:00:00
  Start time: 01-Mar-2017 10:00:03
  End time: 01-Mar-2017 10:00:03
  Termination: Consolidate OK

01-Mar 10:00 thog-dir JobId 843: Bareos thog-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 7.0 (wheezy)
  JobId: 843
  Job: swift-storage.mycompany.com.2017-03-01_10.00.03_35
  Backup Level: Virtual Full
  Client: "swift-storage.mycompany.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
  FileSet: "LinuxAll" 2016-09-29 20:43:39
  Pool: "AI-Full" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Storage from Pool's NextPool resource)
  Scheduled time: 01-Mar-2017 10:00:03
  Start time: 01-Mar-2017 10:00:03
  End time: 01-Mar-2017 10:00:03
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Accurate: no
  Termination: Backup Canceled


If I set AllowDuplicateJobs to "yes" in the JobDefs, the Consolidate job works, but then I get duplicate jobs that I have to cancel manually whenever I need to do a new Full backup, which is not ideal.

I have tried setting AllowDuplicateJobs to yes in the _Job_ resource:

root@thog:/etc/bareos/bareos-dir.d# cat jobdefs/DefaultJob.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "LinuxAll"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
}
root@thog:/etc/bareos/bareos-dir.d# cat job/Consolidate.conf
Job {
  Name = "Consolidate"
  JobDefs = "DefaultJob"
  Schedule = Consolidate
  Type = Consolidate
  Storage = File
  Pool = AI-Incremental
  Client = thog-fd
  Maximum Concurrent Jobs = 2
  Allow Duplicate Jobs = yes
}

But this doesn't actually override the JobDefs setting, which seems very broken to me:

*reload
reloaded
*mes
You have no messages.
*show jobs
Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
}
*run
A job name must be specified.
The defined Job resources are:
     1: RestoreFiles
     2: Consolidate
     3: swift-storage.mycompany.com
     4: BackupCatalog
Select Job resource (1-4): 2
Run Consolidate Job
JobName: Consolidate
FileSet: LinuxAll
Client: thog-fd
Storage: File
When: 2017-03-01 10:47:58
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=850
*mess
01-Mar 10:48 thog-dir JobId 850: Start Consolidate JobId 850, Job=Consolidate.2017-03-01_10.47.59_04
01-Mar 10:48 thog-dir JobId 850: Looking at always incremental job swift-storage.mycompany.com
01-Mar 10:48 thog-dir JobId 850: swift-storage.mycompany.com: considering jobs older than 22-Feb-2017 10:48:01 for consolidation.
01-Mar 10:48 thog-dir JobId 850: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:48 thog-dir JobId 850: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:48:01
01-Mar 10:48 thog-dir JobId 850: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:48 thog-dir JobId 850: after ConsolidateFull: jobids: 819,822
01-Mar 10:48 thog-dir JobId 850: swift-storage.mycompany.com: Start new consolidation
01-Mar 10:48 thog-dir JobId 850: Using Catalog "MyCatalog"
01-Mar 10:48 thog-dir JobId 851: Fatal error: JobId 850 already running. Duplicate job not allowed.
01-Mar 10:48 thog-dir JobId 850: Job queued. JobId=851
01-Mar 10:48 thog-dir JobId 850: Consolidating JobId 851 started.
01-Mar 10:48 thog-dir JobId 850: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:48:02
  JobId: 850
  Job: Consolidate.2017-03-01_10.47.59_04
  Scheduled time: 01-Mar-2017 10:47:58
  Start time: 01-Mar-2017 10:48:01
  End time: 01-Mar-2017 10:48:02
  Termination: Consolidate OK



I even tried setting up an entirely new JobDef with AllowDuplicateJobs set to "yes", but this failed too:

JobDefs {
  Name = "DefaultJobAllowDuplicates"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "File"
  Pool = "AI-Incremental"
  FileSet = "LinuxAll"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
}

Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJobAllowDuplicates"
  MaximumConcurrentJobs = 2
}

01-Mar 10:54 thog-dir JobId 852: Start Consolidate JobId 852, Job=Consolidate.2017-03-01_10.53.59_06
01-Mar 10:54 thog-dir JobId 852: Looking at always incremental job swift-storage.trac.jobs
01-Mar 10:54 thog-dir JobId 852: swift-storage.trac.jobs: considering jobs older than 22-Feb-2017 10:54:01 for consolidation.
01-Mar 10:54 thog-dir JobId 852: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:54 thog-dir JobId 852: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:54:01
01-Mar 10:54 thog-dir JobId 852: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:54 thog-dir JobId 852: after ConsolidateFull: jobids: 819,822
01-Mar 10:54 thog-dir JobId 852: swift-storage.trac.jobs: Start new consolidation
01-Mar 10:54 thog-dir JobId 852: Using Catalog "MyCatalog"
01-Mar 10:54 thog-dir JobId 853: Fatal error: JobId 852 already running. Duplicate job not allowed.
01-Mar 10:54 thog-dir JobId 852: Job queued. JobId=853
01-Mar 10:54 thog-dir JobId 852: Consolidating JobId 853 started.
01-Mar 10:54 thog-dir JobId 853: Bareos thog-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 7.0 (wheezy)
  JobId: 853
  Job: swift-storage.trac.jobs.2017-03-01_10.54.01_07
  Backup Level: Virtual Full
  Client: "swift-storage.trac.jobs" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
  FileSet: "LinuxAll" 2016-09-29 20:43:39
  Pool: "AI-Full" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Storage from Pool's NextPool resource)
  Scheduled time: 01-Mar-2017 10:54:01
  Start time: 01-Mar-2017 10:54:01
  End time: 01-Mar-2017 10:54:01
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Accurate: no
  Termination: Backup Canceled

01-Mar 10:54 thog-dir JobId 852: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:54:01
  JobId: 852
  Job: Consolidate.2017-03-01_10.53.59_06
  Scheduled time: 01-Mar-2017 10:53:58
  Start time: 01-Mar-2017 10:54:01
  End time: 01-Mar-2017 10:54:01
  Termination: Consolidate OK
Tags: No tags attached.

Activities

therm

2017-05-11 07:02

reporter   ~0002641

This one affects me as well.
Currently it is impossible to use Bareos in production if you have a lot of data (backup duration > 24h) and AlwaysIncremental is not working. Without it working with AllowDuplicateJobs=no, the AlwaysIncremental feature is nearly useless.
chrisbzc

2017-06-20 12:03

reporter   ~0002672

Just wondering if anyone has a solution to this? I'd really like to roll out Bareos for all my company's backups but this one fatal flaw in AlwaysIncremental is blocking that...
dextor

2018-01-06 16:27

reporter   ~0002870

+1

I wrote a quick and dirty patch to fix this issue. Here it is:
--- a/src/dird/job.c	2017-12-14 13:57:08.000000000 +0300
+++ b/src/dird/job.c	2018-01-06 17:27:25.897508235 +0300
@@ -996,4 +996,8 @@
          }
 
+         if (djcr->is_JobType(JT_CONSOLIDATE)) {
+            break;
+         }
+
          if (cancel_dup || job->CancelRunningDuplicates) {
             /*
bluco

2018-07-13 08:28

reporter   ~0003069

+1
chrisbzc

2018-08-29 16:32

reporter   ~0003103

Is anyone from Bareos going to look at this? It's been almost a year and a half now since this was opened and not a word from the authors at all.
Even just a quick yes/no on whether the above three-line patch is good or not would be appreciated because at least then I know whether it's safe to patch my local build.
hsn

2018-10-14 13:36

reporter   ~0003140

This issue definitely needs a fix. I have a feeling that development is directed more towards new features than bugfixes.
brockp

2020-04-14 21:28

reporter   ~0003941

Tested on 19.2.6; I can confirm the issue still remains. Jobs are canceled as duplicates when running Consolidate jobs, even if you set the Consolidate job to allow duplicates, because the spawned jobs all inherit from the jobs they are consolidating.
frank

2020-09-29 12:39

developer   ~0004043

FYI: we figured out a configuration workaround which you can use until this is fixed.


Configuration Workaround
========================

In principle we simply use an extra job for AI which is never scheduled itself but is used by the Consolidate job for the AI cycle. The configuration workaround looks like the following.

We have a normal job named "ai-backup-bareos-fd" which is automatically scheduled: one initial full backup and only incremental backups afterwards, with no AI directives set here.

We introduce another backup job named "ai-backup-bareos-fd-consolidate" which is NOT and NEVER automatically scheduled; it is used only by the Virtual Full generated by the Consolidate job, and we set our AI directives here.

As the original initial full backup is gone after consolidation, and the generated virtual full carries a different job name (ai-backup-bareos-fd-consolidate) than the original backup job (ai-backup-bareos-fd), we simply use a run script after the job is done to rename ai-backup-bareos-fd-consolidate to ai-backup-bareos-fd.

The renaming via the run-after script is required so that we don't get a full backup fallback when the normal backup job ai-backup-bareos-fd is scheduled again, since its initial full backup has been consolidated into the virtual full as mentioned above.

If you look at the JobDefs below, you'll see that with this configuration we are able to use "AllowDuplicateJobs = no" in normal backup jobs and "AllowDuplicateJobs = yes" for consolidation.

File: /etc/bareos/bareos-dir.d/job/ai-backup-bareos-fd.conf

Job {
  Name = "ai-backup-bareos-fd"
  JobDefs = "DefaultJob1"
  Client = "bareos-fd"
  Accurate = yes
}

File: /etc/bareos/bareos-dir.d/job/ai-backup-bareos-fd-consolidate.conf

Job {
  Name = "ai-backup-bareos-fd-consolidate"
  JobDefs = "DefaultJob2"
  Client = "bareos-fd"
  Accurate = yes
  AlwaysIncremental = yes
  AlwaysIncrementalJobRetention = 1 seconds
  AlwaysIncrementalKeepNumber = 1
  AlwaysIncrementalMaxFullAge = 1 seconds
  Run Script {
        console = "update jobid=%i jobname=ai-backup-bareos-fd"
        Runs When = After
        Runs On Client = No
        Runs On Failure = No
  }
}

File: /etc/bareos/bareos-dir.d/job/consolidate.conf

Job {
    Name = "Consolidate"
    Type = "Consolidate"
    Accurate = "yes"
    JobDefs = "DefaultJob3"
    Schedule = Consolidate
}

File: /etc/bareos/bareos-dir.d/jobdefs/DefaultJob1.conf

JobDefs {
  Name = "DefaultJob1"
  Type = Backup
  Level = Incremental
  Client = bareos-fd
  FileSet = "SelfTest"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Priority = 10
  Write Bootstrap = "@working_dir@/%c.bsr"
  Full Backup Pool = AI-Consolidated
  AllowDuplicateJobs = no
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
  CancelRunningDuplicates = no
  Schedule = "ai-schedule"
}

File: /etc/bareos/bareos-dir.d/jobdefs/DefaultJob2.conf

JobDefs {
  Name = "DefaultJob2"
  Type = Backup
  Level = Incremental
  Client = bareos-fd
  FileSet = "SelfTest"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Priority = 10
  Write Bootstrap = "@working_dir@/%c.bsr"
  Full Backup Pool = AI-Consolidated
  AllowDuplicateJobs = yes
  CancelLowerLevelDuplicates = no
  CancelQueuedDuplicates = no
  CancelRunningDuplicates = no
}

File: /etc/bareos/bareos-dir.d/jobdefs/DefaultJob3.conf

JobDefs {
  Name = "DefaultJob3"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "file-storage"
  Pool = "Full"
  FullBackupPool = "Full"
  IncrementalBackupPool = "Incremental"
  DifferentialBackupPool = "Differential"
  Client = "bareos-fd"
  FileSet = "linux-server-fileset"
  Schedule = "WeeklyCycle"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  Priority = 99
  AllowDuplicateJobs = no
  SpoolAttributes = yes
}

File: /etc/bareos/bareos-dir.d/schedule/Consolidate.conf

Schedule {
  Name = "Consolidate"
  run = at 12:00
}

File: /etc/bareos/bareos-dir.d/schedule/ai-schedule.conf

Schedule {
  Name = "ai-schedule"
  run = Incremental Mon-Fri at 21:00
}
frank

2021-04-26 13:22

developer   ~0004121

Fix committed to bareos master branch with changesetid 14803.

Related Changesets

bareos: master 371c5b07

2020-07-07 14:44

frank

Ported: N/A

dird: ignore duplicate job checking on native virtual full backup

Fixes 0000792: Unable to use "AllowDuplicateJobs=no" with AI/Consolidate
Affected Issues
0000792
mod - core/src/dird/consolidate.cc

Issue History

Date Modified Username Field Change
2017-03-01 11:56 chrisbzc New Issue
2017-05-11 07:02 therm Note Added: 0002641
2017-06-20 12:03 chrisbzc Note Added: 0002672
2018-01-06 16:27 dextor Note Added: 0002870
2018-07-13 08:28 bluco Note Added: 0003069
2018-08-29 16:32 chrisbzc Note Added: 0003103
2018-10-14 13:36 hsn Note Added: 0003140
2020-04-14 21:28 brockp Note Added: 0003941
2020-06-10 13:09 frank Status new => acknowledged
2020-09-29 12:39 frank Note Added: 0004043
2021-04-26 13:22 frank Changeset attached => bareos master 371c5b07
2021-04-26 13:22 frank Note Added: 0004121
2021-04-26 13:22 frank Assigned To => frank
2021-04-26 13:22 frank Status acknowledged => resolved
2021-04-26 13:22 frank Resolution open => fixed