Bareos Bug Tracker

ID: 0000792
Project: bareos-core
Category: [All Projects] director
View Status: public
Date Submitted: 2017-03-01 11:56
Last Update: 2018-01-06 16:27
Reporter: chrisbzc
Assigned To:
Priority: normal
Severity: major
Reproducibility: always
Status: new
Resolution: open
Platform: Linux
OS: Debian
OS Version: 8
Product Version: 16.2.4
Target Version:
Fixed in Version:
Summary: 0000792: Unable to use "AllowDuplicateJobs = no" with AlwaysIncremental / Consolidate jobs
Description: I have my jobs / jobdefs set up as follows, as output by "show jobdefs" and "show jobs":

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "File"
  Pool = "AI-Incremental"
  FileSet = "LinuxAll"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  AllowDuplicateJobs = no
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
}

Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
}

Job {
  Name = "swift-storage.mycompany.com"
  Pool = "AI-Incremental"
  FullBackupPool = "AI-Full"
  Client = "swift-storage.mycompany.com"
  Schedule = "Nightly"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
  Accurate = yes
  AlwaysIncremental = yes
  AlwaysIncrementalJobRetention = 1 weeks
  AlwaysIncrementalKeepNumber = 7
  AlwaysIncrementalMaxFullAge = 1 months
}
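For reference, the two age cut-offs these directives produce match the director's log output: AlwaysIncrementalJobRetention = 1 weeks yields a "considering jobs older than" cut-off of now minus 7 days, and AlwaysIncrementalMaxFullAge = 1 months yields an "allowed" full age of now minus 30 days (01-Mar minus 30 days is 30-Jan, exactly as logged). A minimal sketch of that arithmetic -- my own names, not Bareos code:

```c
#include <assert.h>
#include <stdbool.h>
#include <time.h>

/* Sketch of the two age checks seen in the director log below.
 * Not Bareos code; all names here are mine.  "1 weeks" and "1 months"
 * appear to be treated as 7 and 30 days respectively. */
#define DAY   ((time_t)24 * 3600)
#define WEEK  (7 * DAY)
#define MONTH (30 * DAY)

/* Jobs older than this cut-off are considered for consolidation. */
static time_t consolidation_cutoff(time_t now, time_t job_retention)
{
    return now - job_retention;
}

/* "Full is newer than AlwaysIncrementalMaxFullAge -> skipping":
 * keep the existing full out of the consolidation while it is still
 * newer than now - max_full_age. */
static bool skip_full(time_t now, time_t full_time, time_t max_full_age)
{
    return full_time > now - max_full_age;
}
```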


I really need to be able to use "AllowDuplicateJobs = no" to cancel lower-level and queued-up duplicates, because a full backup of this client takes several days. However, with AllowDuplicateJobs turned off, the Consolidate jobs fail because the VirtualFull jobs they create immediately cancel themselves. It looks like the director is incorrectly treating the Consolidate job and the VirtualFull job it spawns as duplicates of each other:

01-Mar 10:00 thog-dir JobId 842: Start Consolidate JobId 842, Job=Consolidate.2017-03-01_10.00.00_34
01-Mar 10:00 thog-dir JobId 842: Looking at always incremental job swift-storage.mycompany.com
01-Mar 10:00 thog-dir JobId 842: swift-storage.mycompany.com: considering jobs older than 22-Feb-2017 10:00:03 for consolidation.
01-Mar 10:00 thog-dir JobId 842: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:00 thog-dir JobId 842: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:00:03
01-Mar 10:00 thog-dir JobId 842: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:00 thog-dir JobId 842: after ConsolidateFull: jobids: 819,822
01-Mar 10:00 thog-dir JobId 842: swift-storage.mycompany.com: Start new consolidation
01-Mar 10:00 thog-dir JobId 842: Using Catalog "MyCatalog"
01-Mar 10:00 thog-dir JobId 843: Fatal error: JobId 842 already running. Duplicate job not allowed.
01-Mar 10:00 thog-dir JobId 842: Job queued. JobId=843
01-Mar 10:00 thog-dir JobId 842: Consolidating JobId 843 started.
01-Mar 10:00 thog-dir JobId 842: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:00:03
  JobId: 842
  Job: Consolidate.2017-03-01_10.00.00_34
  Scheduled time: 01-Mar-2017 10:00:00
  Start time: 01-Mar-2017 10:00:03
  End time: 01-Mar-2017 10:00:03
  Termination: Consolidate OK

01-Mar 10:00 thog-dir JobId 843: Bareos thog-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 7.0 (wheezy)
  JobId: 843
  Job: swift-storage.mycompany.com.2017-03-01_10.00.03_35
  Backup Level: Virtual Full
  Client: "swift-storage.mycompany.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
  FileSet: "LinuxAll" 2016-09-29 20:43:39
  Pool: "AI-Full" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Storage from Pool's NextPool resource)
  Scheduled time: 01-Mar-2017 10:00:03
  Start time: 01-Mar-2017 10:00:03
  End time: 01-Mar-2017 10:00:03
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Accurate: no
  Termination: Backup Canceled

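Reduced to a minimal standalone model, the behaviour in the log above is: the spawned VirtualFull's duplicate check finds its own controlling Consolidate job in the running-job list and cancels itself. This is my own sketch of the observed behaviour, not Bareos's actual code; all names are mine:

```c
#include <stdbool.h>

/* Minimal model of the observed failure, not Bareos code. */
struct job {
    int job_id;
    int controller_id;   /* JobId of the Consolidate job that spawned us, -1 if none */
};

/* Duplicate check as observed: any matching running job blocks the new
 * one -- including its own Consolidate controller, which is what cancels
 * the VirtualFull immediately ("Fatal error: JobId 842 already running.
 * Duplicate job not allowed."). */
static bool may_start(const struct job *newjob, const int *running, int n)
{
    for (int i = 0; i < n; i++) {
        if (running[i] == newjob->controller_id)
            return false;
    }
    return true;
}
```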

If I set AllowDuplicateJobs to "yes" in the JobDefs, the Consolidate job works, but I then get duplicate jobs that I have to cancel manually whenever I need to run a new Full backup, which is not ideal.

I have tried setting AllowDuplicateJobs to yes in the _Job_ resource:

root@thog:/etc/bareos/bareos-dir.d# cat jobdefs/DefaultJob.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  FileSet = "LinuxAll"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
}
root@thog:/etc/bareos/bareos-dir.d# cat job/Consolidate.conf
Job {
  Name = "Consolidate"
  JobDefs = "DefaultJob"
  Schedule = Consolidate
  Type = Consolidate
  Storage = File
  Pool = AI-Incremental
  Client = thog-fd
  Maximum Concurrent Jobs = 2
  Allow Duplicate Jobs = yes
}

But this doesn't seem to actually override the JobDefs setting, which seems very broken to me:

*reload
reloaded
*mes
You have no messages.
*show jobs
Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJob"
  MaximumConcurrentJobs = 2
}
*run
A job name must be specified.
The defined Job resources are:
     1: RestoreFiles
     2: Consolidate
     3: swift-storage.mycompany.com
     4: BackupCatalog
Select Job resource (1-4): 2
Run Consolidate Job
JobName: Consolidate
FileSet: LinuxAll
Client: thog-fd
Storage: File
When: 2017-03-01 10:47:58
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=850
*mess
01-Mar 10:48 thog-dir JobId 850: Start Consolidate JobId 850, Job=Consolidate.2017-03-01_10.47.59_04
01-Mar 10:48 thog-dir JobId 850: Looking at always incremental job swift-storage.mycompany.com
01-Mar 10:48 thog-dir JobId 850: swift-storage.mycompany.com: considering jobs older than 22-Feb-2017 10:48:01 for consolidation.
01-Mar 10:48 thog-dir JobId 850: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:48 thog-dir JobId 850: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:48:01
01-Mar 10:48 thog-dir JobId 850: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:48 thog-dir JobId 850: after ConsolidateFull: jobids: 819,822
01-Mar 10:48 thog-dir JobId 850: swift-storage.mycompany.com: Start new consolidation
01-Mar 10:48 thog-dir JobId 850: Using Catalog "MyCatalog"
01-Mar 10:48 thog-dir JobId 851: Fatal error: JobId 850 already running. Duplicate job not allowed.
01-Mar 10:48 thog-dir JobId 850: Job queued. JobId=851
01-Mar 10:48 thog-dir JobId 850: Consolidating JobId 851 started.
01-Mar 10:48 thog-dir JobId 850: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:48:02
  JobId: 850
  Job: Consolidate.2017-03-01_10.47.59_04
  Scheduled time: 01-Mar-2017 10:47:58
  Start time: 01-Mar-2017 10:48:01
  End time: 01-Mar-2017 10:48:02
  Termination: Consolidate OK



I even tried setting up an entirely new JobDefs that leaves AllowDuplicateJobs at its default of "yes" (i.e. with no "AllowDuplicateJobs = no" line), but this failed too:

JobDefs {
  Name = "DefaultJobAllowDuplicates"
  Type = Backup
  Level = Incremental
  Messages = "Standard"
  Storage = "File"
  Pool = "AI-Incremental"
  FileSet = "LinuxAll"
  WriteBootstrap = "/var/lib/bareos/%c.bsr"
  CancelLowerLevelDuplicates = yes
  CancelQueuedDuplicates = yes
}

Job {
  Name = "Consolidate"
  Type = Consolidate
  Storage = "File"
  Pool = "AI-Incremental"
  Client = "thog-fd"
  Schedule = "Consolidate"
  JobDefs = "DefaultJobAllowDuplicates"
  MaximumConcurrentJobs = 2
}

01-Mar 10:54 thog-dir JobId 852: Start Consolidate JobId 852, Job=Consolidate.2017-03-01_10.53.59_06
01-Mar 10:54 thog-dir JobId 852: Looking at always incremental job swift-storage.trac.jobs
01-Mar 10:54 thog-dir JobId 852: swift-storage.trac.jobs: considering jobs older than 22-Feb-2017 10:54:01 for consolidation.
01-Mar 10:54 thog-dir JobId 852: before ConsolidateFull: jobids: 800,819,822
01-Mar 10:54 thog-dir JobId 852: check full age: full is 20-Feb-2017 03:45:08, allowed is 30-Jan-2017 10:54:01
01-Mar 10:54 thog-dir JobId 852: Full is newer than AlwaysIncrementalMaxFullAge -> skipping first jobid 800 because of age
01-Mar 10:54 thog-dir JobId 852: after ConsolidateFull: jobids: 819,822
01-Mar 10:54 thog-dir JobId 852: swift-storage.trac.jobs: Start new consolidation
01-Mar 10:54 thog-dir JobId 852: Using Catalog "MyCatalog"
01-Mar 10:54 thog-dir JobId 853: Fatal error: JobId 852 already running. Duplicate job not allowed.
01-Mar 10:54 thog-dir JobId 852: Job queued. JobId=853
01-Mar 10:54 thog-dir JobId 852: Consolidating JobId 853 started.
01-Mar 10:54 thog-dir JobId 853: Bareos thog-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 7.0 (wheezy)
  JobId: 853
  Job: swift-storage.trac.jobs.2017-03-01_10.54.01_07
  Backup Level: Virtual Full
  Client: "swift-storage.trac.jobs" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
  FileSet: "LinuxAll" 2016-09-29 20:43:39
  Pool: "AI-Full" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Storage from Pool's NextPool resource)
  Scheduled time: 01-Mar-2017 10:54:01
  Start time: 01-Mar-2017 10:54:01
  End time: 01-Mar-2017 10:54:01
  Elapsed time: 0 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 0
  Volume Session Time: 0
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status:
  Accurate: no
  Termination: Backup Canceled

01-Mar 10:54 thog-dir JobId 852: BAREOS 16.2.4 (01Jul16): 01-Mar-2017 10:54:01
  JobId: 852
  Job: Consolidate.2017-03-01_10.53.59_06
  Scheduled time: 01-Mar-2017 10:53:58
  Start time: 01-Mar-2017 10:54:01
  End time: 01-Mar-2017 10:54:01
  Termination: Consolidate OK
- Notes
(0002641)
therm (reporter)
2017-05-11 07:02

This one affects me as well.
Currently it is impossible to use Bareos in production if you have a lot of data (backup duration > 24 h) and AlwaysIncremental is not working. Without getting it to work with AllowDuplicateJobs=no, the AlwaysIncremental feature is nearly useless.
(0002672)
chrisbzc (reporter)
2017-06-20 12:03

Just wondering if anyone has a solution to this? I'd really like to roll out Bareos for all my company's backups, but this one fatal flaw in AlwaysIncremental is blocking that...
(0002870)
dextor (reporter)
2018-01-06 16:27

+1

I wrote a quick and dirty patch to fix this issue. Here it is:
--- a/src/dird/job.c 2017-12-14 13:57:08.000000000 +0300
+++ b/src/dird/job.c 2018-01-06 17:27:25.897508235 +0300
@@ -996,4 +996,8 @@
          }
 
+         if (djcr->is_JobType(JT_CONSOLIDATE)) {
+            break;
+         }
+
          if (cancel_dup || job->CancelRunningDuplicates) {
             /*
- Issue History
Date Modified Username Field Change
2017-03-01 11:56 chrisbzc New Issue
2017-05-11 07:02 therm Note Added: 0002641
2017-06-20 12:03 chrisbzc Note Added: 0002672
2018-01-06 16:27 dextor Note Added: 0002870

