View Issue Details

ID: 0001478
Project: bareos-core
Category: director
View Status: public
Last Update: 2022-09-30 11:40
Reporter: grd
Assigned To: arogge
Priority: normal
Severity: minor
Reproducibility: always
Status: closed
Resolution: no change required
Platform: Linux
OS: Debian
OS Version: 10
Product Version: 21.1.4
Summary: 0001478: Name labeling race condition
Description: Hello. I have a configuration where every job creates a separate volume, named like
$level-$id-$nodename.dat

I also have:
  Maximum Volume Jobs = 1
  Volume Use Duration = 6h
  Recycle = no
  AutoPrune = yes

The problem occurs when one job runs a long Client Run Before Job script against one storage while another job writes to a different storage:

12 2022-08-12 20:00:01 rms52 JobId 426: Labeled new Volume "Full-427-vps712.dat" on device "FileStorage" (/var/lib/bareos/storage).


11 2022-08-12 20:00:01 vps712 JobId 427: shell command: run ClientBeforeJob "/usr/sbin/backup"
5 2022-08-12 20:00:00 bareos.dcsig JobId 427: Created new Volume "Full-427-vps712.dat" in catalog.

I expect that after a job creates a volume, only that job will write to it.
Steps To Reproduce:
1) Run a job to storage 1
2) Run a long Client Run Before Job (storage 1)
3) Run a job to storage 2
Tags: No tags attached.

Activities

bruno-at-bareos

2022-08-16 13:34

developer   ~0004724

$id is not $jobid; the source of this variable is unknown, so this is not a bug.
For your information, most of the variables presented here will be deprecated (and removed one release later): https://docs.bareos.org/Configuration/CustomizingTheConfiguration.html#variable-expansion-on-volume-labels
grd

2022-08-16 13:41

reporter   ~0004725

Sorry, I miscopied it.

Label Format = "Full-${JobId}-${Job}.dat"
bruno-at-bareos

2022-08-16 15:20

developer   ~0004726

Sorry, but isn't the JobId 426 in that case?

12 2022-08-12 20:00:01 rms52 JobId 426: Labeled new Volume "Full-427-vps712.dat" on device "FileStorage" (/var/lib/bareos/storage).

Without the full configuration and logs attached, this will be a ping-pong game of guessing.
Please provide enough information to make it reproducible (or at least try to).
grd

2022-08-16 15:34

reporter   ~0004727

Sorry, my fault. Here are the logs for jobs 426 and 427:

14 2022-08-12 20:00:01 bareos.dcsig JobId 426: Max Volume jobs=1 exceeded. Marking Volume "Full-427-vps712.dat" as Used.
13 2022-08-12 20:00:01 rms52 JobId 426: Wrote label to prelabeled Volume "Full-427-vps712.dat" on device "FileStorage" (/var/lib/bareos/storage)
12 2022-08-12 20:00:01 rms52 JobId 426: Labeled new Volume "Full-427-vps712.dat" on device "FileStorage" (/var/lib/bareos/storage).
11 2022-08-12 20:00:01 bareos-fd JobId 426: ACL support is enabled
10 2022-08-12 20:00:01 bareos-fd JobId 426: Extended attribute support is enabled
9 2022-08-12 20:00:01 bareos-fd JobId 426: Connected Storage daemon at rms52.mgt.gurtam.net:9103, encryption: None
8 2022-08-12 20:00:01 bareos.dcsig JobId 426: Encryption: None
7 2022-08-12 20:00:01 bareos.dcsig JobId 426: Handshake: Cleartext
6 2022-08-12 20:00:01 bareos.dcsig JobId 426: Connected Client: bareos-fd at 127.0.0.1:9102, encryption: None
5 2022-08-12 20:00:01 bareos.dcsig JobId 426: Using Device "FileStorage" to write.
4 2022-08-12 20:00:01 bareos.dcsig JobId 426: Connected Storage daemon at rms52.mgt.gurtam.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2022-08-12 20:00:00 bareos.dcsig JobId 426: Start Backup JobId 426, Job=backup-bareos-fd.2022-08-12_20.00.00_26
2 2022-08-12 20:00:00 bareos.dcsig JobId 426: No prior or suitable Full backup found in catalog. Doing FULL backup.
1 2022-08-12 20:00:00 bareos.dcsig JobId 426: No prior Full backup Job record found.


18426 2022-08-12 21:32:05 bareos.dcsig JobId 427: Error: cats/sql_create.cc:427 Volume "Full-427-vps712.dat" already exists.
18425 2022-08-12 21:27:05 rms53 JobId 427: Job vps712.2022-08-12_20.00.00_27 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "FileStorage" (/var/lib/bareos/storage)
Pool: Full
Media type: File
18424 2022-08-12 21:27:05 bareos.dcsig JobId 427: Error: cats/sql_create.cc:427 Volume "Full-427-vps712.dat" already exists.
11 2022-08-12 20:00:01 vps712 JobId 427: shell command: run ClientBeforeJob "/usr/sbin/gtmms-backup" ###### this takes about 30 minutes
10 2022-08-12 20:00:01 vps712 JobId 427: Connected Storage daemon at rms53.mgt.gurtam.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
9 2022-08-12 20:00:01 bareos.dcsig JobId 427: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
8 2022-08-12 20:00:01 bareos.dcsig JobId 427: Handshake: Immediate TLS
7 2022-08-12 20:00:01 bareos.dcsig JobId 427: Connected Client: vps712 at vps712.dc-sig.gurtam.net:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
6 2022-08-12 20:00:00 bareos.dcsig JobId 427: Using Device "FileStorage" to write.
5 2022-08-12 20:00:00 bareos.dcsig JobId 427: Created new Volume "Full-427-vps712.dat" in catalog.
4 2022-08-12 20:00:00 bareos.dcsig JobId 427: Connected Storage daemon at rms53.mgt.gurtam.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2022-08-12 20:00:00 bareos.dcsig JobId 427: Start Backup JobId 427, Job=vps712.2022-08-12_20.00.00_27
2 2022-08-12 20:00:00 bareos.dcsig JobId 427: No prior or suitable Full backup found in catalog. Doing FULL backup.
1 2022-08-12 20:00:00 bareos.dcsig JobId 427: No prior Full backup Job record found.

Config (Pool):
Pool {
  Name = Full
  Pool Type = Backup
  Maximum Volume Jobs = 1
  Volume Use Duration = 6h
  Recycle = no # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 35 days # How long should the Full Backups be kept?
  Label Format = "Full-${JobId}-${Job}.dat" # Volumes will be labeled "Full-<volume-id>"
}
arogge

2022-09-30 11:39

developer   ~0004792

There is no guarantee that the job that labels a volume will actually use it.
As you can see, your job 426 used the volume labeled by job 427. When job 427 then tried to label a new volume, it failed because your autolabel configuration produced a duplicate name.
I can only advise against using custom volume labels; it usually does not work out, especially if you use job variables in the volume names (as you just found out).
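The sequence described above can be sketched as a tiny model (hypothetical Python, not Bareos code; the catalog is modeled as a plain set of names, and the volume name is derived from the requesting job's id, mirroring the reporter's Label Format = "Full-${JobId}-${Job}.dat"):

```python
# Minimal model of the volume-labeling race (hypothetical, not Bareos code).

catalog = set()  # names of volumes that already exist in the catalog


def create_volume(job_id):
    """Create a catalog entry named after the requesting job."""
    name = f"Full-{job_id}.dat"
    if name in catalog:
        # This is the failure JobId 427 hit: "Volume ... already exists."
        raise RuntimeError(f'Volume "{name}" already exists.')
    catalog.add(name)
    return name


# 20:00:00 - JobId 427 starts, creates "Full-427.dat" in the catalog,
# then blocks on its long ClientRunBeforeJob script.
volume = create_volume(427)

# 20:00:01 - JobId 426 reaches the storage daemon first; the device hands
# it the next appendable volume, which is the one 427 pre-created.
writer = {volume: 426}

# ~21:27 - JobId 427 finally needs an appendable volume. Its label format
# generates the same name again, and the catalog insert collides.
try:
    create_volume(427)
    error = None
except RuntimeError as exc:
    error = str(exc)

print(error)
```

The collision only exists because the name is a pure function of the job that happens to request the labeling; a Label Format without job-derived variables (or Bareos' own automatic labeling) cannot generate the same name for two concurrent jobs in this way.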

Issue History

Date Modified Username Field Change
2022-08-16 11:11 grd New Issue
2022-08-16 13:34 bruno-at-bareos Note Added: 0004724
2022-08-16 13:41 grd Note Added: 0004725
2022-08-16 15:20 bruno-at-bareos Note Added: 0004726
2022-08-16 15:34 grd Note Added: 0004727
2022-09-30 11:39 arogge Note Added: 0004792
2022-09-30 11:40 arogge Assigned To => arogge
2022-09-30 11:40 arogge Status new => resolved
2022-09-30 11:40 arogge Resolution open => no change required
2022-09-30 11:40 arogge Status resolved => closed