View Issue Details

ID:              0000205
Project:         bareos-core
Category:        director
View Status:     public
Last Update:     2023-08-22 14:47
Reporter:        zacha
Assigned To:     bruno-at-bareos
Priority:        normal
Severity:        feature
Reproducibility: always
Status:          closed
Resolution:      no change required
Platform:        Linux
OS:              Debian
OS Version:      6
Product Version: 13.2.0
Summary:         0000205: Allow multiple copy jobs to run at the same time
Description:
With the current version of Bareos it is very difficult to run multiple copy jobs at a time.

Assume the following:

Using a D2D2T strategy you back up

a) either to a File device
b) to a virtual tape library

Afterwards you want to copy your data to a changer with multiple tape drives.

The following problem occurs:

in case a:

As Bareos does not have something like an autochanger or meta file device for File backups, it is not possible to have multiple "drives" for a single type of file media.

So there is no way to let Bareos select a virtual file device for each copy job and the corresponding originating media: all copy jobs queue behind each other, even though we use a different medium for each backup job and every job to be copied could be read from a different medium.

Specifying multiple device names in a Storage config in the director seems to work at first sight, but produces jobs that back up to the wrong media, depending on the occupation of the "drives". This is not a documented feature, so I assume that its working at all is a defect. This is a problem for file backup in general if you want to back up to different media at the same time.

If you use a virtual tape library (like we do), we CAN back up different concurrent jobs to different media/pools using multiple "tape drives". That works as well as with a real changer device. But copying multiple jobs at a time is not possible. If we raise Maximum Concurrent Jobs for the copy job above 1, we run into the problem that Bareos tries to "mount" the same originating medium in different drives and blocks. Instead it should put these jobs into a waiting state until the other copy job has finished reading from the medium, and continue with another job that originates from a different medium that is currently not in use. That should work well, at least if you used multiple drives for backing up before, because the data will be spread across different media.
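
For illustration, a minimal sketch of the kind of copy job configuration that runs into this (all resource names here are hypothetical, not from our actual setup):

# director.conf - hypothetical copy job against the virtual tape library
Job {
  Name = "copy-to-tape"
  Type = Copy
  # copy every job in the source pool that has not been copied yet
  Selection Type = PoolUncopiedJobs
  # source pool the backup jobs wrote to; its "Next Pool" points at the changer
  Pool = vtl-pool
  # raising this above 1 is where Bareos tries to mount the same
  # originating medium in two drives and blocks
  Maximum Concurrent Jobs = 4
}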
Tags: No tags attached.

Activities

zacha

2013-07-09 11:15

reporter   ~0000516

Suggested solution:

a) introduce a meta file device that works like an autochanger for file media

b) introduce a locking mechanism that checks whether a medium is currently used or requested by a prior job for reading, and puts such a job into a waiting queue
maik

2013-07-12 16:38

administrator   ~0000525

Needs investigation and is probably a lot of work.
mvwieringen

2013-07-12 17:12

developer   ~0000529

For option a there are already a couple of options:

- vchanger (external program)
- diskchanger (supplied with the source and used in regression tests)
- internal virtual changer with a /dev/null changer command

It is known that the selection of tapes/volumes in the SD can sometimes run into a race condition when selecting a volume for read and write. Fixing this means a lot of work in the SD, which we plan to do eventually, but as it requires a serious redesign it probably won't happen before the end of this year.
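
For the third option, a rough sketch of what such an internal virtual changer can look like in the SD configuration (device names and paths are made up for illustration):

# bareos-sd.conf - virtual disk autochanger, no real changer hardware
Autochanger {
  Name = "vdisk-changer"
  Device = vdisk-drive-1, vdisk-drive-2
  # file devices have no changer hardware behind them
  Changer Device = /dev/null
  Changer Command = ""
}

Device {
  Name = vdisk-drive-1
  Media Type = File
  Archive Device = /var/backup/vdisk
  Autochanger = yes
  Drive Index = 0
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
# vdisk-drive-2 is defined the same way, with Drive Index = 1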
zacha

2013-07-12 17:24

reporter   ~0000531

Hello Marco,

yes, we know that we can use a vchanger (we do so), but I think in general it would be good if Bareos supported multi-file devices out of the box.

However, the virtual tape library does not help with the copy/move jobs, as described before.
holtkamp

2014-06-03 11:46

reporter   ~0000894

Last edited: 2014-06-03 11:47

> it is not possible to have multiple "drives" for a single type of file media.

That is not true. It is just not documented how this works.

My setup has been running for a few years, so this is tried and tested.

sd.conf:
Device {
  Name = filestorage-1
  Media Type = filestorage
  Archive Device = /var/backup/filestorage/
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}
[... repeat with additional numbered names ...]

--------

director.conf
Storage {
  Name = filestorage
  Address = x.x.x.x
  SDPort = 9103
  Password = "somepwd"
  Device = filestorage-1
  Device = filestorage-2
  Device = filestorage-N
[... repeat with as many writers as defined in the sd.conf - in my case 10 ...]
  Media Type = filestorage
  Maximum Concurrent Jobs = 10
}

-------

I have another set of those definitions for incremental backups (filestorage-incremental). Then I defined a full and an incremental pool for each client, pointing to filestorage and filestorage-incremental respectively, and each pool has its own label definition. The result, when looking at the target directories, is something like this:

-------
ls /var/backup/filestorage
clientA.full.0001
clientB.full.0002
clientC.full.0003

ls /var/backup/filestorage-incremental
clientA.incremental.0004
clientB.incremental.0005
clientC.incremental.0006
-------

I have copy jobs that copy all of those to my tape library (into two pools, one for fulls, one for incrementals, so the jobs from ALL clients are condensed into two pools). Using attribute spooling (and two drives) I am able to copy multiple jobs in parallel: when a job has finished writing to tape, the next one starts writing while the first one is still inserting its attributes. Works nicely, no race conditions, no blocking.
My tape drives are configured for a maximum of one concurrent job, as I do not want job interleaving on my tapes (it just makes things messy). If you change this, you could also have multiple jobs writing simultaneously to the tape drives.
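
As a sketch of the setup described above (names and values are illustrative, not my production configuration):

# director.conf - copy job with attribute spooling
Job {
  Name = "copy-fulls-to-tape"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  # source pool; its "Next Pool" points at the tape full pool
  Pool = filestorage-full-pool
  # spool attributes and insert them after the data is written, so the
  # next copy job can start writing to the drive in the meantime
  Spool Attributes = yes
  Maximum Concurrent Jobs = 2
}

# bareos-sd.conf - one tape drive, no job interleaving
Device {
  Name = tape-drive-1
  Media Type = LTO
  Archive Device = /dev/nst0
  Autochanger = yes
  Drive Index = 0
  Maximum Concurrent Jobs = 1
}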

Any questions? ;)

PK

2017-10-18 11:06

reporter   ~0002798

Thanks for that, holtkamp. I can follow what you have done until:

> I have another set of those definitions for incremental backups (filestorage-incremental). Then I defined a full and an incremental pool for each client, pointing to filestorage and filestorage-incremental respectively, and each pool has its own label definition.

How/where did you define the other set of those definitions for incremental backups (filestorage-incremental)?

And where/how did you define the full and incremental pools for each client that point to filestorage and filestorage-incremental respectively, each with its own label definition?

Thanks

Paul
holtkamp

2017-10-18 12:10

reporter   ~0002799

We abandoned that concept a while ago; now clients only have one pool.

What we did back then was create two pools for each client:

poolname: clientname-full-pool
label format: clientname.full.
poolname: clientname-incremental-pool
label format: clientname.incremental.
(plus all the other options).

and in the job definition for the client we specified:
Full Backup Pool = clientname-full-pool
Incremental Backup Pool = clientname-incremental-pool


This way it gets split into the pools.
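
Spelled out as resources, that would look something like this (values are illustrative; if the Label Format contains no variables, the Director appends a number to it to build the volume name):

# director.conf - hypothetical pool pair for one client
Pool {
  Name = clientA-full-pool
  Pool Type = Backup
  Label Format = "clientA.full."
  Storage = filestorage
}

Pool {
  Name = clientA-incremental-pool
  Pool Type = Backup
  Label Format = "clientA.incremental."
  Storage = filestorage-incremental
}

That produces volume names like clientA.full.0001, matching the directory listings shown earlier.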

If you want to have a look at my updated configuration, there is a YouTube video of my talk at OSBConf 2014: https://www.youtube.com/watch?v=PPU7u9y5RCY
PK

2017-10-18 12:23

reporter   ~0002800

Thank you for your reply. I can hear your presentation fine but cannot read the screen! Do you have the slides (PowerPoint?) from the presentation, please?
Thanks
holtkamp

2017-10-18 12:35

reporter   ~0002801

Slides are on the conference page here: http://osbconf.org/wp-content/uploads/2014/09/Migration-from-Bacula-to-Bareos-by-Daniel-Holtkamp.pdf

Have fun! If you have any questions you can contact me directly - format is lastname at riege dot com.
bruno-at-bareos

2023-08-22 14:47

manager   ~0005328

Long-forgotten issue. Use a virtual changer for file storage with multiple drives.

Issue History

Date Modified Username Field Change
2013-07-09 11:07 zacha New Issue
2013-07-09 11:15 zacha Note Added: 0000516
2013-07-12 16:38 maik Note Added: 0000525
2013-07-12 16:38 maik Status new => acknowledged
2013-07-12 17:12 mvwieringen Note Added: 0000529
2013-07-12 17:24 zacha Note Added: 0000531
2014-06-03 11:46 holtkamp Note Added: 0000894
2014-06-03 11:47 holtkamp Note Edited: 0000894
2017-10-18 11:06 PK Note Added: 0002798
2017-10-18 12:10 holtkamp Note Added: 0002799
2017-10-18 12:23 PK Note Added: 0002800
2017-10-18 12:35 holtkamp Note Added: 0002801
2023-08-22 14:47 bruno-at-bareos Assigned To => bruno-at-bareos
2023-08-22 14:47 bruno-at-bareos Status acknowledged => closed
2023-08-22 14:47 bruno-at-bareos Resolution open => no change required
2023-08-22 14:47 bruno-at-bareos Note Added: 0005328