View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000307 | bareos-core | [All Projects] director | public | 2014-06-04 11:35 | 2014-06-17 16:37 |
|Fixed in Version|
|Summary||0000307: copy job from one catalog to another only works in special configurations|
|Description||I want to copy a job from one catalog to another on a single SD.|
This works only when two SDs are involved, and only with some config magic.
|Steps To Reproduce||Define 2 catalogs (and don't forget to create the database schemas, make tables and grant permissions).|
Define 2 storage daemons.
Configure client resource clientA with catalogA
Configure pool resource poolA with catalogA and storageA
Configure pool resource poolB with catalogB and storageB
Define initial backup job to poolA
Define a copy job between poolA and poolB
Run backup job
-> Initial backup to poolA, check database, everything in catalogA
Run copy job
-> Job will HANG in selecting the correct pools for copy.
03-Jun-2014 16:16:16 backup-dir: migrate.c:254-0 mig_jcr: Name=clientA JobId=3 Type=B Level=F
Looking at migrate.c, I suspect it hangs in set_migration_next_pool, as the new pool is not in catalogA.
Now for some configuration magic:
Comment out catalogB from pool resource poolB (so the pool will be created in both catalogs)
Run copy job
-> Job will complete successfully with all data written to catalogA
= Almost, but not quite what we want.
And now for some advanced tricks:
Reset everything (database, volumes) but keep config as-is (for a fresh start)
Run initial backup job
Change client resource clientA to use catalogB
Run copy job
-> Job will complete successfully with all data written to catalogB
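For reference, the setup in the steps above might look roughly like the following director configuration fragment. This is a sketch only: resource, database, and host names are placeholders, and unrelated directives (FileSet, Schedule, passwords) are omitted or invented for illustration.

```
# Two catalogs (the database schemas must already exist)
Catalog {
  Name = catalogA
  DB Driver = postgresql
  DB Name = bareos_a
  DB User = bareos
  DB Password = "secret"
}
Catalog {
  Name = catalogB
  DB Driver = postgresql
  DB Name = bareos_b
  DB User = bareos
  DB Password = "secret"
}

# Client bound to catalogA (switched to catalogB for the "advanced trick")
Client {
  Name = clientA
  Address = clienta.example.com
  Password = "secret"
  Catalog = catalogA
}

# poolA in catalogA on storageA; Next Pool points the copy job at poolB
Pool {
  Name = poolA
  Pool Type = Backup
  Catalog = catalogA
  Storage = storageA
  Next Pool = poolB
}

# poolB in catalogB on storageB; commenting out "Catalog" here is the
# "config magic" that lets the copy job complete (with everything in catalogA)
Pool {
  Name = poolB
  Pool Type = Backup
  Catalog = catalogB
  Storage = storageB
}

# Copy job between the pools
Job {
  Name = copy-a-to-b
  Type = Copy
  Client = clientA
  Pool = poolA
  Selection Type = PoolUncopiedJobs
  Messages = Standard
}
```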
|Additional Information||Tried on Version: 14.2.0 (31 May 2014)|
It is possible to do this, but at the moment it requires you to trick the director a bit. And it only works when two SDs are involved; I was unable to reproduce this (kind of wanted) behavior with just one SD.
It would be great if this worked on a single SD. I can work around all the other bits, and maybe even around this too with some heavy SQL scripting, but I would prefer a simpler solution.
I realize that this breaks the copy-job chain, so when the original backup job is deleted/purged/pruned the copy will NOT be upgraded (Bareos only looks at the one catalog, of course). And since it is labeled as a copy job it cannot be used for restore (unless the restore copy keyword is used, I guess). I circumvent this by running an SQL script after the job that immediately updates the job record in the new catalog with Type = B, corrects the PoolId, and sets PriorJobId = 0. That way the job is completely disconnected from the original.
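The post-job SQL fixup described above might look something like the following sketch against the Bareos catalog schema. The pool name and the JobId of the freshly copied job are placeholders you would substitute (e.g. from a post-job script).

```sql
-- Run against catalogB after the copy job finishes.
-- Relabels the copy as a plain backup and severs its link to the original.
UPDATE Job
   SET Type = 'B',                        -- treat it as a normal backup job
       PoolId = (SELECT PoolId FROM Pool
                  WHERE Name = 'poolB'),  -- correct the pool reference
       PriorJobId = 0                     -- disconnect from the original job
 WHERE JobId = 42;                        -- placeholder: id of the copied job
```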
Also, when restoring, you need to know which catalog to look in. But you need to know that anyway when working with multiple catalogs.
Some background on why we want to do all this (you might wonder :) ):
After maintaining a growing database for a couple of years, it gets really REALLY messy in my experience. And big. So I wanted to split the catalogs into client groups: dev, testing, general production, clearingcenter (one of our products; has tons of small files), procars (another of our products; also tons of files) and archive storage (tapes with 10-year retention). Utilizing multiple catalogs should make all of this easier to manage on the database side.
|Tags||No tags attached.|
|Related to: https://bugs.bareos.org/view.php?id=305|
I think there are quite a number of problems with using multiple catalogs, as
the code assumes too much and thinks only one catalog exists. Fixing that means
major surgery; I don't know when or if we will get to it, as it's quite a bit of
work for something not used by many. So for now there are few triggers to make
it blink on the radar.
If time permits this might be tackled one day, which is why it's set to acknowledged,
but as stated before there is more important stuff blocking progress on this.