View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0001215 | bareos-core | [All Projects] director | public | 2020-03-23 12:46 | 2020-04-20 10:34 |

| Priority | Severity | Reproducibility | Fixed in Version |
|---|---|---|---|
| normal | feature | have not tried | |

Summary: 0001215: Support number of jobs / storage daemon
Description: We have 3 different storage daemons running on 3 different servers, and we really need a way to limit the number of jobs per storage daemon. They all have independent hardware resources, their own bandwidth, and varying processing power, so it's somewhat strange this functionality doesn't exist yet. It probably doesn't make much sense to have common settings for this?
We'd need this for regular backup jobs and certainly also for consolidate jobs.
|Tags||No tags attached.|
arogge: This is a current limitation and there is no easy way to fix it. The Director only knows about Storages and has no idea on which Storage Daemon a given Storage is located.
hostedpower: Can you set a max on the director and also on each storage daemon? The director could then query each storage daemon to see what has been set? :)
arogge: The director doesn't know there is a storage daemon. It just knows there is a storage and that it can be accessed using an address, a name and a pre-shared key. To apply any limitation on a per-storage-daemon basis, the director first needs to know what a storage daemon is.
Definitely doable, but neither simple nor on our roadmap.
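For context, this is all the Director sees of a Storage Daemon: a Storage resource in its own configuration carrying an address, a name and a shared password. A minimal sketch (the host name and resource names are illustrative, and other required directives are omitted):

```
# bareos-dir.conf -- illustrative Storage resource.
# The Director only knows this endpoint; it has no notion of which
# physical Storage Daemon process sits behind it, so two Storage
# resources on the same server look like two unrelated storages.
Storage {
  Name = File1
  Address = sd1.example.com   # hypothetical host
  Password = "secret"         # pre-shared key with the SD
  Device = FileStorage
  Media Type = File
}
```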
hostedpower: As soon as it connects to that storage, would it be possible to simply assign jobs (as is the case now)? After that, the storage daemon decides how many it wants to run concurrently. Still, the director should be smart enough to distribute jobs more or less evenly across the different storage daemons.
Let's say we set 24 as the maximum number of jobs for the director, and the limit on each storage daemon is 8. Then at least the daemons would be protected from overloading.
How do other people with multiple storage daemons cope with this? It's really not very scalable like this, while the architecture seems to be designed for exactly that? :)
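The caps discussed above map onto Bareos's existing `Maximum Concurrent Jobs` directive, which can be set both in the Director resource and in each Storage Daemon's own configuration. A hedged sketch with the numbers from the example (24 on the Director, 8 per SD; fragments only, other required directives omitted):

```
# bareos-dir.conf
Director {
  Name = bareos-dir
  Maximum Concurrent Jobs = 24   # global cap across all jobs the Director runs
}

# bareos-sd.conf on each storage server
Storage {
  Name = sd1
  Maximum Concurrent Jobs = 8    # this daemon accepts at most 8 jobs at once
}
```

This does not give the Director the per-daemon awareness arogge describes as missing, but the SD-side limit does protect each daemon from overload in the way suggested here.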
Issue History

| Date Modified | Username | Field | Change |
|---|---|---|---|
| 2020-03-23 12:46 | hostedpower | New Issue | |
| 2020-04-17 15:12 | arogge | Assigned To | => arogge |
| 2020-04-17 15:12 | arogge | Status | new => acknowledged |
| 2020-04-17 15:12 | arogge | Note Added | 0003944 |
| 2020-04-17 19:53 | hostedpower | Note Added | 0003950 |
| 2020-04-20 08:36 | arogge | Note Added | 0003951 |
| 2020-04-20 10:34 | hostedpower | Note Added | 0003952 |