View Issue Details

ID: 0001215
Project: bareos-core
Category: [All Projects] director
View Status: public
Last Update: 2020-04-20 10:34
Reporter: hostedpower
Assigned To: arogge
Priority: normal
Severity: feature
Reproducibility: have not tried
Status: acknowledged
Resolution: open
Product Version: 19.2.6
Fixed in Version:
Summary: 0001215: Support number of jobs / storage daemon
Description:

Hi,


We now have three different storage daemons running on three different servers. We really need a way to limit the number of jobs per storage daemon...

They all have independent hardware resources, their own bandwidth, and varying processing power. It is somewhat strange that this functionality does not exist yet; a single common limit for all of them probably does not make much sense.

We would need this for regular backup jobs and certainly also for consolidate jobs...
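For context, the per-daemon cap that exists today lives on the storage daemon itself, not in the Director. A minimal sketch of that directive, assuming made-up resource names:

```
# bareos-sd.d/storage/bareos-sd.conf -- on each storage daemon
Storage {
  Name = sd1
  Maximum Concurrent Jobs = 8   # this daemon will not run more than 8 jobs at once
}
```

This protects the individual host, but the Director has no matching per-daemon setting, which is what this request is about.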

Tags: No tags attached.

Activities

arogge (developer)   2020-04-17 15:12   ~0003944

This is a current limitation and there is no easy way to fix it. The Director only knows about Storages; it has no idea which Storage Daemon a given Storage is located on.
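To illustrate the limitation: two Director-side Storage resources can point at the same physical daemon, yet the Director treats them as unrelated entries. A sketch with hypothetical names and addresses:

```
# bareos-dir.d/storage/ -- Director configuration
Storage {
  Name = sd1-file
  Address = sd1.example.com      # same host as below...
  Password = "secret"
  Device = FileStorage
  Media Type = File
  Maximum Concurrent Jobs = 4
}

Storage {
  Name = sd1-tape
  Address = sd1.example.com      # ...but the Director cannot tell
  Password = "secret"
  Device = LTO-Drive
  Media Type = LTO
  Maximum Concurrent Jobs = 4
}
```

Each limit is enforced per Storage resource, so up to 8 jobs could reach sd1 concurrently even though each resource is capped at 4.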
hostedpower (reporter)   2020-04-17 19:53   ~0003950

Could you set a maximum on the Director and also on each storage daemon? The Director could then query each storage daemon to see what has been set. :)
arogge (developer)   2020-04-20 08:36   ~0003951

The Director doesn't know there is a storage daemon. It just knows there is a Storage that can be accessed using an address, a name, and a pre-shared key. To apply any limitation on a per-storage-daemon basis, the Director first needs to know what a storage daemon is.
Definitely doable, but neither simple nor on our roadmap.
hostedpower (reporter)   2020-04-20 10:34   ~0003952

As soon as it connects to that Storage, would it be possible to simply assign jobs (as is now the case)? After that, the storage daemon decides how many it wants to run concurrently. Still, the Director should be smart enough to distribute jobs more or less evenly across the different storage daemons.

Let's say we set 24 as the maximum number of jobs on the Director, while the limit on each storage daemon is 8. Then at least the daemons would be protected from overloading.

How do other people with multiple storage daemons cope with this? It's really not very scalable like this, while the architecture seems to be designed for it? :)
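The 24/8 scenario above maps onto directives that already exist, though only as independent caps, not as a coordinated per-daemon limit. A sketch, not a tested configuration:

```
# bareos-dir.d/director/bareos-dir.conf
Director {
  Name = bareos-dir
  Maximum Concurrent Jobs = 24   # global cap across all running jobs
}

# bareos-sd.d/storage/bareos-sd.conf -- on each of the three storage daemons
Storage {
  Name = sd1
  Maximum Concurrent Jobs = 8    # hard cap protecting this host
}
```

With this, an overloaded daemon makes excess jobs wait rather than run, but the Director still cannot distribute jobs evenly, because it does not know which Storage resources share a daemon.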

Issue History

Date Modified Username Field Change
2020-03-23 12:46 hostedpower New Issue
2020-04-17 15:12 arogge Assigned To => arogge
2020-04-17 15:12 arogge Status new => acknowledged
2020-04-17 15:12 arogge Note Added: 0003944
2020-04-17 19:53 hostedpower Note Added: 0003950
2020-04-20 08:36 arogge Note Added: 0003951
2020-04-20 10:34 hostedpower Note Added: 0003952