View Issue Details
ID: 0001307
Project: bareos-core
Category: [All Projects] director
View Status: public
Date Submitted: 2021-01-19 01:44
Last Update: 2021-01-19 01:44
Reporter: Ruth Ivimey-Cook
Assigned To:
Platform: amd64
OS: Linux
OS Version: Ubuntu 20.04
Fixed in Version:
Summary: 0001307: Save queued but not started jobs persistently, so that if the director is restarted they are not forgotten
Description: If, while there is a backlog of jobs, the director is stopped, crashes, the server dies, or similar, the queue of upcoming jobs is lost and the schedule restarts from the point at which the director comes back up. This can throw the schedule badly out of kilter, and any manually started jobs are forgotten entirely.
It would be very useful if the director stored the queue in an SQL table, so that it survives any interruption.
If this were done, a CLI flag could perhaps be added to prevent jobs being re-added from the SQL queue, in case a problem in the queue prevented proper operation.
Steps To Reproduce:
1. Start a number of jobs on the director.
2. Shut down the director (a normal shutdown is fine).
3. Restart the director and note that none of the previously started jobs is still queued.
Additional Information: In a way, I feel this is a data integrity issue, not just a nice-to-have.
One could argue that you don't shut down servers, but stuff happens and life isn't perfect. Losing jobs in a carefully set-up schedule may be acceptable in many cases, but it can have significant effects. For example, if the lost job was a full backup, and the prior incrementals are then expired a short time later, it becomes possible (I think) for only the most recent incremental to remain, rather than the Full+incrementals the schedule defined.
Tags: No tags attached.