View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0000730 | bareos-core | [All Projects] storage daemon | public | 2016-12-02 18:34 | 2018-04-20 15:16 |
|Fixed in Version|
|Summary||0000730: User specified Device spool size reached|
|Description||When I run multiple backups at the same time, after a short while I see something like this:|
02-Dec 17:55 beyla-sd JobId 158706: User specified Device spool size reached: DevSpoolSize=250,000,840,710 MaxDevSpoolSize=250,000,000,000
02-Dec 17:55 beyla-sd JobId 158705: User specified Device spool size reached: DevSpoolSize=250,001,872,914 MaxDevSpoolSize=250,000,000,000
02-Dec 17:55 beyla-sd JobId 158706: Writing spooled data to Volume. Despooling 42,296,623,249 bytes ...
02-Dec 17:55 beyla-sd JobId 158705: Writing spooled data to Volume. Despooling 18,478,516,008 bytes ...
My spool size is 250GB, but it stopped spooling after 42.2GB + 18.4GB. It seems the storage daemon thinks the spool is already partly filled with something and only uses the remainder, even though the spool is actually empty and the whole 250GB is available.
If I don't restart the storage daemon to reset the spool capacity back to 250GB, it keeps decreasing again and again; at one point it ended up using just 1MB of the spool!
I remember this bug was also present in 15.2.
Thank you for your answer.
|Steps To Reproduce||Run multiple backups at the same time. You may need to cancel one backup while it has some amount of data spooled, or turn off the client to cause an error so that Bareos cancels the backup.|
|Additional Information||In the log you can see the following:|
At 02-Dec 03:19 the spool size was 170+52GB = ca. 222GB (I don't have a log showing the original 250GB),
but some jobs finished at 02-Dec 03:09 and at 02-Dec 04:39.
At 02-Dec 05:22 the spool size is only 153+38GB = ca. 191GB.
At 02-Dec 08:28 and 02-Dec 08:30 another 2 jobs finished, and at
02-Dec 08:50 the spool size is only 143+17GB = 160GB,
and so on... the spool capacity keeps decreasing.
|Tags||No tags attached.|