View Issue Details

ID: 0000577
Project: bareos-core
Category: storage daemon
View Status: public
Last Update: 2019-12-18 15:24
Reporter: s2Xk4G
Assigned To: arogge
Priority: normal
Severity: major
Reproducibility: sometimes
Status: closed
Resolution: unable to reproduce
Platform: Linux
OS: Debian
OS Version: 8
Product Version: 15.2.2
Summary: 0000577: MaxSpoolSize not used completely

Description

Hello,

The problem is as follows: I'm launching a job in which about 6 TB should be written to tape. The data source is a remote client, also running Debian jessie with a v15.2.2 fd.

The storage daemon's spool size setting is

     "Maximum Spool Size = 450000000000"

(see "neo400.conf" attached)
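For context, the directive sits in the SD's Device resource together with the spool directory; the sketch below is only an illustration, with placeholder device name, media type and tape device node (the real values are in the attached neo400.conf):

-------------------
Device {
  Name = "LTO-Drive-0"                  # placeholder name
  Media Type = LTO                      # placeholder
  Archive Device = /dev/nst0            # placeholder tape device node
  Spool Directory = /var/spool/bareos/1
  Maximum Spool Size = 450000000000     # ~450 GB
}
-------------------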
The spool disk (mounted at /var/spool/bareos/1, see below) is a dedicated SSD with 477 GB of free space.

-------------------
backup02:/etc/bareos# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 92G 50G 38G 57% /
udev 10M 0 10M 0% /dev
tmpfs 4.8G 5.1M 4.8G 1% /run
tmpfs 12G 1.7M 12G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 12G 0 12G 0% /sys/fs/cgroup
/dev/sdb1 14T 13T 737G 95% /data1
/dev/sdc1 11T 11T 284G 98% /data2
/dev/sda2 7.2T 7.1T 93G 99% /data0
/dev/sdd 477G 42G 436G 9% /var/spool/bareos/1
/dev/sde 477G 33M 477G 1% /var/spool/bareos/2
tmpfs 2.4G 0 2.4G 0% /run/user/0
backup02:/etc/bareos#
-------------------


But the SD spools the data only in ~44 GB chunks.

I'm attaching the log of the SD ("Job_3061.log"). There it is clearly visible that the MaximumSpoolSize is taken over from the config correctly as 450 GB.
However, only 44 GB are spooled to the spool disk, then written to tape, then another 44 GB are spooled, and so on.

It looks to me as if a trailing zero of the "MaximumSpoolSize" gets "lost" during some calculation.
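A quick back-of-the-envelope check of that suspicion:

-------------------
configured:    450000000000 bytes  ~ 450 GB (~419 GiB)
one zero less:  45000000000 bytes  ~  45 GB (~ 42 GiB)
-------------------

which would roughly match the ~44 GB spooled per spool/despool cycle in Job_3061.log.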

Steps To Reproduce: Run this very long-running job.
Tags: No tags attached.

Activities

s2Xk4G (reporter)   2015-12-04 09:58

mvwieringen (developer)   2016-01-11 15:23   ~0002084

First of all, I would use something like Maximum Spool Size = 450G.
Other than that, I don't see this on my own install, where the backup job is around 230 GB; I use chunks of 60 GB and it spools 4 times.
The spooling code does take into consideration how full the filesystem is. Maybe that is not working, but it is a bit too difficult to reproduce this particular problem as we don't see it elsewhere.
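For illustration, the suggested notation with a size modifier (which the configuration parser should accept) would look like this:

     "Maximum Spool Size = 450 G"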
arogge (manager)   2019-01-16 13:09   ~0003187   (Last edited: 2019-01-16 13:09)

I just had a look at the code and there is no calculation performed on this parameter. Also, the parameter is parsed correctly, at least in 15.2, 17.2 and 18.2.

The reproducibility is set to "sometimes". Does this mean it doesn't happen on every job?
Are you running concurrent jobs to the same device?
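If concurrent jobs are involved, it may also be worth comparing the device-wide and per-job limits; as far as I can tell, Maximum Spool Size is shared by all jobs spooling to the device, while Maximum Job Spool Size caps each single job (values below are purely illustrative):

-------------------
Maximum Spool Size = 450 G       # shared by all jobs spooling to this device
Maximum Job Spool Size = 45 G    # hypothetical per-job limit, for illustration only
-------------------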

arogge (manager)   2019-04-10 16:39   ~0003318

Sorry, I'm closing the bug as there has not been any feedback for months.

Issue History

Date Modified Username Field Change
2015-12-04 09:58 s2Xk4G New Issue
2015-12-04 09:58 s2Xk4G File Added: job_log_and_storage_conf.tar
2016-01-11 15:23 mvwieringen Note Added: 0002084
2016-01-11 15:23 mvwieringen Status new => feedback
2019-01-16 13:09 arogge Assigned To => arogge
2019-01-16 13:09 arogge Status feedback => assigned
2019-01-16 13:09 arogge Note Added: 0003187
2019-01-16 13:09 arogge Status assigned => feedback
2019-01-16 13:09 arogge Note Edited: 0003187
2019-04-10 16:39 arogge Status feedback => resolved
2019-04-10 16:39 arogge Resolution open => unable to reproduce
2019-04-10 16:39 arogge Note Added: 0003318
2019-12-18 15:24 arogge Status resolved => closed