View Issue Details

ID: 0000643
Project: bareos-core
Category: [All Projects] storage daemon
View Status: public
Last Update: 2016-04-18 12:12

Reporter: otto
Assigned To:
Priority: normal
Severity: feature
Reproducibility: always
Status: new
Resolution: open
Platform: Linux
OS: Debian
OS Version: 8
Product Version: 15.2.3
Fixed in Version:
Summary: 0000643: More efficient use of tape drives
Description:
I use a tape library with two tape drives and data spooling. Later I will upgrade to four drives, which reduces problem 1 but increases problem 2...

 Prefer Mounted Volumes = yes (default)
 

Problem 1:
A long-running job writing to pool A blocks a tape drive for all other pools.
So in my configuration, two jobs with a long (spooling) duration in two different pools
will block the whole backup.

Problem 2:
Many jobs in one pool will use only one tape drive.
The other tape drives sit idle, and the backup performance of all these jobs
is limited to a single drive (several jobs end up in despool_wait).

Perhaps the reservation of a tape drive could happen at the beginning of despooling instead.

I think this wouldn't be trivial, but it is necessary for efficient use of a tape library with multiple tape drives.
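A possible workaround sketch, not the reporter's actual configuration: according to the Bareos Director documentation, setting Prefer Mounted Volumes = no in a Job resource lets the director try to reserve an idle drive instead of queuing behind the already-mounted volume. All resource names and values below are illustrative assumptions.

```
# Hypothetical Job resource sketch; directive names follow the
# Bareos Director documentation, everything else is made up.
Job {
  Name = "example-backup"          # illustrative name
  JobDefs = "DefaultJob"           # illustrative JobDefs
  # Try to reserve a free, unmounted drive rather than waiting
  # for the drive already mounted with this pool's volume:
  Prefer Mounted Volumes = no
  Spool Data = yes
}
```

Note that the documentation warns this directive can cause extra volume mounts and unmounts in a busy autochanger, so it trades drive utilization against mount overhead rather than solving the reservation timing issue described above.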
Additional Information:
*status storage=Scalar6k-TWR
Connecting to Storage daemon Scalar6k-TWR at bareos-sd.mydomain:9103

bareos1-sd Version: 15.2.3 (07 March 2016) x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
Daemon started 13-Apr-16 16:52. Jobs: run=586, running=11.
 Heap: heap=405,504 smbytes=14,445,531 max_bytes=32,272,609 bufs=908 max_bufs=1,220
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0 bwlimit=0kB/s

Running Jobs:
Writing: Full Backup job dxxx-bj_home JobId=18624 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=2,519,006 Bytes=4,176,252,155,687 AveBytes/sec=12,817,909 LastBytes/sec=5,244,472
    FDReadSeqNo=90,013,841 in_msg=82987117 out_msg=6 fd=23
Writing: Full Backup job gxxx-bj1 JobId=18731 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=5,355,457 Bytes=1,217,339,579,708 AveBytes/sec=4,445,027 LastBytes/sec=1,436,783
    FDReadSeqNo=76,768,546 in_msg=61286029 out_msg=6 fd=12
Writing: Full Backup job dxyy-bj_home JobId=18732 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=6,303,219 Bytes=9,513,898,578,229 AveBytes/sec=23,131,529 LastBytes/sec=1,703,936
    FDReadSeqNo=199,457,777 in_msg=180857551 out_msg=6 fd=14
Writing: Incremental Backup job dxyy-bj_user JobId=19046 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=4 in_msg=3 out_msg=4 fd=53
Writing: Full Backup job wxxx-bj1 JobId=19064 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=20,361,201 Bytes=3,918,859,194,423 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=228,578,427 in_msg=171579584 out_msg=6 fd=16
Writing: Full Backup job dxyy-bj_owncloud JobId=19066 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=4 in_msg=3 out_msg=4 fd=74
Writing: Full Backup job biox-bj_raid18 JobId=19067 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=1 despooling=0 despool_wait=0
    Files=299,270 Bytes=1,224,903,194,950 AveBytes/sec=9,839,289 LastBytes/sec=4,000,380
    FDReadSeqNo=21,378,861 in_msg=20499910 out_msg=6 fd=39
Writing: Full Backup job biox-bj_raid7 JobId=19068 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=1 despool_wait=0
    Files=83,141 Bytes=1,256,041,667,421 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=21,110,883 in_msg=20786954 out_msg=6 fd=62
Writing: Full Backup job biox-bj_raid11 JobId=19069 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=51,732 Bytes=1,088,934,570,327 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=18,436,190 in_msg=18235013 out_msg=6 fd=42
Writing: Incremental Backup job jxxx-bj JobId=19347 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=65,887 Bytes=512,108,906 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=405,080 in_msg=272353 out_msg=6 fd=29
Writing: Full Backup job lxxx-bj JobId=19366 Volume="OF0465L6"
    pool="TWR-3M" device="LTO6-twr-05" (/dev/LTO6-twr-05)
    spooling=0 despooling=0 despool_wait=1
    Files=17,748 Bytes=244,545,851 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=126,005 in_msg=85004 out_msg=10 fd=7
====

Jobs waiting to reserve a drive:
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
===================================================================
...
====

Device status:
Autochanger "Scalar6k-TWR" with devices:
   "LTO6-twr-05" (/dev/LTO6-twr-05)
   "LTO6-twr-06" (/dev/LTO6-twr-06)

Device "LTO6-twr-05" (/dev/LTO6-twr-05) is mounted with:
    Volume: OF0465L6
    Pool: TWR-3M
    Media type: LTO6
    Slot 171 is loaded in drive 0.
    Total Bytes=2,126,624,100,462 Blocks=15,870,174 Bytes/block=134,001
    Positioned at File=141 Block=11,100
==

Device "LTO6-twr-06" (/dev/LTO6-twr-06) is mounted with:
    Volume: OF0699L6
    Pool: TWR-3W
    Media type: LTO6
    Slot 234 is loaded in drive 1.
    Total Bytes=2,324,103,240,764 Blocks=2,216,501 Bytes/block=1,048,545
    Positioned at File=161 Block=0
==
====

Used Volume status:
OF0465L6 on device "LTO6-twr-05" (/dev/LTO6-twr-05)
    Reader=0 writers=9 reserves=2 volinuse=1
OF0699L6 on device "LTO6-twr-06" (/dev/LTO6-twr-06)
    Reader=0 writers=0 reserves=0 volinuse=0
====

Data spooling: 9 active jobs, 1,421,936,868,161 bytes; 583 total jobs, 3,530,901,944,142 max bytes/job.
Attr spooling: 9 active jobs, 9,187,943,025 bytes; 502 total jobs, 9,187,943,025 max bytes.
====
Tags: No tags attached.

Activities

There are no notes attached to this issue.

Issue History

Date Modified     Username  Field      Change
2016-04-18 12:12  otto      New Issue