View Issue Details
ID: 1608
Category: [bareos-core] director
Severity: minor
Reproducibility: have not tried
Date Submitted: 2024-03-01 06:50
Last Update: 2024-03-01 06:50
Reporter: Int
Priority: normal
Status: new
Product Version: 23.0.1
Resolution: open
Projection: none
ETA: none
Summary: Migrate job failed with error but shows status "Success" in the job history
Description: A migrate job failed with error

Error: Could not start migration job.

but in the job history of the Web UI it is shown with status "Success" :

35646 migrate Migration job Full 0 0.00 B 1 Success
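A mismatch like this (an "Error: Could not start migration job." line inside a job that the history lists as Success) can be spotted mechanically. A minimal, hypothetical shell sketch; the joblog text is inlined here from the lines quoted in this report, but in practice it would come from "llist joblog jobid=..." in bconsole:

```shell
# Count Error lines in a saved joblog; a non-zero count for a job shown
# as "Success" indicates the status mismatch described in this report.
joblog=' logtext: bareos-dir JobId 35646: Job failed.
 logtext: bareos-dir JobId 35646: Error: Could not start migration job.
 logtext: bareos-dir JobId 35646: Termination: Migration OK'
errors=$(printf '%s\n' "$joblog" | grep -c 'Error:')
echo "$errors"
```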

Tags:
Steps To Reproduce:
Additional Information: *llist joblog jobid=35646
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
    time: 2024-03-01 06:15:45
 logtext: bareos-dir JobId 35646: The following 2 JobIds were chosen to be migrated: 29390,34747


    time: 2024-03-01 06:15:45
 logtext: bareos-dir JobId 35646: Automatically selected Catalog: MyCatalog


    time: 2024-03-01 06:15:45
 logtext: bareos-dir JobId 35646: Using Catalog "MyCatalog"


    time: 2024-03-01 06:15:45
 logtext: bareos-dir JobId 35646: Job failed.


    time: 2024-03-01 06:15:45
 logtext: bareos-dir JobId 35646: Error: Could not start migration job.


    time: 2024-03-01 06:15:45
 logtext: bareos-dir JobId 35646: Automatically selected Catalog: MyCatalog


    time: 2024-03-01 06:15:45
 logtext: bareos-dir JobId 35646: Using Catalog "MyCatalog"


    time: 2024-03-01 06:18:51
 logtext: bareos-dir JobId 35646: Job queued. JobId=35648


    time: 2024-03-01 06:18:51
 logtext: bareos-dir JobId 35646: Migration JobId 35648 started.


    time: 2024-03-01 06:18:51
 logtext: bareos-dir JobId 35646: Bareos bareos-dir 23.0.1~pre57.8e89bfe0a (16Jan24):
  Build OS: Red Hat Enterprise Linux release 8.7 (Ootpa)
  Current JobId: 35646
  Current Job: migrate.2024-03-01_06.15.43_17
  Catalog: "MyCatalog" (From Default catalog)
  Start time: 01-Mar-2024 06:15:45
  End time: 01-Mar-2024 06:18:51
  Elapsed time: 3 mins 6 secs
  Priority: 10
  Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
  Job triggered by: User
  Termination: Migration OK
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1605
Category: [bareos-core] director
Severity: major
Reproducibility: always
Date Submitted: 2024-02-27 14:45
Last Update: 2024-03-01 06:44
Reporter: Int
Platform: Linux
OS: RHEL (and clones)
OS Version: 8
Priority: normal
Status: new
Product Version: 23.0.1
Resolution: open
Projection: none
ETA: none
Summary: jobs refuse to load volumes from the default Scratch pool
Description: When a job needs a new tape volume, it does not accept a volume from the Scratch pool, although one is already mounted in the tape drive.
It refuses to use the tape volume with this error; some examples:

2024-02-24 20:01:03 bareos-sd JobId 35561: Warning: Director wanted Volume "NWX005L7".
Current Volume "ZKI079L7" not acceptable because:
1998 Volume "ZKI079L7" catalog status is Append, not in Pool.

2024-02-27 10:15:10 bareos-sd JobId 35567: Warning: Director wanted Volume "NWX005L7".
Current Volume "NIX460L6" not acceptable because:
1998 Volume "NIX460L6" catalog status is Append, not in Pool.


My current workaround is to move the tape volume manually from the Scratch pool to the required pool with this command sequence:

*update volume=NIX460L6 pool=Archiv_Intego_2Y
New Pool is: Archiv_Intego_2Y
*update volume=NIX460L6
12: Volume from Pool
Updating Volume "NIX460L6"
Volume defaults updated from "Archiv_Intego_2Y" Pool record.
*
*umount storage=Tape
*mount storage=Tape
Connecting to Storage daemon Tape at igms07.vision.local:9103 ...
3001 Device "tapedrive-0" (/dev/tape/by-id/scsi-35005076312156561-nst) is mounted with Volume "NIX460L6"
*


After digging into this issue a bit I think the problem is a mismatch of the pool IDs.
The Scratch pool has poolid: 1
but in the pool "Archiv_Intego_2Y" and all other pools the scratch pool is referenced with scratchpoolid: 0

I explicitly configured the option "Scratch Pool = Scratch" in the pool config
and issued the command "update pool=Archiv_Intego_2Y", but the scratch pool is still referenced with scratchpoolid: 0 in the pool record.
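A quick way to check for affected pools is to pull the scratchpoolid field out of saved "llist pool=..." output; a hypothetical helper (field names taken from the llist output in this report, sample output inlined):

```shell
# Extract scratchpoolid from llist output to spot pools that still
# reference scratchpoolid 0 instead of the real Scratch pool id.
llist_out='poolid: 13
name: Archiv_Intego_2Y
scratchpoolid: 0
recyclepoolid: 0'
scratch_id=$(printf '%s\n' "$llist_out" | awk -F': ' '$1 == "scratchpoolid" {print $2}')
echo "$scratch_id"
```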
Tags:
Steps To Reproduce:
Additional Information:
*llist volume=NIX460L6
Using Catalog "MyCatalog"
mediaid: 994
volumename: NIX460L6
slot: 0
poolid: 1
pool: Scratch
mediatype: LTO
firstwritten:
lastwritten:
labeldate: 2024-02-22 14:38:12
voljobs: 0
volfiles: 0
volblocks: 0
volmounts: 0
volbytes: 64,512
volerrors: 0
volwrites: 0
volcapacitybytes: 0
volstatus: Append
enabled: 1
recycle: 1
volretention: 1
voluseduration: 0
maxvoljobs: 0
maxvolfiles: 0
maxvolbytes: 0
inchanger: 0
endfile: 0
endblock: 0
labeltype: 0
storageid: 1
deviceid: 0
locationid: 0
recyclecount: 0
initialwrite:
scratchpoolid: 0
scratchpool:
recyclepoolid: 0
recyclepool:
comment:
storage: Tape

*
*llist pool=Scratch
poolid: 1
name: Scratch
numvols: 11
maxvols: 0
useonce: 0
usecatalog: 1
acceptanyvolume: 0
volretention: 1
voluseduration: 0
maxvoljobs: 0
maxvolbytes: 0
autoprune: 1
recycle: 1
pooltype: Scratch
labelformat: *
enabled: 1
scratchpoolid: 0
recyclepoolid: 0
labeltype: 0

*
*llist pool=Archiv_Intego_2Y
poolid: 13
name: Archiv_Intego_2Y
numvols: 72
maxvols: 0
useonce: 0
usecatalog: 1
acceptanyvolume: 0
volretention: 63,072,000
voluseduration: 432,000
maxvoljobs: 0
maxvolbytes: 0
autoprune: 1
recycle: 1
pooltype: Backup
labelformat: *
enabled: 1
scratchpoolid: 0
recyclepoolid: 0
labeltype: 0


Pool {
  Name = Archiv_Intego_2Y
  Storage = Tape
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Scratch Pool = Scratch
  Recycle Pool = Scratch
  Volume Retention = 2 years # How long should the Full Backups be kept? (0000006)
  MaximumBlockSize = 1048576
  Volume Use Duration = 5 days # specifies the maximum time that jobs can be written to the Volume after first write. 1 Weekend to complete archive job + extra time should manual intervention be necessary on Monday
}
Attached Files:
Notes
(0005823)
Int   
2024-03-01 06:44   
I changed the Storage of pool "Archiv_Intego_2Y" from
Storage = Tape
to
Storage = Autochanger
and this seems to have fixed the scratchpoolid.

*llist pool=Archiv_Intego_2Y
now shows the correct ids:
scratchpoolid: 1
recyclepoolid: 1

Mysteriously, this also seems to have fixed all the other pools, although I did not change the Storage option on them.
They all show the correct IDs now.

But I have not yet tested whether picking a tape from the Scratch pool works correctly now.

View Issue Details
ID: 1601
Category: [bareos-core] webui
Severity: major
Reproducibility: always
Date Submitted: 2024-02-21 15:29
Last Update: 2024-02-28 10:07
Reporter: khvalera
Assigned To: bruno-at-bareos
Platform: Linux
OS: any
OS Version: 3
Priority: high
Status: resolved
Product Version: 23.0.1
Resolution: unable to reproduce
Projection: none
ETA: none
Summary: It is impossible to restore files from an archive via the web interface
Description: It is not possible to restore archived files via the web interface when “Merge all related jobs to last full backup of selected backup job” is set to “Yes”.
When this option is selected, no files are displayed in the “File selection”.
In bareos-audit.log I see the following commands, some with empty parameters:
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_get_jobids jobid=7690 all
backup-dir: Console [web-admin] from [::1] cmdline llist backups client="client_16.1" fileset="any" order=desc
backup-dir: Console [web-admin] from [::1] cmdline show fileset="linux-config-system"
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_update
backup-dir: Console [web-admin] from [::1] cmdline llist clients current
backup-dir: Console [web-admin] from [::1] cmdline show client=client_16.1
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_get_jobids jobid=7690 all
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_lsdirs jobid= path= limit=1000 offset=0
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_lsdirs jobid= path=@ limit=1000
backup-dir: Console [web-admin] from [::1] cmdline .bvfs_lsfiles jobid= path= limit=1000 offset=0
Tags:
Steps To Reproduce:
Additional Information:
System Description: General issues regarding all Linux platforms, or not specific to only one distribution
Attached Files: Screenshot_20240221_160412.png (85,262 bytes) 2024-02-21 15:31
https://bugs.bareos.org/file_download.php?file_id=613&type=bug
png

Screenshot_20240221_160228.png (73,327 bytes) 2024-02-21 15:31
https://bugs.bareos.org/file_download.php?file_id=614&type=bug
png

20240221_163441_1.jpg (190,940 bytes) 2024-02-21 16:36
https://bugs.bareos.org/file_download.php?file_id=615&type=bug
jpg

Screenshot_20240221_180853.png (67,805 bytes) 2024-02-21 17:10
https://bugs.bareos.org/file_download.php?file_id=616&type=bug
png
Notes
(0005791)
khvalera   
2024-02-21 15:31   
I am attaching screenshots.
(0005792)
bruno-at-bareos   
2024-02-21 16:31   
Archive jobs are not meant to be merged with normal jobs.
If you enable merging of all related jobs, then only the active full and its incremental chain can be used.

You certainly want to use the other tabs (restore file by version).
(0005793)
bruno-at-bareos   
2024-02-21 16:36   
Just to add: it is working in both cases for archive,
in version 23.0.1.
(0005794)
khvalera   
2024-02-21 17:09   
I probably didn’t quite understand you, but as I understand it, this parameter should make it possible to combine all incremental jobs with a full one!? This does not happen, and even if you select a full job, the list of files is empty.

(0005795)
khvalera   
2024-02-21 17:10   
I am attaching screenshots.
(0005796)
bruno-at-bareos   
2024-02-21 17:26   
> I probably didn’t quite understand you, but as I understand it, with this parameter it’s possible to combine all incremental tasks with a full one!? This does not happen and even if you select a full task, the list of files will be empty.

Closing as invalid. A full + all incrementals on top of it are proposed if the chain works, of course, but you were talking about archive.

If it is not working, please go to bconsole option 5 (which is the equivalent) and report success or error:
has the bvfs update been done for that job, or did it fail, e.g. because space is missing, etc.?
(0005797)
bruno-at-bareos   
2024-02-21 17:27   
No changes needed.
Please upgrade to the current version, which is 23.0.1.
(0005798)
khvalera   
2024-02-21 17:44   
(Last edited: 2024-02-21 17:44)
I’m using version 23.0.1; it was not in the bug tracker’s version selector, which is why I picked 22.1.3.
Tried:
  .bvfs_clear_cache yes
  .bvfs_update
no errors.
There are no problems with restoring from the console (I just tried it, everything works).
(0005799)
bruno-at-bareos   
2024-02-22 09:36   
set correct version
(0005800)
bruno-at-bareos   
2024-02-22 09:47   
I'm a bit astonished, as version 23.0.1 is present at the first position in the selector.

I'm sorry to inform you that we can't reproduce your problem; you will have to give us more information so that we are able to make a diagnosis.

On a test system, picking any incremental leads to the following:
show client=bareos-fd
.bvfs_get_jobids jobid=4051
.bvfs_lsdirs jobid=15284,15506,15184,15193,15203,15213,15223,15232,15242,15252,15262,15271,15280,15290,15303,15313,15344,15354,15364,15373,15383,15393,15433,15446,15456,15465,15475,15484,15494,15502 path= limit=1000 offset=0
.bvfs_lsfiles jobid=15284,15506,15184,15193,15203,15213,15223,15232,15242,15252,15262,15271,15280,15290,15303,15313,15344,15354,15364,15373,15383,15393,15433,15446,15456,15465,15475,15484,15494,15502 path= limit=1000 offset=0
.bvfs_lsfiles jobid=15284,15506,15184,15193,15203,15213,15223,15232,15242,15252,15262,15271,15280,15290,15303,15313,15344,15354,15364,15373,15383,15393,15433,15446,15456,15465,15475,15484,15494,15502 path=@ limit=1000 offset=0

So your first .bvfs_get_jobids jobid=4151 in bconsole should return the list of related or mandatory jobids, etc.
(0005806)
khvalera   
2024-02-22 16:06   
My command .bvfs_get_jobids jobid=7737 does not return the jobs dependent on JobId 7737.
What could be the problem?
(0005807)
bruno-at-bareos   
2024-02-22 16:25   
No idea why: this needs real, proper investigation, which often takes time.

You may want to contact our sales department to get a quote for consulting/subscription/support; see sales(at)bareos(dot)com

Otherwise you may add information here or on the mailing list to ask for free advice there.
(0005819)
bruno-at-bareos   
2024-02-28 10:07   
The software works as expected when a jobid is present together with its full chain of incrementals.

View Issue Details
ID: 1603
Category: [bareos-core] director
Severity: major
Reproducibility: always
Date Submitted: 2024-02-26 13:16
Last Update: 2024-02-27 11:24
Reporter: Int
Platform: Linux
OS: RHEL (and clones)
OS Version: 8
Priority: normal
Status: new
Product Version: 23.0.1
Resolution: open
Projection: none
ETA: none
Summary: Labelling of fresh LTO-9 tapes fails with timeout error
Description: Since fresh LTO-9 tapes need to be calibrated by the tape drive on first load (which can take up to 2 hours - see https://www.quantum.com/globalassets/products/tape-storage-new/lto-9/lto-9-quantum-faq-092021.pdf)
the labelling command fails with

ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)
Tags:
Steps To Reproduce: run command

*label storage=Autochanger barcodes slot=11,12,13
Additional Information: full output:

*label storage=Autochanger barcodes slot=11,12,13
Connecting to Storage daemon Autochanger at 192.168.124.209:9103 ...
3306 Issuing autochanger "list" command.
The following Volumes will be labeled:
Slot Volume
==============
  11 NSL140L9
  12 NSL141L9
  13 NSL142L9
Do you want to label these Volumes? (yes|no): yes

...

Connecting to Storage daemon Autochanger at 192.168.124.209:9103 ...
Sending label command for Volume "NSL140L9" Slot 11 ...
3304 Issuing autochanger "load slot 11, drive 0" command.
3992 Bad autochanger "load slot 11, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

Label command failed for Volume NSL140L9.
Sending label command for Volume "NSL141L9" Slot 12 ...
3307 Issuing autochanger "unload slot 11, drive 0" command.
3995 Bad autochanger "unload slot 11, drive 0": ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 11...mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Aborted Command
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 29
mtx: Request Sense: Additional Sense Qualifier = 07
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no
MOVE MEDIUM from Element Address 32 to 266 Failed

Label command failed for Volume NSL141L9.
Sending label command for Volume "NSL142L9" Slot 13 ...
3991 Bad autochanger "loaded? drive 0" command: ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

3307 Issuing autochanger "unload slot 11, drive 0" command.
3995 Bad autochanger "unload slot 11, drive 0": ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 11...mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Not Ready
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 04
mtx: Request Sense: Additional Sense Qualifier = 01
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no
MOVE MEDIUM from Element Address 32 to 266 Failed

Label command failed for Volume NSL142L9.
*
Attached Files:
Notes
(0005810)
bruno-at-bareos   
2024-02-27 10:04   
Maybe adjusting the mtx-changer.conf value of

# Set to amount of time in seconds to wait after a load
load_sleep=0

and it may also be necessary to hack the mtx-changer script itself to allow more time:

  while [ $i -le 300 ]; do # Wait max 300 seconds
(0005811)
Int   
2024-02-27 11:24   
I decided against changing "load_sleep" as this would affect all tape loads, but a longer timeout is only needed on the first load. If every tape load had a delay of 2 hours, the backup process would be very tedious.

I modified the wait_for_drive() function in the mtx-changer script instead:

wait_for_drive() {
  i=0
  while [ $i -le 8000 ]; do # Wait max 2.22 hours - LTO-9 tapes need 2 hours calibration on first load
    debug "Doing mt -f $1 status ..."
    drivestatus=$(mt -f "$1" status 2>&1)
    if echo "${drivestatus}" | grep "${ready}" >/dev/null 2>&1; then
      break
    fi
    debug "${drivestatus}"
    debug "Device $1 - not ready, retrying ..."
    sleep 100 #was 'sleep 1' - do not poll the drive so often
    i=`expr $i + 100`
  done
}


but this didn't work.
I ran into the same error:

Sending label command for Volume "NSL142L9" Slot 13 ...
3304 Issuing autochanger "load slot 13, drive 0" command.
3992 Bad autochanger "load slot 13, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

Label command failed for Volume NSL142L9.


The problem behind this is that the wait inside the wait_for_drive() function has no effect as the call to
"mt -f /dev/nsa0 status"
does not return at all while the tape drive is calibrating the LTO-9 tape.
So even the original wait of 300 seconds would not have elapsed as the call of "mt -f /dev/nsa0 status" never returned.

There seems to be another timeout somewhere kicking in that kills the label command.
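A possible mitigation for a status call that never returns, assuming coreutils `timeout` is available on the SD host (an assumption; it is not part of the stock mtx-changer script), is to bound each poll. In this sketch, `sleep 5` stands in for an `mt -f /dev/nsa0 status` invocation that blocks during calibration:

```shell
# Bound the blocking status call so the polling loop keeps control even
# while the drive calibrates; timeout kills the child after 1 second.
if drivestatus=$(timeout 1 sleep 5 2>&1); then
  result="ready"
else
  result="not ready yet"   # timeout expiry (exit 124): treat as not ready, retry later
fi
echo "$result"
```

This only keeps the loop responsive; the Director-side timeout that kills the label command would still need to be raised separately.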

View Issue Details
ID: 1604
Category: [bareos-core] file daemon
Severity: crash
Reproducibility: random
Date Submitted: 2024-02-26 13:31
Last Update: 2024-02-26 13:32
Reporter: Int
Platform: x86
OS: Windows
OS Version: 2016
Priority: normal
Status: new
Product Version: 23.0.1
Resolution: open
Projection: none
ETA: none
Summary: Windows file daemon crashes sometimes during backup job
Description: It happened 5 times in the last three weeks that the Windows file daemon crashed during a backup job.

It happened on two different Windows systems:
* System IGMS00: Windows Server 2016 x64 Build 14393.6709 - 2 crashes so far
* System IGMS04: Windows 10 Professional 22H2 Build 19045.4046 - 3 crashes so far

I attached the Windows event log entries.

On system IGMS04 I enabled the trace debug output, but since then the bug did not reappear yet.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: IGMS00 windows fd crash.txt (1,978 bytes) 2024-02-26 13:31
https://bugs.bareos.org/file_download.php?file_id=621&type=bug
IGMS04 windows fd crash.txt (6,392 bytes) 2024-02-26 13:31
https://bugs.bareos.org/file_download.php?file_id=622&type=bug
Notes
(0005809)
Int   
2024-02-26 13:32   
Made a typo - should be "On system IGMS04 I enabled the trace debug output but since then the bug did NOT reappear yet."

View Issue Details
ID: 1594
Category: [bareos-core] webui
Severity: minor
Reproducibility: always
Date Submitted: 2024-02-08 08:38
Last Update: 2024-02-22 16:31
Reporter: Int
Platform: Linux
OS: RHEL (and clones)
OS Version: 8
Priority: normal
Status: new
Resolution: open
Projection: none
ETA: none
Summary: Web UI Job Details - switching rows per page does not work correctly
Description: When switching "rows per page" on the Job Details from 25 to 50 or 100, switching between pages no longer works correctly.
Switching forward (page 1>2>3) works, but when switching backwards (3>2>1), page 1 is not shown; instead, the content of page 2 is shown.
When switching directly from page 3 to page 1, the content of page 3 is shown.

This only happens when selecting 50 or 100 rows per page. With 10 or 25 rows per page it works correctly.

Bareos Version is 23.0.1
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: jobid_35301.log.gz (3,520 bytes) 2024-02-22 15:40
https://bugs.bareos.org/file_download.php?file_id=617&type=bug
jobid_35393.log.gz (3,545 bytes) 2024-02-22 15:52
https://bugs.bareos.org/file_download.php?file_id=618&type=bug
Notes
(0005742)
Int   
2024-02-08 08:47   
It seems that page 1 is not working at all with 50 or 100 "rows per page".
When switching to 50 or 100 "rows per page", page 1 is not updated and still shows only 25 rows.
(0005743)
Int   
2024-02-08 08:56   
This happens only with a specific job, which has status "Warning" and 3 errors:

35301 filebackup-igms00-fd_D_User igms00-fd Backup Incremental 1134849 551.44 GB 3 Warning


I realized that even in the standard view with 25 "rows per page", page 2 is not shown. I suspect that this is the page where the 3 errors would be shown if it were working correctly, because on all the other pages I cannot see any error message.
(0005744)
Int   
2024-02-08 09:06   
Using bconsole I found that these log lines are causing the issue:

 2024-02-08 03:22:25 truenas-sd JobId 35301: New volume "Incremental-0903" mounted on device "FileStorageTrueNAS0001" (/var/lib/bareos/storage) at 08-Feb-2024 03:22.
 2024-02-08 03:24:04 bareos-dir JobId 35301: Insert of attributes batch table with 800001 entries start
 2024-02-08 03:24:33 bareos-dir JobId 35301: Insert of attributes batch table done
 2024-02-08 03:30:25 igms00-fd JobId 35301: Could not stat "d:/user/xyz/Software/Neuer Ordner/con.example.flashplayer.appi-20181105-085630.properties": ERR=Die Syntax f▒r den Dateinamen, Verzeichnisnamen oder die Datentr▒gerbezeichnung ist falsch.

 2024-02-08 03:30:25 igms00-fd JobId 35301: Could not stat "d:/user/xyz/Software/Neuer Ordner/con.example.flashplayer.appi-20181105-085630.tar.gz": ERR=Die Syntax f▒r den Dateinamen, Verzeichnisnamen oder die Datentr▒gerbezeichnung ist falsch.

 2024-02-08 03:30:25 igms00-fd JobId 35301: Could not stat "d:/user/xyz/Software/Neuer Ordner/con.example.flashplayer.appi-8e90d32a8fd566af447a88c5ef7c6ca6.apk.gz": ERR=Die Syntax f▒r den Dateinamen, Verzeichnisnamen oder die Datentr▒gerbezeichnung ist falsch.

 2024-02-08 03:37:42 truenas-sd JobId 35301: User defined maximum volume capacity 40,000,000,000 exceeded on device "FileStorageTrueNAS0001" (/var/lib/bareos/storage).


It seems that the Bareos Web UI is not able to handle the German umlauts in the error messages.
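The ▒ characters above suggest the client sent bytes that are not valid UTF-8 (e.g. "ü" encoded as a single ISO-8859-1/CP1252 byte). A minimal sketch for checking a log line, assuming iconv is available; the byte \374 (0xFC) reproduces that kind of input:

```shell
# iconv exits non-zero when the input is not valid in the source
# encoding, so a UTF-8 -> UTF-8 round trip acts as a validity check.
if printf 'Die Syntax f\374r den Dateinamen' | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1; then
  result="valid UTF-8"
else
  result="not valid UTF-8"
fi
echo "$result"
```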
(0005803)
bruno-at-bareos   
2024-02-22 15:17   
It would be interesting to have the whole joblog 35301 so we can try to reproduce the case.

su postgres -c "psql -d bareos -o /tmp/jobid_35301.sql -c \"select * from log where jobid=35301 order by logid asc;\""

We urge you not to alter the content, so that the encoding error is preserved (you can use the private button to upload it).

Thanks

View Issue Details
ID: 1598
Category: [bareos-core] director
Severity: major
Reproducibility: have not tried
Date Submitted: 2024-02-09 10:37
Last Update: 2024-02-21 11:14
Reporter: Int
Platform: Linux
OS: RHEL (and clones)
OS Version: 8
Priority: normal
Status: new
Resolution: open
Projection: none
ETA: none
Summary: Bareos 23.0.1 creates new Volume in catalog but no physical File volume in storage
Description: I ran into the issue that Bareos 23.0.1 created a new Volume in catalog but no physical File volume in storage.

This happened during a running backup job when the Pool reached its limit of the configured "Maximum Volumes". The job then got stuck because no free volume was available; message in the job log:

26 2024-02-09 06:25:40 truenas-sd JobId 35326: Job filebackup-s01-fd.2024-02-09_01.00.00_16 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "FileStorageTrueNAS0002" (/var/lib/bareos/storage)
Pool: Incremental
Media type: File

I then increased the "Maximum Volumes" in the Pool config file and issued a "reload" command in bconsole to load the changed configuration.
The job then continued and created a new Volume - message in the job log:

27 2024-02-09 09:51:01 bareos-dir JobId 35326: Created new Volume "Incremental-0920" in catalog.

But Bareos 23.0.1 created the new Volume only in the catalog, not as a physical File volume in storage. This led to this error:

28 2024-02-09 09:51:01 truenas-sd JobId 35326: Warning: stored/mount.cc:248 Open device "FileStorageTrueNAS0002" (/var/lib/bareos/storage) Volume "Incremental-0920" failed: ERR=stored/dev.cc:598 Could not open: /var/lib/bareos/storage/Incremental-0920, ERR=No such file or directory
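One way to catch such catalog-only volumes is to diff the catalog's volume list against the files actually on disk. A sketch with the two lists inlined for illustration; in practice they would come from "list volumes pool=Incremental" in bconsole and an ls of /var/lib/bareos/storage:

```shell
# comm -23 prints lines present only in the first (catalog) list,
# i.e. volumes the catalog knows about but the storage lacks.
printf 'Incremental-0919\nIncremental-0920\n' | sort > /tmp/catalog_vols
printf 'Incremental-0919\n' | sort > /tmp/disk_vols
missing=$(comm -23 /tmp/catalog_vols /tmp/disk_vols)
echo "$missing"
```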
Tags:
Steps To Reproduce: See Description
Additional Information:
Attached Files: jobid_35326.log.gz (2,096 bytes) 2024-02-13 14:46
https://bugs.bareos.org/file_download.php?file_id=605&type=bug
jobid_35413.log.gz (1,489 bytes) 2024-02-15 09:30
https://bugs.bareos.org/file_download.php?file_id=606&type=bug
bareos-sd.conf (400 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=607&type=bug
FileStorage.conf (445 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=608&type=bug
jobid_35437.log.gz (1,950 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=609&type=bug
llist-Migrate-0956.log.gz (661 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=610&type=bug
truenas-sd.trace.gz (6,235 bytes) 2024-02-16 16:04
https://bugs.bareos.org/file_download.php?file_id=611&type=bug
JailMountPoint.png (31,827 bytes) 2024-02-20 15:46
https://bugs.bareos.org/file_download.php?file_id=612&type=bug
png
Notes
(0005761)
bruno-at-bareos   
2024-02-12 13:32   
Just to confirm: you didn't run the "update pool from resource" command after your reload, right?
(0005762)
Int   
2024-02-12 14:02   
I edited the config file /etc/bareos/bareos-dir.d/pool/Incremental.conf with a text editor and increased the "Maximum Volumes".
Then I started bconsole and issued the "reload" command.
I did not do the update pool from resource command after the "reload" command in bconsole.
(0005770)
bruno-at-bareos   
2024-02-13 14:00   
Hi, we would like to check if it's something new. We tried to reproduce the case, and the software is working as expected:
increasing the pool limit and reloading results in a new volume being created and then used.

We would like to see the full joblog, which you can extract with (then compress and attach here)
bconsole <<< "list joblog jobid=35326" > /var/tmp/jobid_35326.log

We have seen a similar issue in the past, but mostly it was because the /var/lib/bareos/storage mount point was not available at the time the volume was created, and your error looks really similar.
Otherwise the SD would have created that file and would have failed later, being unable to read the label.
It may be worth the effort to double-check the system's logs from the incident time.
Here the SD can't access the file (and thus the directory).
(0005771)
Int   
2024-02-13 14:46   
I attached the full joblog as requested.

I checked the system's logs of the storage server and it doesn't show any errors during that time.
Later (on Feb 13th) the storage server created new volumes successfully without any changes to config or access rights:

root@truenas-sd:~ # ls -la /var/lib/bareos/storage/
total 114812986408
drwxr-x--- 2 bareos bareos 375 Feb 13 01:38 .
drwxr-x--- 3 bareos bareos 4 Feb 6 14:45 ..
...
-rw-r----- 1 bareos bareos 39998980267 Feb 8 05:19 Incremental-0918
-rw-r----- 1 bareos bareos 39999236662 Feb 9 01:00 Incremental-0919
-rw-r----- 1 bareos bareos 26896987171 Feb 13 01:55 Incremental-0934
-rw-r----- 1 bareos bareos 39999706259 Feb 13 02:04 Incremental-0935
-rw-r----- 1 bareos bareos 32469240858 Feb 13 12:00 Incremental-0936
-rw-r----- 1 bareos bareos 37987976031 Feb 13 01:45 Incremental-0937
-rw-r----- 1 bareos bareos 39999120592 Feb 13 01:38 Incremental-0938
-rw-r----- 1 bareos bareos 35671581125 Feb 13 02:05 Incremental-0939
root@truenas-sd:~ #


I guess the special circumstance here was that the job was running and waiting for a new volume because it ran out of empty/recyclable volumes. And I increased pool limit while the job was running and waiting.
(0005772)
bruno-at-bareos   
2024-02-13 14:51   
Thanks for the log.

Your assumption "I guess the special circumstance here was that the job was running and waiting for a new volume because it ran out of empty/recyclable volumes. And I increased pool limit while the job was running and waiting." is what we tested and tried to reproduce without seeing any failures :-( Unfortunately, it is much harder to fix something that doesn't fail, or fails only under unknown circumstances.

I'm seeing root@truenas-sd: is the SD detached from the dir?
(0005773)
Int   
2024-02-13 15:04   
Yes, bareos-dir is running in a VM with AlmaLinux release 8.9 (Midnight Oncilla)
bareos-sd is running in a jail with FreeBSD 13.2-RELEASE-p9 on TrueNAS 13.0-U6.1
(0005774)
bruno-at-bareos   
2024-02-14 09:48   
Thanks for the details; this may help to reproduce the case.

Are you able to reproduce it easily (maybe by creating a dummy pool with a low volume limit and running a small job on it to recreate the situation)?
(0005775)
Int   
2024-02-15 09:30   
I was able to reproduce it with another pool with low allowed volumes and a small job.
See attached job log.

This time at first nothing happened when I ran the "reload" command in bconsole. The job did not continue and was stuck at

2024-02-15 09:15:52 truenas-sd JobId 35413: Job migrate.2024-02-15_09.10.03_01 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "FileStorageTrueNAS0001" (/var/lib/bareos/storage)
Pool: Migrate
Media type: File

When I ran the "reload" command in bconsole a second time the job did continue and ran into the same issue.
(0005776)
bruno-at-bareos   
2024-02-15 09:39   
Thank you. I will forward this to the dev team.
(0005777)
bruno-at-bareos   
2024-02-15 14:13   
Hi, I'm still failing to reproduce the case. In the meantime:

what is the output of
bconsole <<< "llist volume=Migrate-0950"

Would you mind rerunning the test job with the following set beforehand?

bconsole <<< "setdebug level=100 trace=1 timestamp=1 storage=FileStorageTrueNAS0001"

after the job you can remove the debug level

bconsole <<< "setdebug level=0 trace=0 timestamp=0 storage=FileStorageTrueNAS0001"

then check on the storage in /var/lib/bareos and attach the compressed trace file from there (it can be removed afterwards)
(0005780)
Int   
2024-02-16 16:04   
Hi, I collected the information you wanted.

I could not collect the output of
bconsole <<< "llist volume=Migrate-0950"
since I deleted that test volume already. But I collected the output for the newly failed volume "Migrate-0956" instead.

I had to change
bconsole <<< "setdebug level=100 trace=1 timestamp=1 storage=FileStorageTrueNAS0001"
to
bconsole <<< "setdebug level=100 trace=1 timestamp=1 storage=File"
because "FileStorageTrueNAS0001" is the name of the device not the name of the storage.
The storage daemon is using five devices to write jobs in parallel but the issue also happens when only one job is running.
I attached the config files of the storage daemon - maybe they help to reproduce the issue.
(0005784)
bruno-at-bareos   
2024-02-20 10:04   
Thanks for the report. I will check whether our devs can spot something in this; I have not been able to reproduce it so far.
How is /var/lib/storage/ mounted?
(0005786)
Int   
2024-02-20 15:46   
I created a mount point for the bareos jail which mounts a path from the TrueNAS data pool to /var/lib/bareos/storage/ inside the jail.
See screenshot attached.
(0005787)
Int   
2024-02-21 09:55   
I just realized that I forgot to mention that the Bareos storage daemon installed in the bareos jail is "bareos-server-23.0.1_1.pkg" from the official FreeBSD repository:
https://pkg.freebsd.org/FreeBSD:13:amd64/latest/All/bareos-server-23.0.1_1.pkg
(0005788)
bruno-at-bareos   
2024-02-21 09:58   
Would you mind testing whether this also happens with the official Bareos packages available here:
https://download.bareos.org/current/FreeBSD_13.2/

You can use our helper script to get the repo installed https://download.bareos.org/current/FreeBSD_13.2/add_bareos_repositories.sh
(0005789)
bruno-at-bareos   
2024-02-21 10:43   
OK, it seems we may be able to reproduce the case; it only appears after the SD has emitted the first "Please label" message.
(0005790)
Int   
2024-02-21 11:14   
Thanks for the feedback. I will skip the effort of switching to the Bareos repository for now.
Let me know if it becomes necessary to test with the packages from https://download.bareos.org/current/FreeBSD_13.2/

View Issue Details
ID: 1599
Project: bareos-core
Category: installer / packages
Severity: minor
Reproducibility: always
Date Submitted: 2024-02-12 11:30
Last Update: 2024-02-15 17:15
Reporter: adf_patrickha
Platform: Linux
OS: Debian 10
Assigned To: slederer
Priority: normal
Status: acknowledged
Resolution: open
Summary: FD LDAP Plugin has broken dependencies on Debian 11+ (bareos-filedaemon-ldap-python-plugin -> python-ldap)
Description: The package of the FD Python plugin for LDAP backups (bareos-filedaemon-ldap-python-plugin) has broken dependencies starting with Debian 11. The package depends on `python-ldap`, which is no longer available in Debian 11 and should be replaced with `python3-ldap`.
I'm not sure whether the Python plugin is fully Python 3 compatible, as the script uses the shebang `#!/usr/bin/env python` and not `#!/usr/bin/env python3`. But as the package `python-ldap` is no longer installable on Debian 11 and higher and Python 2 is EOL, the plugin should be updated and the dependency changed to `python3-ldap` either way.
Tags: debian 11, fd, filedemon, ldap, plugin
Steps To Reproduce: Try to install the package `bareos-filedaemon-ldap-python-plugin` on Debian 11 or higher with: `apt install bareos-filedaemon-ldap-python-plugin`.
Additional Information:
Attached Files:
Notes
(0005764)
bruno-at-bareos   
2024-02-12 16:36   
We will take care of this in a future update
(0005778)
adf_patrickha   
2024-02-15 17:15   
Thanks for looking into it! Some additional documentation on how to actually use the plugin (FileSet, jobs, client config, how to do restores, ...) would also be nice. At the moment it's not really usable.
(0005779)
adf_patrickha   
2024-02-15 17:15   
I'm referring to this documentation: https://docs.bareos.org/TasksAndConcepts/Plugins.html#ldap-plugin

View Issue Details
ID: 1591
Project: bareos-core
Category: director
Severity: minor
Reproducibility: random
Date Submitted: 2024-01-19 16:17
Last Update: 2024-02-08 17:19
Reporter: raschu
Platform: Linux
OS: Debian 10
Assigned To: bruno-at-bareos
Priority: normal
Status: assigned
Resolution: open
Summary: Error: stored/device.cc:222 Error getting Volume info
Description: Hi,

we manage over 70 backup jobs. In rare cases we observe the error "Error: stored/device.cc:222 Error getting Volume info" in the backup mails. The job itself is successful but is marked with a warning.

2 examples:

19-Jan 02:19 bareos-dir JobId 8696: There are no more Jobs associated with Volume "incr7d-1589". Marking it purged.
19-Jan 02:19 bareos-dir JobId 8696: All records pruned from Volume "incr7d-1589"; marking it "Purged"
19-Jan 02:19 bareos-dir JobId 8696: Recycled volume "incr7d-1589"
19-Jan 02:19 bareos-dir JobId 8696: Using Device "bstore02-vd01" to write.
19-Jan 02:19 client01 JobId 8696: Connected Storage daemon at storage02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:19 client01 JobId 8696: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:19 client01 JobId 8696: Extended attribute support is enabled
19-Jan 02:19 client01 JobId 8696: ACL support is enabled
19-Jan 02:19 bareos-sd JobId 8696: Recycled volume "incr7d-1589" on device "bstore02-vd01" (/bstore02), all previous data lost.
19-Jan 02:19 bareos-dir JobId 8696: Max Volume jobs=1 exceeded. Marking Volume "incr7d-1589" as Used.
19-Jan 02:19 bareos-sd JobId 8696: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "incr7d-1589" catalog status is Used, but should be Append, Purged or Recycle.
19-Jan 02:19 bareos-sd JobId 8696: Releasing device "bstore02-vd01" (/bstore02).
19-Jan 02:19 bareos-sd JobId 8696: Elapsed time=00:00:02, Transfer rate=7.527 M Bytes/second


19-Jan 01:00 bareos-dir JobId 8693: Sending Accurate information.
19-Jan 02:15 bareos-dir JobId 8693: Created new Volume "aincr30d-bsd02-2821" in catalog.
19-Jan 02:15 bareos-dir JobId 8693: Using Device "bstore02-vd01" to write.
19-Jan 02:17 client02 JobId 8693: Connected Storage daemon at storage02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:17 client02 JobId 8693: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jan 02:17 client02 JobId 8693: Extended attribute support is enabled
19-Jan 02:17 bareos-sd JobId 8693: Labeled new Volume "aincr30d-bsd02-2821" on device "bstore02-vd01" (/bstore02).
19-Jan 02:17 client02 JobId 8693: ACL support is enabled
19-Jan 02:17 bareos-sd JobId 8693: Wrote label to prelabeled Volume "aincr30d-bsd02-2821" on device "bstore02-vd01" (/bstore02)
19-Jan 02:17 bareos-dir JobId 8693: Max Volume jobs=1 exceeded. Marking Volume "aincr30d-bsd02-2821" as Used.
19-Jan 02:18 bareos-sd JobId 8693: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "aincr30d-bsd02-2821" catalog status is Used, but should be Append, Purged or Recycle.
19-Jan 02:18 bareos-sd JobId 8693: Releasing device "bstore02-vd01" (/bstore02).
19-Jan 02:18 bareos-sd JobId 8693: Elapsed time=00:01:29, Transfer rate=345.0 K Bytes/second


I can't make sense of the error. The volumes were either newly created or recently reset.
The director is running version "23.0.1~pre57.8e89bfe0a-40".

Thanks Ralf
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: tracefile.zip (223,303 bytes) 2024-02-06 15:39
https://bugs.bareos.org/file_download.php?file_id=600&type=bug
Notes
(0005712)
bruno-at-bareos   
2024-01-23 13:48   
Two jobs accessing the same volume at the same time?
(0005717)
raschu   
2024-01-24 18:02   
Thanks Bruno. No, because I allow only one job per volume:

"""
Pool {
  Name = aincr30d-bsd02 # 30 days always incremental
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 65 days # 30 days, recommended retention buffer >2x
  Maximum Volume Bytes = 5 g # Limit Volume size to something reasonable
  Volume Use Duration = 23 hours # defines the time period that the Volume can be written
  Maximum Volume Jobs = 1 # only one job per volume, so failed jobs can be deleted more easily
  Label Format = "aincr30d-bsd02-" # Volumes label
  Storage = bsd02
  Next Pool = aicons30d-bsd02
}
"""

Bye Ralf
(0005719)
bruno-at-bareos   
2024-01-25 08:59   
And what about the maximum concurrent jobs on the device?
(0005721)
raschu   
2024-01-25 10:01   
(Last edited: 2024-01-25 10:08)
Hi Bruno, we use some virtual devices for parallel jobs. The config is:

Storage { # define myself
  Name = bsd02
  Description = "storage for internal backup - filter: !hrz in fqdn"
  Address = storage02
  Password = "pass"
  Device = bstore02
  Device = bstore02-vd01
  Device = bstore02-vd02
  Device = bstore02-vd03
  Media Type = File02
  Maximum Concurrent Jobs = 20
}

Config from the storage daemon:

Device {
  Name = bstore02
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Device {
  Name = bstore02-vd01
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "virtual device - linked for more concurrent jobs"
}

Device {
  Name = bstore02-vd02
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "virtual device - linked for more concurrent jobs"
}

Device {
  Name = bstore02-vd03
  Media Type = File02
  Device Type = File
  Archive Device = /bstore02
  Maximum Concurrent Jobs = 1
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "virtual device - linked for more concurrent jobs"
}

I took the idea from here: https://svennd.be/concurrent-jobs-in-bareos-with-disk-storage/


Thanks Ralf
(0005722)
bruno-at-bareos   
2024-01-25 13:25   
Things look good. Would you mind digging up the complete job log for the jobs that hit this issue?

BTW, if you want to optimize the configuration even further, you can take inspiration from the upcoming
https://github.com/bareos/bareos/pull/1467#issuecomment-1780956976
;-)
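For readers finding this thread later: one established way to avoid maintaining several near-identical File devices by hand (independent of whatever the linked PR implements) is to group them in an Autochanger resource on the SD, so the Director references a single resource. A sketch only, using the device names from the config above; the file name and changer settings are assumptions to adapt locally:

```
# bareos-sd.d/autochanger/bstore02-changer.conf (hypothetical sketch)
Autochanger {
  Name = "bstore02-changer"
  Device = bstore02-vd01, bstore02-vd02, bstore02-vd03
  Changer Device = /dev/null   # no real changer hardware for File devices
  Changer Command = ""
}
```

In the Director's Storage resource, `Device = bstore02-changer` together with `Auto Changer = yes` would then replace the list of individual device entries.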
(0005727)
raschu   
2024-02-05 10:35   
Hi Bruno, thanks. The new autochanger option is good :-)

Here are more examples. I got the error four times (different jobs) last weekend :-/

"""
05-Feb 04:01 bareos-dir JobId 9638: Start Backup JobId 9638, Job=client01.2024-02-05_04.00.01_09
05-Feb 04:01 bareos-dir JobId 9638: Connected Storage daemon at bstore02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Connected Client: client01 at client01:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Handshake: Immediate TLS
05-Feb 04:01 bareos-dir JobId 9638: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:01 bareos-dir JobId 9638: Sending Accurate information.
05-Feb 04:29 bareos-dir JobId 9638: Created new Volume "aincr180d-bsd02-3234" in catalog.
05-Feb 04:29 bareos-dir JobId 9638: Using Device "bstore02-vd03" to write.
05-Feb 04:29 client01 JobId 9638: Connected Storage daemon at bstore02:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:29 client01 JobId 9638: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
05-Feb 04:29 client01 JobId 9638: Extended attribute support is enabled
05-Feb 04:29 bareos-sd JobId 9638: Labeled new Volume "aincr180d-bsd02-3234" on device "bstore02-vd03" (/bstore02).
05-Feb 04:29 client01 JobId 9638: ACL support is enabled
05-Feb 04:29 bareos-sd JobId 9638: Wrote label to prelabeled Volume "aincr180d-bsd02-3234" on device "bstore02-vd03" (/bstore02)
05-Feb 04:29 bareos-dir JobId 9638: Max Volume jobs=1 exceeded. Marking Volume "aincr180d-bsd02-3234" as Used.
05-Feb 04:29 bareos-sd JobId 9638: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "aincr180d-bsd02-3234" catalog status is Used, but should be Append, Purged or Recycle.
05-Feb 04:29 bareos-sd JobId 9638: Releasing device "bstore02-vd03" (/bstore02).
05-Feb 04:29 bareos-sd JobId 9638: Elapsed time=00:00:01, Transfer rate=0 Bytes/second
05-Feb 04:29 bareos-dir JobId 9638: Bareos bareos-dir 23.0.2~pre6.5fbc90f69 (23Jan24):
  Build OS: Debian GNU/Linux 11 (bullseye)
  JobId: 9638
  Job: client01.2024-02-05_04.00.01_09
  Backup Level: Incremental, since=2024-02-04 04:06:17
  Client: "client01" 23.0.2~pre32.0a0e55739 (31Jan24) Debian GNU/Linux 12 (bookworm),debian
  FileSet: "client01" 2024-01-19 04:00:00
  Pool: "aincr180d-bsd02" (From Job resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "bsd02" (From Pool resource)
  Scheduled time: 05-Feb-2024 04:00:01
  Start time: 05-Feb-2024 04:01:02
  End time: 05-Feb-2024 04:29:08
  Elapsed time: 28 mins 6 secs
  Priority: 10
  Allow Mixed Priority: no
  FD Files Written: 0
  SD Files Written: 0
  FD Bytes Written: 0 (0 B)
  SD Bytes Written: 0 (0 B)
  Quota Used: 14,609,097,104 (14.60 GB)
  Burst Quota: 0 (0 B)
  Soft Quota: 26,843,545,600 (26.84 GB)
  Hard Quota: 32,212,254,720 (32.21 GB)
  Grace Expiry Date: Soft Quota not exceeded
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: yes
  Accurate: yes
  Volume name(s):
  Volume Session Id: 565
  Volume Session Time: 1704452670
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 0
  SD Errors: 1
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
  Job triggered by: Scheduler
  Termination: Backup OK -- with warnings
"""
(0005728)
bruno-at-bareos   
2024-02-05 10:50   
So to summarize: the client is using encryption and you have quotas in place?

Is there any predictable path to reproduce it, so that you could run the daemons with a high debug level?
(0005729)
raschu   
2024-02-05 11:04   
Hi Bruno, yes, we use encryption and quotas.

The clients belong to internal customers, so it is not so easy to set a higher debug level.
But in my opinion it must be a director or catalog problem!?

It is strange that Bareos creates a new volume and then detects its status as "Used".

Thanks Ralf
(0005730)
bruno-at-bareos   
2024-02-05 11:23   
You can raise the debug level on any client from bconsole, so that is not a big concern, I would say.

The question is why you are able to reproduce that error while we don't, across all the test (and real) runs here.
If we can reproduce it, we know we can fix it; the problem is how to reproduce it ;-)
(0005731)
raschu   
2024-02-05 11:40   
Hi Bruno, okay - thanks :-)

Perfect, I set the debug level for one client which often gets the error.

*setdebug level=200 trace=1 timestamp=1 client=renamedclient
Connecting to Client renamedclient at renamedclient:9102
2000 OK setdebug=200 trace=1 hangup=0 timestamp=1 tracefile=/var/lib/bareos/renamedclient.trace

Now I wait for the next job this night.

Bye Ralf
(0005732)
bruno-at-bareos   
2024-02-05 13:52   
But you will also need to trace the SD ...
Beware that the trace file can quickly grow large ...
(0005733)
raschu   
2024-02-05 15:48   
Thanks, the sd is now also in debug mode :-)
(0005739)
raschu   
2024-02-06 15:39   
Hi Bruno, last night we got the error again and debugging was enabled :) I hope it helps to find the problem. I changed the hostnames.

I attached the trace logs. The log from the client FD is very large and does not contain any data that would help us.

Thanks Ralf

The JobId was 9689. Here are the details:


06-Feb 04:00 bareos-dir JobId 9689: Start Backup JobId 9689, Job=client01.renamed.2024-02-06_04.00.01_09
06-Feb 04:00 bareos-dir JobId 9689: Connected Storage daemon at storageserver.renamed:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
06-Feb 04:00 bareos-dir JobId 9689: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
06-Feb 04:00 bareos-dir JobId 9689: Probing client protocol... (result will be saved until config reload)
06-Feb 04:00 bareos-dir JobId 9689: Connected Client: client01.renamed at client01.renamed:9102, encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:00 bareos-dir JobId 9689: Handshake: Immediate TLS
06-Feb 04:00 bareos-dir JobId 9689: Encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:00 bareos-dir JobId 9689: Sending Accurate information.
06-Feb 04:05 bareos-dir JobId 9689: Created new Volume "aincr30d-bsd02-3253" in catalog.
06-Feb 04:05 bareos-dir JobId 9689: Using Device "bstore02-vd03" to write.
06-Feb 04:05 client01.renamed JobId 9689: Connected Storage daemon at storageserver.renamed:9103, encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:05 client01.renamed JobId 9689: Encryption: PSK-AES256-CBC-SHA TLSv1.2
06-Feb 04:05 client01.renamed JobId 9689: Extended attribute support is enabled
06-Feb 04:05 bareos-sd JobId 9689: Labeled new Volume "aincr30d-bsd02-3253" on device "bstore02-vd03" (/bstore02).
06-Feb 04:05 client01.renamed JobId 9689: ACL support is enabled
06-Feb 04:05 bareos-sd JobId 9689: Wrote label to prelabeled Volume "aincr30d-bsd02-3253" on device "bstore02-vd03" (/bstore02)
06-Feb 04:05 bareos-dir JobId 9689: Max Volume jobs=1 exceeded. Marking Volume "aincr30d-bsd02-3253" as Used.
06-Feb 04:05 bareos-sd JobId 9689: Error: stored/device.cc:222 Error getting Volume info: 1998 Volume "aincr30d-bsd02-3253" catalog status is Used, but should be Append, Purged or Recycle.
06-Feb 04:30 bareos-sd JobId 9689: Releasing device "bstore02-vd03" (/bstore02).
06-Feb 04:30 bareos-sd JobId 9689: Elapsed time=00:24:26, Transfer rate=39.40 K Bytes/second
06-Feb 04:30 bareos-dir JobId 9689: Insert of attributes batch table with 321 entries start
06-Feb 04:30 bareos-dir JobId 9689: Insert of attributes batch table done
06-Feb 04:30 bareos-dir JobId 9689: Bareos bareos-dir 23.0.2~pre6.5fbc90f69 (23Jan24):
  Build OS: Debian GNU/Linux 11 (bullseye)
  JobId: 9689
  Job: client01.renamed.2024-02-06_04.00.01_09
  Backup Level: Incremental, since=2024-02-05 04:00:18
  Client: "client01.renamed" 23.0.2~pre32.0a0e55739 (31Jan24) Red Hat Enterprise Linux Server release 7.9 (Maipo),redhat
  FileSet: "client01.renamed" 2024-01-20 04:00:00
  Pool: "aincr30d-bsd02" (From Job resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "bsd02" (From Pool resource)
  Scheduled time: 06-Feb-2024 04:00:01
  Start time: 06-Feb-2024 04:00:16
  End time: 06-Feb-2024 04:30:13
  Elapsed time: 29 mins 57 secs
  Priority: 10
  Allow Mixed Priority: no
  FD Files Written: 321
  SD Files Written: 321
  FD Bytes Written: 57,709,242 (57.70 MB)
  SD Bytes Written: 57,764,329 (57.76 MB)
  Quota Used: 720,602,415,037 (720.6 GB)
  Burst Quota: 0 (0 B)
  Soft Quota: 1,073,741,824,000 (1.073 TB)
  Hard Quota: 1,288,490,188,800 (1.288 TB)
  Grace Expiry Date: Soft Quota not exceeded
  Rate: 32.1 KB/s
  Software Compression: 46.4 % (lz4)
  VSS: no
  Encryption: no
  Accurate: yes
  Volume name(s): aincr30d-bsd02-3253
  Volume Session Id: 583
  Volume Session Time: 1704452670
  Last Volume Bytes: 57,784,439 (57.78 MB)
  Non-fatal FD errors: 0
  SD Errors: 1
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
  Job triggered by: Scheduler
  Termination: Backup OK -- with warnings
(0005740)
bruno-at-bareos   
2024-02-06 17:41   
Thanks. Could you also add the output of:

llist volume=aincr180d-bsd02-2334
llist volume=aincr30d-bsd02-3243
llist volume=aincr30d-bsd02-3249
llist volume=incr30d-3250

If you are using any script as a secure erase command, please also provide as much information about it as possible.
(0005741)
raschu   
2024-02-07 15:37   
Thanks Bruno. Here is the output:

*llist volume=aincr180d-bsd02-2334
No results to list.

*llist volume=aincr30d-bsd02-3243
          mediaid: 3,243
       volumename: aincr30d-bsd02-3243
             slot: 0
           poolid: 17
             pool: aincr30d-bsd02
        mediatype: File02
     firstwritten: 2024-02-05 22:00:03
      lastwritten: 2024-02-06 08:04:56
        labeldate: 2024-02-05 22:00:03
          voljobs: 1
         volfiles: 0
        volblocks: 1,092
        volmounts: 1
         volbytes: 1,145,006,249
        volerrors: 0
        volwrites: 1,093
 volcapacitybytes: 0
        volstatus: Used
          enabled: 1
          recycle: 1
     volretention: 5,616,000
   voluseduration: 82,800
       maxvoljobs: 1
      maxvolfiles: 0
      maxvolbytes: 5,368,709,120
        inchanger: 0
          endfile: 0
         endblock: 1,145,006,248
        labeltype: 0
        storageid: 3
         deviceid: 0
       locationid: 0
     recyclecount: 0
     initialwrite:
    scratchpoolid: 0
      scratchpool:
    recyclepoolid: 0
      recyclepool:
          comment:
          storage: bsd02

*llist volume=aincr30d-bsd02-3249
          mediaid: 3,249
       volumename: aincr30d-bsd02-3249
             slot: 0
           poolid: 17
             pool: aincr30d-bsd02
        mediatype: File02
     firstwritten: 2024-02-06 04:00:02
      lastwritten: 2024-02-06 04:03:18
        labeldate: 2024-02-06 04:00:02
          voljobs: 1
         volfiles: 1
        volblocks: 5,119
        volmounts: 1
         volbytes: 5,367,660,784
        volerrors: 0
        volwrites: 5,120
 volcapacitybytes: 0
        volstatus: Full
          enabled: 1
          recycle: 1
     volretention: 5,616,000
   voluseduration: 82,800
       maxvoljobs: 1
      maxvolfiles: 0
      maxvolbytes: 5,368,709,120
        inchanger: 0
          endfile: 1
         endblock: 1,072,693,487
        labeltype: 0
        storageid: 3
         deviceid: 0
       locationid: 0
     recyclecount: 0
     initialwrite:
    scratchpoolid: 0
      scratchpool:
    recyclepoolid: 0
      recyclepool:
          comment:
          storage: bsd02

*llist volume=incr30d-3250
          mediaid: 3,250
       volumename: incr30d-3250
             slot: 0
           poolid: 6
             pool: incr30d
        mediatype: File02
     firstwritten: 2024-02-06 04:00:03
      lastwritten: 2024-02-06 04:41:15
        labeldate: 2024-02-06 04:00:03
          voljobs: 1
         volfiles: 0
        volblocks: 447
        volmounts: 1
         volbytes: 468,619,261
        volerrors: 0
        volwrites: 448
 volcapacitybytes: 0
        volstatus: Used
          enabled: 1
          recycle: 1
     volretention: 2,592,000
   voluseduration: 82,800
       maxvoljobs: 1
      maxvolfiles: 0
      maxvolbytes: 5,368,709,120
        inchanger: 0
          endfile: 0
         endblock: 468,619,260
        labeltype: 0
        storageid: 3
         deviceid: 0
       locationid: 0
     recyclecount: 0
     initialwrite:
    scratchpoolid: 0
      scratchpool:
    recyclepoolid: 0
      recyclepool:
          comment:
          storage: bsd02


######

Currently we use no erase script.
But when we want to reclaim disk space, we use this SELECT to identify deletion candidates:

psql -d bareos -h localhost -U $dbUser -c "SELECT m.VolumeName FROM Media m where m.VolStatus not in ('Append','Purged') and not exists (select 1 from JobMedia jm where jm.MediaId=m.MediaId);"

For each candidate we then run the command "delete volume=$volName yes" and delete the volume files on the storage.

Bye Ralf
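The deletion procedure described above could be scripted roughly as follows. This is a hedged sketch, not a tested tool: `$dbUser` comes from the reporter's own command, the volume path assumes volumes live directly under /bstore02 as in the device config earlier in this thread, and it should be dry-run first.

```sh
#!/bin/sh
# Hypothetical sketch: delete catalog entries and on-disk files for
# volumes that no longer back any job. Verify the candidate list
# manually before running destructive commands.
volumes=$(psql -d bareos -h localhost -U "$dbUser" -At -c \
  "SELECT m.VolumeName FROM Media m
   WHERE m.VolStatus NOT IN ('Append','Purged')
     AND NOT EXISTS (SELECT 1 FROM JobMedia jm WHERE jm.MediaId = m.MediaId);")

for vol in $volumes; do
  echo "delete volume=$vol yes" | bconsole   # remove the catalog record
  rm -f "/bstore02/$vol"                     # remove the volume file on disk
done
```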
(0005745)
bruno-at-bareos   
2024-02-08 09:19   
When using SQL commands I would really advise using the sqlquery mode of bconsole:

bconsole <<< "sqlquery
SELECT m.VolumeName FROM Media m where m.VolStatus not in ('Append','Purged') and not exists (select 1 from JobMedia jm where jm.MediaId=m.MediaId);
"

So if I understand correctly, you are trying to "improve" volume pruning and volume reservation.
What happens if you stop doing this? Is the error still reproducible?
(0005752)
raschu   
2024-02-08 17:19   
Thanks for the answer Bruno.

I haven't used this procedure for three weeks, and in my opinion it has no influence on the error. With the error, new volumes are created directly and are then evaluated incorrectly.

I currently see this effect with about 3 backups every night. Other systems run without problems.

Bye Ralf

View Issue Details
ID: 1592
Project: bareos-core
Category: director
Severity: major
Reproducibility: always
Date Submitted: 2024-01-23 15:32
Last Update: 2024-01-24 11:23
Reporter: hostedpower
Platform: Linux
OS: Debian 12
Assigned To: bruno-at-bareos
Priority: normal
Status: acknowledged
Product Version: 22.1.3
Resolution: open
Summary: Rerun failed consolidation group purges all jobs except latest
Description: Hi,


We sometimes see failed Consolidate jobs (for the always incremental scheme). These jobs normally work fine, and everything is consolidated properly under normal circumstances.

However, when a job fails for some host for whatever reason and we re-run it, ALL restore points are lost (except the very latest one).

This is very dangerous: once it's done, we have lost all but the latest restore point.

I believe this can also be tested/reproduced by simply re-running a consolidation job which succeeded.

This bug has been there a long time (version 18 or even earlier) but was never reported before. Since it's so dangerous and still not fixed, we finally decided to report it here :)
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0005715)
bruno-at-bareos   
2024-01-23 17:04   
rerun should never, never, never (repeat after me: NEVER) be used on a failed job prepared by consolidation: it doesn't do what consolidation does (create a precise list of JobIds to consolidate), so you get a virtual full of everything.

What you have to do if you want to rerun a consolidation is rerun the Consolidate job itself, which will create the "consolidated job" correctly; this is documented.

We have a task to make a rerun without a JobId list in the virtual full case automatically cancel, but I can't promise when this will be developed.
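In bconsole terms, the difference looks roughly like this (a sketch; the JobId is a placeholder and "Consolidate" stands for whatever the site's consolidation job is actually named):

```
# Dangerous: reruns the prepared virtual full without its JobId list,
# consolidating everything into a single restore point
*rerun jobid=12345

# Correct: start the consolidation job itself, so it rebuilds
# the precise list of JobIds to consolidate
*run job=Consolidate yes
```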
(0005716)
hostedpower   
2024-01-23 17:06   
Haha, that is clear now, but anyone can easily make this mistake if it's not guarded against. Thanks for confirming this is at least a known "bug" :)

View Issue Details
ID: 1584
Project: bareos-core
Category: file daemon
Severity: minor
Reproducibility: always
Date Submitted: 2023-12-19 10:44
Last Update: 2024-01-16 15:11
Reporter: mdc
Platform: x86_64
OS: CentOS stream 8
Assigned To: bruno-at-bareos
Priority: normal
Status: resolved
Resolution: fixed
Summary: Incremental PostgreSQL backup will log an warning on the database server
Description: Using the new PostgreSQL plugin of Bareos 23 logs a warning on the database server when an incremental backup runs.
Seen on RHEL 7-9 with PostgreSQL 12 and 14 (possibly also on others).

Warning on the database server after the incremental backup:
2023-12-19T10:32:44+0100 <HOSTNAME> postgres[63014]: [8-1] < 2023-12-19 10:32:44.100 CET >WARNING: aborting backup due to backend exiting before pg_stop_backup was called
Tags: plugin, postgresql
Steps To Reproduce: Run a full backup followed by an incremental one.
Additional Information:
Attached Files: debug.log (65,056 bytes) 2023-12-20 08:43
https://bugs.bareos.org/file_download.php?file_id=596&type=bug
Notes
(0005648)
bruno-at-bareos   
2023-12-19 11:12   
The pg8000 connection needs to remain stable between the start and the end of the job.

Maybe you have something that kills the connection; none of our tests shows this trouble.
You may have to track connections/disconnections in the PostgreSQL logs.
(0005649)
mdc   
2023-12-19 11:26   
On all servers, the file daemon and the database run on the same host, and the connection is made through a socket.
Here is the database log with connection logging enabled:
2023-12-19T11:24:15+0100 postgres[2245125]: [13-1] < 2023-12-19 11:24:15.967 CET >LOG: connection received: host=[local]
2023-12-19T11:24:15+0100 postgres[2245125]: [14-1] < 2023-12-19 11:24:15.969 CET >LOG: connection authenticated: identity="root" method=peer (/var/lib/pgsql/14/data/pg_hba.conf:83)
2023-12-19T11:24:15+0100 postgres[2245125]: [15-1] < 2023-12-19 11:24:15.969 CET >LOG: connection authorized: user=root database=postgres
2023-12-19T11:24:17+0100 postgres[2245125]: [16-1] < 2023-12-19 11:24:17.309 CET >WARNING: aborting backup due to backend exiting before pg_stop_backup was called
2023-12-19T11:24:17+0100 postgres[2245125]: [17-1] < 2023-12-19 11:24:17.309 CET >LOG: disconnection: session time: 0:00:01.342 user=root database=postgres host=[local]
(0005650)
bruno-at-bareos   
2023-12-19 15:52   
Would you mind helping us by running at least one job with setdebug level=150 on the client?
A description of how the PG cluster is set up and run would also give us a chance to reproduce this issue.

None of the system tests in use has shown this error on any platform tested.
(0005651)
mdc   
2023-12-20 08:43   
Yes, of course. Here is the debug output; I hope it helps.
But don't be surprised by a delay in my next answer, because today is my last day in the office until next year.
(0005652)
mdc   
2023-12-20 08:44   
The client was running via:
bareos-fd -d 150 -f 2>&1 >/tmp/debug.log
for this test.
(0005659)
bruno-at-bareos   
2023-12-21 09:56   
We will need the FileSet and the corresponding PG cluster log to have a chance of understanding what's happening there.
(0005663)
bruno-at-bareos   
2024-01-03 14:45   
OK, found the information in the PostgreSQL database log.
(0005678)
bruno-at-bareos   
2024-01-09 16:13   
PR created https://github.com/bareos/bareos/pull/1655
Packages to test will appear after build & test in https://download.bareos.org/experimental/PR-1655/
(0005680)
mdc   
2024-01-10 07:12   
Hi Bruno,
I have built a new 23 version with this patch and the warnings are now gone.
Thanks
(0005681)
bruno-at-bareos   
2024-01-10 10:44   
Thanks for the report. I'm still investigating some use cases (especially incrementals without changes, which may leave the job in a warning state).
Will fix that a bit later.
(0005686)
bruno-at-bareos   
2024-01-16 15:11   
Fix committed to bareos master branch with changesetid 18537.

View Issue Details
ID: 791
Project: bareos-core
Category: file daemon
Severity: major
Reproducibility: always
Date Submitted: 2017-02-27 15:06
Last Update: 2023-12-22 14:20
Reporter: vgusev2007
Platform: Linux
OS: Ubuntu 14.04 x64
Assigned To: arogge
Priority: urgent
Status: feedback
Product Version: 15.2.2
Resolution: open
Summary: mssql plugin can't restore to other HOST
Description: Dear all, I have a working backup of my MSSQL Server 2008 R2 SP3. I can back up and restore to the same host without a problem (Full, Inc), but I cannot restore any backup to another server...
Tags: mssql, mssqlvdi
Steps To Reproduce: Create an MSSQL full backup as usual. Restore it to another host (with the same MSSQL version).

Please see a restore log:

27-feb. 16:21 mssql-dev-fd JobId 4086: Fatal error: mssqlvdi-fd: IClientVirtualDeviceSet2::GetConfiguration failed
27-feb. 16:21 boss-sd JobId 4086: Error: bsock_tcp.c:422 Write error sending 22221 bytes to client:192.168.13.90:9103: ERR=Connection reset by peer
27-feb. 16:21 mssql-dev-fd JobId 4086: Fatal error: ???????????????????????????????????????? '{1129DBCE-A9A0-4A69-960B-A4B51E116D8B}'. ??????????????????? 0x80070002(????????????????????.).
RESTORE DATABASE ???????????????????.

27-feb. 16:21 boss-sd JobId 4086: Fatal error: read.c:154 Error sending to File daemon. ERR=Connection reset by peer
27-feb. 16:21 boss-sd JobId 4086: Error: bsock_tcp.c:357 Socket has errors=1 on call to client:192.168.13.90:9103
27-feb. 16:21 boss-dir JobId 4086: Error: Bareos boss-dir 15.2.2 (16Nov15):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 14.04 LTS
  JobId: 4086
  Job: Restore-Timex-dev.2017-02-27_16.20.40_41
  Restore Client: mssql-dev
  Start time: 27-feb.-2017 16:20:42
  End time: 27-feb.-2017 16:21:42
  Elapsed time: 1 min
  Files Expected: 1
  Files Restored: 0
  Bytes Restored: 0
  Rate: 0.0 KB/s
  FD Errors: 1
  FD termination status: Fatal Error
  SD termination status: Fatal Error
  Termination: *** Restore Error ***

I can't see the detailed error messages because of the encoding... :( Please help me.
Additional Information:
Attached Files:
Notes
(0002585)
vgusev2007   
2017-02-27 15:09   
Sorry for the wrong title of the issue.

The correct title is: mssql plugin can't restore to other HOST
(0002588)
vgusev2007   
2017-03-02 15:33   
Dear all, there is the following error message when I want to restore an MSSQL DB from one server to another MSSQL server:



bareos-fd: Cannot open backup device ххххххххххххххххххххх. Operating system error 0x80070002(The system cannot find the file specified.).

It really looks like a bug in the mssql plugin of Bareos.
(0005662)
arogge   
2023-12-22 14:20   
I assume you used where=/ on restore to get a VDI restore.
Could you try to restore with setting where to some other location and see if that at least restores the files?
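For reference, such a test restore can be started from bconsole by overriding "where" so the backup data is written to plain files instead of going through the VDI path; the client name and target directory here are placeholders, not taken from the original report:

```text
*restore client=mssql-dev where=/tmp/bareos-restores
(select the MSSQL backup job to restore, then confirm)
```

If the files restore fine to that directory, that would suggest the problem is limited to the VDI handover to the other host's SQL Server instance.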

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1553 [bareos-core] storage daemon major always 2023-09-20 10:18 2023-12-13 15:31
Reporter: robertdb Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 9  
Status: new Product Version: 22.1.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: S3 (droplet) returns an error on Exoscale S3 service
Description: Using [Exoscale Object Storage](https://www.exoscale.com/object-storage/) ("S3 compatible"), I'm getting this error and a failing job:

```text
bareos-storagedaemon JobId 1: Warning: stored/label.cc:358 Open device "EXO_S3_1-00" (Exoscale S3 Storage) Volume "Full-0001" failed: ERR=stored/dev.cc:602 Could not open: Exoscale S3 Storage/Full-0001
```
Tags: "libdroplet", "s3"
Steps To Reproduce: Configure bareos-sd with these files:

/etc/bareos/bareos-sd.d/device/EXO_S3_1-00.conf:

```text
Device {
  Name = "EXO_S3_1-00"
  Description = "ExoScale S3 device."
  Maximum Concurrent Jobs = 1
  Media Type = "S3_Object"
  Archive Device = "Exoscale S3 Storage"
  Device Type = "droplet"
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/exoscale.profile,bucket=bareos-backups,chunksize=100M"
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = No
  AlwaysOpen = No
}
```

/etc/bareos/bareos-sd.d/device/droplet/exoscale.profile:

```text
host = "sos-ch-gva-2.exo.io:443"
use_https = true
access_key = REDACTED
secret_key = REDACTED
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4
aws_region = ch-gva-2
```

Start a backup on this Storage.
Additional Information: The packages are installed from the "subscription repository": (`cat /etc/apt/sources.list.d/bareos.list`)

```text
deb [signed-by=/etc/apt/bareos.gpg] https://download.bareos.com/bareos/release/22/Debian_12 /
```

I'm using these versions: (`dpkg -l | grep bareos`)

```text
ii bareos-common 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-dbg 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - debugging symbols
ii bareos-storage 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-storage-droplet 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon droplet backend
ii bareos-storage-tape 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - storage daemon tape support
ii bareos-tools 22.1.0-2 amd64 Backup Archiving Recovery Open Sourced - common tools
```

On this host: (`cat /etc/os-release`)

```text
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1582 [bareos-core] General trivial have not tried 2023-12-13 13:50 2023-12-13 13:50
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 22.1.4
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 22.1.4
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1583 [bareos-core] General trivial have not tried 2023-12-13 13:50 2023-12-13 13:50
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 24.0.0
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 24.0.0
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1325 [bareos-core] storage daemon major always 2021-03-03 07:44 2023-12-13 13:23
Reporter: sergii.chebotkin Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7  
Status: new Product Version: 20.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Volume management issue
Description: We have been facing constant volume-management issues during AWS backups with Bareos v20. We did not see this issue with Bareos v18.
The issue is the creation of more than one volume (sometimes up to 15) with random sizes for a single backup. The bigger the backup job (2 GB and more), the more volumes are created.
Some of the volumes are marked as "Full", some as "Error". The following examples are for a Full backup, but Incremental has the same issue.

Full-0001 S3_Object S3_Object1 11 day(s) ago Error 1 year(s) 1.07 TB 68.07 GB
Full-0029 S3_Object S3_Object1 11 day(s) ago Full expires in 354 days 1.07 TB 1.36 GB
Full-0030 S3_Object S3_Object1 11 day(s) ago Full expires in 354 days 1.07 TB 3.36 GB
Full-0031 S3_Object S3_Object1 11 day(s) ago Full expires in 354 days 1.07 TB 5.66 GB

We are using the following Full backup configuration:

DIR POOL:
Maximum Volume Jobs - is not set.
Maximum Volumes = 1500
Maximum Volume Bytes = '1000G'
Volume recycle - true
Volume retention - '360 days'
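Written out as a Bareos Pool resource for the director, the settings listed above would look roughly like this (the resource name is a placeholder, not from the original report):

```text
Pool {
  Name = "AWS-Full"
  Pool Type = Backup
  # Maximum Volume Jobs intentionally not set
  Maximum Volumes = 1500
  Maximum Volume Bytes = 1000G
  Recycle = yes
  Volume Retention = 360 days
}
```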

SD DEVICE:
Name = "AWS_S3_1-00"
Always Open = false
Archive Device = "AWS S3 Storage"
Automatic Mount = true
Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.profile,bucket=bareos-backups,chunksize=100M,retries=0,iothreads=0,ioslots=0"
Device Type = droplet
Label Media = true
Media Type = "S3_Object1"
Random Access = true
Removable Media = false

Logs example:

Job name : Full-0029
AWS - 12 files - 100.0 MB

Full-0029 S3_Object S3_Object1 2 day(s) ago Full expires in 363 days 1.07 TB 1.36 GB
Volume jobs 1
Volume name(s) in use: Full-0029|Full-0030|Full-0031|Full-0032

47 2021-02-20 18:04:26 bareos-dir JobId 92: Insert of attributes batch table done
46 2021-02-20 18:04:17 bareos-dir JobId 92: Insert of attributes batch table with 737036 entries start
45 2021-02-20 18:04:17 bareos-sd JobId 92: Elapsed time=01:13:31, Transfer rate=2.371 M Bytes/second
44 2021-02-20 18:04:15 bareos-sd JobId 92: Releasing device "AWS_S3_1-00" (AWS S3 Storage).
43 2021-02-20 18:03:47 bareos-sd JobId 92: New volume "Full-0032" mounted on device "AWS_S3_1-00" (AWS S3 Storage) at 20-Feb-2021 18:03.
42 2021-02-20 18:03:47 bareos-sd JobId 92: Wrote label to prelabeled Volume "Full-0032" on device "AWS_S3_1-00" (AWS S3 Storage)
41 2021-02-20 18:03:47 bareos-sd JobId 92: Labeled new Volume "Full-0032" on device "AWS_S3_1-00" (AWS S3 Storage).
40 2021-02-20 18:03:46 bareos-dir JobId 92: Created new Volume "Full-0032" in catalog.
39 2021-02-20 18:03:46 bareos-sd JobId 92: End of medium on Volume "Full-0031" Bytes=5,662,281,530 Blocks=87,771 at 20-Feb-2021 18:03.
38 2021-02-20 18:03:46 bareos-sd JobId 92: Error: stored/block.cc:822 Write error on fd=0 at file:blk 1:1367314233 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
37 2021-02-20 18:03:46 bareos-sd JobId 92: Error: stored/block.cc:803 Write error at 1:1367314233 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
36 2021-02-20 17:40:54 bareos-sd JobId 92: New volume "Full-0031" mounted on device "AWS_S3_1-00" (AWS S3 Storage) at 20-Feb-2021 17:40.
35 2021-02-20 17:40:54 bareos-sd JobId 92: Wrote label to prelabeled Volume "Full-0031" on device "AWS_S3_1-00" (AWS S3 Storage)
34 2021-02-20 17:40:54 bareos-sd JobId 92: Labeled new Volume "Full-0031" on device "AWS_S3_1-00" (AWS S3 Storage).
33 2021-02-20 17:40:53 bareos-dir JobId 92: Created new Volume "Full-0031" in catalog.
32 2021-02-20 17:40:53 bareos-sd JobId 92: End of medium on Volume "Full-0030" Bytes=3,355,396,766 Blocks=52,012 at 20-Feb-2021 17:40.
31 2021-02-20 17:40:53 bareos-sd JobId 92: Error: stored/block.cc:822 Write error on fd=0 at file:blk 0:3355396765 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
30 2021-02-20 17:40:53 bareos-sd JobId 92: Error: stored/block.cc:803 Write error at 0:3355396765 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
29 2021-02-20 17:07:07 some.host.com JobId 92: Disallowed filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
28 2021-02-20 17:06:44 some.host.com JobId 92: Disallowed filesystem. Will not descend from / into /run
27 2021-02-20 17:06:44 some.host.com JobId 92: Disallowed filesystem. Will not descend from / into /sys
26 2021-02-20 17:06:44 some.host.com JobId 92: Disallowed filesystem. Will not descend from / into /dev
25 2021-02-20 17:02:29 bareos-sd JobId 92: New volume "Full-0030" mounted on device "AWS_S3_1-00" (AWS S3 Storage) at 20-Feb-2021 17:02.
24 2021-02-20 17:02:29 bareos-sd JobId 92: Wrote label to prelabeled Volume "Full-0030" on device "AWS_S3_1-00" (AWS S3 Storage)
23 2021-02-20 17:02:29 bareos-sd JobId 92: Labeled new Volume "Full-0030" on device "AWS_S3_1-00" (AWS S3 Storage).
22 2021-02-20 17:02:24 bareos-dir JobId 92: Created new Volume "Full-0030" in catalog.
21 2021-02-20 17:02:24 bareos-sd JobId 92: End of medium on Volume "Full-0029" Bytes=1,363,138,070 Blocks=21,130 at 20-Feb-2021 17:02.
20 2021-02-20 17:02:24 bareos-sd JobId 92: Error: stored/block.cc:822 Write error on fd=0 at file:blk 0:1363138069 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
19 2021-02-20 17:02:24 bareos-sd JobId 92: Error: stored/block.cc:803 Write error at 0:1363138069 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
18 2021-02-20 16:50:39 bareos-sd JobId 92: Wrote label to prelabeled Volume "Full-0029" on device "AWS_S3_1-00" (AWS S3 Storage)
17 2021-02-20 16:50:39 bareos-sd JobId 92: Labeled new Volume "Full-0029" on device "AWS_S3_1-00" (AWS S3 Storage).
16 2021-02-20 16:50:38 bareos-dir JobId 92: Created new Volume "Full-0029" in catalog.
15 2021-02-20 16:50:38 bareos-sd JobId 92: Marking Volume "Full-0001" in Error in Catalog.
14 2021-02-20 16:50:38 bareos-sd JobId 92: Error: Bareos cannot write on disk Volume "Full-0001" because: The sizes do not match! Volume=13107200000 Catalog=68068053996
13 2021-02-20 16:50:06 bareos-sd JobId 92: Volume "Full-0001" previously written, moving to end of data.
Tags: droplet, s3;droplet;aws;storage, storage, volume
Steps To Reproduce: Just start scheduled backup.
Additional Information: We did not see this issue in version 18. We switched to version 20 because of an AWS issue.
System Description
Attached Files:
Notes
(0004096)
sergii.chebotkin   
2021-03-03 10:38   
I have changed the configuration to:
DIR POOL:
Maximum Volume Jobs = 0
Maximum Volumes = 0
Maximum Volume Bytes = '0G'
Volume recycle - true
Volume retention - '360 days'

But I still got one job with a 115 GB backup which created 3 volumes:

Incremental-0956 S3_Object S3_Object1 1 day(s) ago Full expires in 6 days 0.00 B 14.68 GB
Incremental-0957 S3_Object S3_Object1 1 day(s) ago Full expires in 6 days 0.00 B 14.68 GB
Incremental-0958 S3_Object S3_Object1 today Used expires in 7 days 0.00 B 86.02 GB

37 2021-03-03 01:13:20 bareos-dir JobId 26619: Insert of attributes batch table done
36 2021-03-03 01:13:20 bareos-dir JobId 26619: Insert of attributes batch table with 4619 entries start
35 2021-03-03 01:13:20 bareos-sd JobId 26619: Elapsed time=02:22:35, Transfer rate=13.47 M Bytes/second
34 2021-03-03 01:13:13 bareos-sd JobId 26619: Releasing device "AWS_S3_1-00" (AWS S3 Storage).
33 2021-03-02 23:27:35 bareos-sd JobId 26619: New volume "Incremental-0958" mounted on device "AWS_S3_1-00" (AWS S3 Storage) at 02-Mar-2021 23:27.
32 2021-03-02 23:27:35 bareos-dir JobId 26619: Max Volume jobs=1 exceeded. Marking Volume "Incremental-0958" as Used.
31 2021-03-02 23:27:35 bareos-sd JobId 26619: Wrote label to prelabeled Volume "Incremental-0958" on device "AWS_S3_1-00" (AWS S3 Storage)
30 2021-03-02 23:27:35 bareos-sd JobId 26619: Labeled new Volume "Incremental-0958" on device "AWS_S3_1-00" (AWS S3 Storage).
29 2021-03-02 23:27:34 bareos-dir JobId 26619: Created new Volume "Incremental-0958" in catalog.
28 2021-03-02 23:27:33 bareos-sd JobId 26619: End of medium on Volume "Incremental-0957" Bytes=14,680,028,204 Blocks=227,555 at 02-Mar-2021 23:27.
27 2021-03-02 23:27:33 bareos-sd JobId 26619: Error: stored/block.cc:822 Write error on fd=0 at file:blk 3:1795126315 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
26 2021-03-02 23:27:33 bareos-sd JobId 26619: Error: stored/block.cc:803 Write error at 3:1795126315 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
25 2021-03-02 23:08:58 bareos-sd JobId 26619: New volume "Incremental-0957" mounted on device "AWS_S3_1-00" (AWS S3 Storage) at 02-Mar-2021 23:08.
24 2021-03-02 23:08:58 bareos-dir JobId 26619: Max Volume jobs=1 exceeded. Marking Volume "Incremental-0957" as Used.
23 2021-03-02 23:08:58 bareos-sd JobId 26619: Wrote label to prelabeled Volume "Incremental-0957" on device "AWS_S3_1-00" (AWS S3 Storage)
22 2021-03-02 23:08:58 bareos-sd JobId 26619: Labeled new Volume "Incremental-0957" on device "AWS_S3_1-00" (AWS S3 Storage).
21 2021-03-02 23:08:58 bareos-dir JobId 26619: Created new Volume "Incremental-0957" in catalog.
20 2021-03-02 23:08:57 bareos-sd JobId 26619: End of medium on Volume "Incremental-0956" Bytes=14,680,028,152 Blocks=227,555 at 02-Mar-2021 23:08.
19 2021-03-02 23:08:57 bareos-sd JobId 26619: Error: stored/block.cc:822 Write error on fd=0 at file:blk 3:1795126263 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
18 2021-03-02 23:08:57 bareos-sd JobId 26619: Error: stored/block.cc:803 Write error at 3:1795126263 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=Input/output error.
17 2021-03-02 22:51:05 some.host.com JobId 26619: Disallowed filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
16 2021-03-02 22:50:53 some.host.com JobId 26619: Disallowed filesystem. Will not descend from / into /run
15 2021-03-02 22:50:53 some.host.com JobId 26619: Disallowed filesystem. Will not descend from / into /sys
14 2021-03-02 22:50:53 some.host.com JobId 26619: Disallowed filesystem. Will not descend from / into /dev
13 2021-03-02 22:50:42 bareos-dir JobId 26619: Max Volume jobs=1 exceeded. Marking Volume "Incremental-0956" as Used.
12 2021-03-02 22:50:42 bareos-sd JobId 26619: Wrote label to prelabeled Volume "Incremental-0956" on device "AWS_S3_1-00" (AWS S3 Storage)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1565 [bareos-core] file daemon crash always 2023-11-01 00:14 2023-12-12 10:11
Reporter: jamyles Platform: Mac  
Assigned To: joergs OS: MacOS X  
Priority: high OS Version: 10  
Status: resolved Product Version: 22.1.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-fd crash on macOS 14.1 Sonoma
Description: bareos-fd crashes with "Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFString stringByStandardizingPath]: unrecognized selector sent to instance 0x600002118580'." Can reproduce on 22.1.1 subscription release and 23.0.0~pre1137. /var/run/bareos.log attached.
Tags:
Steps To Reproduce: /usr/local/bareos/sbin/bareos-fd --version

or in normal operation.
Additional Information: % sw_vers
ProductName: macOS
ProductVersion: 14.1
BuildVersion: 23B74
% otool -L /usr/local/bareos/sbin/bareos-fd
/usr/local/bareos/sbin/bareos-fd:
    /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1953.255.0)
    @rpath/libbareosfind.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    @rpath/libbareos.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
    @rpath/libbareoslmdb.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    @rpath/libbareosfastlz.22.dylib (compatibility version 22.0.0, current version 22.1.1)
    /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 1300.36.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1319.0.0)
%
System Description
Attached Files: bareos.log (5,283 bytes) 2023-11-01 00:14
https://bugs.bareos.org/file_download.php?file_id=570&type=bug
Notes
(0005509)
joergs   
2023-11-09 15:04   
Thank you for your report. I also read your message on bareos-users.
Unfortunately, I don't have access to a test machine with macOS 14. For building (and testing) we use GitHub Actions, where we use macos-12. macos-13 is only available as beta, so I'm afraid it will take a long time before macos-14 becomes available.
However, the other problem described on bareos-users (Working Directory: "/usr/local/var/lib/bareos" not found. Cannot continue.) has been addressed by https://github.com/bareos/bareos/pull/1592.
Without that adaptation, I wasn't able to start bareos-fd (without copying the config files around).
Test packages are available at https://download.bareos.org/experimental/PR-1592/MacOS/
Would you mind giving it a try to see if this also influences the problem from this bug report?
I'm also not against applying your LC_MESSAGES=C setting to the plist file we provide, as bareos-fd does not use language support at all. Still, finding the root cause would be much better.
Are you aware of anywhere I could get access to a macOS 14 test machine, maybe as a cloud offering?
(0005510)
jamyles   
2023-11-09 16:30   
Thanks for the update. PR-1592 does fix the Working Directory issue in my testing.

I'm working to get access to a macOS 14 system that you can use to test, at least in the short term, and I'll email you directly about that.
(0005566)
joergs   
2023-12-04 17:30   
With the help of jamyles, we've been able to reproduce and solve the problem. https://github.com/bareos/bareos/pull/1592 has been updated accordingly and will hopefully get merged soon.
(0005601)
joergs   
2023-12-12 10:11   
Fix committed to bareos master branch with changesetid 18412.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1571 [bareos-core] General feature always 2023-11-23 12:17 2023-12-07 10:34
Reporter: hostedpower Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: low OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Differential backups for postgresql new plugin
Description: Hi,


We used the former postgresql plugin from Bareos, and never had this message during backup:

Differential backups are not supported! Only Full and Incremental


This happens during a Differential backup. We never had such messages with the old plugin, so we're wondering why we get them now with the Bareos version 23 plugin.

Also we wonder: is this hard to support? Isn't a differential simply all incrementals merged into one? We checked old postgresql plugin backups, and we have the feeling this actually just worked correctly.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: image.png (99,946 bytes) 2023-11-23 16:28
https://bugs.bareos.org/file_download.php?file_id=574&type=bug
png
Notes
(0005525)
bruno-at-bareos   
2023-11-23 13:35   
After checking how the old plugin behaves: to do a differential, you would need all WAL archives present in the right order since the last full, and we can't guarantee that, especially as the plugin is not in charge of managing the archive repository.

There is no built-in differential backup in PostgreSQL. Since the plugin is based on the PITR WAL-archiving methodology, there is no real need for a differential level. A Full plus n Incrementals allows restoration to the desired point in time.

As it is specifically linked to the cluster, it also forbids the use of features like Virtual Full.
(0005526)
hostedpower   
2023-11-23 13:47   
So you're not simply adding incremental backups together? Because otherwise that should work up to the last full? Or am I overlooking something here? :)
(0005527)
bruno-at-bareos   
2023-11-23 14:22   
Well, I think we put enough effort into explaining what's going on with the plugin in the documentation:
https://docs.bareos.org/master/TasksAndConcepts/Plugins.html#postgresql-plugin

And plugin backups are different from traditional file backups.
(0005528)
hostedpower   
2023-11-23 16:28   
Well, if it doesn't work, I have to believe you. I was just under the impression that the incremental backups just contain WAL data. I checked the old plugin, and there the differential backups run fine. If I look into those backups, the differential backups simply contain all the files from the incremental ones. It's very simple: all files seem to be present.

So I must be overlooking something, because apparently having exactly the same WAL files as in the incremental backups wouldn't be sufficient? :)
(0005532)
bruno-at-bareos   
2023-11-27 10:24   
I can't state anything from the screenshot, as the content greatly depends on how the checkbox options are checked.
Maybe I'm just seeing the result of full+incremental jobs merged here :-)
(0005534)
hostedpower   
2023-11-27 13:11   
No, this was a single job (I unchecked the boxes to merge).

In any case, since it is just WAL files that are kept with an incremental, wouldn't a differential just work? Or am I missing a piece of the puzzle?

The old jobs (from the previous plugin, with differential as well) seem to work when we test, but maybe we didn't test the right case or something.
(0005570)
bruno-at-bareos   
2023-12-05 16:10   
Just a word about this case. The old plugin blindly believed that all archived WALs since the full were there, and backed those up. If one is missing, the restore will not be possible, while the backup would have status "OK".

For the new plugin we decided not to let the user believe the backup was OK when the chance that it is not grows as time passes; that's why we disabled the Differential mode.
(0005571)
hostedpower   
2023-12-05 16:28   
Also for incrementals we need to make sure we keep enough WAL files, or otherwise we would have problems? I don't see the big difference between differential and incremental :)
(0005572)
bruno-at-bareos   
2023-12-05 16:50   
Yes, that is not a completely wrong assumption ;-)
Discovered afterwards.
(0005573)
hostedpower   
2023-12-05 17:32   
Maybe it would be a good idea to allow differential again, warn in the notes that WAL files need to be kept long enough, and in the long term some safety mechanism would be even better hahaha :)

We do a differential each day, so for us it's really nice to keep the differentials working. It's a great advantage, since we take 24 incrementals per day and it's crazy to keep all of these for a whole week.
(0005576)
bruno-at-bareos   
2023-12-07 10:33   
So the final statement on our side (of course for the moment; with money or a PR everything can be changed :-)):
We will not publish the plugin with level D allowed.

But as you know, it is quite simple to hack it. So for your own usage you may want to change the M_FATAL to something else at line 217:
https://github.com/bareos/bareos/blob/1e7d73e668609f39a7caf4422710dfb58f1c0cd1/core/src/plugins/filed/python/postgresql/bareos-fd-postgresql.py#L217

At your own risk.
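As a sketch of that local hack (the exact call at line 217 may differ between plugin versions, so treat both lines as illustrative, not as the literal source):

```text
- bareosfd.JobMessage(bareosfd.M_FATAL, "Differential backups are not supported!\n")
+ bareosfd.JobMessage(bareosfd.M_WARNING, "Differential backups are not supported!\n")
```

After such a change the plugin would log a warning instead of aborting the job; whether the resulting differential is actually restorable still depends on all WALs since the last full being present.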
(0005577)
bruno-at-bareos   
2023-12-07 10:34   
Differential support will not be enabled for this plugin.
A workaround for people who know the risk they are taking can be proposed in a PR and merged afterwards if acceptable.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1422 [bareos-core] General major always 2022-01-20 11:58 2023-11-20 11:16
Reporter: niklas.skog Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 11  
Status: confirmed Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Bareos Libcloud Plugin incompatible
Description: Goal is: to do backups of S3-buckets using Bareos

Situation:

Installed the Bareos 21.0.0-4 and "bareos-filedaemon-libcloud-python-plugin" on Debian 11 from "https://download.bareos.org/bareos/release/21/Debian_11"

Installed the "python3-libcloud" package on which the Plugin "bareos-filedaemon-libcloud-python-plugin" depends.

Configured the plugin according to https://docs.bareos.org/TasksAndConcepts/Plugins.html

Trying to start a job that should back up the data from S3, I get the following error in the bconsole output:
---
20-Jan 08:27 bareos-dir JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Using Device "FileStorage" to write.
20-Jan 08:27 bareos-dir JobId 13: Connected Client: backup-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Handshake: Immediate TLS
20-Jan 08:27 backup-fd JobId 13: Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)
20-Jan 08:27 backup-fd JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 backup-fd JobId 13: Fatal error: TwoWayAuthenticate failed, because job was canceled.
20-Jan 08:27 backup-fd JobId 13: Fatal error: Failed to authenticate Storage daemon.
20-Jan 08:27 bareos-dir JobId 13: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage
[...]
---

and the job fails.

Thus, the main message is:

"Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)"

which is understandable, because Debian 11 brings python3.9.*:

---
root@backup:/etc/bareos/bareos-dir.d/fileset# apt-cache policy python3
python3:
  Installed: 3.9.2-3
  Candidate: 3.9.2-3
  Version table:
 *** 3.9.2-3 500
        500 http://cdn-aws.deb.debian.org/debian bullseye/main amd64 Packages
        100 /var/lib/dpkg/status
root@backup:/etc/bareos/bareos-dir.d/fileset#
---


Accordingly, the plugin is incompatible with the current Debian version.
Tags: libcloud, plugin, s3
Steps To Reproduce: * install stock debian 11
* install & configure bareos 21, "python3-libcloud" and "bareos-filedaemon-libcloud-python-plugin"
* configure the plugin according to https://docs.bareos.org/TasksAndConcepts/Plugins.html
* try to run a job that is backing up an S3-bucket
* this will fail
Additional Information:
Attached Files:
Notes
(0004487)
arogge   
2022-01-27 11:49   
You cannot use Python 3.9 or newer with the python libcloud plugin due to a limitation in Python 3.9.
We're looking into this, but it isn't that easy to work around that limitation.
(0005513)
troloff   
2023-11-14 15:42   
Is there any progress in this issue?

As far as I can see, the last Debian version that ships a suitable Python is now oldoldstable, with Debian 12 having been released in mid-2023. It would be nice to have a Ceph backup again, or a workaround.
(0005517)
arogge   
2023-11-16 09:47   
I understand that this is frustrating, but we would have to rewrite most of the plugin, because Python decided we cannot use multiprocessing in a subinterpreter anymore.
The workaround right now is to use a Bareos FD on RHEL 8, one of its clones or CentOS Stream 8. These will stay supported until 2029.
(0005518)
troloff   
2023-11-16 10:59   
Would it be possible to fund the redesign? If so, maybe you could contact me to discuss the details.
(0005519)
arogge   
2023-11-20 11:16   
We're going to do an estimate and then look for co-funding. It will probably take a few days, but sales will be happy to get in touch with you.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1309 [bareos-core] General feature always 2021-01-19 02:27 2023-10-24 09:37
Reporter: Ruth Ivimey-Cook Platform: amd64  
Assigned To: OS: Linux  
Priority: normal OS Version: Ubuntu 20.04  
Status: new Product Version: 19.2.9  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Update tape selection so the drive is not fixed
Description: Currently, the tape drive used for writing a job is determined once (fixed) and all tapes must use that drive. At least, that is my understanding.

This means a system with two of the same sort of drives cannot 'double-buffer' them, so that while one is being written the other is waiting to be changed. It also means a long-running job can hog a drive even though a tape that would satisfy subsequent jobs is loaded in the other drive.

I delved into the code a year or so ago and found the drive selection is quite embedded and non-trivial to change. However, changing the way bareos selects drives and storage daemons so it only picks the drive once it needs a new volume would make a great deal of sense to me, and open up new ways of deploying Bareos. I did try having two storage daemons, each controlling one drive (rather than one daemon for both), but it made no difference.

It would be even more fun if the director could expose a type of plugin that could be involved in the selection process: essentially a call from the director to the plugin saying "here's my state, what now?" that returned an instruction like "use storage=X volume=Y". It would be within this plugin that user interaction happens (i.e. the current prompt "Please mount append Volume "ZZZ" or label a new one for:"), so plugins could potentially produce an X11 prompt, not just a console one.
Tags: drive, reservation, sd
Steps To Reproduce: On a system with two tape drives (or equivalent) directly connected:

1. Start a job on one drive that exceeds the remaining length of that tape (tape 1);
2. Insert in the other drive a tape (tape 2) that is suitable for continuing the job once the first tape is full, and mount it;
3. Note that when tape 1 is full, bareos asks for a new tape (tape 2) to be inserted in drive 1, even though the tape it's asking for is already mounted in drive 2.

4. Unmount tape 2, eject both tapes, and put the new tape in drive 1.
5. Note that bareos resumes writing the job as expected.

I presume this would also be true when two tape changers were used (i.e. a tape in changer 1 fills, there is a tape in changer 2 that satisfies the need, but bareos won't use it).
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1285 [bareos-core] installer / packages major always 2020-12-10 13:00 2023-10-24 09:19
Reporter: dupondje Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: high OS Version: 8  
Status: resolved Product Version: 19.2.8  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to install oVirt Plugin
Description:   - nothing provides python-ovirt-engine-sdk4 needed by bareos-filedaemon-ovirt-python-plugin-19.2.7-2.el8.x86_64
  - nothing provides python-pycurl needed by bareos-filedaemon-ovirt-python-plugin-19.2.7-2.el8.x86_64
  - nothing provides python-lxml needed by bareos-filedaemon-ovirt-python-plugin-19.2.7-2.el8.x86_64

But those packages do not exist on CentOS 8; there it's python3-...
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004066)
arogge_adm   
2020-12-16 09:00   
That is a known issue, sorry for that.
Could you try with Bareos 20.0.0, which should hit the download mirror later today?

In that new version we provide plugins for python2 and python3 and have removed the RPM dependencies you're mentioning.

You will now have to install the oVirt SDK manually, but you can use pip to do so.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1559 [bareos-core] General trivial have not tried 2023-10-23 16:24 2023-10-23 16:24
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 21.1.9
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 21.1.9
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1558 [bareos-core] General trivial have not tried 2023-10-23 16:24 2023-10-23 16:24
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 20.0.10
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 20.0.10
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1555 [bareos-core] webui minor always 2023-09-25 20:00 2023-10-12 16:11
Reporter: Animux Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 22.1.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-webui depends needlessly on libapache2-mod-fcgid
Description: The official debian package from the community repository has the following depends:

> Depends: apache2 | httpd, libapache2-mod-fcgid, php-fpm (>= 7.0), php-date, php-intl, php-json, php-curl

When using any http server other than apache2, the dependency on libapache2-mod-fcgid is wrong and pulls in unnecessary additional dependencies (something like apache2-bin). Other official Debian packages use "Recommends" (e.g. sympa) or "Suggests" (e.g. munin or oar-restful-api) for such dependencies.

Can you downgrade the dependency on libapache2-mod-fcgid to at least recommends? "Recommends" would still install libapache2-mod-fcgid in default setups, but would allow the server administrator to skip the installation or to remove the package afterwards.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005448)
bruno-at-bareos   
2023-09-26 11:27   
Are you willing to propose a PR to fix this ?
(0005456)
bruno-at-bareos   
2023-10-11 16:13   
Will be addressed in https://github.com/bareos/bareos/pull/1573 Maybe backported
(0005472)
bruno-at-bareos   
2023-10-12 16:11   
Fix committed to bareos master branch with changesetid 18115.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1550 [bareos-core] installer / packages minor always 2023-08-31 12:06 2023-10-12 16:11
Reporter: roland Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 11  
Status: resolved Product Version: 22.1.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Debian: Build-Depends: openssl missing
Description: Building Bareos 21.x and 22.x (including today's git HEAD) fails in a clean Debian 11 (bullseye) build environment (using cowbuilder/pbuilder) during the dh_auto_configure run with the following message:

CMake Error at systemtests/CMakeLists.txt:196 (message):
  Creation of certificates failed: 127 /build/bareos-22.1.0/cmake-build

That's because the openssl package is not installed in a clean environment and it's missing from the debian/control Build-Depends.
Tags: build, debian 11
Steps To Reproduce: # First get the official source package and rename it:
wget https://github.com/bareos/bareos/archive/refs/tags/Release/22.1.0.tar.gz
mv 22.1.0.tar.gz bareos_22.1.0.orig.tar.gz

# Now get the same via GIT (I could also unpack the above package):
git clone https://github.com/bareos/bareos.git
cd bareos
git checkout Release/22.1.0

# Create the missing debian/changelog file:
dch --create --empty --package bareos -v 22.1.0-rr1+deb11u1
dch -a 'RoRo Build bullseye'

# Create a Debian source package (.dsc, .debian.tar.xz):
env DEB_BUILD_PROFILES="debian bullseye" debuild -us -uc -S -d

# And now finally build the Debian source package in a clean bullseye chroot:
sudo env DEB_BUILD_PROFILES="debian bullseye" cowbuilder --build --basepath /var/cache/pbuilder/base-bullseye.cow ../bareos_22.1.0-rr1+deb11u1.dsc
Additional Information: With Bareos 20.x I did not run into this issue.

Adding "openssl" to the Build-Depends in debian/control and debian/control.src avoids running into the above build failure.

I'm not sure whether there are other missing build dependencies; at least the build complains about some PAM components missing, but these don't stop the build.

I still see several automated tests failing, but have to dig deeper there.
System Description
Attached Files:
Notes
(0005403)
bruno-at-bareos   
2023-09-11 17:07   
Would you mind opening a PR to fix the issue on GitHub?
(0005457)
bruno-at-bareos   
2023-10-11 16:13   
Will be addressed in https://github.com/bareos/bareos/pull/1573
(0005471)
bruno-at-bareos   
2023-10-12 16:11   
Fix committed to bareos master branch with changesetid 18114.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1283 [bareos-core] General feature always 2020-12-05 13:03 2023-09-11 17:35
Reporter: rugk Platform:  
Assigned To: arogge OS:  
Priority: high OS Version:  
Status: acknowledged Product Version: 19.2.8  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Remove insecure MD5 hashing
Description: You use/provide CRAM-MD5 hashing:
https://github.com/bareos/bareos/blob/819bba62ebdadd2ac0bd773ac8d26f4f60f5d39e/python-bareos/bareos/util/password.py#L51

However MD5 is easily brute-forcable nowadays, vulnerable to an (active) MITM attack and has many more weaknesses:
https://en.wikipedia.org/wiki/CRAM-MD5#Weaknesses

And it has been deprecated since 2008.
https://tools.ietf.org/html/draft-ietf-sasl-crammd5-to-historic-00
Tags:
Steps To Reproduce:
Additional Information: The linked RFC recommends e.g. SCRAM as an alternative.

AFAIK you use TLS, which should mitigate this problem, but then such additional authentication is also quite useless here.
You may consider, if appropriate for your use case and not already done, using password-stretching hashes (PBKDF2, Argon2, etc.) on the server for secure storage, or possibly some kind of private/public-key authentication scheme.
These are only ideas for the future though. For now, just remove legacy and insecure algorithms, or – at least – mark them as deprecated, as you should have done in 2008! At most, they can give a false sense of security.
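For reference, the CRAM-MD5 exchange under discussion is just an HMAC-MD5 over a server challenge (a minimal sketch of RFC 2195, not Bareos's implementation; the username/password/challenge values below are made up):

```python
import hashlib
import hmac

def cram_md5_response(username, password, challenge):
    # The server sends a challenge; the client answers with
    # HMAC-MD5(shared secret, challenge) hex-encoded. The password never
    # crosses the wire, but anyone recording challenge + response can
    # brute-force the shared secret offline, one of the cited weaknesses.
    digest = hmac.new(password.encode(), challenge.encode(), hashlib.md5)
    return "%s %s" % (username, digest.hexdigest())

response = cram_md5_response("tim", "tanstaaftanstaat",
                             "<1896.697170952@postoffice.example.net>")
```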
Attached Files:
Notes
(0005409)
arogge   
2023-09-11 17:35   
Right now Bareos has two protocol modes to operate in.
The legacy one is what we inherited from the predecessor project. It uses CRAM-MD5 on plaintext connections (even if you have TLS enabled).
The modernized protocol does immediate TLS and then authenticates using CRAM-MD5 inside that TLS-session.
While this is still obviously legacy, we chose to keep it for a few reasons:
- the legacy clients require that type of authentication
- in Bareos context it isn't worse than sending a plain password
- it is still considered safe when used via a TLS connection (which is the default nowadays)

Having said that, the document from 2008 that you're referencing is a draft and was never made a standard.

If we decide to implement another incompatible protocol change, we will definitely get rid of CRAM. We will probably not get rid of the shared secrets, so password stretching won't work.
Concerning PKI we decided against it, as PSK seems to be sufficient for our use-case.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1122 [bareos-core] General major always 2019-10-18 19:32 2023-09-04 16:40
Reporter: xyros Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Consolidate queues and indefinitely orphans jobs but falsely reports status as "Consolidate OK" for last queued
Description: My Consolidate job never succeeds -- quickly terminating with "Consolidate OK" while leaving all the VirtualFull jobs it started queued and orphaned.

In the WebUI listing for the allegedly successful Consolidate run, it always lists the sequentially last (by job ID) client it queued as being the successful run; however, the level is "Incremental," nothing is actually done, and the client's VirtualFull job is actually still queued up with all the other clients.

In bconsole the status is similar to this:

Running Jobs:
Console connected at 15-Oct-19 15:06
 JobId Level Name Status
======================================================================
   636 Virtual PandoraFMS.2019-10-15_14.33.02_06 is waiting on max Storage jobs
   637 Virtual MongoDB.2019-10-15_14.33.03_09 is waiting on max Storage jobs
   638 Virtual DNS-DHCP.2019-10-15_14.33.04_11 is waiting on max Storage jobs
   639 Virtual Desktop_1.2019-10-15_14.33.05_19 is waiting on max Storage jobs
   640 Virtual Desktop_2.2019-10-15_14.33.05_20 is waiting on max Storage jobs
   641 Virtual Desktop_3.2019-10-15_14.33.06_21 is waiting on max Storage jobs
====


Given that above output, for example the WebUI would show the following:

    642 Consolidate desktop3-fd.hq Consolidate Incremental 0 0.00 B 0 Success
    641 Desktop_3 desktop3-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    640 Desktop_2 desktop2-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    639 Desktop_1 desktop1-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    638 DNS-DHCP dns-dhcp-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    637 MongoDB mongodb-fd.hq Backup VirtualFull 0 0.00 B 0 Queued
    636 PandoraFMS pandorafms-fd.hq Backup VirtualFull 0 0.00 B 0 Queued


I don't know if this has anything to do with the fact that I have multiple storage definitions, one for each VLAN the server is on, and an additional one dedicated to the storage addressable on the default IP (see bareos-dir/storage/File.conf in the attached bareos.zip file). Technically this should not matter, but I get the impression Bareos has not been designed/tested to work elegantly in an environment where the server participates in VLANs.

The reason I'm using VLANs is so that connections do not have to go through a router to reach the clients. Therefore, the full network bandwidth of each LAN segment is available to the Bareos client/server data transfer.

I've tried debugging the Consolidate backup process using "bareos-dir -d 400 >> /var/log/bareos-dir.log"; however, I get nothing that particularly identifies the issue. I have attached a truncated log file that contains activity starting with the queuing of the second-to-last job. I've cut off the log at the point where it is stuck endlessly cycling with output of:

bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN105 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN105 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN105 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
bareos-dir (50): dird/jobq.cc:971-0 Dec Rstore=File-AI-VLAN107 rncj=0
bareos-dir (50): dird/jobq.cc:951-0 Inc Rstore=File-AI-VLAN107 rncj=1
bareos-dir (50): dird/jobq.cc:1004-0 Fail to acquire Wstore=File-AI-VLAN107 wncj=1
etc...

For convenience, I have attached all the most relevant excerpts of my configuration files (sanitized for privacy/security reasons).

I suspect there's a bug that is responsible for this; however, I'm unable to make heads or tails of what's going on.

Could someone please take a look?

Thanks
Tags: always incremental, consolidate
Steps To Reproduce: 1. Place Bareos on a network switch (virtual or actual) with tagged VLANS
2. Configure Bareos host to have connectivity on three or more VLANs
3. Make sure you have clients you can backup, on each of the VLANs
4. Use the attached config files as reference for setting up storages and jobs for testing.
Additional Information:
System Description
Attached Files: bareos.zip (9,113 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=391&type=bug
bareos-dir.log (41,361 bytes) 2019-10-18 19:32
https://bugs.bareos.org/file_download.php?file_id=392&type=bug
Notes
(0004008)
xyros   
2020-06-11 10:11   
Figured it out myself. The official documentation needs a full working example, as the always-incremental backup configuration is very finicky and the error messages provide insufficient guidance for resolution.
(0005367)
bruno-at-bareos   
2023-09-04 16:40   
Fixed by user-adapted configuration

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
979 [bareos-core] documentation text N/A 2018-07-05 14:42 2023-08-02 14:48
Reporter: stephand Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: low OS Version:  
Status: feedback Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version: 18.3.1  
Summary: Clarify Documentation of Max Wait Time Parameter
Description: Documentation is now:

 Max Wait Time = <time>

    The time specifies the maximum allowed time that a job may block waiting for a resource (such as waiting for a tape to be mounted, or waiting for the storage or file daemons to perform their duties), counted from when the job starts (not necessarily the same as when the job was scheduled).

It should be clarified how the time in waiting state is counted; the phrase "counted from when the job starts" could be misleading, because the relevant counting only starts when the job enters the waiting state.
It should also be mentioned that the counter is reset to 0 when the job can continue before reaching the Max Wait Time, so in fact it refers to the maximum continuous time the job spends in the waiting state.
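The requested semantics can be stated precisely with a small model (a sketch of the behavior described in this report, not Bareos code): the counter measures the longest continuous waiting interval, resetting whenever the job resumes.

```python
def hits_max_wait(wait_intervals, max_wait):
    # Each entry is one continuous stretch (in seconds) the job spent
    # waiting; the counter resets to 0 when the job resumes, so only a
    # single interval exceeding Max Wait Time triggers cancellation.
    return any(interval > max_wait for interval in wait_intervals)

# A job that waits 40s, runs, then waits 40s again survives Max Wait Time = 60,
# even though its total waiting time (80s) exceeds the limit...
assert not hits_max_wait([40, 40], 60)
# ...while one continuous 70s wait does not survive.
assert hits_max_wait([70], 60)
```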
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0005130)
bruno-at-bareos   
2023-07-04 15:23   
Isn't that the purpose of the diagram below the configuration?
Or should we still rephrase the explanation?
(0005306)
bruno-at-bareos   
2023-08-02 14:48   
Just modified the documentation regarding the reset when the job continues.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1333 [bareos-core] storage daemon block have not tried 2021-03-27 16:33 2023-07-31 14:10
Reporter: noone Platform: x86_64  
Assigned To: bruno-at-bareos OS: SLES  
Priority: normal OS Version: 15.1  
Status: resolved Product Version: 19.2.9  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: mtx-changer stopped working
Description: mtx-changer stopped working after an update.
Each time a new tape was loaded by bareos I got an error message like:
"""
Connecting to Storage daemon SL3-LTO5-00 at uranus.mcservice.eu:9103 ...
3304 Issuing autochanger "load slot 6, drive 0" command.
3992 Bad autochanger "load slot 6, drive 0": ERR=Child died from signal 15: Termination.
Results=Program killed by BAREOS (timeout)

3001 Mounted Volume: ALN043L5
3001 Device "tapedrive-lto-5" (/dev/tape/by-id/scsi-300e09e60001ce29a-nst) is already mounted with Volume "ALN043L5"
"""
In reality the tape was loaded and after 5 minutes the command was killed by timeout. The tape is loaded after roughly 120 seconds and is readable by that time using other applications (like dd or tapeinfo).

I tracked it down to the `wait_for_drive()` function in the script. I modified the function to look like
```
wait_for_drive() {
  i=0
  while [ $i -le 300 ]; do # Wait max 300 seconds
    debug "tapeinfo -f $1 2>&1"
    debug `tapeinfo -f $1 2>&1`
    debug "mt -f $1 status 2>&1"
    debug `mt -f $1 status 2>&1`
    if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
      break
    fi
    debug "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}
```
An example protocol output is (shortened)
"""
20210327-16:20:35 tapeinfo -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst 2>&1
20210327-16:20:35 mtx: Request Sense: Long Report=yes mtx: Request Sense: Valid Residual=no mtx: Request Sense: Error Code=0 (Unknown?!) mtx: Request Sense: Sense Key=No Sense mtx: Request Sense: FileMark=no mtx: Request Sense: EOM=no mtx: Request Sense: ILI=no mtx: Request Sense: Additional Sense Code = 00 mtx: Request Sense: Additional Sense Qualifier = 00 mtx: Request Sense: BPV=no mtx: Request Sense: Error in CDB=no mtx: Request Sense: SKSV=no INQUIRY Command Failed
20210327-16:20:35 mt -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst status 2>&1
20210327-16:20:35 Unknown tape drive: file no= -1 block no= -1
20210327-16:20:35 Device /dev/tape/by-id/scsi-300e09e60001ce29a-nst - not ready, retrying...
"""
I verified via bash that at this time the tapedrive was ready using tapeinfo.



Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004105)
noone   
2021-03-27 16:39   
For anyone facing this problem:

I found a workaround. The mt command's return value is the problem, so I am now using tapeinfo as a replacement.
At your own risk you could try to replace the `wait_for_drive` function with:
```
wait_for_drive() {
  i=0
  while [ $i -le 300 ]; do # Wait max 300 seconds
    # Code Changed because mt has stopped working in December 2020. This is a provisional fix...
    #if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
    if tapeinfo -f $1 2>&1 | grep "Ready: yes" >/dev/null 2>&1; then
      break
    fi
    debug "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}
```

Might or might not work on different systems. For this to work, bareos-sd has to run as root on my machine.

PS:
I found out that the mt command returns
"""
uranus:~ # mt -f /dev/tape/by-id/scsi-300e09e60001ce29a-nst status
Unknown tape drive:

   file no= -1 block no= -1
"""
so this might be the reason why it stopped working. But I am unable to find out why the output of mt has changed.
(0005279)
bruno-at-bareos   
2023-07-31 14:10   
Thanks for your tips. As it is something we can fix, we mark it as resolved.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
978 [bareos-core] director feature N/A 2018-07-04 22:59 2023-07-04 15:26
Reporter: stevec Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: resolved Product Version: 16.2.7  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Define encryption to pool ; auto encrypt tapes when purged
Description:
Ability to migrate to/from encrypted volumes in a pool over time when volumes are purged/overwritten with new data automatically.

The current method of flagging a tape for encryption at label time has issues when you want to migrate pools to/from an encrypted estate over time. Also, the current methods are very 'patchwork' for SCSI encryption and would be better served by having all settings directly in the config files.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0005131)
bruno-at-bareos   
2023-07-04 15:26   
Sorry to close this, but this feature will not happen without community work.

As a workaround, a scratch pool dedicated to encrypted tapes can be added and configured for all pools using such media.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1533 [bareos-core] vmware plugin major always 2023-04-29 13:41 2023-06-26 15:11
Reporter: CirocN Platform: Linux  
Assigned To: stephand OS: RHEL (and clones)  
Priority: urgent OS Version: 8  
Status: resolved Product Version: 22.0.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restoring vmware vms keeps failing, can't restore the data.
Description: First I need to mention that I am new to Bareos and have been working with it for the past couple of weeks to replace our old backup solution.
I am trying to use the VMware plugin to take backups of our vmdks for disaster recovery, or to extract a specific file using guestfish if needed.
I have followed the official documentation on setting up the VMware plugin at https://docs.bareos.org/TasksAndConcepts/Plugins.html#general
The backup of the VM on our vSphere server is successful.
But when I try to restore the backups, it keeps failing with the following information:

bareos87.simlab.xyz-fd JobId 3: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'


I have tried the same steps on Bareos 21 and 20, and also tried it on Redhat 9.1 and I kept getting same exact error.

Tags:
Steps To Reproduce: After setting up the VMware plugin according to the official documents, I have run the backups using:
run job=vm-websrv1 level=Full
Web-GUI shows the job instantly and after about 10 minutes the job's status shows success.
Right after the backup is done, when I try to restore the backup using the Web-GUI or console, I keep getting the same error:

19 2023-04-29 07:34:05 bareos-dir JobId 4: Error: Bareos bareos-dir 22.0.4~pre63.807bc5689 (17Apr23):
Build OS: Red Hat Enterprise Linux release 8.7 (Ootpa)
JobId: 4
Job: RestoreFiles.2023-04-29_07.33.59_43
Restore Client: bareos-fd
Start time: 29-Apr-2023 07:34:01
End time: 29-Apr-2023 07:34:05
Elapsed time: 4 secs
Files Expected: 1
Files Restored: 0
Bytes Restored: 0
Rate: 0.0 KB/s
FD Errors: 1
FD termination status: Fatal Error
SD termination status: Fatal Error
Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
Job triggered by: User
Termination: *** Restore Error ***

18 2023-04-29 07:34:05 bareos-dir JobId 4: Warning: File count mismatch: expected=1 , restored=0
17 2023-04-29 07:34:05 bareos-sd JobId 4: Releasing device "FileStorage" (/var/lib/bareos/storage).
16 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:454 Socket has errors=1 on call to client:192.168.111.136:9103
15 2023-04-29 07:34:05 bareos-sd JobId 4: Fatal error: stored/read.cc:146 Error sending to File daemon. ERR=Connection reset by peer
14 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:414 Wrote 65536 bytes to client:192.168.111.136:9103, but only 16384 accepted.
13 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
return bareos_fd_plugin_object.create_file(restorepkt)
File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'

12 2023-04-29 07:34:04 bareos-sd JobId 4: Forward spacing Volume "Full-0001" to file:block 0:627.
11 2023-04-29 07:34:04 bareos-sd JobId 4: Ready to read from volume "Full-0001" on device "FileStorage" (/var/lib/bareos/storage).
10 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
9 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
8 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
7 2023-04-29 07:34:02 bareos-dir JobId 4: Handshake: Immediate TLS
6 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Client: bareos-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
5 2023-04-29 07:34:02 bareos-dir JobId 4: Probing client protocol... (result will be saved until config reload)
4 2023-04-29 07:34:02 bareos-dir JobId 4: Using Device "FileStorage" to read.
3 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
2 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
1 2023-04-29 07:34:01 bareos-dir JobId 4: Start Restore Job RestoreFiles.2023-04-29_07.33.59_43
Additional Information:
System Description
Attached Files: Restore_Failure.png (84,322 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=554&type=bug

Backup_Success.png (78,727 bytes) 2023-04-29 13:41
https://bugs.bareos.org/file_download.php?file_id=555&type=bug

BareosFdPluginVMware.png (30,436 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=556&type=bug

docs.bareos.org-instruction.png (36,604 bytes) 2023-05-03 23:32
https://bugs.bareos.org/file_download.php?file_id=557&type=bug

BareosFdPluginVMware.patch (612 bytes) 2023-06-15 16:21
https://bugs.bareos.org/file_download.php?file_id=562&type=bug
Notes
(0004995)
bruno-at-bareos   
2023-05-03 15:37   
As all continuous tests are working perfectly, you are certainly missing something in your configuration; without the configuration nobody will be able to find the problem.

Also, if you want to show a job result, screenshots are certainly the worst way.
Please attach the text log result of bconsole <<< "list joblog jobid=2" here instead.
(0005003)
CirocN   
2023-05-03 23:32   
I have found out that this is the result of a mismatch between the backup path and the restore path.
The backup is created with /VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk while the restore looks up /VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk.
I have noticed that you are trying to strip this off in BareosFdPluginVMware.py on line 366, but it is not working.

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s/%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )

My configuration for the folder is precisely what is suggested in the official document: folder=/


The workaround I have found for now was to edit the BareosFdPluginVMware.py Python script to the following code, but it needs to be properly fixed:

CODE:
        if "uuid" in self.options:
            self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
        else:
            self.vadp.backup_path = "/VMS/%s%s/%s" % (
                self.options["dc"],
                self.options["folder"].strip("/"),
                self.options["vmname"],
            )
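The double slash can also be avoided without special-casing folder=/ by joining only the non-empty path components. A sketch modeled on the snippet above (`build_backup_path` is a hypothetical helper for illustration, not the plugin's actual API):

```python
def build_backup_path(options):
    # Joining only the non-empty components yields the same path for
    # folder="/" and for real folders, with no double slash.
    if "uuid" in options:
        return "/VMS/%s" % options["uuid"]
    parts = [options["dc"], options["folder"].strip("/"), options["vmname"]]
    return "/VMS/" + "/".join(p for p in parts if p)

# folder="/" no longer produces /VMS/Datacenter//backup_client
assert build_backup_path(
    {"dc": "Datacenter", "folder": "/", "vmname": "backup_client"}
) == "/VMS/Datacenter/backup_client"
```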
(0005065)
stephand   
2023-06-06 17:58   
Thanks for reporting this, confirming that it doesn't work properly when using folder=/ for backups.
The root cause is the double slash in the backup path, like in your example
/VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk

I will provide a proper fix.
(0005084)
awillem   
2023-06-15 16:21   
Hi,

here's a more elegant solution, "non-breaking" the current design, in case you're using a mixed with- and without-folder VM hierarchy.

Hope this helps others.

BR
Arnaud
(0005085)
stephand   
2023-06-16 11:20   
Thanks for your proposed solution.

PR 1484 already contains a fix for this issue which will work when using a mixed with- and without-folder VM hierarchy.
See https://github.com/bareos/bareos/pull/1484

Regards,
Stephan
(0005100)
stephand   
2023-06-26 15:11   
Fix committed to bareos master branch with changesetid 17739.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1523 [bareos-core] file daemon crash always 2023-03-13 13:09 2023-06-26 13:52
Reporter: mp Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: urgent OS Version:  
Status: resolved Product Version: 22.0.2  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: fatal error bareos-fd for full backup postgresql 11
Description: Full backup of PostgreSQL 11 crashes with error:
Error: python3-fd-mod: Could net get stat-info for file /var/lib/postgresql/11/main/base/964374/t4_384322129: "[Errno 2] No such file or directory: '/var/lib/postgresql/11/main/base/964374/t4_384322129'"

1c-pg11-fd JobId 238: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 61, in start_backup_file
return bareos_fd_plugin_object.start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 396, in start_backup_file
return super(BareosFdPluginPostgres, self).start_backup_file(savepkt)
File "/usr/lib/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", line 118, in start_backup_file
mystatp.st_mode = statp.st_mode
UnboundLocalError: local variable 'statp' referenced before assignment
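The traceback is the classic try/except stat pitfall: if os.stat() raises (the file vanished mid-backup, as PostgreSQL temporary tables do), the local variable is never bound, and the later attribute access fails. A minimal illustration of the failure mode and a tolerant alternative (not the plugin's actual code):

```python
import os

def safe_stat(path):
    # If os.stat() raises here, "statp" is never assigned; touching it
    # afterwards raises UnboundLocalError, which is the crash in the
    # traceback above. Returning None lets the caller log a warning and
    # skip the vanished file instead of aborting the whole job.
    try:
        statp = os.stat(path)
    except OSError:
        return None
    return statp

assert safe_stat("/no/such/file/xyz123") is None
```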
Tags:
Steps To Reproduce:
Additional Information: Backup fails every time when trying to back up PostgreSQL 11. At the same time, a backup of pg14 finishes without any problem
Attached Files:
Notes
(0005093)
bruno-at-bareos   
2023-06-26 10:10   
(Last edited: 2023-06-26 10:11)
Is this reproducible one way or another? It looks like your table (or part of it) was dropped during the backup.

Does a VACUUM FULL work on this table? And does the file still exist?
(0005099)
bruno-at-bareos   
2023-06-26 13:52   
In issue 1520 we acknowledged that emitting a warning instead of an error is the right behavior.
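The warning-instead-of-error behavior can be illustrated with a minimal Python sketch. The helper names (`stat_for_backup`, `start_backup_file`) and return values are illustrative assumptions, not the actual Bareos plugin API: the point is that a file vanishing between directory listing and `stat()` (e.g. a dropped PostgreSQL table file) is reported and skipped, instead of leaving `statp` unassigned and raising `UnboundLocalError` later.

```python
import os

def stat_for_backup(path):
    """Return (statp, warning); statp is None when the file disappeared."""
    try:
        return os.stat(path), None
    except FileNotFoundError:
        # The file vanished during the backup: report it, don't crash.
        return None, "Could not get stat-info for file %s: vanished during backup" % path

def start_backup_file(path):
    statp, warning = stat_for_backup(path)
    if statp is None:
        # Previously this case fell through and 'statp' was referenced
        # before assignment; here the file is skipped with a warning.
        return {"status": "skipped", "warning": warning}
    return {"status": "ok", "st_mode": statp.st_mode}
```

With this shape, a concurrently dropped table only produces a skipped file in the job report rather than a fatal traceback.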

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
690 [bareos-core] General crash always 2016-08-23 18:19 2023-05-09 16:58
Reporter: tigerfoot Platform: x86_64  
Assigned To: bruno-at-bareos OS: openSUSE  
Priority: low OS Version: Leap 42.1  
Status: resolved Product Version: 15.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bcopy segfault
Description: With a small volume Full001 I'm using a bsr (Catalog job)
to try to copy this job to another medium (and device).

bcopy starts to work but segfaults at the end
Tags:
Steps To Reproduce: Pick the bsr of a job in a volume containing multiple jobs, then run bcopy with a new volume on a different destination (different media)
Additional Information: gdb /usr/sbin/bcopy
GNU gdb (GDB; openSUSE Leap 42.1) 7.9.1
Copyright (C) 2015 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://bugs.opensuse.org/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/bcopy...Reading symbols from /usr/lib/debug/usr/sbin/bcopy.debug...done.
done.
(gdb) run -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Starting program: /usr/sbin/bcopy -d200 -b Catalog.bsr -c /etc/bareos/bareos-sd.conf -v -o catalog01 Default FileStorage
Missing separate debuginfos, use: zypper install glibc-debuginfo-2.19-22.1.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Detaching after fork from child process 3016.
bcopy (90): stored_conf.c:837-0 Inserting Device res: Default
bcopy (90): stored_conf.c:837-0 Inserting Director res: earth-dir
Detaching after fork from child process 3019.
bcopy (50): plugins.c:222-0 load_plugins
bcopy (50): plugins.c:302-0 Found plugin: name=autoxflate-sd.so len=16
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:299-0 Rejected plugin: want=-sd.so name=bpipe-fd.so len=11
bcopy (50): plugins.c:302-0 Found plugin: name=scsicrypto-sd.so len=16
bcopy (200): scsicrypto-sd.c:137-0 scsicrypto-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (50): plugins.c:302-0 Found plugin: name=scsitapealert-sd.so len=19
bcopy (200): scsitapealert-sd.c:99-0 scsitapealert-sd: Loaded: size=104 version=3
bcopy (50): sd_plugins.c:371-0 is_plugin_compatible called
bcopy (8): crypto_cache.c:55-0 Could not open crypto cache file. /var/lib/bareos/bareos-sd.9103.cryptoc ERR=No such file or directory
bcopy (100): bcopy.c:194-0 About to setup input jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:271-0 Using device: "Default" for reading.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/default
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/default dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=61fc28 size=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:171-0 add read_vol=Full-0001 JobId=0
bcopy (100): butil.c:164-0 Acquire device for read
bcopy (100): acquire.c:63-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:64-0 MediaType dcr= dev=default
bcopy (100): acquire.c:92-0 Want Vol=Full-0001 Slot=0
bcopy (100): acquire.c:106-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:174-0 MediaType dcr=default dev=default
bcopy (100): acquire.c:193-0 dir_get_volume_info vol=Full-0001
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (100): mount.c:600-0 Must load "Default" (/var/lib/bareos/storage/default)
bcopy (100): autochanger.c:99-0 Device "Default" (/var/lib/bareos/storage/default) is not an autochanger
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (100): acquire.c:235-0 stored: open vol=Full-0001
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="Default" (/var/lib/bareos/storage/default) vol=Full-0001 mode=OPEN_READ_ONLY
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_ONLY
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_ONLY open(/var/lib/bareos/storage/default/Full-0001, 0x0, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=3 opened
bcopy (100): dev.c:580-0 preserve=0xffffde60 fd=3
bcopy (100): acquire.c:243-0 opened dev "Default" (/var/lib/bareos/storage/default) OK
bcopy (100): acquire.c:257-0 calling read-vol-label
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="Default" (/var/lib/bareos/storage/default) vol=Full-0001 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:213-0 Compare Vol names: VolName=Full-0001 hdr=Full-0001

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : Full-0001
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Full
MediaType : default
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 16:12
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=Full-0001
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=Full-0001 drive="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=Full-0001 at 625718 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/default
bcopy (100): dev.c:432-0 Device "Default" (/var/lib/bareos/storage/default) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "Default" (/var/lib/bareos/storage/default)
bcopy (100): acquire.c:263-0 Got correct volume.
23-aoû 18:16 bcopy JobId 0: Ready to read from volume "Full-0001" on device "Default" (/var/lib/bareos/storage/default).
bcopy (100): acquire.c:370-0 dcr=61fc28 dev=623f58
bcopy (100): acquire.c:371-0 MediaType dcr=default dev=default
bcopy (100): bcopy.c:212-0 About to setup output jcr
bcopy (200): autoxflate-sd.c:183-0 autoxflate-sd: newPlugin JobId=0
bcopy (200): scsicrypto-sd.c:168-0 scsicrypto-sd: newPlugin JobId=0
bcopy (200): scsitapealert-sd.c:130-0 scsitapealert-sd: newPlugin JobId=0
bcopy: butil.c:274-0 Using device: "FileStorage" for writing.
bcopy (100): dev.c:393-0 init_dev: tape=0 dev_name=/var/lib/bareos/storage/file/
bcopy (100): dev.c:395-0 dev=/var/lib/bareos/storage/file/ dev_max_bs=0 max_bs=0
bcopy (100): block.c:127-0 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
bcopy (200): acquire.c:791-0 Attach Jid=0 dcr=6260d8 size=0 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:169-0 read_vol=Full-0001 JobId=0 already in list.
bcopy (120): device.c:266-0 start open_output_device()
bcopy (129): device.c:275-0 Device is file, deferring open.
bcopy (100): bcopy.c:225-0 About to acquire device for writing
bcopy (100): dev.c:561-0 open dev: type=1 dev_name="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 mode=OPEN_READ_WRITE
bcopy (100): dev.c:572-0 call open_device mode=OPEN_READ_WRITE
bcopy (190): dev.c:1006-0 Enter mount
bcopy (100): dev.c:646-0 open disk: mode=OPEN_READ_WRITE open(/var/lib/bareos/storage/file/catalog01, 0x2, 0640)
bcopy (100): dev.c:662-0 open dev: disk fd=4 opened
bcopy (100): dev.c:580-0 preserve=0xffffe0b0 fd=4
bcopy (100): acquire.c:400-0 acquire_append device is disk
bcopy (190): acquire.c:435-0 jid=0 Do mount_next_write_vol
bcopy (150): mount.c:71-0 Enter mount_next_volume(release=0) dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:84-0 mount_next_vol retry=0
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): mount.c:650-0 No swap_dev set
bcopy (200): scsitapealert-sd.c:192-0 scsitapealert-sd: Unknown event 5
bcopy (200): mount.c:390-0 Before dir_find_next_appendable_volume.
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (150): mount.c:124-0 After find_next_append. Vol=catalog01 Slot=0
bcopy (100): autochanger.c:99-0 Device "FileStorage" (/var/lib/bareos/storage/file/) is not an autochanger
bcopy (150): mount.c:173-0 autoload_dev returns 0
bcopy (150): mount.c:209-0 want vol=catalog01 devvol= dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:502-0 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): label.c:76-0 Enter read_volume_label res=0 device="FileStorage" (/var/lib/bareos/storage/file/) vol=catalog01 dev_Vol=*NULL* max_blocksize=64512
bcopy (130): label.c:140-0 Big if statement in read_volume_label
bcopy (190): label.c:907-0 unser_vol_label

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:213-0 Compare Vol names: VolName=catalog01 hdr=catalog01

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : catalog01
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 159
PoolName : Default
MediaType : file
PoolType : Backup
HostName : earth
Date label written: 23-aoû-2016 17:59
bcopy (130): label.c:234-0 Leave read_volume_label() VOL_OK
bcopy (100): label.c:251-0 Call reserve_volume=catalog01
bcopy (150): vol_mgr.c:414-0 enter reserve_volume=catalog01 drive="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List begin reserve_volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:323-0 new Vol=catalog01 at 6258c8 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:582-0 === set in_use. vol=catalog01 dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=Full-0001
bcopy (150): vol_mgr.c:259-0 List end new volume: Full-0001 in_use=1 swap=0 on device "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:638-0 Inc walk_next use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List end new volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (200): scsitapealert-sd.c:228-0 scsitapealert-sd: tapealert is not enabled on device /var/lib/bareos/storage/file/
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (150): mount.c:438-0 Want dirVol=catalog01 dirStat=
bcopy (150): mount.c:446-0 Vol OK name=catalog01
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (100): askdir.c:731-0 Fake dir_get_volume_info
bcopy (200): mount.c:289-0 applying vol block sizes to device "FileStorage" (/var/lib/bareos/storage/file/): dcr->VolMinBlocksize set to 0, dcr->VolMaxBlocksize set to 0
bcopy (100): dev.c:432-0 Device "FileStorage" (/var/lib/bareos/storage/file/) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
bcopy (100): dev.c:474-0 set minblocksize to 64512, maxblocksize to 64512 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): mount.c:323-0 Device previously written, moving to end of data. Expect 0 bytes
23-aoû 18:16 bcopy JobId 0: Volume "catalog01" previously written, moving to end of data.
bcopy (100): dev.c:749-0 Enter eod
bcopy (200): dev.c:761-0 ====== Seek to 14465367
23-aoû 18:16 bcopy JobId 0: Warning: For Volume "catalog01":
The sizes do not match! Volume=14465367 Catalog=0
Correcting Catalog
bcopy (150): mount.c:341-0 update volinfo mounts=1
bcopy (150): mount.c:351-0 set APPEND, normal return from mount_next_write_volume. dev="FileStorage" (/var/lib/bareos/storage/file/)
bcopy (190): acquire.c:448-0 Output pos=0:14465367
bcopy (100): acquire.c:459-0 === nwriters=1 nres=0 vcatjob=1 dev="FileStorage" (/var/lib/bareos/storage/file/)
23-aoû 18:16 bcopy JobId 0: Forward spacing Volume "Full-0001" to file:block 0:1922821345.
bcopy (100): dev.c:892-0 ===== lseek to 1922821345
bcopy (10): bcopy.c:384-0 Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
Begin Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=144
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1335
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2407
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3479
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4551
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5623
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6695
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7767
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8839
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9911
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10983
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12055
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=13127
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=14199
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=15271
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=16343
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=17415
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=18487
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=19559
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=20631
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=21703
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=22775
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=23847
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=24919
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=25991
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=27063
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=28135
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=29207
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=30279
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=31351
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=32423
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=33495
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=34567
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=35639
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=36711
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=37783
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=38855
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=39927
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=40999
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=42071
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=43143
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=44215
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=45287
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=46359
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=47431
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=48503
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=49575
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=50647
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=51719
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=52791
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=53863
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=54935
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=56007
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=57079
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=58151
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=59223
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=60295
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=61367
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=62439
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=63511
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=64583
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=107
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=1179
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=2251
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=3323
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=4395
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=5467
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=6539
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=7611
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=8683
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=9755
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=10827
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=11899
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=65536 rem=12971
bcopy (150): bcopy.c:343-0 !write_record_to_block data_len=600 rem=200
bcopy (10): bcopy.c:384-0 End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
End Job Session Record: VolSessionId=3 VolSessionTime=1471954183 JobId=2 DataLen=180
bcopy (200): read_record.c:243-0 End of file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
23-aoû 18:16 bcopy JobId 0: End of Volume at file 0 on device "Default" (/var/lib/bareos/storage/default), Volume "Full-0001"
bcopy (150): vol_mgr.c:713-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:732-0 === set not reserved vol=Full-0001 num_writers=0 dev_reserved=0 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:763-0 === clear in_use vol=Full-0001
bcopy (150): vol_mgr.c:777-0 === remove volume Full-0001 dev="Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List free_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:928-0 NumReadVolumes=1 CurReadVolume=1
bcopy (150): vol_mgr.c:705-0 vol_unused: no vol on "Default" (/var/lib/bareos/storage/default)
bcopy (150): vol_mgr.c:619-0 Inc walk_start use_count=2 volname=catalog01
bcopy (150): vol_mgr.c:259-0 List null vol cannot unreserve_volume: catalog01 in_use=1 swap=0 on device "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (90): mount.c:947-0 End of Device reached.
23-aoû 18:16 bcopy JobId 0: End of all volumes.
bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 377 records copied.
bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file/)
bcopy (100): dev.c:1043-0 Enter unmount
bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
[New Thread 0x7ffff53e7700 (LWP 3015)]

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
319 lockmgr.c: No such file or directory.
Missing separate debuginfos, use: zypper install libcap2-debuginfo-2.22-16.1.x86_64 libgcc_s1-debuginfo-5.3.1+r233831-6.1.x86_64 libjansson4-debuginfo-2.7-3.2.x86_64 liblzo2-2-debuginfo-2.08-4.1.x86_64 libopenssl1_0_0-debuginfo-1.0.1i-15.1.x86_64 libstdc++6-debuginfo-5.3.1+r233831-6.1.x86_64 libwrap0-debuginfo-7.6-885.4.x86_64 libz1-debuginfo-1.2.8-6.4.x86_64
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x61fc58, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
0000001 0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x61fc58, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
0000002 0x00007ffff7b9e72f in free_dcr (dcr=0x61fc28) at acquire.c:839
0000003 0x00007ffff7baa43f in my_free_jcr (jcr=0x622bc8) at butil.c:215
0000004 0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x622bc8) at jcr.c:641
0000005 0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 3015) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
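One detail worth noting in the backtrace above, offered as an observation rather than a confirmed diagnosis: the nonsense `priority=-1431655766` passed to `lmgr_thread_t::pre_P` is a recognizable bit pattern. A quick check:

```python
# Reinterpret the garbage priority from the backtrace as an unsigned
# 32-bit value: it comes out as the repeating 0xAA byte pattern, a
# classic freed/uninitialized-memory fill.
pattern = (-1431655766) & 0xFFFFFFFF
assert pattern == 0xAAAAAAAA
```

If that fill pattern is what Bareos' allocator writes into released memory, it would suggest that `free_dcr()` at acquire.c:839 tried to lock a mutex that had already been freed, i.e. a use-after-free during cleanup, which is consistent with the lockmgr ASSERT firing just before the crash.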
Attached Files:
Notes
(0002413)
tigerfoot   
2016-10-27 12:07   
End of the trace with full debuginfo packages installed.

27-oct-2016 12:05:29.103637 bcopy (90): mount.c:947-0 End of Device reached.
27-oct 12:05 bcopy JobId 0: End of all volumes.
27-oct-2016 12:05:29.103642 bcopy (10): bcopy.c:384-0 Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
Unknown Record: VolSessionId=0 VolSessionTime=0 JobId=0 DataLen=0
bcopy: bcopy.c:328-0 EOT label not copied.
27-oct-2016 12:05:29.103650 bcopy (100): block.c:449-0 return write_block_to_dev no data to write
bcopy: bcopy.c:251-0 1 Jobs copied. 309113 records copied.
27-oct-2016 12:05:29.103656 bcopy (100): dev.c:933-0 close_dev "Default" (/var/lib/bareos/storage/default)
27-oct-2016 12:05:29.103703 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103707 bcopy (100): dev.c:921-0 Clear volhdr vol=Full-0001
27-oct-2016 12:05:29.103713 bcopy (100): dev.c:933-0 close_dev "FileStorage" (/var/lib/bareos/storage/file)
27-oct-2016 12:05:29.103716 bcopy (100): dev.c:1043-0 Enter unmount
27-oct-2016 12:05:29.103719 bcopy (100): dev.c:921-0 Clear volhdr vol=catalog01
27-oct-2016 12:05:29.103726 bcopy (150): vol_mgr.c:193-0 remove_read_vol=Full-0001 JobId=0 found=1
bcopy: lockmgr.c:319-0 ASSERT failed at acquire.c:839: !priority || priority >= max_prio
319 lockmgr.c: No such file or directory.

Thread 1 "bcopy" received signal SIGSEGV, Segmentation fault.
0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
(gdb) bt
#0 0x00007ffff7743c29 in lmgr_thread_t::pre_P (this=0x61c478, m=0x623008, priority=-1431655766, f=0x7ffff7bc5f7b "acquire.c", l=839) at lockmgr.c:319
0000001 0x00007ffff77433a6 in bthread_mutex_lock_p (m=m@entry=0x623008, file=file@entry=0x7ffff7bc5f7b "acquire.c", line=line@entry=839) at lockmgr.c:742
0000002 0x00007ffff7b9e72f in free_dcr (dcr=0x622fd8) at acquire.c:839
0000003 0x00007ffff7baa43f in my_free_jcr (jcr=0x623558) at butil.c:215
0000004 0x00007ffff77409bd in b_free_jcr (file=file@entry=0x402d83 "bcopy.c", line=line@entry=256, jcr=0x623558) at jcr.c:641
0000005 0x0000000000401f98 in main (argc=<optimized out>, argv=<optimized out>) at bcopy.c:256
(gdb) cont
Continuing.
[Thread 0x7ffff53e7700 (LWP 6975) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.


(full stack will be attached)
(0002414)
tigerfoot   
2016-10-27 12:18   
The trace can't be attached (log.xz is 24 MB); you can get it here:
https://dav.ioda.net/index.php/s/V7RPrq6M3KtbFc0/download
(0005031)
bruno-at-bareos   
2023-05-09 16:58   
This has been fixed in the recent version 21.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
587 [bareos-core] director minor always 2015-12-22 16:48 2023-05-09 16:55
Reporter: joergs Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: low OS Version: 3  
Status: resolved Product Version: 15.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: joblog has "Backup Error", but jobstatus is set to successful ('T') if writing the bootstrap file fails
Description: If the director can't write the bootstrap file, the joblog says:

...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

however, the jobstatus is 'T':

+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| JobId | Name | Client | StartTime | Type | Level | JobFiles | JobBytes | JobStatus |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
| 225 | BackupClient1 | ting.dass-it-fd | 2015-12-22 16:32:13 | B | I | 2 | 44 | T |
+-------+---------------+-----------------+---------------------+------+-------+----------+----------+-----------+
Tags:
Steps To Reproduce: configure a job with

  Write Bootstrap = "/NONEXISTINGPATH/%c.bsr"

and run the job.

Compare status from "list joblog" with "list jobs".
Additional Information: list joblog jobid=...

will show something like:

...
 2015-12-22 16:46:12 ting.dass-it-dir JobId 226: Error: Could not open WriteBootstrap file:
/NONEXISTINGPATH/ting.dass-it-fd.bsr: ERR=No such file or directory
...
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Termination: *** Backup Error ***

However "list jobs" will show 'T'.
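The mismatch can be sketched in a few lines of Python. The function names and the mapping below are illustrative assumptions, not Bareos internals; the idea is simply that the printed "Termination:" line and the catalog JobStatus should be derived from one place, so a non-fatal error such as an unwritable bootstrap file cannot yield "*** Backup Error ***" in the joblog while the catalog still records 'T':

```python
def termination_status(fd_ok, sd_ok, non_fatal_errors):
    """Single source of truth for the job's final status letter."""
    if not (fd_ok and sd_ok):
        return "f"                 # a daemon failed: fatal
    if non_fatal_errors > 0:
        return "E"                 # e.g. WriteBootstrap file could not be opened
    return "T"                     # terminated normally

def termination_text(status):
    """Joblog summary line derived from the same status letter."""
    return {
        "T": "Backup OK",
        "E": "*** Backup Error ***",
        "f": "*** Fatal Error ***",
    }[status]
```

With this structure the scenario from the report (FD/SD OK, one non-fatal error) yields 'E' in the catalog and "*** Backup Error ***" in the joblog, consistently.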
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0005029)
bruno-at-bareos   
2023-05-09 16:55   
Fixed in a recent version.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
301 [bareos-core] director tweak always 2014-05-27 17:02 2023-05-09 16:41
Reporter: alexbrueckel Platform:  
Assigned To: bruno-at-bareos OS: Debian 7  
Priority: low OS Version:  
Status: resolved Product Version: 13.2.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Inconsistency when configuring bandwith limitation
Description: Hi,
while configuring different jobs for a client, some with bandwidth limitation, I noticed that every configuration item could be placed in quotation marks except the desired maximum bandwidth.

It's a bit inconsistent this way, so it would be great if this could be fixed.

Thank you very much
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0000889)
mvwieringen   
2014-05-31 22:04   
An example and the exact error would be handy. It's probably some missing
parsing, as all config code uses the same config parser. But without a
clear example and the exact error it's not something we can go on.
(0000899)
alexbrueckel   
2014-06-04 17:37   
Hi,

here's the example that works:
Job {
  Name = "myhost-backupjob"
  Client = "myhost.mydomain.tld"
  JobDefs = "default"
  FileSet = "myfileset"
  Maximum Bandwidth = 10Mb/s
}

Note that the bandwidth value has no quotation marks.

Here's an example that doesn't work:
Job {
  [same as above]
  Maximum Bandwidth = "10Mb/s"
}

The error message I get in this case is:
ERROR in parse_conf.c:764 Config error: expected a speed, got: 10Mb/s

Hope that helps and thanks for your work.
Alex
(0000900)
mvwieringen   
2014-06-06 15:39   
It seems that the config engine only allows quoting for strings; numbers
are not allowed to have quotation marks. As the speed gets parsed by the same function
as a number, it currently doesn't allow you to use quotes. You can indeed argue
that it's inconsistent, but it seems to have been envisioned by the original creator of
the config engine. We might change this one day, but for now I wouldn't hold my
breath for it to occur any time soon. There are just more important things to do.
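For illustration, a quote-tolerant speed parser could look like the following sketch. This is an assumption for demonstration, not the Bareos config engine: the function name, unit table, and regex are invented here. The point is simply to strip surrounding quotes before the numeric parse, so `10Mb/s` and `"10Mb/s"` behave the same:

```python
import re

# Hypothetical unit table for this example (decimal units assumed).
_UNITS = {"kb/s": 1_000, "mb/s": 1_000_000, "gb/s": 1_000_000_000}

def parse_speed(token):
    """Parse a speed token, accepting both quoted and unquoted forms."""
    token = token.strip()
    if len(token) >= 2 and token[0] == token[-1] == '"':
        token = token[1:-1]          # accept "10Mb/s" as well as 10Mb/s
    m = re.fullmatch(r"(\d+)\s*([kmg]b/s)", token, re.IGNORECASE)
    if not m:
        raise ValueError("expected a speed, got: %s" % token)
    return int(m.group(1)) * _UNITS[m.group(2).lower()]
```

With a pre-strip like this, the `expected a speed, got: 10Mb/s` error from the report would not occur for the quoted form.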
(0001086)
joergs   
2014-12-01 16:13   
I added some notes about this to the documentation.
(0005023)
bruno-at-bareos   
2023-05-09 16:41   
documentation updated.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
782 [bareos-core] director minor have not tried 2017-02-09 16:12 2023-03-23 16:20
Reporter: hostedpower Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Disk full not adequately detected
Description: Hi,


Once the disk holding the volumes is full, Bareos keeps creating ghost volumes over and over instead of stopping when writing fails. Is there some setting to prevent this, is this a bug, or is it by design somehow?

For example, the disk was full and it created several volumes that could never actually be written because of the full disk. I have the feeling there should be a better method.


09-Feb 09:20 bareos-sd JobId 809: End of medium on Volume "vol-cons-0768" Bytes=158,430,174,241 Blocks=2,455,825 at 09-Feb-2017 09:20.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0773" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0773" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0773" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0773" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0774" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0774" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0774" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0775" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0775" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0775" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0776" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0776" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0776" in Error in Catalog.
09-Feb 09:20 hostedpower-dir JobId 809: Created new Volume "vol-cons-0777" in catalog.
09-Feb 09:20 bareos-sd JobId 809: End of Volume "vol-cons-0777" at 0:0 on device "vps66143-cons" (/home/vps66143/bareos). Write of 209 bytes got -1.
09-Feb 09:20 bareos-sd JobId 809: Marking Volume "vol-cons-0777" in Error in Catalog.
09-Feb 09:20 bareos-sd JobId 809: Please mount append Volume "vol-cons-0777" or label a new one for:
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1447 [bareos-core] file daemon tweak always 2022-04-06 14:12 2023-03-07 12:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Restore of unencrypted files on an encrypted fd throws an error, but works.
Description: When restoring files from a client that stores its files unencrypted onto a client that normally only runs encrypted backups, the restore works, but an error is thrown.
Tags:
Steps To Reproduce: Sample config:
Client A:
Client {
...
PKI Signatures = Yes
PKI Encryption = Yes
PKI Cipher = aes256
PKI Master Key = ".../master.key"
PKI Keypair = ".../all.keys"
}
Client B:
Client {
...
# without the cryptor config
}

Both clients can back up and restore their own data to the storage. But when a restore of files from client B is done on client A, the files are restored as requested, but for every file an error is logged:
clienta JobId 72: Error: filed/crypto.cc:168 Missing cryptographic signature for /var/tmp/bareos/var/log/journal/e882cedd07af40b386b29cfa9c88466f/user-70255@bdb4fa2d506c45ba8f8163f7e4ee7dac-0000000000b6f8c1-0005d99dd2d23d5a.journal
and the whole job is marked as failed.
Additional Information: Because the restore itself works, I think the job should only be marked as "OK with warnings" and the "Missing cryptographic signature ..." message logged as a warning instead of an error.
System Description
Attached Files:
Notes
(0004902)
bruno-at-bareos   
2023-03-07 12:09   
Thank you for your report. In a bug triage session, we came to the following conclusion for this case.
We understand the case completely and agree it should be better handled by the code.

The workaround is to change your configuration: with the parameter PKI Signatures = Yes you are requesting that the signature of all data is normally checked, so the job gets its failing status. If you need to restore unencrypted data to that client, you should comment out that parameter during the restore.

On our side, nobody will work on that improvement, but feel free to propose a fix in a PR on GitHub.
Thanks

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1512 [bareos-core] installer / packages major always 2023-02-07 04:48 2023-02-07 13:37
Reporter: MarceloRuiz Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Updating Bareos pollutes/invalidates user settings
Description: Updating Bareos recreates all the sample configuration files inside '/etc/bareos/' even if the folder exists and contains a working configuration.
Tags:
Steps To Reproduce: Install Bareos, configure it using custom filenames, update Bareos.
Additional Information: Bareos should not touch an existing '/etc/bareos/' folder. A user may have spent a considerable amount of time configuring the program, and a simple system update will make the whole configuration invalid so that Bareos won't even start.
If there is a need to provide a sample configuration, do it in a separate folder, like '/etc/bareos-sample-config', so it won't break a working configuration. The installer/updater could even delete that folder before the install/update and recreate it to provide an up-to-date example of the configuration for the current version without risking breaking anything.
Attached Files:
Notes
(0004874)
bruno-at-bareos   
2023-02-07 13:37   
What OS are you using ?

We state in the documentation not to remove any files installed by your package manager, as they will be reinstalled if you delete them.
rpm will create rpmnew or rpmold files for you when the originals have been changed.
make install creates .new/.old files if existing files are already there or have been changed.

One of the best ways is to simply comment out the contents or keep the files empty, so no changes will happen on update.
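
To review what an update changed, one can list the leftover rpm config copies (a sketch; demonstrated on a temporary directory standing in for /etc/bareos):

```shell
# Sketch: after a package update, list the *.rpmnew / *.rpmsave files
# rpm left next to changed configs. A temporary directory stands in
# for /etc/bareos here; use the real path on your system.
etc=$(mktemp -d)
touch "$etc/bareos-dir.conf" "$etc/bareos-dir.conf.rpmnew"
find "$etc" -name '*.rpmnew' -o -name '*.rpmsave'
```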

This is how it currently works, as we haven't found another way to make the product as easy as possible for newcomers than proposing a ready-to-use configuration.

On the expert side, you can also simply create your personal /etc/bareos-production structure and create a systemd override so the service uses and points to that location.
(0004875)
bruno-at-bareos   
2023-02-07 13:37   
No changes will occur

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1502 [bareos-core] General trivial have not tried 2022-12-21 13:42 2022-12-21 13:42
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Release Bareos 23.0.0
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 23.0.0
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1498 [bareos-core] webui minor random 2022-12-15 15:00 2022-12-21 11:47
Reporter: alexanderbazhenov Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Failed to send result as json. Maybe the result message is too long?
Description: Got something like this again in version 21.0: https://bugs.bareos.org/view.php?id=719

Failed to retrieve data from Bareos director
Error message received from director:

Failed to send result as json. Maybe the result message is too long?
Tags: director, job, postgresql, ubuntu20.04, webui
Steps To Reproduce: Don't know the steps, but as far as I can tell it happens when more volumes or more output are involved, especially when you run a script on a client, e.g. a gitlab dump:

sudo mkdir /var/opt/gitlab/backups
sudo chown git /var/opt/gitlab/backups
sudo gitlab-rake gitlab:backup:create SKIP=artifacts

If you get this once, you'll not be able to open any job details (the same message appears all the time) until all backup jobs finish.
Additional Information: I don't know if it is a webui bug or just the director, but there is no error in the director logs.

Additional info:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep bareos
ii bareos 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - metapackage
ii bareos-bconsole 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-director 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-storage 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - storage daemon
ii bareos-tools 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - common tools
ii bareos-traymonitor 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - tray monitor
ii bareos-webui 21.0.0-4 all Backup Archiving Recovery Open Sourced - webui

Postgre installed with defaults:

root@bareos:/etc/bareos/bareos-dir.d/catalog# dpkg -l | grep postgresql
ii bareos-database-postgresql 21.0.0-4 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii pgdg-keyring 2018.2 all keyring for apt.postgresql.org
ii postgresql-14 14.1-2.pgdg20.04+1 amd64 The World's Most Advanced Open Source Relational Database
ii postgresql-client-14 14.1-2.pgdg20.04+1 amd64 front-end programs for PostgreSQL 14
ii postgresql-client-common 232.pgdg20.04+1 all manager for multiple PostgreSQL client versions
ii postgresql-common 232.pgdg20.04+1 all PostgreSQL database-cluster manager

root@bareos:/etc/bareos/bareos-dir.d/catalog# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

Any ideas? Or what other info should I provide?
Attached Files: joblog_jobid4230_json.log (282,928 bytes) 2022-12-15 19:15
https://bugs.bareos.org/file_download.php?file_id=539&type=bug
Notes
(0004839)
bruno-at-bareos   
2022-12-15 16:54   
To help debugging, it would be nice to have at least one of the offending joblogs, which can be extracted with bconsole.
Please do so and attach the output here (if < 2MB) or on an accessible share.

Developers may also be interested in the same output as JSON; to get it you can switch the bconsole output to ".api 2":

bconsole <<<"@output /var/tmp/joblog_jobidXXXX_json.log
.api 2
list joblog jobid=XXXX
"

where XXXX is the problematic jobid.
(0004840)
alexanderbazhenov   
2022-12-15 19:15   
Here is one of them.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1450 [bareos-core] documentation tweak always 2022-04-20 10:12 2022-11-10 16:53
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Wrong link to git hub
Description: The GH link in
https://docs.bareos.org/TasksAndConcepts/Plugins.html#python-fd-plugin
points to:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/options-plugin-sample
But correct will be:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/bareos_option_example
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004572)
bruno-at-bareos   
2022-04-20 11:44   
Thanks for your report.
We have a fix in progress for that in https://github.com/bareos/bareos/pull/1165
(0004573)
bruno-at-bareos   
2022-04-21 10:21   
PR1165 merged (master), PR1167 Bareos-21 in progress
(0004576)
bruno-at-bareos   
2022-04-21 15:16   
Fix for bareos-21 (default) documentation has been merged too.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1445 [bareos-core] bconsole minor always 2022-03-31 08:35 2022-11-10 16:52
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.1.3  
    Target Version:  
Summary: Quotes are missing at the director name on export
Description: When calling configure export client="Foo" in the console, the quotes for the director name are missing in the exported file.
instead of:
Director {
  Name = "Bareos Director"
this will exported:
Director {
  Name = Bareos Director

As written in the documentation, quotes must be used when the string contains a space.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004562)
bruno-at-bareos   
2022-03-31 10:06   
Hello, I've just confirmed the missing quotes on export.
But even if spaces are allowed in such resource names, we really advise you to avoid them; they will hurt you in a lot of situations.
Spaces in names, for example, also don't work well with autocompletion in bconsole, etc.

It is safer to treat Name = as an fqdn-like name, using only ASCII alphanumerics and .-_ as special characters.


Regards
(0004577)
bruno-at-bareos   
2022-04-25 16:49   
PR1171 in progress.
(0004590)
bruno-at-bareos   
2022-05-04 17:10   
PR-1171 merged + backport for 21 1173 merged
will appear in next 21.1.3

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1429 [bareos-core] documentation major have not tried 2022-02-14 16:29 2022-11-10 16:52
Reporter: abaguinski Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Mysql to Postgres migration howto doesn't explain how to initialise the postgres database
Description: I'm trying to figure out how to migrate the catalog from mysql to postgres but I think I'm missing something. The howto (https://docs.bareos.org/Appendix/Howtos.html#prepare-the-new-database) suggests: "Firstly, create a new PostgreSQL database as described in Prepare Bareos database" and links to this document: "https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#prepare-bareos-database", which in turn instructs to run a series of commands that would initialize the database (https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#id9):

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
su postgres -c /usr/lib/bareos/scripts/make_bareos_tables
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges

However these commands assume that I mean the currently configured Mysql catalog and fail because Mysql catalog is deprecated:

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
Creating mysql database
The MySQL database backend is deprecated. Please use PostgreSQL instead.
Creating of bareos database failed.

Does that mean I first have to "Add the new PostgreSQL database to the current Bareos Director configuration" (second sentence of the Howto section) and only then go back to the first sentence? Shouldn't the sentences be swapped then (except for "Firstly, ")? And will the create_bareos_database understand which catalog I mean when I configure two catalogs at the same time?

Tags:
Steps To Reproduce: 1. Install bareos 19 with mysql catalog
2. upgrade to bareos 20
3. try to follow the how exactly how it is written
Additional Information:
Attached Files:
Notes
(0004527)
bruno-at-bareos   
2022-02-24 15:56   
I've been able to reproduce the problem, which is due to missing keywords into the documentation. (Passing the dbdriver to scripts)

At the postgresql create database stage, could you retry using these commands:

  su - postgres /usr/lib/bareos/scripts/create_bareos_database postgresql
  su - postgres /usr/lib/bareos/scripts/make_bareos_tables postgresql
  su - postgres /usr/lib/bareos/scripts/grant_bareos_privileges postgresql

After that you should be able to use bareos-dbcopy as documented.
Please confirm that this works for you; I will then propose an update to the documentation.
(0004528)
abaguinski   
2022-02-25 08:51   
Hi

Thanks for your reaction!

In the mean time we were able to migrate to postgres with a slight difference in the order of steps: 1 added the new catalog resource to the director configuration, 2 created and initialized the postgres database using these scripts. Indeed we've found that the 'postgresql' argument was necessary.

Since we have done it already in this order I unfortunately cannot confirm if only adding the argument was enough (i.e. would the scripts with extra argument work without the catalog resource)

Greetings,
Artem
(0004529)
bruno-at-bareos   
2022-02-28 09:29   
Thanks for your feedback.
Yes, the scripts would have worked without the second catalog resource when you give them the dbtype.

I will update the documentation to be more precise in that sense.
(0004530)
bruno-at-bareos   
2022-03-01 15:33   
PR#1093 and PR#1094 in review actually
(0004543)
bruno-at-bareos   
2022-03-21 10:56   
PR1094 for updating documentation has been merged.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1480 [bareos-core] documentation minor always 2022-08-30 12:33 2022-11-10 16:51
Reporter: crameleon Platform: Bareos 21.1.3  
Assigned To: frank OS: SUSE Linux Enterprise Server  
Priority: low OS Version: 15 SP4  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: password string length limitation
Description: Hi,

if I try to log into the web console with the following configuration snippet active:

Console {
  Name = "mygreatusername"
  Password = "SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq"
  Profile = "mygreatwebuiprofile"
  TLS Enable = No
}

The web UI prints the following message:

"Please provide a director, username and password."

If I change the password line to something more simple:

Console {
  Name = "suse-superuser"
  Password = "12345"
  Profile = "webui-superadmin"
  TLS Enable = No
}

Login works as expected.

Since the system does not seem to print any error messages about invalid passwords in its configuration, it would be nice if the allowed characters and lengths (and possibly a sample `pwgen -r <forbidden characters> <length> 1` command) were documented.

Best,
Georg
Tags:
Steps To Reproduce: 1. Configure a web UI user with a complex password such as SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq
2. Copy paste username and password into the browser
3. Try to log in
Additional Information:
Attached Files:
Notes
(0004737)
bruno-at-bareos   
2022-08-31 11:16   
Thanks for your report. The title is a bit misleading, as the problem seems to be present only with the webui.
Having a strong password like the one described works perfectly between dir and bconsole, for example.

We are now checking where the problem really occurs.
(0004738)
bruno-at-bareos   
2022-08-31 11:17   
Long or complicated passwords are truncated during the POST operation of the login form.
Those passwords work well with bconsole, for example.
(0004739)
crameleon   
2022-08-31 11:28   
Apologies, I did not consider it to be specific to the webui. Thanks for looking into this! Maybe the POST truncation could be adjusted in my Apache web server?
(0004740)
bruno-at-bareos   
2022-08-31 11:38   
Further research has shown that the length is what matters: the password for a webui console should be at most 64 characters.
Maybe you can also confirm this on your installation, so that when our devs check this the symptoms will be described more precisely.
(0004741)
crameleon   
2022-09-02 19:00   
Can confirm, with 64 characters it works fine!
(0004742)
crameleon   
2022-09-02 19:02   
And I can also confirm, with one more character, so 65 in total, it returns the "Please provide a director, username and password." message.
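
Given the 64-character limit confirmed above, one way to generate a maximal-length password is to encode 48 random bytes as base64, which yields exactly 64 characters (a sketch; assumes openssl is available):

```shell
# Sketch: 48 random bytes encode to exactly 64 base64 characters,
# matching the webui form limit confirmed in this report.
pw=$(openssl rand -base64 48 | tr -d '\n')
printf '%s\n' "$pw"
echo "length: ${#pw}"
```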
(0004744)
frank   
2022-09-08 15:23   
(Last edited: 2022-09-08 16:33)
The form data input filter for the password input is set to validate a password length between 1 and 64. We can simply remove the max value from the filter to avoid problems like this, or set it to a value corresponding to what is allowed in configuration files.
(0004747)
frank   
2022-09-13 18:11   
Fix committed to bareos master branch with changesetid 16581.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1489 [bareos-core] webui minor always 2022-11-02 06:23 2022-11-09 14:11
Reporter: dimmko Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 22.04  
Status: resolved Product Version: 21.1.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: broken storage pool link
Description: Hello!
Sorry for my very bad English!

I get an error when I go to see the details at
bareos-webui/pool/details/Diff

Tags:
Steps To Reproduce: 1) login in webui
2) click on jobid
3) click on "+"
4) click on pool - Full (for example).
Additional Information: Error:

An error occurred
An error occurred during execution; please try again later.
Additional information:
Exception
File:
/usr/share/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php:94
Message:
Missing argument.
Stack trace:
#0 /usr/share/bareos-webui/module/Pool/src/Pool/Controller/PoolController.php(137): Pool\Model\PoolModel->getPool()
0000001 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Pool\Controller\PoolController->detailsAction()
0000002 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch()
0000003 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
0000004 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
0000005 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger()
0000006 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch()
0000007 [internal function]: Zend\Mvc\DispatchListener->onDispatch()
0000008 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
0000009 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
0000010 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger()
0000011 /usr/share/bareos-webui/public/index.php(46): Zend\Mvc\Application->run()
0000012 {main}
Attached Files: bareos_webui_error.png (63,510 bytes) 2022-11-02 06:23
https://bugs.bareos.org/file_download.php?file_id=538&type=bug
png
Notes
(0004821)
bruno-at-bareos   
2022-11-03 10:18   
What is needed to try to understand the error is your pool configuration; also, maybe you can use your browser console to log the POST and GET answers and headers.
Maybe you can also make the effort to check the php-fpm log (if used) and the apache logs (access and error) when the problem occurs.

Thanks.
(0004824)
dimmko   
2022-11-07 09:01   
(Last edited: 2022-11-07 09:05)
bruno-at-bareos, thanks for your comment.

1) my pool - Diff
Pool {
  Name = Diff
  Pool Type = Backup
  RecyclePool = Diff
  Purge Oldest Volume = yes
  Recycle = no
  Recycle Oldest Volume = no
  AutoPrune = no
  Volume Retention = 21 days
  ActionOnPurge = Truncate
  Maximum Volume Jobs = 1
  Label Format = "${Client}_${Level}_${Pool}.${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}-${Minute:p/2/0/r}_${JobId}"
}

apache2 access.log
[07/Nov/2022:10:40:58 +0300] "GET /pool/details/Diff HTTP/1.1" 500 3225 "http://192.168.5.16/job/?period=1&status=Success" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"

apache error.log
[Mon Nov 07 10:50:09.844798 2022] [php:warn] [pid 1340] [client 192.168.1.13:61800] PHP Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /usr/share/bareos-webui/vendor/zendframework/zend-stdlib/src/ArrayObject.php on line 426, referer: http://192.168.5.16/job/?period=1&status=Success


In Chrome (103):
General:
Request URL: http://192.168.5.16/pool/details/Diff
Request Method: GET
Status Code: 500 Internal Server Error
Remote Address: 192.168.5.16:80
Referrer Policy: strict-origin-when-cross-origin

Response Headers:
HTTP/1.1 500 Internal Server Error
Date: Mon, 07 Nov 2022 07:59:54 GMT
Server: Apache/2.4.52 (Ubuntu)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Length: 2927
Connection: close
Content-Type: text/html; charset=UTF-8

Request Headers:
GET /pool/details/Diff HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7
Cache-Control: no-cache
Connection: keep-alive
Cookie: bareos=o87i7ftkdsf2r160k2j0g5vic2
DNT: 1
Host: 192.168.5.16
Pragma: no-cache
Referer: http://192.168.5.16/job/?period=1&status=Success
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
(0004825)
dimmko   
2022-11-07 09:18   
Enable display_error in php

[Mon Nov 07 11:17:57.573002 2022] [php:error] [pid 1545] [client 192.168.1.13:63174] PHP Fatal error: Uncaught Zend\\Session\\Exception\\InvalidArgumentException: 'session.name' is not a valid sessions-related ini setting. in /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/SessionConfig.php:90\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(266): Zend\\Session\\Config\\SessionConfig->setStorageOption()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(114): Zend\\Session\\Config\\StandardConfig->setName()\n#2 /usr/share/bareos-webui/module/Application/Module.php(154): Zend\\Session\\Config\\StandardConfig->setOptions()\n#3 [internal function]: Application\\Module->Application\\{closure}()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(939): call_user_func()\n#5 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#9 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#10 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#11 [internal function]: Application\\Module->onBootstrap()\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#13 
/usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#14 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#15 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#16 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#17 {main}\n\nNext Zend\\ServiceManager\\Exception\\ServiceNotCreatedException: An exception was raised while creating "Zend\\Session\\SessionManager"; no instance returned in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php:946\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#2 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#3 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#4 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#5 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#6 [internal function]: Application\\Module->onBootstrap()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): 
Zend\\EventManager\\EventManager->trigger()\n#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#11 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#12 {main}\n thrown in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php on line 946, referer: http://192.168.5.16/job/?period=1&status=Success
(0004826)
bruno-at-bareos   
2022-11-07 09:55   
I was able to reproduce it. What is funny is that if you go to storage -> pool tab -> pool name, it will work.
We will forward that to the developers.
(0004827)
bruno-at-bareos   
2022-11-07 09:57   
There's a subtle difference in the URLs called:

via storage -> pool -> poolname the URL is bareos-webui/pool/details/?pool=Full
via jobid -> + details -> pool it is bareos-webui/pool/details/Full
-> which triggers the "Missing argument" error
(0004828)
dimmko   
2022-11-07 10:34   
bruno-at-bareos, your method works, thanks.
(0004830)
frank   
2022-11-08 15:11   
Fix committed to bareos master branch with changesetid 16853.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1357 [bareos-core] director crash have not tried 2021-05-18 10:53 2022-11-09 14:09
Reporter: harm Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-dir: ERROR in lib/mem_pool.cc:215 Failed ASSERT: obuf
Description: Hello folks,

when I try to make a long term copy of an always incremental backup the Bareos director crashes.

Version: 20.0.1 (02 March 2021) Ubuntu 20.04.1 LTS

Please let me know what more information you need.

Best regards
Harm
Tags:
Steps To Reproduce: Follow the instructions of https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html
Additional Information:
Attached Files:
Notes
(0004130)
harm   
2021-05-19 15:15   
The problem seems to occur when a client is selected. I don't think I have fully grasped the concept yet, but shouldn't the error be handled instead of crashing the director?
(0004149)
arogge   
2021-06-09 17:48   
We need a meaningful backtrace to debug this. Please install a debugger and the debug packages (or tell me which system your director runs on so I can provide the commands), then reproduce the issue so we can see what goes wrong.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
854 [bareos-core] director tweak have not tried 2017-09-21 10:21 2022-10-11 09:43
Reporter: hostedpower Platform: Linux  
Assigned To: stephand OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version: 16.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Problem with always incremental virtual full
Description: Hi,


All of a sudden we have issues with the virtual full (for consolidation) no longer working.

We have 2 pools for each customer: one for the full (consolidate) and the other for the incrementals.

We used to have the option to limit a single job to a single volume; we removed that a while ago, maybe there is a relation.

We also had to downgrade from 16.2.6 to 16.2.5 because of the recent MySQL slowness issues, so that's also a possibility.

We have the feeling this software is not very reliable, or at least very complex to get somewhat stable.

PS: The volume it asks for is there, but it doesn't want to use it :(

We used this config for a long time without this particular issue; the only recent changes were adding

        Action On Purge = Truncate
and commenting out:
# Maximum Volume Jobs = 1 # a new file for each backup that is done
Tags:
Steps To Reproduce: Config:



Storage {
    Name = customerx-incr
    Device = customerx-incr # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

Storage {
    Name = customerx-cons
    Device = customerx-cons # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

cat /etc/bareos/pool-defaults.conf
#pool defaults used for always incremental scheme
        Pool Type = Backup
# Maximum Volume Jobs = 1 # a new file for each backup that is done
        Recycle = yes # Bareos can automatically recycle Volumes
        Auto Prune = yes # Prune expired volumes
        Volume Use Duration = 23h
        Action On Purge = Truncate
#end


Pool {
    Name = customerx-incr
    Storage = customerx-incr
    LabelFormat = "vol-incr-"
    Next Pool = customerx-cons
    @/etc/bareos/pool-defaults.conf
}

Pool {
    Name = customerx-cons
    Storage = customerx-cons
    LabelFormat = "vol-cons-"
    @/etc/bareos/pool-defaults.conf
}



# lb1.customerx.com
Job {
    Name = "backup-lb1.customerx.com"
    Client = lb1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb1.customerx.com
    Address = lb1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# lb2.customerx.com
Job {
    Name = "backup-lb2.customerx.com"
    Client = lb2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb2.customerx.com
    Address = lb2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app1.customerx.com
Job {
    Name = "backup-app1.customerx.com"
    Client = app1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app1.customerx.com
    Address = app1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app2.customerx.com
Job {
    Name = "backup-app2.customerx.com"
    Client = app2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app2.customerx.com
    Address = app2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# app3.customerx.com
Job {
    Name = "backup-app3.customerx.com"
    Client = app3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app3.customerx.com
    Address = app3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# db1.customerx.com
Job {
    Name = "backup-db1.customerx.com"
    Client = db1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db1.customerx.com
    Address = db1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# db2.customerx.com
Job {
    Name = "backup-db2.customerx.com"
    Client = db2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db2.customerx.com
    Address = db2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# db3.customerx.com
Job {
    Name = "backup-db3.customerx.com"
    Client = db3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db3.customerx.com
    Address = db3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# Backup for customerx
Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

Device {
  Name = customerx-cons
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}
Additional Information: 2017-09-21 10:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 10:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:57:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:52:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:47:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:42:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:37:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:32:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:27:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:22:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:17:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:12:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.107.bsr
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-incr" to read.
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-cons" to write.
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0378 on "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Start Virtual Backup JobId 2467, Job=backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Consolidating JobIds 2392,971
System Description
Attached Files:
Notes
(0002744)
joergs   
2017-09-21 16:08   
I see some problems in this configuration.

You should check section http://doc.bareos.org/master/html/bareos-manual-main-reference.html#ConcurrentDiskJobs from the manual.

Each device can only read/write one volume at a time. VirtualFull requires multiple volumes.

Basically, you need multiple Devices pointing to the same storage directory, each with "Maximum Concurrent Jobs = 1", to make it work.

Your setting

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

will use interleaving instead, which could cause performance issues on restore.
(0002745)
hostedpower   
2017-09-21 16:14   
So you suggest just making the device:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

This would fix all issues?

Before we had this and I think that worked also: Maximum Volume Jobs = 1 , but it seems discouraged...
(0002746)
joergs   
2017-09-21 16:26   
No, this will not fix the issue, but it prevents interleaving, which might cause other problems.

I suggest, as the documentation describes, that you set up multiple Devices all pointing to the same Archive Device. Then attach them all to one Director Storage resource, like

Storage {
    Name = customerx
    Device = customerx-dev1, customerx-dev2, ... # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}
(0002747)
joergs   
2017-09-21 16:26   
With Maximum Concurrent Jobs = 8 you should use 8 Devices.
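As a sketch of what that could look like (the device names customerx-dev1 ... customerx-dev8 and the shared Archive Device path are illustrative, not a tested configuration):

Device {
  Name = customerx-dev1                    # duplicate this resource for customerx-dev2 ... customerx-dev8
  Media Type = customerx
  Archive Device = /home/customerx/bareos  # all eight devices point to the same directory
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1              # one job per device -> no interleaving
}

Each device then handles one volume at a time, while the Director Storage resource above (Device = customerx-dev1, customerx-dev2, ... with Maximum Concurrent Jobs = 8) can spread up to 8 concurrent jobs across them.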
(0002748)
hostedpower   
2017-09-21 16:37   
Hi Joergs,

Thanks a lot, that documentation makes sense and seems to have improved since I last read it (or I missed it somehow).

Will test it like that, but it looks very promising :)
(0002754)
hostedpower   
2017-09-21 22:57   
I've read it, but it would mean I need to create multiple storage devices in this case? That's a lot of extra definitions just to back up one customer. It would be nice if you could simply declare the Device once and tell Bareos there are 8 instances of it, all in a single definition. That shouldn't be too hard, I suppose?

Something like:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8 --> this automatically creates customerx-incr1 --> customerx-incr8, probably with some extra setting to allow it.
}



For now, would it be a solution to set

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # just use one at the same time
}

Since we have so many clients, it would still run multiple backups at the same time for different clients, I suppose? (Each client has its own media, device and storage.)


PS: We want to keep about 20 days of data; is the following config OK together with the above scenario?


JobDefs {
    Name = "HPJobInc"
    Type = Backup
    Level = Full
    Write Bootstrap = "/var/lib/bareos/%c.bsr"
    Accurate=yes
    Level=Full
    Messages = Standard
    Always Incremental = yes
    Always Incremental Job Retention = 20 days
    # The resulting interval between full consolidations when running daily backups and daily consolidations is Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job.
    Always Incremental Max Full Age = 35 days # should NEVER be less than Always Incremental Job Retention -> every 15 days the full backup is also consolidated (Always Incremental Max Full Age - Always Incremental Job Retention)
    Always Incremental Keep Number = 5 # Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than "Always Incremental Job Retention".
    Maximum Concurrent Jobs = 5 # Let up to 5 jobs run
}
(0002759)
hostedpower   
2017-09-25 09:50   
We used this now:

  Maximum Concurrent Jobs = 1 # just use one at the same time

But some jobs also have the same error:

2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 
2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)


It looks almost as if multiple jobs can't coexist in one volume (well, they can, but then issues like this start to occur).

Before, probably with "Maximum Volume Jobs = 1", we never encountered these problems.
(0002760)
joergs   
2017-09-25 11:17   
Only one access per volume is possible at a time.
So reading and writing to the same volume is not possible.

I thought you had covered this with "Volume Use Duration = 23h". Have you disabled it, or did you run multiple jobs during one day?

However, this is a bug tracker. I think further questions about Always Incrementals are best handled using the bareos-users mailing list or a bareos.com support ticket.
(0002761)
hostedpower   
2017-09-25 11:37   
Yes, I was wondering why we encounter it now and never before.

It wants to swap the consolidation volume over to the incremental device (or vice versa); I don't understand why.
(0002763)
joergs   
2017-09-25 12:15   
Please attach the output of

list joblog jobid=2668
(0002764)
hostedpower   
2017-09-25 12:22   
Enter a period to cancel a command.
*list joblog jobid=2668
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Start Virtual Backup JobId 2668, Job=backup-web.hosted-power.com.2017-09-24_09.00.03_27
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Consolidating JobIds 2593,1173
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.87.bsr
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-incr" to read.
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-cons" to write.
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0344 on "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 09:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 10:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 10:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 11:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 12:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 12:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [the identical warning repeated every 5 minutes through 2017-09-24 15:55:18]
 2017-09-24 16:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-24 16:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [the identical warning repeated every 5 minutes through 2017-09-24 22:10:18]
 2017-09-24 22:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 22:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 23:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 2017-09-25 00:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 00:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 [identical "Need volume from other drive, but swap not possible" warning repeated every 5 minutes from 06:25:19 through 09:15:19]
 2017-09-25 09:18:14 bareos-sd JobId 2668: acquire.c:221 Job 2668 canceled.
 2017-09-25 09:18:14 bareos-sd JobId 2668: Elapsed time=418423:18:14, Transfer rate=0 Bytes/second
 2017-09-25 09:18:14 hostedpower-dir JobId 2668: Bareos hostedpower-dir 16.2.5 (03Mar17):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
  JobId: 2668
  Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
  Backup Level: Virtual Full
  Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
  FileSet: "linux-all-mysql" 2017-08-09 00:15:00
  Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
  Scheduled time: 24-Sep-2017 09:00:03
  Start time: 04-Sep-2017 02:15:02
  End time: 04-Sep-2017 02:42:30
  Elapsed time: 27 mins 28 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 194
  Volume Session Time: 1506027627
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status: Canceled
  Accurate: yes
  Termination: Backup Canceled

You have messages.
*
(0002765)
hostedpower   
2017-09-27 11:57   
Hi,

Now jobs seem to succeed for the moment.

They also always seem to be set to Incremental now, while before they were set to Full after consolidation.

Example of such job

2017-09-27 09:02:07 hostedpower-dir JobId 2892: Joblevel was set to joblevel of first consolidated job: Incremental
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2892
 Job: backup-web.hosted-power.com.2017-09-27_09.00.02_47
 Backup Level: Virtual Full
 Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
 FileSet: "linux-all-mysql" 2017-08-09 00:15:00
 Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 27-Sep-2017 09:00:02
 Start time: 07-Sep-2017 02:15:03
 End time: 07-Sep-2017 02:42:52
 Elapsed time: 27 mins 49 secs
 Priority: 10
 SD Files Written: 2,803
 SD Bytes Written: 8,487,235,164 (8.487 GB)
 Rate: 5085.2 KB/s
 Volume name(s): vol-cons-0010
 Volume Session Id: 121
 Volume Session Time: 1506368550
 Last Volume Bytes: 8,495,713,067 (8.495 GB)
 SD Errors: 0
 SD termination status: OK
 Accurate: yes
 Termination: Backup OK

 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: purged JobIds 2817,1399 as they were consolidated into Job 2892
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Jobs older than 6 months .
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Jobs found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Files.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Files found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: End auto prune.

 
2017-09-27 09:02:03 bareos-sd JobId 2892: Elapsed time=00:01:46, Transfer rate=80.06 M Bytes/second
 
2017-09-27 09:00:32 bareos-sd JobId 2892: End of Volume at file 1 on device "hostedpower-incr" (/home/hostedpower/bareos), Volume "vol-cons-0344"
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Ready to read from volume "vol-incr-0097" on device "hostedpower-incr" (/home/hostedpower/bareos).
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Forward spacing Volume "vol-incr-0097" to file:block 0:4709591.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Recycled volume "vol-cons-0010" on device "hostedpower-cons" (/home/hostedpower/bareos), all previous data lost.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Forward spacing Volume "vol-cons-0344" to file:block 0:215.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Start Virtual Backup JobId 2892, Job=backup-web.hosted-power.com.2017-09-27_09.00.02_47
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Consolidating JobIds 2817,1399
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.48.bsr
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-incr" to read.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-0344" as Used.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: There are no more Jobs associated with Volume "vol-cons-0010". Marking it purged.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: All records pruned from Volume "vol-cons-0010"; marking it "Purged"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Recycled volume "vol-cons-0010"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-cons" to write.
 
2017-09-27 09:00:14 bareos-sd JobId 2892: Ready to read from volume "vol-cons-0344" on device "hostedpower-incr" (/home/hostedpower/bareos).
(0002766)
hostedpower   
2017-09-28 12:39   
Very strange: most jobs work for the moment, it seems (but for how long?).

Some show Full, others Incremental.


hostedpower-dir JobId 2956: Joblevel was set to joblevel of first consolidated job: Full
 
2017-09-28 09:05:03 hostedpower-dir JobId 2956: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2956
 Job: backup-vps53404.2017-09-28_09.00.02_20
 Backup Level: Virtual Full
 Client: "vps53404" 16.2.4 (01Jul16) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
 FileSet: "windows-all" 2017-08-08 22:15:00
 Pool: "vps53404-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "vps53404-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 28-Sep-2017 09:00:02

Before, they always all showed Full.
(0002802)
hostedpower   
2017-10-19 09:43   
Well, I downgraded to 16.2.5 and guess what: the issue was gone for a few weeks.

Now I tried 16.2.7 and the issue is back again ...

2017-10-19 09:40:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:35:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:30:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:25:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)

Coincidence? I'm starting to doubt it.
(0002830)
hostedpower   
2017-12-02 12:19   
Hi,


I'm still on 17.2.7 and have to say it's an intermittent error. It goes fine for days and then all of a sudden one or more jobs suffer from it.

I never had it in the past until a certain version; I'm pretty sure of that.
(0002949)
hostedpower   
2018-03-26 20:22   
The problem was long time gone, but now it's back in full force. Any idea what the cause could be?
(0002991)
stephand   
2018-05-04 10:57   
With larger MySQL databases and Bareos 17.2, for incremental jobs with accurate=yes, it seems to help to add the following index:

CREATE INDEX jobtdate_idx ON Job (JobTDate);
ANALYZE TABLE Job;

Could you please check if that works for you?
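To see whether the new index is actually picked up, one could inspect the query plan of a JobTDate-filtered query; `jobtdate_idx` should then show up as the chosen key. A hedged sketch (assuming MySQL and the standard Job table; the exact queries Bareos issues in accurate mode may differ):

```sql
-- Hypothetical check: a JobTDate range scan should now report
-- jobtdate_idx in the "key" column of the EXPLAIN output.
EXPLAIN
SELECT JobId, JobTDate
FROM   Job
WHERE  JobTDate > UNIX_TIMESTAMP(NOW() - INTERVAL 6 MONTH);
```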
(0002992)
hostedpower   
2018-05-04 11:16   
OK, thanks. We added the index, but it took only 0.5 seconds. Usually that means there wasn't an issue :)

When creating an index is slow, it usually means there is a (serious) performance gain to be had.
(0002994)
stephand   
2018-05-04 17:56   
For sure it depends on the size of the Job table. I've measured it to be 25% faster with this index with 10,000 records in the Job table.

However, looking at logs like
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
that's a different problem; it has nothing to do with this index.
As Joerg already suggested using multiple storage devices, I'd propose increasing their number. This is documented meanwhile at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
(0003047)
stephand   
2018-06-21 07:59   
Were you able to solve the problem by using multiple storage devices?
(0003053)
hostedpower   
2018-06-27 23:56   
While this might work, we use at least 50-60 devices at the moment, so it would be a lot of extra work to add extra storage devices.

Why not create a device of type 'disk volume' that automatically gets the right properties? It would make things SO MUCH EASIER and bring Bareos into 2018 ...

Anything that can be done to get this supported? We would want to sponsor this if needed...
(0003145)
stephand   
2018-10-22 15:54   
Hi,

When you say "a lot of extra work to add extra storage devices", are you talking about the Storage Daemon configuration mentioned at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
which is always

Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

and only the number in Name = FileStorage1 is increased for each Device?

Are you already using configuration management tools like Ansible, Puppet etc? Then it shouldn't be too hard to get done.
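For what it's worth, the per-device duplication can also be stamped out without full config management; a minimal sketch (hypothetical `gen_devices` helper, writing to stdout) that emits N copies of the documented Device block, increasing only the number in the Name:

```shell
#!/bin/sh
# Sketch only: generate N numbered Bareos SD Device blocks that are
# identical except for the Name suffix (FileStorage1, FileStorage2, ...).
gen_devices() {
  n="$1"
  i=1
  while [ "$i" -le "$n" ]; do
    cat <<EOF
Device {
  Name = FileStorage${i}
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}
EOF
    i=$((i + 1))
  done
}

# e.g. redirect into the SD's device config directory
gen_devices 4
```

The output would still have to be placed where the Storage Daemon reads its configuration, and each generated Device referenced from the Director.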

Or what exactly do you mean by "Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring bareos into the 2018 ..."?

Let me guess: let's imagine we had MultiDevice in the SD configuration. Then the example config from http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices could be written like this:

The Bareos Director config:

Director {
  Name = bareos-dir.example.com
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "<secret>"
}
 
Storage {
  Name = File
  Address = bareos-sd.bareos.com
  Password = "<sd-secret>"
  Device = MultiFileStorage
  # number of devices = Maximum Concurrent Jobs
  Maximum Concurrent Jobs = 4
  Media Type = File
}

The Bareos Storagedaemon config:

Storage {
  Name = bareos-sd.example.com
  # any number >= 4
  Maximum Concurrent Jobs = 20
}
 
Director {
  Name = bareos-dir.example.com
  Password = "<sd-secret>"
}
 
MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}
 
Do you mean that?

Or if not, please give an example of how the config should look like to make things easier for you.
(0003146)
hostedpower   
2018-10-22 16:25   
We're indeed looking into Ansible to automate this, but still, something like:

MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}

would be more than fantastic!!

Just a single device supporting concurrent access in an easy fashion!

We could then probably also set "Maximum Concurrent Jobs = 4" pretty safely?


I can imagine that if you're used to Bareos (and tapes), this maybe seems a strange way of working.

However, if you're used to (most) other backup software that supports hard drives by original design, the way it's designed now for disks is just way too complicated :(
(0003768)
hostedpower   
2020-02-10 21:08   
Hi,


I think this feature has been implemented: https://docs.bareos.org/Configuration/StorageDaemon.html#storageresourcemultiplieddevice

Does it require an autochanger to work as discussed in this thread? Or would simply having more devices, thanks to the Count parameter, be sufficient?

I ask since lately we again see a lot of the errors reported here :| :)
(0003774)
arogge   
2020-02-11 09:34   
The autochanger is optional, but the feature won't help you if you don't configure one.
With the autochanger you only need to configure one storage device in your Director. Otherwise you'd need to configure each of the multiplied devices in your Director separately.

This is, of course, not a physically existing autochanger; it is just an autochanger configuration in the storage daemon that groups the different storage devices together.
(0003775)
hostedpower   
2020-02-11 10:00   
ok, but in our case we ha
(0003776)
hostedpower   
2020-02-11 10:02   
OK, but in our case we have more than 100 storages configured with different names.

Do we need multiple autochangers as well, or just one autochanger for all these devices? I'm afraid it's the former, so we'd have to add tons of autochangers as well, right? :)
(0003777)
arogge   
2020-02-11 10:30   
If you have more than 100 storages, I hope you're generating your configuration with something like Puppet or Ansible.
Why exactly do you need such a large number of individual storages?
Usually, if you're using only file-based storage, a single storage (or file autochanger) per RAID array is enough. Everything else usually just overcomplicates things in the end.
(0003798)
hostedpower   
2020-02-12 10:04   
Well, it's because we allocate storage space for each customer, so each customer pays for his own storage. Putting everything into one large storage would no longer show us who is using what exactly.

Is there a better way to "allocate" storage for individual customers while at the same time using one large storage as you suggest?

PS: Yes, we generate the config, but updating it now to include an autochanger would still be quite some work, since we generate this config only once.

Just adding a device count is easy since we use an include file. So adding an autochanger now isn't really what we hoped for :)
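As an aside, per-customer usage could in principle still be derived from the catalog even with one shared storage. A hedged sketch (assuming the standard Bareos catalog schema, with bytes attributed per client rather than per storage directory):

```sql
-- Hypothetical accounting query against the Bareos catalog:
-- total bytes written per client, largest consumers first.
SELECT c.Name          AS client,
       SUM(j.JobBytes) AS bytes_stored
FROM   Job    j
JOIN   Client c ON c.ClientId = j.ClientId
WHERE  j.JobStatus = 'T'          -- jobs that terminated normally
GROUP  BY c.Name
ORDER  BY bytes_stored DESC;
```

This counts logical job bytes, not on-disk volume usage, so it is an approximation for billing purposes.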
(0004060)
hostedpower   
2020-12-01 12:43   
Hi,

We still have this: Need volume from other drive, but swap not possible

The strange thing is that it works 99% of the time, but then we have periods where we see this error a lot. I don't understand why it mostly works so well, only to work so badly at other times.

It's one of the primary reasons we're now looking at other backup solutions. Next to that, we have many storage servers, and Bareos currently has no way to let "x number of tasks" run on a per-storage-server basis.
(0004217)
hostedpower   
2021-08-25 11:27   
(Last edited: 2021-08-26 23:41)
OK, we finally re-architected our whole backup infrastructure, only to find this problem/bug hitting us hard again.

We use the latest Bareos 20.2 version.

We now use one large folder for all backups with 10 concurrent consolidates (max). We use PostgreSQL as our database engine (so it cannot be because of MySQL). We tried to follow all best practices; I don't understand what is wrong with it :(


Storage {
        Name = AI-Incremental
        Device = AI-Incremental-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Storage {
        Name = AI-Consolidated
        Device = AI-Consolidated-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Device {
  Name = AI-Incremental
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Incremental-Autochanger"
  Device = AI-Incremental

  Changer Device = /dev/null
  Changer Command = ""
}


Device {
  Name = AI-Consolidated
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Consolidated-Autochanger"
  Device = AI-Consolidated

  Changer Device = /dev/null
  Changer Command = ""
}


I suppose the error must be easy to spot? Otherwise everyone would have this problem :(

(0004218)
hostedpower   
2021-08-25 11:32   
3838 machine.example.com-files machine.example.com Backup VirtualFull 0 0.00 B 0 Running
52 2021-08-25 11:25:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 51-48: identical warning repeated every 5 minutes, 11:20:25 down to 11:05:25]
47 2021-08-25 11:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
46 2021-08-25 10:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 45-26: identical warning repeated every 5 minutes, 10:50:25 down to 09:15:25]
25 2021-08-25 09:10:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
24 2021-08-25 09:05:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
23 2021-08-25 09:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
22 2021-08-25 08:55:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
21 2021-08-25 08:50:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
20 2021-08-25 08:45:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
19 2021-08-25 08:40:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
18 2021-08-25 08:35:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
17 2021-08-25 08:30:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
16 2021-08-25 08:25:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
15 2021-08-25 08:20:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
14 2021-08-25 08:15:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
13 2021-08-25 08:10:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
12 2021-08-25 08:05:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
11 2021-08-25 08:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
10 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0007" (/var/lib/bareos/storage)
9 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
8 2021-08-25 08:00:24 backup1-sd JobId 3838: Ready to append to end of Volume "vol-cons-0287" size=12609080131
7 2021-08-25 08:00:24 backup1-sd JobId 3838: Volume "vol-cons-0287" previously written, moving to end of data.
6 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Consolidated0007" to write.
5 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Incremental0007" to read.
4 2021-08-25 08:00:23 backup1-dir JobId 3838: Connected Storage daemon at backupx.xxxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-25 08:00:23 backup1-dir JobId 3838: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.379.bsr
(0004219)
hostedpower   
2021-08-25 11:48   
I see now that it tries to mount consolidated volumes on the incremental devices; you can see it in the sample above, but also below:
25-Aug 08:02 backup1-dir JobId 3860: Start Virtual Backup JobId 3860, Job=machine.example.com-files.2021-08-25_08.00.31_02
25-Aug 08:02 backup1-dir JobId 3860: Consolidating JobIds 3563,724
25-Aug 08:02 backup1-dir JobId 3860: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.394.bsr
25-Aug 08:02 backup1-dir JobId 3860: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Incremental0005" to read.
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Consolidated0005" to write.
25-Aug 08:02 backup1-sd JobId 3860: Volume "vol-cons-0292" previously written, moving to end of data.
25-Aug 08:02 backup1-sd JobId 3860: Ready to append to end of Volume "vol-cons-0292" size=26118365623
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Please mount read Volume "vol-cons-0287" for:
    Job: machine.example.com-files.2021-08-25_08.00.31_02
    Storage: "AI-Incremental0005" (/var/lib/bareos/storage)
    Pool: AI-Incremental
    Media type: AI

Might this be the cause? What could be causing this?
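One likely remedy (a sketch only, not a confirmed fix for this report) is to give the incremental and consolidated pools fully separate devices with distinct media types, so the storage daemon never tries to reserve a vol-cons-* volume on an AI-Incremental device. All names and paths below are illustrative, not taken from the posted configuration:

```
# Hypothetical layout: one device and one media type per pool, so a
# consolidated volume is not requested on an incremental device.
Device {
  Name = AI-Incremental-Dev
  Media Type = AI-Incr                  # distinct from the consolidated pool
  Archive Device = /var/lib/bareos/storage/incr
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

Device {
  Name = AI-Consolidated-Dev
  Media Type = AI-Cons                  # distinct media type per pool
  Archive Device = /var/lib/bareos/storage/cons
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```

Since the director only mounts a volume on a device whose Media Type matches, this should steer reads of consolidated volumes to consolidated devices and avoid the "swap not possible" path.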
(0004220)
hostedpower   
2021-08-25 11:51   
This is the first job today with these messages, but it succeeded anyway; maybe you can see what is going wrong here?

2021-08-24 15:53:54 backup1-dir JobId 3549: console command: run AfterJob ".bvfs_update JobId=3549"
30 2021-08-24 15:53:54 backup1-dir JobId 3549: End auto prune.

29 2021-08-24 15:53:54 backup1-dir JobId 3549: No Files found to prune.
28 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Files.
27 2021-08-24 15:53:54 backup1-dir JobId 3549: No Jobs found to prune.
26 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Jobs older than 6 months .
25 2021-08-24 15:53:54 backup1-dir JobId 3549: purged JobIds 3237,648 as they were consolidated into Job 3549
24 2021-08-24 15:53:54 backup1-dir JobId 3549: Bareos backup1-dir 20.0.2 (10Jun21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 3549
Job: another.xxx-files.2021-08-24_15.48.51_46
Backup Level: Virtual Full
Client: "another.xxx" 20.0.2 (10Jun21) Debian GNU/Linux 10 (buster),debian
FileSet: "linux-files" 2021-07-20 16:03:24
Pool: "AI-Consolidated" (From Job Pool's NextPool resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "AI-Consolidated" (From Storage from Pool's NextPool resource)
Scheduled time: 24-Aug-2021 15:48:51
Start time: 03-Aug-2021 23:08:50
End time: 03-Aug-2021 23:09:30
Elapsed time: 40 secs
Priority: 10
SD Files Written: 653
SD Bytes Written: 55,510,558 (55.51 MB)
Rate: 1387.8 KB/s
Volume name(s): vol-cons-0288
Volume Session Id: 2056
Volume Session Time: 1628888564
Last Volume Bytes: 55,596,662 (55.59 MB)
SD Errors: 0
SD termination status: OK
Accurate: yes
Bareos binary info: official Bareos subscription
Job triggered by: User
Termination: Backup OK

23 2021-08-24 15:53:54 backup1-dir JobId 3549: Joblevel was set to joblevel of first consolidated job: Incremental
22 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table done
21 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table with 653 entries start
20 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Incremental0008" (/var/lib/bareos/storage).
19 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Consolidated0008" (/var/lib/bareos/storage).
18 2021-08-24 15:53:54 backup1-sd JobId 3549: Elapsed time=00:00:01, Transfer rate=55.51 M Bytes/second
17 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-incr-0135" to file:block 0:2909195921.
16 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-incr-0135" on device "AI-Incremental0008" (/var/lib/bareos/storage).
15 2021-08-24 15:53:54 backup1-sd JobId 3549: End of Volume at file 0 on device "AI-Incremental0008" (/var/lib/bareos/storage), Volume "vol-cons-0284"
14 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-cons-0284" to file:block 0:307710024.
13 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-cons-0284" on device "AI-Incremental0008" (/var/lib/bareos/storage).
12 2021-08-24 15:48:54 backup1-sd JobId 3549: Please mount read Volume "vol-cons-0284" for:
Job: another.xxx-files.2021-08-24_15.48.51_46
Storage: "AI-Incremental0008" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
11 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0284 on "AI-Incremental0008" (/var/lib/bareos/storage)
10 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-cons-0284 from dev="AI-Incremental0005" (/var/lib/bareos/storage) to "AI-Incremental0008" (/var/lib/bareos/storage)
9 2021-08-24 15:48:54 backup1-sd JobId 3549: Wrote label to prelabeled Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage)
8 2021-08-24 15:48:54 backup1-sd JobId 3549: Labeled new Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage).
7 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Consolidated0008" to write.
6 2021-08-24 15:48:53 backup1-dir JobId 3549: Created new Volume "vol-cons-0288" in catalog.
5 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Incremental0008" to read.
4 2021-08-24 15:48:52 backup1-dir JobId 3549: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-24 15:48:52 backup1-dir JobId 3549: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.331.bsr
2 2021-08-24 15:48:52 backup1-dir JobId 3549: Consolidating JobIds 3237,648
1 2021-08-24 15:48:52 backup1-dir JobId 3549: Start Virtual Backup JobId 3549, Job=another.xxx-files.2021-08-24_15.48.51_46
(0004221)
hostedpower   
2021-08-25 11:54   
I just saw this "swap not possible" error also happen when the same device/storage/pool was used:

5 2021-08-24 15:54:03 backup1-sd JobId 3553: Ready to read from volume "vol-incr-0136" on device "AI-Incremental0002" (/var/lib/bareos/storage).
14 2021-08-24 15:49:03 backup1-sd JobId 3553: Please mount read Volume "vol-incr-0136" for:
Job: xxx.xxx.bxxe-files.2021-08-24_15.48.52_50
Storage: "AI-Incremental0002" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
13 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-incr-0136 on "AI-Incremental0002" (/var/lib/bareos/storage)
12 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-incr-0136 from dev="AI-Incremental0006" (/var/lib/bareos/storage) to "AI-Incremental0002" (/var/lib/bareos/storage)
11 2021-08-24 15:49:03 backup1-sd JobId 3553: End of Volume at file 0 on device "AI-Incremental0002" (/var/lib/bareos/storage), Volume "vol-cons-0285"
(0004222)
hostedpower   
2021-08-25 12:20   
PS: The Consolidate job was missing from the config posted above:

Job {
        Name = "Consolidate"
        Type = "Consolidate"
        Accurate = "yes"

        JobDefs = "DefaultJob"

        Schedule = "ConsolidateCycle"
        Max Full Consolidations = 200

        Maximum Concurrent Jobs = 1
        Prune Volumes = yes

        Priority = 11
}
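For reference, the job can be triggered manually from bconsole while watching which device holds which volume (the storage name "File" below is a placeholder; use the name of the local Storage resource):

```
# bconsole sketch; Storage name is a placeholder.
*run job=Consolidate yes
*status storage=File          # shows per-device volume reservations
*messages                     # flushes queued job messages
```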
(0004227)
hostedpower   
2021-08-26 13:11   
2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0282 on "AI-Incremental0004" (/var/lib/bareos/storage) <-------
10 2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0282 from dev="AI-Consolidated0010" (/var/lib/bareos/storage) to "AI-Incremental0004" (/var/lib/bareos/storage)
9 2021-08-26 11:34:12 backup1-sd JobId 4151: Wrote label to prelabeled Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage)
8 2021-08-26 11:34:12 backup1-sd JobId 4151: Labeled new Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage).
7 2021-08-26 11:34:12 backup1-dir JobId 4151: Using Device "AI-Incremental0001" to write.
6 2021-08-26 11:34:12 backup1-dir JobId 4151: Created new Volume "vol-cons-0298" in catalog.


None of the jobs even continue after this "event"...
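When everything blocks on a "Please mount read Volume" prompt like this, one way to unstick the queue (a hedged sketch; JobId and storage name are placeholders) is to cancel the blocked job and release the device so the stale reservation is dropped:

```
# bconsole sketch; values are placeholders.
*cancel jobid=4151
*unmount storage=File
*mount storage=File
*status storage=File          # verify the reservation is gone
```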
(0004494)
hostedpower   
2022-01-31 08:33   
This still happens, even after using separate devices and labels, etc.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1474 [bareos-core] storage daemon crash always 2022-07-27 16:12 2022-10-04 10:28
Reporter: jens Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.12  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos-sd crashing on VirtualFull with SIGSEGV ../src/lib/serial.cc file not found
Description: When running the 'always incremental' backup scheme, the storage daemon crashes with a segmentation fault
on the VirtualFull backup triggered by consolidation.

Job error:
bareos-dir JobId 1267: Fatal error: Director's comm line to SD dropped.

GDB debug:
bareos-sd (200): stored/mac.cc:159-154 joblevel from SOS_LABEL is now F
bareos-sd (130): stored/label.cc:672-154 session_label record=ec015288
bareos-sd (150): stored/label.cc:718-154 Write sesson_label record JobId=154 FI=SOS_LABEL SessId=1 Strm=154 len=165 remainder=0
bareos-sd (150): stored/label.cc:722-154 Leave WriteSessionLabel Block=1351364161d File=0d
bareos-sd (200): stored/mac.cc:221-154 before write JobId=154 FI=1 SessId=1 Strm=UNIX-Attributes-EX len=123
Thread 4 "bareos-sd" received signal SIGSEGV, Segmentation fault.

[Switching to Thread 0x7ffff4c5b700 (LWP 2271)]
serial_uint32 (ptr=ptr@entry=0x7ffff4c5aa70, v=<optimized out>) at ../../../src/lib/serial.cc:76
76 ../../../src/lib/serial.cc: No such file or directory.


I am running daily incrementals into the 'File' pool, consolidating every 4 days into the 'FileCons' pool, a virtual full every 1st Monday of a month into the 'LongTerm-Disk' pool,
and finally a migration to tape every 2nd Monday of a month from the 'LongTerm-Disk' pool into the 'LongTerm-Tape' pool.

BareOS version: 19.2.7
BareOS director and storage daemon on the same machine.
Disk storage on CEPH mount
Tape storage on Fujitsu Eternus LT2 tape library with 1 LTO-7 drive

---------------------------------------------------------------------------------------------------
Storage Device config:

FileStorage with 10 devices, all into same 1st folder:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /storage/backup/bareos_Incremental # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

FileStorageCons with 10 devices, all into same 2nd folder

Device {
  Name = FileStorageCons
  Media Type = FileCons
  Archive Device = /storage/backup/bareos_Consolidate # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
...

FileStorageVault with 10 devices, all into same 3rd folder

Device {
  Name = FileStorageVault
  Media Type = FileVLT
  Archive Device = /storage/backup/bareos_LongTermDisk # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

Tape Device:

Device {
  Name = IBM-ULTRIUM-HH7
  Device Type = Tape
  DriveIndex = 0
  ArchiveDevice = /dev/nst0
  Media Type = IBM-LTO-7
  AutoChanger = yes
  AutomaticMount = yes
  LabelMedia = yes
  RemovableMedia = yes
  Autoselect = yes
  MaximumFileSize = 10GB
  Spool Directory = /storage/scratch
  Maximum Spool Size = 2199023255552 # maximum total spool size in bytes (2Tbyte)
}

---------------------------------------------------------------------------------------------------
Pool Config:

Pool {
  Name = AI-Incremental # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 72 days
  Storage = File # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 500 # maximum allowed total number of volumes in pool
  Label Format = "AI-Incremental_" # volumes will be labeled "AI-Incremental_<volume-id>"
  Volume Use Duration = 36 days # volume will be used no longer than this
  Next Pool = AI-Consolidate # next pool for consolidation
  Job Retention = 72 days
  File Retention = 36 days
}

Pool {
  Name = AI-Consolidate # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 366 days
  Job Retention = 180 days
  File Retention = 93 days
  Storage = FileCons # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "AI-Consolidate_" # volumes will be labeled "AI-Consolidate_<volume-id>"
  Volume Use Duration = 2 days # volume will be used no longer than this
  Next Pool = LongTerm-Disk # next pool for long term backups to disk
}

Pool {
  Name = LongTerm-Disk # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # BAReOS can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 732 days
  Job Retention = 732 days
  File Retention = 366 days
  Storage = FileVLT # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "LongTerm-Disk_" # volumes will be labeled "LongTerm-Disk_<volume-id>"
  Volume Use Duration = 2 days # volume will be used no longer than this
  Next Pool = LongTerm-Tape # next pool for long term backups to disk
  Migration Time = 2 days # Jobs older than 2 days in this pool will be migrated to 'Next Pool'
}

Pool {
  Name = LongTerm-Tape
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 732 days # How long should the Backups be kept? (0000012)
  Job Retention = 732 days
  File Retention = 366 days
  Storage = TapeLibrary # Physical Media
  Maximum Block Size = 1048576
  Recycle Pool = Scratch
  Cleaning Prefix = "CLN"
}

---------------------------------------------------------------------------------------------------
JobDefs:

JobDefs {
  Name = AI-Incremental
  Type = Backup
  Level = Incremental
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Accurate = yes
  Allow Mixed Priority = yes
  Always Incremental = yes
  Always Incremental Job Retention = 36 days
  Always Incremental Keep Number = 14
  Always Incremental Max Full Age = 31 days
}

JobDefs {
  Name = AI-Consolidate
  Type = Consolidate
  Storage = File
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 25
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Max Full Consolidations = 1
  Prune Volumes = yes
  Accurate = yes
}

JobDefs {
  Name = LongTermDisk
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 30
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Accurate = yes
  Run Script {
    console = "update jobid=%1 jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
}

JobDefs {
  Name = "LongTermTape"
  Pool = LongTerm-Disk
  Messages = Standard
  Type = Migrate
}


---------------------------------------------------------------------------------------------------
Job Config ( per client )

Job {
  Name = "Incr-<client>"
  Description = "<client> always incremental 36d retention"
  Client = <client>
  Jobdefs = AI-Incremental
  FileSet="fileset-<client>"
  Schedule = "daily_incremental_<client>"
  # Write Bootstrap file for disaster recovery.
  Write Bootstrap = "/storage/bootstrap/%j.bsr"
  # The higher the number the lower the job priority
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "AI-Consolidate"
  Description = "consolidation of 'always incremental' jobs"
  Client = backup.mgmt.drs
  FileSet = SelfTest
  Jobdefs = AI-Consolidate
  Schedule = consolidate

  # The higher the number the lower the job priority
  Priority = 25
}

Job {
  Name = "VFull-<client>"
  Description = "<client> monthly virtual full"
  Messages = Standard
  Client = <client>
  Type = Backup
  Level = VirtualFull
  Jobdefs = LongTermDisk
  FileSet=fileset-<client>
  Pool = AI-Consolidate
  Schedule = virtual-full_<client>
  Priority = 30
  Run Script {
    Console = ".bvfs_update"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "migrate-2-tape"
  Description = "monthly migration of virtual full backups from LongTerm-Disk to LongTerm-Tape pool"
  Jobdefs = LongTermTape
  Selection Type = PoolTime
  Schedule = "migrate-2-tape"
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

---------------------------------------------------------------------------------------------------
Schedule config:

Schedule {
  Name = "daily_incremental_<client>"
  Run = daily at 02:00
}

Schedule {
  Name = "consolidate"
  Run = Incremental 3/4 at 00:00
}

Schedule {
  Name = "virtual-full_<client>"
  Run = 1st monday at 10:00
}

Schedule {
  Name = "migrate-2-tape"
  Run = 2nd monday at 8:00
}

---------------------------------------------------------------------------------------------------
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos_sd_debug.zip (3,771 bytes) 2022-07-27 16:59
https://bugs.bareos.org/file_download.php?file_id=530&type=bug
Notes
(0004688)
bruno-at-bareos   
2022-07-27 16:43   
Could you check the SD's working dir (/var/lib/bareos) for any other trace, backtrace, and debug files?
If you have them, please attach them (possibly compressed).
(0004690)
jens   
2022-07-27 17:00   
debug files attached in private note
(0004697)
bruno-at-bareos   
2022-07-28 09:34   
What is the reason behind running 19.2 instead of upgrading to 21?
(0004699)
jens   
2022-07-28 13:06   
1. missing a comprehensive, easy-to-follow step-by-step guide on how to upgrade
2. lack of confidence in a flawless upgrade procedure that does not render backup data unusable
3. lack of experience and skilled personnel, resulting in major effort to roll out a new version
4. limited access to online repositories to update local mirrors -> very long lead time to get new versions
(0004700)
jens   
2022-07-28 13:09   
(Last edited: 2022-07-28 13:12)
For the above reasons I am a little hesitant to take on the effort of upgrading.
Currently I would consider an update only if it is the only way to get the issue resolved,
and I need confirmation from your end first.
My hope is that there is simply something wrong in my configuration, or that I am running an adverse setup, and that changing either might resolve the issue.
(0004701)
bruno-at-bareos   
2022-08-01 11:59   
Hi Jens,

Thanks for the additional details.

Does this crash happen each time a consolidation VirtualFull is created?
(0004702)
bruno-at-bareos   
2022-08-01 12:04   
Maybe related to this fix in 19.2.9 (available with subscription):
 - fix a memory corruption when autolabeling with increased maximum block size
https://docs.bareos.org/bareos-19.2/Appendix/ReleaseNotes.html#id12
(0004703)
jens   
2022-08-01 12:05   
Hi Bruno,

so far, yes, that is my experience: it always fails,
even when repeating or manually rescheduling the failed job through the Web UI
during idle hours when nothing else is running on the director.
(0004704)
jens   
2022-08-01 12:14   
The "fix a memory corruption when autolabeling with increased maximum block size" could indeed be a lead,
as I see the following in the job logs:

Warning: For Volume "AI-Consolidate_0118": The sizes do not match!
Volume=64574484 Catalog=32964717
Correcting Catalog
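As the warning says, the catalog's recorded volume size no longer matched the volume file on disk, and Bareos corrected the catalog side automatically. The corrected record can be compared with the file itself (volume name taken from the log above; the on-disk path is an assumption based on the posted device config):

```
# In bconsole: show the media record, including VolBytes.
*llist volume=AI-Consolidate_0118
# On the SD host: compare VolBytes with the actual file size, e.g.
#   ls -l /storage/backup/bareos_Consolidate/AI-Consolidate_0118
```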
(0004705)
bruno-at-bareos   
2022-08-02 13:42   
Hi Jens, a quick note about the "sizes do not match" warning: it is unrelated. Aborted or failed jobs can have this effect.

This fix was introduced with commit https://github.com/bareos/bareos/commit/0086b852d, and 19.2.9 contains it.
(0004800)
bruno-at-bareos   
2022-10-04 10:27   
Closing, as a fix already exists.
(0004801)
bruno-at-bareos   
2022-10-04 10:28   
Fix is present in source code and published subscription binaries.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1476 [bareos-core] file daemon major always 2022-08-03 16:01 2022-08-23 12:08
Reporter: support@ingenium.trading Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Backups for Full and Incremental are approx 10 times bigger than the server used
Description: Whenever a backup job runs, it takes very long (without errors), and the backup size is about 10 times bigger than the data the server actually uses.

Earlier we had connectivity issues, so I enabled the heartbeat in client/myself.conf => Heartbeat Interval = 1 min
Tags:
Steps To Reproduce: Manually start the job via webui or bconsole.
Additional Information: Backup Server:
OS: Fedora 35
Bareos Version: 22.0.0~pre613.d7109f123

Client Server:
OS: Alma Linux 9 / CentOS7
Bareos Version: 22.0.0~pre613.d7109f123

Backup job:
03-Aug 09:48 bareos-dir JobId 565: No prior Full backup Job record found.
03-Aug 09:48 bareos-dir JobId 565: No prior or suitable Full backup found in catalog. Doing FULL backup.
03-Aug 09:48 bareos-dir JobId 565: Start Backup JobId 565, Job=td02.example.com.2022-08-03_09.48.28_03
03-Aug 09:48 bareos-dir JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Max configured use duration=82,800 sec. exceeded. Marking Volume "AI-Example-Consolidated-0490" as Used.
03-Aug 09:48 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0584" in catalog.
03-Aug 09:48 bareos-dir JobId 565: Using Device "FileStorage01" to write.
03-Aug 09:48 bareos-dir JobId 565: Probing client protocol... (result will be saved until config reload)
03-Aug 09:48 bareos-dir JobId 565: Connected Client: td02.example.com at td02.example.com:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Handshake: Immediate TLS
03-Aug 09:48 bareos-dir JobId 565: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Extended attribute support is enabled
03-Aug 09:48 trade02-fd JobId 565: ACL support is enabled
03-Aug 09:48 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 09:48 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 09:48 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /dev
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /run
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /sys
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
03-Aug 10:27 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0584" Bytes=107,374,159,911 Blocks=1,664,406 at 03-Aug-2022 10:27.
03-Aug 10:27 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0585" in catalog.
03-Aug 10:27 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 10:27 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0585" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 10:27.
03-Aug 11:07 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0585" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:07.
03-Aug 11:07 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0586" in catalog.
03-Aug 11:07 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:07 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0586" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:07.
03-Aug 11:46 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0586" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:46.
03-Aug 11:46 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0587" in catalog.
03-Aug 11:46 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:46 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0587" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:46.
03-Aug 12:25 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0587" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 12:25.
03-Aug 12:25 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0588" in catalog.
03-Aug 12:25 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 12:25 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0588" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 12:25.
03-Aug 12:56 bareos-sd JobId 565: Releasing device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:56 bareos-sd JobId 565: Elapsed time=03:08:04, Transfer rate=45.57 M Bytes/second
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table with 188627 entries start
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table done
03-Aug 12:56 bareos-dir JobId 565: Bareos bareos-dir 22.0.0~pre613.d7109f123 (01Aug22):
  Build OS: Fedora release 35 (Thirty Five)
  JobId: 565
  Job: td02.example.com.2022-08-03_09.48.28_03
  Backup Level: Full (upgraded from Incremental)
  Client: "td02.example.com" 22.0.0~pre553.6a41db3f7 (07Jul22) CentOS Stream release 9,redhat
  FileSet: "ExampleLinux" 2022-08-03 09:48:28
  Pool: "AI-Example-Consolidated" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Pool resource)
  Scheduled time: 03-Aug-2022 09:48:27
  Start time: 03-Aug-2022 09:48:31
  End time: 03-Aug-2022 12:56:50
  Elapsed time: 3 hours 8 mins 19 secs
  Priority: 10
  FD Files Written: 188,627
  SD Files Written: 188,627
  FD Bytes Written: 514,227,307,623 (514.2 GB)
  SD Bytes Written: 514,258,382,470 (514.2 GB)
  Rate: 45510.9 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: yes
  Volume name(s): AI-Example-Consolidated-0584|AI-Example-Consolidated-0585|AI-Example-Consolidated-0586|AI-Example-Consolidated-0587|AI-Example-Consolidated-0588
  Volume Session Id: 4
  Volume Session Time: 1659428963
  Last Volume Bytes: 85,150,808,401 (85.15 GB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: pre-release version: Get official binaries and vendor support on bareos.com
  Job triggered by: User
  Termination: Backup OK

03-Aug 12:56 bareos-dir JobId 565: shell command: run AfterJob "echo '.bvfs_update jobid=565' | bconsole"
03-Aug 12:56 bareos-dir JobId 565: AfterJob: .bvfs_update jobid=565 | bconsole

Client Alma Linux 9:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 196K 47G 1% /dev/shm
tmpfs 19G 2.3M 19G 1% /run
/dev/mapper/almalinux-root 12G 3.5G 8.6G 29% /
/dev/sda1 2.0G 237M 1.6G 13% /boot
/dev/mapper/almalinux-opt 8.0G 90M 7.9G 2% /opt
/dev/mapper/almalinux-home 12G 543M 12G 5% /home
/dev/mapper/almalinux-var 8.0G 309M 7.7G 4% /var
/dev/mapper/almalinux-opt_ExampleAd 8.0G 373M 7.7G 5% /opt/ExampleAd
/dev/mapper/almalinux-opt_ExampleEn 32G 7.5G 25G 24% /opt/ExampleEn
/dev/mapper/almalinux-var_log 20G 8.1G 12G 41% /var/log
/dev/mapper/almalinux-var_lib 12G 259M 12G 3% /var/lib
tmpfs 9.3G 0 9.3G 0% /run/user/1703000011
tmpfs 9.3G 0 9.3G 0% /run/user/1703000004



Server JobDefs:
JobDefs {
  Name = "ExampleLinux"
  Type = Backup
  Client = bareos-fd
  FileSet = "ExampleLinux"
  Storage = File
  Messages = Standard
  Schedule = "BasicBackup"
  Pool = AI-Example-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Full Backup Pool = AI-Example-Consolidated # write Full Backups into "Full" Pool (0000005)
  Incremental Backup Pool = AI-Example-Incremental # write Incr Backups into "Incremental" Pool (0000011)
}
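As a sketch only (the reporter's actual "ExampleLinux" FileSet was never posted, so directive values here are illustrative assumptions): the "Disallowed filesystem. Will not descend" lines in the job log are produced by FileSet Options roughly like these:

```
FileSet {
  Name = "ExampleLinux"
  Include {
    Options {
      Signature = MD5
      One FS = no          # descend into other mounted filesystems ...
      FS Type = ext4, xfs  # ... but only of these types; anything else is
                           # logged as "Disallowed filesystem"
    }
    File = /
  }
}
```

With One FS = no and no FS Type restriction, every filesystem mounted below / would be backed up, which is one common cause of backups far larger than the root disk.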
System Description
Attached Files:
Notes
(0004710)
bruno-at-bareos   
2022-08-03 18:15   
Running show FileSet="ExampleLinux" in bconsole will help us better understand what you've tried to do.

In bconsole:
estimate job=td02.example.com listing
will show you all the files included.
(0004731)
bruno-at-bareos   
2022-08-23 12:08   
No information given to go further

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1471 [bareos-core] installer / packages tweak N/A 2022-07-13 11:39 2022-07-28 09:27
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Shell example script for Bareos installation on Debian / Ubuntu uses deprecated "apt-key"
Description: Using the script raises the warning: "apt-key is deprecated". In order to correct this, it is suggested to change
---
# add package key
wget -q $URL/Release.key -O- | apt-key add -
---
to
+++
# add package key
wget -q $URL/Release.key -O- | gpg --dearmor -o /usr/share/keyrings/bareos.gpg
sed -i -e 's#deb #deb [signed-by=/usr/share/keyrings/bareos.gpg] #' /etc/apt/sources.list.d/bareos.list
+++
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004665)
bruno-at-bareos   
2022-07-14 10:09   
Would this be valid for any version of Debian/Ubuntu in use (Debian 9, Ubuntu 18.04)?
(0004666)
bruno-at-bareos   
2022-07-14 10:44   
We appreciate any effort made to make our software better.
This would be a nice improvement.
Testing on old systems seems OK; we are checking how much effort it takes to change the code and handle the update/upgrade process on user installations, plus the documentation changes.
(0004667)
bruno-at-bareos   
2022-07-14 11:01   
Adding a public reference on why apt-key should be changed and how:
https://askubuntu.com/questions/1286545/what-commands-exactly-should-replace-the-deprecated-apt-key/1307181#1307181

Maybe changing to Deb822 .sources files is the way to go.
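A Deb822-style replacement could look roughly like this (a sketch only; the repository URL is an illustrative example for Bareos 21 on Debian 11, and the keyring path matches the suggestion above):

```
# /etc/apt/sources.list.d/bareos.sources
Types: deb
URIs: https://download.bareos.org/bareos/release/21/Debian_11
Suites: /
Signed-By: /usr/share/keyrings/bareos.gpg
```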
(0004669)
amodia   
2022-07-14 13:06   
I ran into this issue on the update from Bareos 20 to 21, so I can't comment on earlier versions.
My "solution" was the first one that worked. Any solution that is better, more compatible and/or requires less effort is appreciated.
(0004695)
bruno-at-bareos   
2022-07-28 09:26   
Changes applied to future documentation
commit c08b56c1a
PR1203
(0004696)
bruno-at-bareos   
2022-07-28 09:27   
Follow status in PR1203 https://github.com/bareos/bareos/pull/1203

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1253 [bareos-core] webui major always 2020-06-17 09:58 2022-07-20 14:09
Reporter: tagort214 Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: acknowledged Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Can't restore files from Webui
Description: When I try to restore files from Webui it returns this error:

There was an error while loading data for this tree.

Error: ajax

Plugin: core

Reason: Could not load node

Data:

{"id":"#","xhr":{"readyState":4,"responseText":"\n\n\n \n \n \n \n\n \n \n\n\n \n \n\n\n\n\n \n\n \n\n
\n \n\n
Произошла ошибка
\n
An error occurred during execution; please try again later.
\n\n\n\n
Дополнительная информация:
\n
Zend\\Json\\Exception\\RuntimeException
\n

\n
Файл:
\n
    \n

    /usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68

    \n \n
Сообщение:
\n
    \n

    Decoding failed: Syntax error

    \n \n
Трассировки стека:
\n
    \n

    #0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\\Json\\Json::decode('', 1)\n#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\\Model\\RestoreModel->getDirectories(Object(Bareos\\BSock\\BareosBSock), '207685', '#')\n#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\\Controller\\RestoreController->getDirectories()\n#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\\Controller\\RestoreController->buildSubtree()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\\Controller\\RestoreController->filebrowserAction()\n#5 [internal function]: Zend\\Mvc\\Controller\\AbstractActionController->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\\Mvc\\Controller\\AbstractController->dispatch(Object(Zend\\Http\\PhpEnvironment\\Request), Object(Zend\\Http\\PhpEnvironment\\Response))\n#10 [internal function]: Zend\\Mvc\\DispatchListener->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#12 
/usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#14 /usr/share/bareos-webui/public/index.php(24): Zend\\Mvc\\Application->run()\n#15 {main}

    \n \n

\n\n\n
\n\n \n \n\n\n","status":500,"statusText":"Internal Server Error"}}

Also, in apache2 error logs i see this strings:
[:error] [pid 13597] [client 172.32.1.51:56276] PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91, referer: http://bareos.ivt.lan/bareos-webui/client/details/clientname
 [:error] [pid 14367] [client 172.32.1.51:56278] PHP Warning: unpack(): Type N: not enough input, need 4, have 0 in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 172, referer: http://bareos.ivt.lan/bareos-webui/restore/?mergefilesets=1&mergejobs=1&client=clientname&jobid=207728


Tags:
Steps To Reproduce: 1) Login to webui
2) Select job and click show files (or select client from restore tab)
Additional Information:
System Description
Attached Files: Снимок экрана_2020-06-17_10-57-42.png (37,854 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=443&type=bug
png

Снимок экрана_2020-06-17_10-57-24.png (47,279 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=444&type=bug
png
Notes
(0004242)
frank   
2021-08-31 16:14   
tagort214:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue, 142 in this example. Replace the jobid from the example below with your specific jobid(s).

*.bvfs_get_jobids jobid=142 all
1,55,142
*.bvfs_lsdirs path= jobid=1,55,142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes the problems by using the pathid; your pathids will differ.

*.bvfs_lsdirs pathid=37 jobid=1,55,142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=1,55,142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=1,55,142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we can not list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=1,55,142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=1,55,142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
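If jq is not installed, Python's standard library can serve as the validator; a minimal sketch, assuming the cleaned-up JSON has been saved as out.txt as described above:

```shell
# Exits non-zero and reports the parse error if out.txt is not valid JSON.
python3 -m json.tool out.txt > /dev/null && echo "out.txt: valid JSON"
```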
(0004681)
khvalera   
2022-07-20 14:09   
Try increasing this in configuration.ini:
[restore]
; Restore filetree refresh timeout after n milliseconds
; Default: 120000 milliseconds
filetree_refresh_timeout=220000

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1472 [bareos-core] General tweak have not tried 2022-07-13 11:53 2022-07-19 14:55
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: No explanation on "delcandidates"
Description: After an upgrade to Bareos v21 the following message appeared in the status of bareos-director:

HINWEIS: Tabelle »delcandidates« existiert nicht, wird übersprungen
(English: NOTE: table »delcandidates« does not exist, skipping)

Searching the Bareos website for "delcandidates" does not return any matches!

It would be nice to give a hint to update the tables in the database by running:

su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: log_dbcheck_2022-07-18.log (35,702 bytes) 2022-07-18 15:42
https://bugs.bareos.org/file_download.php?file_id=528&type=bug
bareos_dbcheck_debian11.log.xz (30,660 bytes) 2022-07-19 14:55
https://bugs.bareos.org/file_download.php?file_id=529&type=bug
Notes
(0004664)
bruno-at-bareos   
2022-07-14 10:08   
From which version did you do the update?
It is clearly stated in the documentation to run update_table and grant on any update (especially major versions).
(0004668)
amodia   
2022-07-14 12:51   
The update was from 20 to 21.
I missed the "run update_table" statement in the documentation.
The documentation regarding "run grant" is misleading:

"Warning:
When using the PostgreSQL backend and updating to Bareos < 14.2.3, it is necessary to manually grant database permissions (grant_bareos_privileges), normally by

su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
"
Because I wondered who might want to upgrade "to Bareos < 14.2.3" when version 21 is available, I thought what was meant was "updating from Bareos < 14.2.3 to a later version". So I skipped the "run grant" step for my update, and it worked.
(0004670)
bruno-at-bareos   
2022-07-14 14:06   
I don't know which documentation part you are talking about.

The update bareos chapter as the following for database update
https://docs.bareos.org/bareos-21/IntroductionAndTutorial/UpdatingBareos.html#other-platforms which talk about update & grant.

Maybe you can share a link here ?
(0004671)
amodia   
2022-07-14 14:15   
https://docs.bareos.org/TasksAndConcepts/CatalogMaintenance.html

Firstwarning just before the "Manual Configuration"
(0004672)
bruno-at-bareos   
2022-07-14 14:25   
Ha ok I understand, that's related to dbconfig.
Are you using dbconfig for your installation (for Bareos 20 and 21) ?
(0004673)
amodia   
2022-07-14 16:34   
Well ...
During the update from Bareos 16 to 20 I selected "Yes" for the dbconfig-common option. Unfortunately the database got lost.
This time (Bareos 20 to 21) I selected "No", hoping that a manual update would be more successful. So I have a backup of the database from just before the update, but unfortunately the manual update was not successful either. So the "old" data is lost, and the 'bareos' database (bareos-db) has been filling with "new" data since the update.

In the meantime I am able to get some commands working from the command line, at least for user 'bareos':
- bareos-dbcheck *)
- bareos-dir -t -f -d 500

*): selecting test no. 12 "Check for orphaned storage records" crashes bareos-dbcheck with a "memory access error".

The next experiment is to
- create a new database (bareos2-db) from the backup before the update
- run update_table & grant & bareos-dbcheck on this db
- change the MyCatalog.conf accordingly (dbname = bareos2)
- test, if everything is working again

The hope is to "merge" this bareos2-db (data from before the update) with the bareos-db (see above), which collects the data since the update.
Is this possible?
(0004674)
bruno-at-bareos   
2022-07-14 17:34   
Not sure what happened for you; the upgrade process is quite well tested here, both manual and dbconfig. (Maybe the switch from MySQL to PostgreSQL?)

Did you run bareos-dbcheck or bareos in a container? (Beware: by default containers have a low memory limit, which often is not enough.)

As you have the dump, I would have simply restored it, run the manual update & grant, and logically bareos-dir -t should work with all the previous data preserved.
(To restore, of course, you first create the database.)

Then run dbcheck against it (advice: next time run dbcheck before the dump, so you save time and space by not dumping orphan records).
If it fails again, we would be interested in a copy of the storage definition and the output of
bareos-dbcheck -v -dt -d1000
(0004675)
amodia   
2022-07-15 09:22   
Here Bareos runs on a virtual machine (KVM, no container) with limited resources (total memory: 473MiB, swap: 703MiB, storage: 374GiB). The files are stored on an external NAS (6TB) mounted with autofs. This seemed to be enough for "normal" operations.

Appendix "Hardware sizing" has no recommendation on memory. What do you recommend?
(0004676)
bruno-at-bareos   
2022-07-18 10:14   
The Hardware Sizing chapter has quite a number of recommendations for the database (which is what the director uses); it highly depends, of course, on the number of files backed up. PostgreSQL should get 1/4 of the RAM, and/or at least try to have the file index fit in memory. If the FD also runs here with Accurate enabled, it needs enough memory to keep track of the files saved with Accurate.
(0004677)
amodia   
2022-07-18 12:33   
Update:
bareos-dbcheck (interactive mode) runs only with the following command:
su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ...'

Every test runs smoothly EXCEPT test no. 12: "Check for orphaned storage records".
Test no. 12 fails regardless of the memory size (original: 473MiB, increased: 1.9GiB).
The failure ("Memory Access Error") occurs immediately (no gradual filling of memory and then failure).
The database being checked is only a few days old, so there seems to be another issue besides the db size.

All tests but no. 12 run even with the low-memory setup.
Here the director and both daemons (storage and file) are on the same virtual machine.
(0004678)
bruno-at-bareos   
2022-07-18 13:36   
Without the requested log, we won't be able to check what happens.
(0004679)
amodia   
2022-07-18 15:42   
Please find attached the log file of

su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ... -v -dt -d1000' 2>&1 |tee log_dbcheck_2022-07-18.log
(0004680)
bruno-at-bareos   
2022-07-19 14:55   
Unfortunately, the problem you are seeing on your installation can't be reproduced on several setups here. Tested: RHEL_8, Xubuntu 22.04, Debian 11.

See the full log attached.
Maybe you have some extra tools restricting the normal workflow too much (AppArmor, SELinux, or similar).

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1464 [bareos-core] director major always 2022-05-23 10:52 2022-07-05 14:53
Reporter: meilihao Platform: linux  
Assigned To: bruno-at-bareos OS: oracle linux  
Priority: urgent OS Version: 7.9  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: director can't connect filedaemon
Description: The director can't connect to the file daemon; it gets an SSL error.
Tags:
Steps To Reproduce: env:
- filedaemon: v21.0.0 on win10, x64
- director: v21.1.2, x64

bconsole run: `status client=xxx`, get error:
```bash
# tail -f /var/log/bareos.log
Network error during CRAM MD5 with 192.168.0.130
Unable to authenticate with File daemon at "192.168.0.130:9102"
```

filedaemon error: `TLS negotiation failed` and `error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac`
Additional Information:
Attached Files:
Notes
(0004630)
meilihao   
2022-05-31 04:12   
Has anyone encountered this?
(0004656)
bruno-at-bareos   
2022-07-05 14:53   
After restarting both the director and the client, did you still get any trouble?
I'm not able to reproduce this here with Win10 64-bit and CentOS 8 Bareos binaries from download.bareos.org.

Where does your director come from, then?
- director: v21.1.2, x64
(0004657)
bruno-at-bareos   
2022-07-05 14:53   
Can't be reproduced

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
874 [bareos-core] director minor always 2017-11-07 12:12 2022-07-04 17:12
Reporter: chaos_prevails Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04 amd64  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: VirtualFull fails when read and write devices are on different storage daemons (different machines)
Description: The Virtual full backup fails with

...
2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269, Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
 2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds 254,251,252,255,256,257
 2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
 2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
 2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4 (01Jul16):
...

When changing the storage daemon for the VirtualFull backup to the same machine as the always-incremental and consolidate backups, the VirtualFull backup works:
...
 2017-11-07 10:43:20 pavlov-dir JobId 320: Start Virtual Backup JobId 320, Job=pavlov_sys_ai_vf.2017-11-07_10.43.18_05
 2017-11-07 10:43:20 pavlov-dir JobId 320: Consolidating JobIds 317,314,315
 2017-11-07 10:43:20 pavlov-dir JobId 320: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file-consolidate" to read.
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file" to write.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to read from volume "ai_consolidate-0023" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:21 pavlov-sd JobId 320: Volume "Full-0032" previously written, moving to end of data.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to append to end of Volume "Full-0032" size=97835302
 2017-11-07 10:43:21 pavlov-sd JobId 320: Forward spacing Volume "ai_consolidate-0023" to file:block 0:7046364.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_consolidate-0023"
 2017-11-07 10:43:22 pavlov-sd JobId 320: Ready to read from volume "ai_inc-0033" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:22 pavlov-sd JobId 320: Forward spacing Volume "ai_inc-0033" to file:block 0:1148147.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_inc-0033"
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of all volumes.
 2017-11-07 10:43:22 pavlov-sd JobId 320: Elapsed time=00:00:01, Transfer rate=7.029 M Bytes/second
 2017-11-07 10:43:22 pavlov-dir JobId 320: Joblevel was set to joblevel of first consolidated job: Full
 2017-11-07 10:43:23 pavlov-dir JobId 320: Bareos pavlov-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 320
  Job: pavlov_sys_ai_vf.2017-11-07_10.43.18_05
  Backup Level: Virtual Full
  Client: "pavlov-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
  FileSet: "linux_system" 2017-10-19 16:11:21
  Pool: "Full" (From Job's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "pavlov-file" (From Storage from Job's NextPool resource)
  Scheduled time: 07-Nov-2017 10:43:18
  Start time: 07-Nov-2017 10:29:38
  End time: 07-Nov-2017 10:29:39
  Elapsed time: 1 sec
  Priority: 13
  SD Files Written: 148
  SD Bytes Written: 7,029,430 (7.029 MB)
  Rate: 7029.4 KB/s
  Volume name(s): Full-0032
  Volume Session Id: 1
  Volume Session Time: 1510047788
  Last Volume Bytes: 104,883,188 (104.8 MB)
  SD Errors: 0
  SD termination status: OK
  Accurate: yes
  Termination: Backup OK

 2017-11-07 10:43:23 pavlov-dir JobId 320: console command: run AfterJob "update jobid=320 jobtype=A"
Tags:
Steps To Reproduce: 1. create always incremental and consolidate jobs and pools, and make sure they are working; use storage daemon A (pavlov in my example)
2. create a VirtualFull level backup with the Storage attribute pointing to a device on a different storage daemon B (delaunay in my example)
3. start the always incremental and consolidate jobs and verify that they work as expected
4. start the VirtualFull level backup
5. it fails with this error message:
...
delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
...
Additional Information: A) configuration with working always incremental and consolidate jobs, but failing VirtualFull level backup:

A) director pavlov (to disk storage daemon + director)
1) template for always incremental jobs
JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = pavlov-fd
  Storage = pavlov-file
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #always incremental config
  Pool = disk_ai
  Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 20 seconds # 0000007 days
  Always Incremental Keep Number = 2 # 0000007
  Always Incremental Max Full Age = 1 minutes # 14 days
}


2) template for virtual full jobs, should run on read storage pavlov and write storage delaunay:
JobDefs {
  Name = "default_ai_vf"
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Priority = 13
  Accurate = yes
 
  Storage = delaunay_HP_G2_Autochanger
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
  

  # run after Consolidate
  Run Script {
   console = "update jobid=%i jobtype=A"
   Runs When = After
   Runs On Client = No
   Runs On Failure = No
  }

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
}

3) consolidate job
Job {
  Name = ai_consolidate
  Type = Consolidate
  Accurate = yes
  Max Full Consolidations = 1
  Client = pavlov-fd #value which should be ignored by Consolidate job
  FileSet = "none" #value which should be ignored by Consolidate job
  Pool = disk_ai_consolidate #value which should be ignored by Consolidate job
  Incremental Backup Pool = disk_ai_consolidate
  Full Backup Pool = disk_ai_consolidate
# JobDefs = DefaultJob
# Level = Incremental
  Schedule = "ai_consolidate"
  # Storage = pavlov-file-consolidate #commented out for VirtualFull-Tape testing
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

4) always incremental job for client pavlov (works)
Job {
  Name = "pavlov_sys_ai"
  JobDefs = "default_ai"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
}


5) virtualfull job for pavlov (doesn't work)
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Storage = delaunay_HP_G2_Autochanger
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

6) pool always incremental
Pool {
  Name = disk_ai
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_inc-" # Volumes will be labeled "ai_inc-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file
  Next Pool = disk_ai_consolidate
}

7) pool always incremental consolidate
Pool {
  Name = disk_ai_consolidate
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_consolidate-" # Volumes will be labeled "ai_consolidate-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file-consolidate
  Next Pool = tape_automated
}

8) pool tape_automated (for virtualfull jobs to tape)
Pool {
  Name = tape_automated
  Pool Type = Backup
  Storage = delaunay_HP_G2_Autochanger
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
}

9) 1st storage device for disk backup (writes always incremental jobs + other normal jobs)
Storage {
  Name = pavlov-file
  Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = X
  TLS Require = X
  TLS Verify Peer = X
  TLS Allowed CN = pavlov.X
}

10) 2nd storage device for disk backup (consolidates AI jobs)
Storage {
  Name = pavlov-file-consolidate
  Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file-consolidate
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}

11) 3rd storage device for tape backup
Storage {
  Name = delaunay_HP_G2_Autochanger
  Address = "delaunay.XX"
  Password = "X"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
  Autochanger = yes
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = delaunay.X
}


B) storage daemon pavlov (to disk)
1) to disk storage daemon

Storage {
  Name = pavlov-sd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
  TLS Allowed CN = delaunay.X
}

2) to disk device (AI + others)
Device {
  Name = pavlov-file
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

3) consolidate to disk
Device {
  Name = pavlov-file-consolidate
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

C) to tape storage daemon (different server)
1) allowed director
Director {
  Name = pavlov-dir
  Password = "[md5]X"
  Description = "Director, who is permitted to contact this storage daemon."
  TLS Certificate = X
  TLS Key = /X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}


2) storage daemon config
Storage {
  Name = delaunay-sd
  Maximum Concurrent Jobs = 20
  Maximum Network Buffer Size = 32768
# Maximum Network Buffer Size = 65536

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS DH File = X
  TLS CA Certificate File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
}


3) autochanger config
Autochanger {
  Name = "HP_G2_Autochanger"
  Device = Ultrium920
  Changer Device = /dev/sg5
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

4) device config
Device {
  Name = "Ultrium920"
  Media Type = LTO
  Archive Device = /dev/st2
  Autochanger = yes
  LabelMedia = no
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Maximum Spool Size = 50G
  Spool Directory = /var/lib/bareos/spool
  Maximum Block Size = 2097152
# Maximum Block Size = 4194304
  Maximum Network Buffer Size = 32768
  Maximum File Size = 50G
}


B) changes to make the VirtualFull level backup work (using a device on the same storage daemon as the always incremental and consolidate jobs), in both the Job and Pool definitions.

1) change virtualfull job's storage
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

2) change virtualfull pool's storage
Pool {
  Name = tape_automated
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
  Label Format = "Full-" # <-- !! add this because now we write to disk, not to tape
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
}
Attached Files:
Notes
(0002815)
chaos_prevails   
2017-11-15 11:08   
Thanks to a comment on the bareos-users google-group, I found out that this is a not-yet-implemented feature, not a bug.

I think it would be important to mention this in the documentation. VirtualFull would be a good solution for offsite backup (e.g. in another building or another server room), which involves another storage daemon.

I looked at different ways to export the tape drive on the offsite machine to the local machine (e.g. iSCSI, ...). However, this adds extra complexity and might cause shoe-shining: the connection to the offsite machine has to be really fast, because spooling happens on the local machine. In my case (~10 MB/s) the tape and drive would definitely suffer from shoe-shining. So currently, besides always incremental, I run another full backup to the offsite machine.
(0004651)
sven.compositiv   
2022-07-04 16:48   
> Thanks to a comment on the bareos-users google-group I found out that this is a feature not implemented, not a bug.

If this is an unimplemented feature, I'd expect that no backups are chosen from other storages. Our problem is that we copy jobs from AI-Consolidated to a tape; after doing that, all VirtualFull jobs fail once backups from our tape storage have been selected.
(0004652)
bruno-at-bareos   
2022-07-04 17:02   
Could you explain a bit more (a configuration example, maybe)?

Having an Always Incremental rotation using one storage like File, and then creating a VirtualFull archive to another storage resource (on the same SD daemon), works very well, as documented.
Maybe you forgot to update your VirtualFull to be an archive job. Then yes, the next AI will use the most recent VF.
But this is also documented.
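For reference, the retagging mentioned here is what the reporter's `default_ai_vf` JobDefs above already contains as a RunScript; a minimal sketch of that fragment (surrounding Job/JobDefs resource omitted):

```
# Inside the VirtualFull Job (or JobDefs): after the job finishes, retag
# it as an Archive job (jobtype A) so the next Always Incremental cycle
# does not consolidate on top of it.
Run Script {
  console = "update jobid=%i jobtype=A"
  Runs When = After
  Runs On Client = No
  Runs On Failure = No
}
```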
(0004655)
bruno-at-bareos   
2022-07-04 17:12   
Not implemented.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1459 [bareos-core] installer / packages major always 2022-05-09 16:37 2022-07-04 17:11
Reporter: khvalera Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Fails to build ceph plugin on Archlinux
Description: Ceph plugin cannot be built on Archlinux with ceph 15.2.14

Build report:

```
[ 73%] Building CXX object core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o
In file included from /data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.cc:33:
/data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.h:31:10: fatal error: cephfs/libcephfs.h: No such file or directory
    31 | #include <cephfs/libcephfs.h>
       | ^~~~~~~~~~~~~~~~~~~~
compilation aborted.
make[2]: *** [core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/build.make:76: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3157: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
```
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: 009-fix-timer_thread.patch (551 bytes) 2022-05-27 23:58
https://bugs.bareos.org/file_download.php?file_id=518&type=bug
Notes
(0004605)
bruno-at-bareos   
2022-05-10 13:03   
Maybe you can describe your setup a bit more: where does cephfs come from?
The result of a find for libcephfs.h might also be useful.
(0004606)
khvalera   
2022-05-10 15:12   
You can fix this error by installing ceph-libs, but the build still fails:

[ 97%] Building CXX object core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc: In function "bRC filedaemon::get_next_file_to_backup(PluginContext*)":
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:421:33: error: cannot convert "stat*" to "ceph_statx*"
  421 | &p_ctx->statp, &stmask);
      | ^~~~~~~~~~~~~
      | |
      | stat*
In file included from /data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:35:
/usr/include/cephfs/libcephfs.h:564:43: note: initializing argument 4 of "int ceph_readdirplus_r(ceph_mount_info*, ceph_dir_result*, dirent*, ceph_statx*, unsigned int, unsigned int, Inode**)"
  564 | struct ceph_statx *stx, unsigned want, unsigned flags, struct Inode **out);
      | ~~~~~~~~~~~~~~~~~~^~~
make[2]: *** [core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/build.make:76: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3908: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
(0004610)
bruno-at-bareos   
2022-05-10 17:31   
When we ask for a bit more information about your setup, please make the effort to provide useful details such as the compiler used, the cmake output, etc.
Otherwise we can close this, noting that it works fine with newer ceph versions such as 15.2.15 or 16.2.7.
(0004628)
khvalera   
2022-05-27 23:58   
After updating the system and applying the attached patch, Bareos builds again.
(0004653)
bruno-at-bareos   
2022-07-04 17:10   
I will mark this closed; done by:

commit ce3339d28
Author: Andreas Rogge <andreas.rogge@bareos.com>
Date: Wed Feb 2 19:41:25 2022 +0100

    lib: fix use-after-free in timer_thread

diff --git a/core/src/lib/timer_thread.cc b/core/src/lib/timer_thread.cc
index 7ec802198..1624ddd4f 100644
--- a/core/src/lib/timer_thread.cc
+++ b/core/src/lib/timer_thread.cc
@@ -2,7 +2,7 @@
    BAREOS® - Backup Archiving REcovery Open Sourced

    Copyright (C) 2002-2011 Free Software Foundation Europe e.V.
- Copyright (C) 2019-2019 Bareos GmbH & Co. KG
+ Copyright (C) 2019-2022 Bareos GmbH & Co. KG

    This program is Free Software; you can redistribute it and/or
    modify it under the terms of version three of the GNU Affero General Public
@@ -204,6 +204,7 @@ static bool RunOneItem(TimerThread::Timer* p,
       = std::chrono::steady_clock::now();

   bool remove_from_list = false;
+ next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   if (p->is_active && last_timer_run_timepoint > p->scheduled_run_timepoint) {
     LogMessage(p);
     p->user_callback(p);
@@ -215,7 +216,6 @@ static bool RunOneItem(TimerThread::Timer* p,
       p->scheduled_run_timepoint = last_timer_run_timepoint + p->interval;
     }
   }
- next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   return remove_from_list;
 }
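The ordering matters because the user callback may tear down its own timer. A minimal self-contained C++ sketch of the idea (a hypothetical reduction whose names mirror the diff above, not the real Bareos code):

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <functional>

// Hypothetical, reduced model of the timer list entry.
using TimePoint = std::chrono::steady_clock::time_point;

struct Timer {
  TimePoint scheduled_run_timepoint;
  std::function<void(Timer*)> user_callback;
};

// The fix folds the timer's schedule into next_timer_run BEFORE invoking
// the callback: a one-shot callback may delete its own Timer, so reading
// p->scheduled_run_timepoint afterwards (as the pre-patch code did) is a
// use-after-free.
inline TimePoint RunOneItem(Timer* p, TimePoint next_timer_run) {
  next_timer_run = std::min(p->scheduled_run_timepoint, next_timer_run);
  p->user_callback(p);  // p must not be dereferenced after this call
  return next_timer_run;
}
```

With this ordering, a callback that frees its timer is safe, because the scheduling state was already captured.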
(0004654)
bruno-at-bareos   
2022-07-04 17:11   
Fixed with https://github.com/bareos/bareos/pull/1060

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1470 [bareos-core] webui minor always 2022-06-28 09:16 2022-06-30 13:41
Reporter: ffrants Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: low OS Version: 20.04  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Update information could not be retrieved
Description: Update information could not be retrieved and also unknown update status on clients
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: Снимок экрана 2022-06-28 в 10.11.01.png (22,345 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=524&type=bug

Снимок экрана 2022-06-28 в 10.15.12.png (28,921 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=525&type=bug

Снимок экрана 2022-06-30 в 14.04.09.png (14,330 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=526&type=bug

Снимок экрана 2022-06-30 в 14.06.27.png (21,387 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=527&type=bug
Notes
(0004648)
bruno-at-bareos   
2022-06-29 17:03   
Works here (maybe a transient certificate error); could you recheck, please?
(0004649)
ffrants   
2022-06-30 13:07   
Here's what I found out:
My IP is blocked by bareos.com (I can't open www.bareos.com). If I open the webui via VPN, it doesn't show the red exclamation mark next to the version.
But the problem on the "Clients" tab persists, though not for all versions (see attachment).
(0004650)
bruno-at-bareos   
2022-06-30 13:41   
Only Russian authority will create a fix, so blacklisting will be dropped

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1460 [bareos-core] storage daemon block always 2022-05-10 17:46 2022-05-11 13:08
Reporter: alistair Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 21.10  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to Install bareos-storage-droplet
Description: Apt returns the following

The following packages have unmet dependencies:
bareos-storage-droplet : Depends: libjson-c4 (>= 0.13.1) but it is not installable

libjson-c4 seems to have been superseded by libjson-c5 in newer versions of Ubuntu.
Tags: droplet, s3;droplet;aws;storage, storage
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004615)
bruno-at-bareos   
2022-05-11 13:07   
Don't know what you are expecting here, Ubuntu 21.10 is not a supported build distribution.
As such we don't know which package you are trying to install.

The subscription channel will soon offer Ubuntu 22.04; you can contact sales if you want more information.
(0004616)
bruno-at-bareos   
2022-05-11 13:08   
Not a supported distribution.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1458 [bareos-core] webui major always 2022-05-09 13:32 2022-05-10 13:01
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to view the pool details.
Description: With the last update the pool page is completely broken.
When a pool name contains a space, a 404 error is returned.
For a pool without a space in the name, the error shown in the attached picture happens.
Before 21.1.3 only pools with a space in the name were broken.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Screenshot 2022-05-09 at 13-29-52 Bareos.png (152,604 bytes) 2022-05-09 13:32
https://bugs.bareos.org/file_download.php?file_id=512&type=bug
Notes
(0004600)
mdc   
2022-05-09 13:40   
It looks like a caching problem. If I open the webui in a private session, it works.
A re-login or a new tab does not help.
(0004601)
bruno-at-bareos   
2022-05-09 14:30   
Did you restart the webserver (and/or php-fpm, if used)? Browsers have recently had a tendency to fail to clean up their disk cache correctly; it may be necessary to manually clear the cached content for the webui site.
(0004603)
mdc   
2022-05-10 11:43   
Yes, that was my first idea: I restarted the web server and the backend PHP service.
Now, after approximately 48 hours, the correct page is loaded.
(0004604)
bruno-at-bareos   
2022-05-10 13:01   
The personal browser cache needs to be cleaned up.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1421 [bareos-core] storage daemon minor always 2022-01-17 17:06 2022-03-30 12:14
Reporter: DemoFreak Platform: x86_64  
Assigned To: bruno-at-bareos OS: Opensuse  
Priority: normal OS Version: Leap 15.3  
Status: new Product Version: 21.0.0  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: MTEOM on LTO-3 fails with Bareos 21, but works on older Bacula
Description: After migrating a file server, the backup was switched from Bacula 5.2 to Bareos 21.0. Transferring the configuration worked flawlessly; everything works as desired except for the tape drive.

Appending another backup on an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found
the tape is marked as "Error" in the catalog.

The test with btape consequently reveals a problem with EOD (MTEOM). After completing the storage configuration with
Hardware End of Medium = no
Fast Forward Space File = no
appending works, but it is extremely slow, as also mentioned in the documentation.

Hardware:
- Fibre Channel: QLogic Corp. ISP2312-based 2Gb Fibre Channel to PCI-X HBA
- Drive 'HP Ultrium 3-SCSI Rev. L63S'

The drive and HBA were transferred from the old system to the new system without any changes.

How can I further isolate the problem?
Does Bareos handle EOD differently from Bacula 5.2?
Tags: storage MTEOM
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004478)
DemoFreak   
2022-01-18 18:38   
(Last edited: 2022-01-18 23:15)
It seems that even the slow (software) method sometimes fails. Here is the corresponding excerpt from the log.

First job on the tape:
17-Jan 11:00 bareos-sd JobId 81: Wrote label to prelabeled Volume "Band4" on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst)
...
17-Jan 13:51 bareos-sd JobId 81: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 344,018,607,484 (344.0 GB)

Second job:
17-Jan 13:54 bareos-sd JobId 83: Volume "Band4" previously written, moving to end of data.
17-Jan 14:39 bareos-sd JobId 83: Ready to append to end of Volume "Band4" at file=65.
...
17-Jan 14:39 bareos-sd JobId 83: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 140,473,627 (140.4 MB)

Third job:
17-Jan 14:42 bareos-sd JobId 85: Volume "Band4" previously written, moving to end of data.
17-Jan 15:27 bareos-sd JobId 85: Ready to append to end of Volume "Band4" at file=66.
...
17-Jan 15:32 bareos-sd JobId 84: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 9,954,169,360 (9.954 GB)

Fourth job:
17-Jan 15:33 bareos-sd JobId 87: Volume "Band4" previously written, moving to end of data.
17-Jan 16:20 bareos-sd JobId 87: Ready to append to end of Volume "Band4" at file=68.
...
17-Jan 16:20 bareos-sd JobId 87: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 141,727,215 (141.7 MB)

Everything works fine up to this point.
The file size on the tape is 5GB (Maximum File Size = 5G). So the next job should be attached at file number 69.

Fifth job:
18-Jan 11:00 bareos-sd JobId 92: Volume "Band4" previously written, moving to end of data.
18-Jan 12:03 bareos-sd JobId 92: Error: Unable to position to end of data on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): ERR=backends/generic_tape_device.cc:496 read error on "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst). ERR=Eingabe-/Ausgabefehler.
18-Jan 12:03 bareos-sd JobId 92: Marking Volume "Band4" in Error in Catalog.

This fails with an input/output error. Possibly no EOD marker was written during the fourth job.

Neither "mtst -f /dev/nst0 eod" nor "echo eod | btape" finds EOD; both abort with an error and the tape is read to the physical end.
Completely reading the tape with "echo scanblocks | btape" works correctly up to file number 68: different groups of blocks and one EOF marker each are read. In file number 69 no EOF is read; instead the drive keeps reading until the end of the medium.

...
1 block of 64508 bytes in file 66
2 blocks of 64512 bytes in file 66
1 block of 64508 bytes in file 66
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
(At this point, nothing more happens until the end of the tape. Please note that in the btape log, for whatever reason, the first line of a new file and the EOF marker of the previous file are apparently swapped, so the last EOF marker here belongs to file number 68.)

Any ideas?

(0004479)
DemoFreak   
2022-01-18 19:25   
(Last edited: 2022-01-18 23:14)
As an attempt to narrow down the problem, I wrote an EOF marker to file number 69 with mtst:

miraculix:~ # mtst -f /dev/nst0 rewind
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (41010000):
 BOT ONLINE IM_REP_EN
miraculix:~ # time mtst -f /dev/nst0 fsf 69

real 0m29.927s
user 0m0.002s
sys 0m0.001s
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=69, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 weof
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=70, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 rewind

Note the extreme difference in the time required to space forward to file number 69:

miraculix:~ # time echo -e "status\nfsf 69\nstatus\n" | btape TapeStorageLTO3
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:306-0 Using device: "TapeStorageLTO3" for writing.
btape: stored/btape.cc:490-0 open device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): OK
* Bareos status: file=0 block=0
 Device status: BOT ONLINE IM_REP_EN file=0 block=0
Device status: TAPE BOT ONLINE IMMREPORT. ERR=
*btape: stored/btape.cc:1774-0 Forward spaced 69 files.
* EOF Bareos status: file=69 block=0
 Device status: EOF ONLINE IM_REP_EN file=69 block=0
Device status: TAPE EOF ONLINE IMMREPORT. ERR=
**
real 48m8.811s
user 0m0.006s
sys 0m0.014s
miraculix:~ #

After writing the EOF marker, btape "scanblocks" works as expected:
...
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
Total files=69, blocks=5495758, bytes = 354,542,114,821

btape "eod" works as well:

*eod
btape: stored/btape.cc:619-0 Moved to end of medium.



All in all, it seems to me that, under circumstances not yet clear to me, sometimes no EOF marker is written to the tape.

Where am I wrong here?

(0004480)
DemoFreak   
2022-01-19 01:35   
Starting a migration job on this "repaired" tape triggers two migration worker jobs; the first of them works well, the second fails, and I don't understand why.

First job:
18-Jan 23:29 bareos-sd JobId 98: Volume "Band4" previously written, moving to end of data.
19-Jan 00:17 bareos-sd JobId 98: Ready to append to end of Volume "Band4" at file=69.
19-Jan 00:17 bareos-sd JobId 97: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 247,515,896 (247.5 MB)


Second job:
19-Jan 00:18 bareos-sd JobId 100: Volume "Band4" previously written, moving to end of data.
19-Jan 01:06 bareos-sd JobId 100: Error: Bareos cannot write on tape Volume "Band4" because:
The number of files mismatch! Volume=69 Catalog=70
19-Jan 01:06 bareos-sd JobId 100: Marking Volume "Band4" in Error in Catalog.

Why does the second job still find the end of the tape at file number 69, although this file was already written in the first job? EOD should be at file number 70, as it is also noted in the catalog.

Where is my error?
(0004481)
bruno-at-bareos   
2022-01-20 17:14   
Just a quick note: appending another backup on an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found

This means hardware trouble, be it the medium (tape), the drive, or some other component in the SCSI chain.
Such problems are never fun to debug.
(0004482)
DemoFreak   
2022-01-20 17:48   
The hardware is completely unchanged. HBA, drive and tapes are the same. They are even still in the same place, only the HBA is now in a different computer.

To be on the safe side, I will rebuild everything and run a test in the old system. That setup worked for several years completely without problems until a week ago, but with Bacula.

I am surprised by the lack of an EOF marker after some migration jobs.
(0004519)
DemoFreak   
2022-02-19 04:21   
(Last edited: 2022-02-19 04:23)
Sorry, I was unfortunately busy in the meantime, therefore the long response time.

I have just run the test and rebuilt everything in the old system; there it runs as expected, completely without problems.

After switching back to the new system, it now runs perfectly here as well.

So it was probably really a problem with the LC cabling.

So can be closed, thanks for the help.
(0004520)
bruno-at-bareos   
2022-02-21 09:40   
Hardware problem.
(0004556)
DemoFreak   
2022-03-30 12:14   
I think I have found the real cause.

I use an after-job script which shuts down the tape drive after the migration. It waits 30 s, then checks whether there are more jobs in the queue; only if no jobs are waiting or running is the drive switched off.

echo "Checking for pending bacula jobs..."

# Give the Director time to queue any follow-up jobs.
sleep 30

# Power off the drive only when no job is waiting or running.
if echo "status dir" | /usr/sbin/bconsole | /usr/bin/grep "^ " | /usr/bin/egrep -q "(is waiting|is running)"; then
        echo "Pending bacula jobs found, leaving tape device alone!"
else
        echo "Switching off tape device..."
        $DEBUG $SISPMCTLBIN -qf 1
fi

Apparently job processing is more concurrent in Bareos than in Bacula: since I temporarily suspended the shutdown of the drive, no more MTEOM errors have occurred. So I suspect that sometimes the drive was already powered off while the storage daemon was still writing the last data to it. Of course, that also meant no EOF was written.

Is it possible that the Director reports jobs as finished while the SD is still writing?
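
A safer after-job hook would also check the Storage Daemon, not only the Director, before cutting power. A minimal Python sketch of the busy check (an assumption-laden illustration: it matches the same "is waiting"/"is running" wording the shell script above greps for; the actual bconsole call is left as a comment):

```python
import re

# Hypothetical rework of the power-off check (assumption: bconsole
# "status" output contains "is waiting" / "is running" lines, as the
# original shell script greps for). The drive should only be switched
# off when BOTH the Director and the Storage Daemon report idle.
def jobs_pending(status_output: str) -> bool:
    """True if the captured status output lists waiting or running jobs."""
    return re.search(r"is (waiting|running)", status_output) is not None

# In the real hook, the status would come from bconsole, e.g.:
#   subprocess.run(["/usr/sbin/bconsole"], input="status dir\n", ...)
#   and again with "status storage" for the SD.
print(jobs_pending(" 58793 BackupIndia is running"))  # True
print(jobs_pending("No Jobs running."))               # False
```

Powering off only when both checks return False would avoid killing the drive mid-write, though it still cannot rule out a race with a job queued during the check.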

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1441 [bareos-core] webui minor always 2022-03-22 13:59 2022-03-29 14:13
Reporter: mdc Platform: x86_64  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to view the pool details when the pool name contains a space character.
Description: The resulting URL will be, for example,
"https://XXXX/pool/details/Bareos database" when the pool is named "Bareos database".
And the call fails with:

A 404 error occurred
Page not found.

The requested URL could not be matched by routing.
No Exception available
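
The failure is consistent with the space not being percent-encoded when the route URL is built. A tiny illustration (hypothetical; "XXXX" is the placeholder host from the report):

```python
from urllib.parse import quote

# Illustration only: percent-encoding the pool name before building the
# route yields a URL the router can actually match.
pool = "Bareos database"
url = "https://XXXX/pool/details/" + quote(pool, safe="")
print(url)  # https://XXXX/pool/details/Bareos%20database
```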
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004555)
frank   
2022-03-29 14:13   
Fix committed to bareos master branch with changesetid 16093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1440 [bareos-core] director minor always 2022-03-22 13:42 2022-03-23 15:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Only 127.0.0.1 is logged in the audit log when the access comes from the webui
Description: Instead of the real IP of the user's device, only 127.0.0.1 is logged.
22-Mar 13:31 Bareos Director: Console [foo] from [127.0.0.1] cmdline list jobtotals

I think the director sees only the source IP of the webui server; the real IP is not forwarded to the director.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004549)
bruno-at-bareos   
2022-03-23 15:08   
The audit log is used to log the remote (here local) IP of the initiator of the command.
Think about remote bconsole access etc.
So here localhost is the right agent.

You're totally entitled to propose an enhanced version of the code by making a PR on our GitHub project.
(0004550)
bruno-at-bareos   
2022-03-23 15:09   
Won't be fixed without an external code proposal.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1134 [bareos-core] vmware plugin feature always 2019-11-06 15:32 2022-03-14 15:42
Reporter: ratacorbo Platform: Linux  
Assigned To: stephand OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
Summary: Installing bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 python-pyvmomi error
Description: When trying to install bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 on CentOS 7, yum gives an error that python-pyvmomi doesn't exist.
Tags:
Steps To Reproduce: yum install bareos-vmware-plugin
Additional Information: Error: Package: bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 (bareos_bareos-18.2)
           Requires: python-pyvmomi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
System Description
Attached Files:
Notes
(0003856)
stephand   
2020-02-26 00:23   
The python-pyvmomi package for CentOS 7/RHEL 7 is available in EPEL.
On CentOS 7, the EPEL repo can be added by running

yum install epel-release

For RHEL 7 see https://fedoraproject.org/wiki/EPEL

Does it work when the EPEL repo was added to your system?
(0003997)
Rotnam   
2020-06-02 18:05   
I installed a fresh RedHat 8.1 to test the Bareos VMware plugin. I ran into the same issue running
yum install bareos-vmware-plugin
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides python-pyvmomi needed by bareos-vmware-plugin-19.2.7-2.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

So far I have tried to install python-pyvmomi in several ways:
pip3.6 install pyvmomi -> installed successfully, no luck
Downloaded the GitHub package and did a python3.6 setup.py install, which installs version 7.0, no luck
Adding EPEL -> yum install python3-pyvmomi, which installs version 6.7.3-3, no luck with yum

Downloading the rpm (19.2.7-2) and trying manually, I installed its requirements:
yum install python2
yum install bareos-filedaemon-python-plugin
yum install bareos-vadp-dumper
Then pip2 install pyvmomi, still no luck.
python2 setup.py install installed a bunch of files under python2.7, still no luck for the rpm.

At this point, I will just do a --nodeps install and see if it works. Hope this helps resolve the package issue.
(0004039)
stephand   
2020-09-16 13:10   
You are right, we have a problem here for RHEL/CentOS 8 because EPEL 8 does not provide a python2-pyvmomi package.
It's also not easily possible to build a python2-pyvmomi package for el8 due to its missing python2 package dependencies.

Currently indeed the only way is to ignore dependencies for the package installation and use pip2 install pyvmomi.
Does that work for you?

I think we should remove the dependency on python-pyvmomi and add a hint in the documentation to use pip2 install pyvmomi.

For the upcoming Bareos version 20, we are already working on Python3 plugins, this will also fix the dependency problem.
(0004040)
Rotnam   
2020-09-16 15:22   
For the test I did, it worked fine, so I assume you can do it that way with --nodeps. I ended up putting this on hold; backing up just the disks and not the VM was a bit strange. Restoring locally worked, but not directly on vCenter (can't remember which one I tried). Will revisit this solution later.
(0004536)
stephand   
2022-03-14 15:42   
Since Bareos Version >= 21.0.0 the package bareos-vmware-plugin no longer includes a dependency on a pyVmomi package, because some Linux distributions don’t provide current versions. Consequently, pyVmomi must be either installed by using pip install pyvmomi or by manually installing a distribution provided pyVmomi package.
See https://docs.bareos.org/TasksAndConcepts/Plugins.html#vmware-plugin

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1431 [bareos-core] General major always 2022-03-08 20:37 2022-03-11 03:32
Reporter: backup1 Platform: Linux  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Newline characters stripped from configuration strings
Description:  Hi,

I'm trying to set a config value that includes a newline character (a.k.a. \n). This worked in Bareos 19.2, but the same config is not working in 21. It seems that the newlines are stripped when loading the config. I note that the docs say strings can now be entered using a multi-line quoted format (for Bareos 20+).

The actual config setting is for NDMP files and specifying the NDMP environment MULTI_SUBTREE_NAMES.

This is what the config looks like:

FileSet {
  Name = "user_01"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA
userB
userC
userD"
    }
    File = "/vol0/user"
  }
}

The correctly formatted value will have newlines between the "userA", "userB", "userC" subdir names.

In bconsole, "show filesets" shows the names all concatenated together, and the (NetApp) filer rejects the job saying "no directory userAuserBUserCUserD".
Tags:
Steps To Reproduce: Configure fileset with options string including newlines.

Load configuration.

Review configuration using "show filesets" and observe that newlines have been stripped.

I've also reviewed NDMP commands sent to NetApp and (with wireshark) and observe that the newlines are missing.
Additional Information: I believe the use-case for config file strings to include newlines was not considered in parser changes for multi-line quoted format. I'm no longer able to use MULTI_SUBTREE_NAMES for NDMP and have reverted to just doing full volume backups, which limits flexibility, but is working reliably.

Thanks,
Tom Rockwell
Attached Files:
Notes
(0004533)
bruno-at-bareos   
2022-03-09 11:40   
Inconsistencies between documentation / expectation / behaviour; loss of functionality between versions.

The documentation https://docs.bareos.org/master/Configuration/CustomizingTheConfiguration.html?highlight=multiline#quotes shows multiline strings in an example, leading to the expectation that those are kept as multilines.

Having a configured fileset with new multiline syntax

FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA"
             "userB"
             "userC"
             "userD"
    }
    File = "/vol0/user"
  }
}

when displayed in bconsole
*show fileset=NDMP_test
FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userAuserBuserCuserD"
    }
    File = "/vol0/user"
  }
}
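
The collapse shown above is what C-style string literal concatenation produces: adjacent quoted strings are joined with no separator. A small Python illustration of the difference (an assumption about the Bareos 20+ parser's behaviour, based on the bconsole output above):

```python
# What the new multi-line quoted syntax appears to produce: adjacent
# strings joined with no separator (assumption based on the bconsole
# "show fileset" output above).
parts = ["MULTI_SUBTREE_NAMES=userA", "userB", "userC", "userD"]
print("".join(parts))  # MULTI_SUBTREE_NAMES=userAuserBuserCuserD

# What the NetApp filer expects: a newline-separated list of subtrees.
wanted = "MULTI_SUBTREE_NAMES=" + "\n".join(["userA", "userB", "userC", "userD"])
print(wanted)
```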
(0004534)
backup1   
2022-03-11 03:32   
Hi,

Thanks for looking at this. For reference, the newlines are needed to use the MULTI_SUBTREE_NAMES functionality on NetApp. https://library.netapp.com/ecmdocs/ECMP1196992/html/GUID-DE8BF53F-706A-48CA-A6FD-ACFDC2D0FE8A.html

From the linked doc, "Multiple subtrees are specified in the string which is a newline-separated, null-terminated list of subtree names."

I looked for other use-cases to put newlines into strings in Bareos config, but didn't find any, so I realize this is a bit of a corner-case. Still, NDMP is useful for NetApp, and it would be unfortunate to lose this functionality.

Thanks again,
Tom Rockwell

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1430 [bareos-core] webui major always 2022-02-23 20:19 2022-03-03 15:11
Reporter: jason.agilitypr Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: 20.04  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Version of Jquery is old and vulnerable
Description: The version of jQuery that the Bareos webui is running is old and out of date and has known security vulnerabilities (XSS attacks).

/*! jQuery v3.2.0 | (c) JS Foundation and other contributors | jquery.org/license */
v3.2.0 was release on March 16, 2017

https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/
"The HTML parser in jQuery <=3.4.1 usually did the right thing, but there were edge cases where parsing would have unintended consequences. "

the current version of jquery is 3.6.0


Tags:
Steps To Reproduce: check version of jquery loaded in bareos webui via browser right click -> view source
Additional Information: The related libraries, including moment and excanvas, may also need updating.
Attached Files:
Notes
(0004531)
frank   
2022-03-03 11:11   
Fix committed to bareos master branch with changesetid 15977.
(0004532)
frank   
2022-03-03 15:11   
Fix committed to bareos bareos-19.2 branch with changesetid 15981.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1426 [bareos-core] director minor always 2022-02-07 10:37 2022-02-24 11:46
Reporter: mschiff Platform: Linux  
Assigned To: stephand OS: any  
Priority: normal OS Version: 3  
Status: acknowledged Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos send useless operator "mount" messages
Description: The default configuration has messages/Standard.conf which contains:

operator = <email> = mount

which should send an email if an operator is required for a job to continue.
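
For context, this directive sits in the Messages resource; a minimal sketch (resource name and address are placeholders):

```
Messages {
  Name = Standard
  Operator = admin@example.com = mount
}
```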

But these mails will also be triggered on a busy bareos-sd with multiple virtual drives and multiple jobs running, when a job just needs to wait a bit for a volume to become available.
Every month, when our systems are doing virtual full backups at night, we get lots of mails like:

06-Feb 23:37 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0034" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File

But in the morning, all jobs have finished successfully.

So when one job is reading a volume and another job is waiting for the same volume, an email is triggered. But after waiting a couple of minutes, this "issue" solves itself.

It should be possible to set some timeout after which such messages are sent, so that they only go out for really hanging jobs.

This is part of the joblog:

 2022-02-06 23:25:38 kilo-dir JobId 58793: Start Virtual Backup JobId 58793, Job=BackupIndia.2022-02-06_23.15.01_31
 2022-02-06 23:25:38 kilo-dir JobId 58793: Consolidating JobIds 58147,58164,58182,58200,58218,58236,58254,58272,58290,58308,58326,58344,58362,58380,58398,58416,58434,58452,58470,58488,58506
,58524,58542,58560,58578,58596,58614,58632,58650,58668,58686,58704,58722,58740,58758,58764
 2022-02-06 23:25:40 kilo-dir JobId 58793: Bootstrap records written to /var/lib/bareos/kilo-dir.restore.16.bsr
 2022-02-06 23:25:40 kilo-dir JobId 58793: Connected Storage daemon at kilo.sys4.de:9103, encryption: TLS_AES_256_GCM_SHA384 TLSv1.3
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0001" to read.
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0002" to write.
 2022-02-06 23:26:42 kilo-sd JobId 58793: Ready to read from volume "VolFull-0165" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:42 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0165" to file:block 0:3367481982.
 2022-02-06 23:26:53 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0165"
 2022-02-06 23:26:53 kilo-sd JobId 58793: Ready to read from volume "VolFull-0168" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:53 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0168" to file:block 2:1033779909.
 2022-02-06 23:26:54 kilo-sd JobId 58793: End of Volume at file 2 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0168"
 2022-02-06 23:26:54 kilo-sd JobId 58793: Ready to read from volume "VolFull-0169" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:54 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0169" to file:block 0:64702.
 2022-02-06 23:27:03 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0169"
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/vol_mgr.cc:542 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=VolIncr-0
022 from dev="MultiFileStorage0004" (/srv/backup/bareos) to "MultiFileStorage0001" (/srv/backup/bareos)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:268 Could not reserve volume VolIncr-0022 on "MultiFileStorage0001" (/srv/backup/bareo
s)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0022" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0022" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0022" to file:block 0:3331542115.
 2022-02-06 23:32:03 kilo-sd JobId 58793: End of Volume at file 0 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolIncr-0022"
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0023" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0023" to file:block 0:750086502.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004526)
stephand   
2022-02-24 11:46   
Thanks for reporting this issue. I have also noticed this problem.
It will be very hard to fix this properly without a complete redesign of the whole reservation logic, which would be a huge effort.
But meanwhile we could think about a workaround to mitigate this somehow.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1349 [bareos-core] file daemon major always 2021-05-07 18:29 2022-02-02 10:47
Reporter: oskarsr Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: urgent OS Version: 9  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Fatal error: bareos-fd on the following backup after the successful backup of postgresql database using the PostgreSQL Plugin.
Description: Fatal error: bareos-fd on the following backup after the successful backup of postgresql database using the PostgreSQL Plugin.
When the client daemon is restarted, the backup of the PostgreSQL database runs without the error, but only once. On the second attempt, the error appears again.

it-fd JobId 118: Fatal error: bareosfd: Traceback (most recent call last):
File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in
import BareosFdPluginPostgres
File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception
Tags:
Steps To Reproduce: When the backup is executed right after the client daemon restart, the debug log is following:

it-fd (100): filed/fileset.cc:271-150 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-150 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-150 plugin_ctx=7f3964015250 JobId=150
it-fd (150): filed/fd_plugins.cc:229-150 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-150 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-150 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1006-150 python3-fd: Successfully loaded module with name bareos-fd-postgres
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginPostgres with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginLocalFilesBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 2
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=2
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 4
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=4
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 16
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=16
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 19
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=19
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 3
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=3
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 5
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=5


But, when the backup is started repeatedly for the same client, the log consists of following:

it-fd (100): filed/fileset.cc:271-151 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-151 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-151 plugin_ctx=7f39641d1b60 JobId=151
it-fd (150): filed/fd_plugins.cc:229-151 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-151 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-151 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1000-151 python3-fd: Failed to load module with name bareos-fd-postgres
it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

it-fd (150): filed/fd_plugins.cc:480-151 Cancel return from GeneratePluginEvent
it-fd (100): filed/fileset.cc:271-151 N
it-fd (100): filed/dir_cmd.cc:462-151 <dird: getSecureEraseCmd
Additional Information:
System Description
Attached Files:
Notes
(0004129)
oskarsr   
2021-05-12 17:33   
(Last edited: 2021-05-12 17:34)
Has anybody tried to back up a PostgreSQL DB using the bareos-fd-postgres Python plugin?

(0004263)
perkons   
2021-09-13 15:38   
We are experiencing exactly the same issue on Ubuntu 18.04.
(0004297)
bruno-at-bareos   
2021-10-11 13:31   
To both of you: could you share the installed Bareos packages (and confirm they come from bareos.org), the python3 version used, and the related Python packages (main core + psycopg2), including where they come from?
(0004298)
perkons   
2021-10-11 14:52   
We installed the bareos-filedaemon from https://download.bareos.org
The python modules are installed from Ubuntu repositories. The reason we use both python and python3 modules is that if one is missing, the backups fail. This seems pretty wrong to me, but as I understand it, there is active work to migrate to python3.
We also have both of these python modules (python2 and python3) on our RHEL based hosts and have not had any problems with the PostgreSQL Plugin.

# dpkg -l | grep psycopg
ii python-psycopg2 2.8.4-1~pgdg18.04+1 amd64 Python module for PostgreSQL
ii python3-psycopg2 2.8.6-2~pgdg18.04+1 amd64 Python 3 module for PostgreSQL
# dpkg -l | grep dateutil
ii python-dateutil 2.6.1-1 all powerful extensions to the standard Python datetime module
ii python3-dateutil 2.6.1-1 all powerful extensions to the standard Python 3 datetime module
# dpkg -l | grep bareos
ii bareos-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-filedaemon 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-filedaemon-postgresql-python-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon PostgreSQL plugin
ii bareos-filedaemon-python-plugins-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin common files
ii bareos-filedaemon-python3-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin
# dpkg -s bareos-filedaemon
Package: bareos-filedaemon
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 384
Maintainer: Joerg Steffens <joerg.steffens@bareos.com>
Architecture: amd64
Source: bareos
Version: 20.0.1-3
Replaces: bacula-fd
Depends: bareos-common (= 20.0.1-3), lsb-base (>= 3.2-13), lsof, libc6 (>= 2.14), libgcc1 (>= 1:3.0), libjansson4 (>= 2.0.1), libstdc++6 (>= 5.2), zlib1g (>= 1:1.1.4)
Pre-Depends: debconf (>= 1.4.30) | debconf-2.0, adduser
Conflicts: bacula-fd
Conffiles:
 /etc/init.d/bareos-fd bcc61ad57fde8a771a5002365130c3ec
Description: Backup Archiving Recovery Open Sourced - file daemon
 Bareos is a set of programs to manage backup, recovery and verification of
 data across a network of computers of different kinds.
 .
 The file daemon has to be installed on the machine to be backed up. It is
 responsible for providing the file attributes and data when requested by
 the Director, and also for the file system-dependent part of restoration.
 .
 This package contains the Bareos File daemon.
Homepage: http://www.bareos.org/
# cat /etc/apt/sources.list.d/bareos-20.list
deb https://download.bareos.org/bareos/release/20/xUbuntu_18.04 /
#
(0004299)
bruno-at-bareos   
2021-10-11 15:48   
Thanks for your report. As you stated, the python/python3 situation is far from ideal, but PRs are progressing; the end of the tunnel is near.
Also, as you mentioned, there's no trouble on RHEL systems; I'm aware of that too.

I would have tried to use only python2 code on such a version.
I'll make a note about testing that with the future new code on Ubuntu 18... but I just can't say when.
(0004497)
bruno-at-bareos   
2022-02-02 10:46   
For the issue reported, there's something that looks wrong:

File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

here it is /usr/local/lib/python3.5

And then

it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

/usr/lib/python3

So it seems you have a mixed Python environment, which creates strange behaviour, because the module loaded is not always the same.
Our best advice would be to clean up the global environment and make sure only one consistent version is used for Bareos.

Also, python3 support has been greatly improved in Bareos 21.
Will close this, as we are not able to reproduce such an environment.

btw postgresql plugin is tested each time the code is updated.
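
To spot such a mixed environment without triggering the import crash, the module can be located without importing it; a small sketch (a hypothetical diagnostic, not part of the plugin):

```python
import importlib.util

# Locate which psycopg2 (if any) this interpreter would load, without
# actually importing its C extension. A copy under /usr/local/lib next
# to a distro copy under /usr/lib indicates the mix described above.
spec = importlib.util.find_spec("psycopg2")
print(spec.origin if spec else "psycopg2 not found on sys.path")
```

Running this with the same Python binary the file daemon uses shows which of the two installs from the tracebacks wins.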
(0004498)
bruno-at-bareos   
2022-02-02 10:47   
Mixed Python versions used with different psycopg2 installs: /usr/local/lib/python3.5 and /usr/lib/python3.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1418 [bareos-core] storage daemon major always 2022-01-04 14:23 2022-01-31 09:34
Reporter: Scorpionking83 Platform: Linux  
Assigned To: bruno-at-bareos OS: RHEL  
Priority: immediate OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Still autoprune and recycle not working in Bareos 19.2.7
Description: Dear developers,

I still have a problem with autoprune and recycling tapes:
1. Everything works, but when it reaches the maximum number of volume tapes with a retention of 90 days, it cannot create any backups any more. Then I update the Incremental pool:
update --> Option 2 "Pool from resource" --> Option 3 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool --> Option 1 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 2 Full
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 3 Incremental

2. I get the following error:
Volume "Incrementail-0001" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned

Max volume tapes is set to 400

But why do autoprune and recycle not work if the maximum number of volume tapes has been reached and the retention period has not yet been reached?
Is it also possible to delete old tapes from disk and from the database?

I need an answer soon.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004451)
Scorpionking83   
2022-01-04 14:36   
Why close this? The issue is not resolved.
(0004452)
bruno-at-bareos   
2022-01-04 14:39   
This issue is the same as report 0001318 made by the same user.
This is clearly a duplicate case.
(0004493)
Scorpionking83   
2022-01-29 17:14   
Can someone please check my other bug report 0001318?
I am still looking for a solution.
(0004496)
bruno-at-bareos   
2022-01-31 09:34   
duplicate of 0001318

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1409 [bareos-core] director tweak always 2021-12-19 00:33 2022-01-13 14:22
Reporter: jalseos Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: low OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: DB error on restore with ExitOnFatal=true
Description: I was trying to use ExitOnFatal=true in director and noticed a persistent error when trying to initiate a restore:

bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error

The error does not happen with unset/default ExitOnFatal=false

The postgresql (11) log reveals:
STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist
STATEMENT: DROP TABLE temp1
ERROR: table "temp1" does not exist

I found the SQL statements in these files in the code:
/core/src/cats/dml/0018_uar_del_temp
/core/src/cats/dml/0019_uar_del_temp1

I am wondering if something like this might be in order: (like 0012_drop_deltabs.postgresql)
/core/src/cats/dml/0018_uar_del_temp.postgres
DROP TABLE IF EXISTS temp
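
The difference in behaviour can be illustrated with Python's built-in sqlite3 module (used here only to demonstrate the semantics; the IF EXISTS clause works the same way in PostgreSQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A plain DROP TABLE on a table that does not exist raises an error...
try:
    con.execute("DROP TABLE temp")
except sqlite3.OperationalError as err:
    print("plain DROP TABLE failed:", err)

# ...while DROP TABLE IF EXISTS on the same missing table silently succeeds.
con.execute("DROP TABLE IF EXISTS temp")
print("DROP TABLE IF EXISTS succeeded")
```

With ExitOnFatal = true the first form turns a harmless cleanup into a fatal database error, which matches the director behaviour reported above.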
Tags:
Steps To Reproduce: $ bconsole
* restore
9
bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error
Additional Information:
System Description
Attached Files:
Notes
(0004400)
bruno-at-bareos   
2021-12-21 15:58   
The behaviour is to exit in case of error when ExitOnFatal = true.

STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist

This is an error, and the product strictly obeys the Exit On Fatal parameter.

Now, in future versions, where only PostgreSQL will be kept as the database backend and older PostgreSQL releases will no longer need to be supported, the code can be reviewed to chase every DROP TABLE that lacks an IF EXISTS.

Files to change

```
core/src/cats/dml/0018_uar_del_temp:DROP TABLE temp
core/src/cats/dml/0019_uar_del_temp1:DROP TABLE temp1
core/src/cats/mysql_queries.inc:"DROP TABLE temp "
core/src/cats/mysql_queries.inc:"DROP TABLE temp1 "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp1 "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp1 "
core/src/dird/query.sql:!DROP TABLE temp;
core/src/dird/query.sql:!DROP TABLE temp2;
```
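
A mechanical sweep over the dml files listed above could be sketched like this (a hypothetical helper, assuming it is run from the root of a bareos checkout; the query .inc files and query.sql would need the same treatment):

```python
import re
from pathlib import Path

# Add IF EXISTS to plain DROP TABLE statements in the listed dml files.
# The negative lookahead leaves statements that already have it untouched.
FILES = [
    "core/src/cats/dml/0018_uar_del_temp",
    "core/src/cats/dml/0019_uar_del_temp1",
]

for name in FILES:
    path = Path(name)
    if not path.exists():  # silently skip outside a bareos checkout
        continue
    fixed = re.sub(r"DROP TABLE (?!IF EXISTS)",
                   "DROP TABLE IF EXISTS ",
                   path.read_text())
    path.write_text(fixed)
```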
Do you want to propose a PR for this?
(0004405)
bruno-at-bareos   
2021-12-21 16:50   
PR proposed
https://github.com/bareos/bareos/pull/1035

Once the PR is built, there will be some testing packages available; would you like to test them?
(0004443)
jalseos   
2022-01-02 16:52   
Hi, thank you for looking into this issue! I will try to test the built package (deb preferred) if a subsequent code/package "downgrade" (ie. no Catalog DB changes, ...) to a published Community Edition release remains possible afterwards.
(0004473)
bruno-at-bareos   
2022-01-13 14:22   
Fix committed to bareos master branch with changesetid 15753.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1416 [bareos-core] General minor have not tried 2021-12-30 11:43 2022-01-11 21:50
Reporter: hostedpower Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: low OS Version: 10  
Status: assigned Product Version: 21.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Bareos python3 contrib plugin filedaemon
Description: Hi,


We used a version of the Bareos contrib MySQL plugin which seems to support Python 3; however, in recent builds the file seems to have regressed to being only compatible with Python 2 again.

Tags:
Steps To Reproduce:
Additional Information: In the attachment you can find the Python 3 compatible version which was previously found on git under the "dev/joergs/master/contrib-build" branch; however, since this branch was updated, the older Python 2 version is back in there.
System Description
Attached Files: MySQL-Python3.zip (3,594 bytes) 2021-12-30 11:43
https://bugs.bareos.org/file_download.php?file_id=488&type=bug
Notes
(0004469)
joergs   
2022-01-11 21:32   
I just verified this. In my environment, the module is working fine with Python3.
I even added a systemtest to verify this: https://github.com/bareos/bareos/tree/dev/joergs/master/contrib-build/systemtests/tests/py3plug-fd-contrib-mysql_dump

However, I guess you have already noted that the path and the initialisation of the module have changed to the bareos_mysql_dump directory. Maybe this is not reflected in your environment?

Please be aware that we are currently in the process of finding a reasonable file and directory structure for these plugins.

Without further information, I'd judge this bug entry as invalid.
(0004470)
hostedpower   
2022-01-11 21:39   
I think you could be right, I tried the v21 one : https://github.com/bareos/bareos/blob/bareos-21/contrib/fd-plugins/mysql-python/BareosFdMySQLclass.py

So master is working, but not v21 ?
(0004471)
joergs   
2022-01-11 21:50   
Correct. v21 should be identical to v20, and both versions only work with Python 2.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1389 [bareos-core] installer / packages minor always 2021-09-20 12:23 2022-01-05 13:23
Reporter: colttt Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: no repository for debian 11
Description: Debian 11 (bullseye) was released on 14th august 2021 but there is no bareos repository yet.
I would appreciate if debian 11 would be supported as well.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004276)
bruno-at-bareos   
2021-09-27 13:37   
Thanks for your report.

Starting September 14th Debian 11 is available for all customers with a subscription contract.
Nightly builds are also available for Debian 11, and it will be part of the Bareos 21 release.
(0004292)
brechsteiner   
2021-10-02 22:51   
What about the Community Repository? https://download.bareos.org/bareos/release/20/
(0004293)
bruno-at-bareos   
2021-10-04 09:30   
Sorry if it wasn't clear in my previous statement: Debian 11 will be available for the next release, Bareos 21.
(0004455)
bruno-at-bareos   
2022-01-05 13:23   
community repository published https://download.bareos.org/bareos/release/21/Debian_11/

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1408 [bareos-core] director minor have not tried 2021-12-18 20:32 2021-12-28 09:44
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "Backup OK" email message subject line no longer displays the job name
Description: In bareos 18, backups which concluded successfully would be followed up by an email with a subject line indicating the name of the specific job that ran. However, in bareos 20, the subject line now only indicates the name of the client for which the job ran.

This is a minor nuisance, but I found the more distinguishing subject line more useful. In a case where there are multiple backup jobs for a single client and one but not all jobs fail, it is not immediately obvious, as it was in bareos 18, which job for that client failed.
Tags:
Steps To Reproduce: Run two jobs on a host which has more than 1 backup job associated with it.
The email subject lines will be identical even though they are for 2 different jobs.
Additional Information:
System Description
Attached Files:
Notes
(0004401)
bruno-at-bareos   
2021-12-21 16:05   
Maybe an example of the configuration file used would help.

From code we can see that the line was not changed since 2016
67ad14188a src/defaultconfigs/bareos-dir.d/messages/Standard.conf.in (Joerg Steffens 2016-08-01 14:03:06 +0200 5) mailcommand = "@bindir@/bsmtp -h @smtp_host@ -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
(0004415)
embareossed   
2021-12-24 17:58   
Here is what my configs look like:
# grep mailcommand *
Daemon.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
Standard.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"

All references to message resources are for Standard, except for the director which uses Daemon. I copied most of my config files from the old director (bareos 18) to the setup for the new director (bareos 20); I did not make any changes to messages, afair. I'll take a deeper look at this and see what I can figure out. Maybe bsmtp semantics have changed?
(0004416)
embareossed   
2021-12-24 18:12   
OK, it appears that in bareos 20, as per doc, the %c stands for the client, not the jobname (which should be %n). However, in bareos 18 and prior, this same setup seems to be generating the jobname, not the clientname. So it appears that the semantics have changed to properly implement the documented purpose of the %c macro (and perhaps others; I haven't tested those).

Changing the macro to %n works as desired.
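
For reference, the reporter's Standard.conf line with %c swapped for %n (job name) would read:

```
mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %n %l\" %r"
```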
(0004428)
bruno-at-bareos   
2021-12-28 09:44   
Adapting configuration following documentation

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1407 [bareos-core] General minor always 2021-12-18 20:26 2021-12-28 09:43
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Run before script hangs unless debug tracing enabled in script
Description: A "run before job script" which has been working since bareos 18 no longer works in bareos 20 (20.0.3 or 20.0.4). This is a simple bash shell script that performs some chores before the backup. It only sends output to stdout, not stderr (I've checked this). The script works properly in bareos 18, but causes the job to hang in bareos 20.

The script is actually being run on a remote file daemon. This may be a clue to the behavior. But again, this has been working in bareos 18.

Interestingly, I found that by enabling bash tracing (-xv options) inside the script itself to try to see what was causing the hang, it actually alleviated the hang!
Tags:
Steps To Reproduce: Create a bash shell on a remote bareos 20 client.
Create a job in a bareos 20 director on a local system that calls a "run before job script" on the remote client.
Run the job.
If this is reproducible, the job will hang when it reaches the call to the remote script.

If this is reproducible, try setting traces in the bash script.

Additional Information: I built the 20.0.3 executables from the git source code on a devuan beowulf host and distributed the packages to the bareos director server and the bareos file daemon client, both of which are also devuan beowulf.
System Description
Attached Files:
Notes
(0004403)
bruno-at-bareos   
2021-12-21 16:10   
Would you mind sharing the job definition so we can try to reproduce?
The script would be nice too, but perhaps it does something secret.
(0004404)
bruno-at-bareos   
2021-12-21 16:17   
I can't reproduce, it works here

with a job definition

```
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Start Backup JobId 8204, Job=yoda.2021-12-21_16.14.10_06
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Using Device "admin" to write.
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Probing client protocol... (result will be saved until config reload)
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Client: yoda-fd at yoda.labaroche.ioda.net:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Handshake: Immediate TLS 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: shell command: run ClientBeforeJob "sh -c 'snapper list && snapper -c ioda list'"
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+----------------------------------+------+------------+----------+-----------------------+--------------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 1* | single | | Sun 21 Jun 2020 05:17:47 PM CEST | root | 92.00 KiB | | first root filesystem |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 4803 | single | | Fri 01 Jan 2021 12:00:23 AM CET | root | 13.97 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 10849 | single | | Wed 01 Sep 2021 12:00:02 AM CEST | root | 12.58 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 11582 | single | | Fri 01 Oct 2021 12:00:01 AM CEST | root | 7.90 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 12342 | single | | Mon 01 Nov 2021 12:00:08 AM CET | root | 8.07 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Wed 01 Dec 2021 12:00:07 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13272 | pre | | Wed 08 Dec 2021 06:23:04 PM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13273 | post | 13272 | Wed 08 Dec 2021 06:46:13 PM CET | root | 3.28 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13278 | pre | | Wed 08 Dec 2021 10:11:11 PM CET | root | 304.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13279 | post | 13278 | Wed 08 Dec 2021 10:11:26 PM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13447 | pre | | Wed 15 Dec 2021 09:57:35 PM CET | root | 48.00 KiB | number | zypp(zypper) | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13448 | post | 13447 | Wed 15 Dec 2021 09:57:42 PM CET | root | 48.00 KiB | number | | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13499 | single | | Sat 18 Dec 2021 12:00:06 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13523 | single | | Sun 19 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13547 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13557 | pre | | Mon 20 Dec 2021 09:27:21 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13559 | pre | | Mon 20 Dec 2021 10:30:43 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13560 | post | 13559 | Mon 20 Dec 2021 10:52:01 AM CET | root | 1.76 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13562 | pre | | Mon 20 Dec 2021 11:53:40 AM CET | root | 352.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13563 | post | 13562 | Mon 20 Dec 2021 11:53:56 AM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13576 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13585 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13586 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13587 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13588 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13589 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13590 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13591 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13592 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | 92.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+---------------------------------+------+----------+-------------+---------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13050 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13061 | single | | Mon 20 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13062 | single | | Mon 20 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13063 | single | | Mon 20 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13064 | single | | Mon 20 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13065 | single | | Mon 20 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13066 | single | | Mon 20 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13067 | single | | Mon 20 Dec 2021 05:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13068 | single | | Mon 20 Dec 2021 06:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13069 | single | | Mon 20 Dec 2021 07:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13070 | single | | Mon 20 Dec 2021 08:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13071 | single | | Mon 20 Dec 2021 09:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13072 | single | | Mon 20 Dec 2021 10:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13073 | single | | Mon 20 Dec 2021 11:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13074 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13075 | single | | Tue 21 Dec 2021 01:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13076 | single | | Tue 21 Dec 2021 02:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13077 | single | | Tue 21 Dec 2021 03:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13078 | single | | Tue 21 Dec 2021 04:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13079 | single | | Tue 21 Dec 2021 05:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13080 | single | | Tue 21 Dec 2021 06:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13081 | single | | Tue 21 Dec 2021 07:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13082 | single | | Tue 21 Dec 2021 08:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13083 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13084 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13086 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13087 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13088 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13089 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13090 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: Extended attribute support is enabled
 2021-12-21 16:14:36 yoda-fd JobId 8204: ACL support is enabled
 
 RunScript {
    RunsWhen = Before
    RunsOnClient = Yes
    FailJobOnError = No
    Command = "sh -c 'snapper list && snapper -c ioda list'"
  }

```
(0004414)
embareossed   
2021-12-24 17:45   
Nothing secret really. It's just a script that runs "estimate" and parses the output for the size of the backup. Then it decides (based on a value in a config file for the backup name) whether to proceed or not. This way, estimates can be used to determine whether to proceed with a backup or not. This was my workaround to my request in https://bugs.bareos.org/view.php?id=1135.
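
The gating logic described above could be sketched like this, assuming the usual "2000 OK estimate files=... bytes=..." summary line from bconsole (the sample line and the threshold are hypothetical):

```python
import re

# Hypothetical bconsole "estimate" summary line, hard-coded as sample data.
sample = "2000 OK estimate files=108,199 bytes=13,145,763,765"

# Pull out the byte count and strip the thousands separators.
match = re.search(r"bytes=([\d,]+)", sample)
estimated_bytes = int(match.group(1).replace(",", ""))

LIMIT = 20 * 1024**3  # arbitrary 20 GiB threshold for this sketch
print("proceed" if estimated_bytes < LIMIT else "abort")
```

A real pre-job script would pipe the `estimate` command through bconsole and exit non-zero on "abort" so that FailJobOnError stops the backup.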

I did some upgrades recently and the problem has disappeared. So you can close this.
(0004427)
bruno-at-bareos   
2021-12-28 09:43   
Upgrade solve this.
estimate can take time, and from the bconsole point of view it can look stalled or blocked; when you use the "listing" instruction you'll see the file-by-file progress output.
Closing.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1413 [bareos-core] bconsole major always 2021-12-27 15:29 2021-12-28 09:38
Reporter: jcottin Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: high OS Version: 10  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.
Description: I configured the Always Incremental scheme using 2 different storages (File) as advised in the documentation.
-----------------------------------
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html?highlight=job#storages-and-pools

While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

It looks for AI-Incremental-vm-aiqi-linux-test-backup0012 in FileStorage-AI-Consolidated.
It should look for it in FileStorage-AI-Incremental.

Is there a problem with my setup?
Tags: always incremental, storage
Steps To Reproduce: Using bconsole I target a backup before : 2021-12-27 19:00:00
I can find 3 backup (1 Full, 2 Incremental)
=======================================================
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| 24 | F | 108,199 | 13,145,763,765 | 2021-12-25 08:06:41 | AI-Consolidated-vm-aiqi-linux-test-backup-0006 |
| 27 | I | 95 | 68,530 | 2021-12-25 20:00:04 | AI-Incremental-vm-aiqi-linux-test-backup0008 |
| 32 | I | 40 | 1,322,314 | 2021-12-26 20:00:09 | AI-Incremental-vm-aiqi-linux-test-backup0012 |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
-----------------------------------
$ cd /var/lib/mysql.dumps/wordpressdb/
cwd is: /var/lib/mysql.dumps/wordpressdb/
-----------------------------------
$ dir
-rw-r--r-- 1 0 (root) 112 (bareos) 1830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%create.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 149 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%tables
-rw-r--r-- 1 0 (root) 112 (bareos) 783 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_commentmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1161 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_comments.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 869 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_links.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 235966 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_options.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_postmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 3470 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_posts.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 770 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_relationships.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 838 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_taxonomy.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 780 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_termmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 814 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_terms.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1404 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_usermeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 983 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_users.sql.gz
-----------------------------------
$ cd ..
cwd is: /var/lib/mysql.dumps/
-----------------------------------
I mark the folder:
$ mark /var/lib/mysql.dumps/wordpressdb
15 files marked.
$ done
-----------------------------------
The job will require the following
   Volume(s) Storage(s) SD Device(s)
============================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

Volumes marked with "*" are online.
18 files selected to be restored.

Using Catalog "MyCatalog"
Run Restore job
JobName: RestoreFiles
Bootstrap: /var/lib/bareos/bareos-dir.restore.2.bsr
Where: /tmp/bareos-restores
Replace: Always
FileSet: LinuxAll
Backup Client: vm-aiqi-linux-test-backup-fd
Restore Client: vm-aiqi-linux-test-backup-fd
Format: Native
Storage: FileStorage-AI-Consolidated
When: 2021-12-27 22:10:13
Catalog: MyCatalog
Priority: 10
Plugin Options: *None*
OK to run? (yes/mod/no): yes

I get these two messages.
============================================
27-Dec 22:15 bareos-sd JobId 43: Warning: stored/acquire.cc:286 Read open device "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated) Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" failed: ERR=stored/dev.cc:716 Could not open: /var/lib/bareos/storage-AI-Consolidated/AI-Incremental-vm-aiqi-linux-test-backup0012, ERR=No such file or directory

27-Dec 22:15 bareos-sd JobId 43: Please mount read Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" for:
    Job: RestoreFiles.2021-12-27_22.15.29_31
    Storage: "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated)
    Pool: Incremental-BareOS
    Media type: File
============================================

Bareos tries to find AI-Incremental-vm-aiqi-linux-test-backup0012 in the wrong storage.
Additional Information:
===========================================
Job {
  Name = vm-aiqi-linux-test-backup-job
  Client = vm-aiqi-linux-test-backup-fd

  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 30 days
  Always Incremental Keep Number = 15
  Always Incremental Max Full Age = 60 days

  Level = Incremental
  Type = Backup
  FileSet = "LinuxAll-vm-aiqi-linux-test-backup" # LinuxAll fileset (0000013)
  Schedule = "WeeklyCycleCustomers"
  Storage = FileStorage-AI-Incremental
  Messages = Standard
  Pool = AI-Incremental-vm-aiqi-linux-test-backup
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = AI-Consolidated-vm-aiqi-linux-test-backup # write Full Backups into "Full" Pool (0000005)
  Incremental Backup Pool = AI-Incremental-vm-aiqi-linux-test-backup # write Incr Backups into "Incremental" Pool (0000011)

  Enabled = yes

  RunScript {
    FailJobOnError = Yes
    RunsOnClient = Yes
    RunsWhen = Before
    Command = "sh /SCRIPTS/mysql/pre.mysql.sh"
  }

  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}
===========================================
Pool {
  Name = AI-Consolidated-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Full Backups be kept? (0000006)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Consolidated-vm-aiqi-linux-test-backup-" # Volumes will be labeled "Full-<volume-id>"
  Storage = FileStorage-AI-Consolidated
}
===========================================
Pool {
  Name = AI-Incremental-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Incremental Backups be kept? (0000012)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Incremental-vm-aiqi-linux-test-backup" # Volumes will be labeled "Incremental-<volume-id>"
  Volume Use Duration = 23h
  Storage = FileStorage-AI-Incremental
  Next Pool = AI-Consolidated-vm-aiqi-linux-test-backup
}

Both volumes are available in their respective storages.

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006
-rw-r----- 1 bareos bareos 26349467738 Dec 25 08:09 /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
-rw-r----- 1 bareos bareos 1329612 Dec 26 20:00 /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
System Description
Attached Files: Bareos-always-incremental-restore-fail.txt (7,259 bytes) 2021-12-27 15:53
https://bugs.bareos.org/file_download.php?file_id=487&type=bug
Notes
(0004421)
jcottin   
2021-12-27 15:53   
Output with TXT might be easier to read.
(0004422)
jcottin   
2021-12-27 16:32   
Device {
  Name = FileStorage-AI-Consolidated
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Consolidated
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Device {
  Name = FileStorage-AI-Incremental
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Incremental
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Consolidated
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Incremental
  Media Type = File
}
(0004423)
jcottin   
2021-12-27 16:43   
The documentation says 2 storages.
But I created 2 devices.

1 storage => 1 device.

I moved the data from one device (FILE: Directory) to the other.
2 storages => 1 device.

Problem solved.
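For reference, a hedged sketch of the layout that works (resource names and paths are illustrative, modeled on the report, not the reporter's exact files): two Storage resources on the director that both point to one and the same SD device, so a single device can read and write both the Incremental and the Consolidated volumes.

```
# bareos-dir: two Storage resources, both referring to the same SD device
Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server
  Password = "..."
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server
  Password = "..."
  Device = FileStorage
  Media Type = File
}

# bareos-sd: the single device both director Storage resources use
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```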
(0004425)
bruno-at-bareos   
2021-12-28 09:37   
Thanks for sharing. Yes, when the documentation talks about 2 storages, it means 2 storages from the director's point of view, not a bareos-storage daemon with 2 devices.
I am closing the issue.
(0004426)
bruno-at-bareos   
2021-12-28 09:38   
AI needs 2 storages on the director, but one device able to read/write both Incremental and Full volumes.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1339 [bareos-core] webui minor always 2021-04-19 11:49 2021-12-23 08:39
Reporter: khvalera Platform:  
Assigned To: frank OS:  
Priority: normal OS Version: archlinux  
Status: assigned Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: When going to the Run jobs tab I get an error
Description: When going to the Run jobs tab (https://127.0.0.1/bareos-webui/job/run/) I get an error:
Notice: Undefined index: value in /usr/share/webapps/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php on line 152
Tags: webui
Steps To Reproduce: https://127.0.0.1/bareos-webui/job/run/
Additional Information:
Attached Files: Снимок экрана_2021-04-19_12-52-56.png (110,528 bytes) 2021-04-19 11:54
https://bugs.bareos.org/file_download.php?file_id=464&type=bug
png
Notes
(0004112)
khvalera   
2021-04-19 11:54   
I am attaching a screenshot:
(0004156)
khvalera   
2021-06-11 22:36   
You need to correct the expression: preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches);
(0004157)
khvalera   
2021-06-11 22:39   
I temporarily corrected it myself so that the error does not appear: preg_match('/\s*Pool\s*=?(?<value>.*)(?(1)\1|)/i', $result, $matches);
But most likely this is not the right solution.
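To illustrate the failure mode (a Python stand-in for the PHP preg_match; the pattern is the one quoted above): when a pool resource has no "Next Pool" directive, the pattern does not match at all, yet reading the `value` group unconditionally produces the "Undefined index: value" notice. Guarding on the match result avoids it.

```python
import re

# The pattern from note 0004156, transliterated to Python syntax.
pattern = re.compile(r'\s*Next\s*Pool\s*=\s*("|\')?(?P<value>.*)(?(1)\1|)',
                     re.IGNORECASE)

def next_pool(resource_text):
    """Return the Next Pool value, or None when the directive is absent."""
    m = pattern.search(resource_text)
    return m.group('value') if m else None   # guard before indexing

assert next_pool('Next Pool = "AI-Consolidated"') == 'AI-Consolidated'
assert next_pool('Pool Type = Backup') is None   # no directive: no crash
```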

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1397 [bareos-core] documentation minor always 2021-11-01 16:45 2021-12-21 16:07
Reporter: Norst Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: "Tapespeed and blocksizes" chapter location is wrong
Description: The "Tapespeed and blocksizes" chapter is a general topic. Therefore, it should be moved out of the "Autochanger Support" page/category.
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#setblocksizes
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004360)
bruno-at-bareos   
2021-11-25 10:35   
Would you like to propose a PR changing the place? It would be really appreciated.
Are you doing backups to tapes with a single drive? (Most of the use cases we see actually use an autochanger; that's why the chapter is currently there.)
(0004376)
Norst   
2021-11-30 21:01   
(Last edited: 2021-11-30 21:03)
Yes, I use a standalone tape drive, but for infrequent, long-term archiving rather than regular backup.

PR to move "Tapespeed and blocksizes" one level up, to "Tasks and Concepts": https://github.com/bareos/bareos/pull/1009

(0004383)
bruno-at-bareos   
2021-12-09 09:42   
Did you see the comment in the PR ?
(0004402)
bruno-at-bareos   
2021-12-21 16:07   
PR#1009 merged last week.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1369 [bareos-core] webui crash always 2021-07-12 11:54 2021-12-21 13:58
Reporter: jarek_herisz Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: webui tries to load a nonexistent file
Description: When the Polish language is chosen at the login stage, Webui tries to load the file:
bareos-webui/js/locale/pl_PL/LC_MESSAGES/pl_PL.po

Such a file does not exist, which results in an error:
i_gettext.js:413 iJS-gettext:'try_load_lang_po': failed. Unable to exec XMLHttpRequest for link

The remaining JavaScript is terminated and the interface becomes inoperable.
Tags: webui
Steps To Reproduce: With version 20.0.1
On the webui login page, select Polish.
Additional Information:
System Description
Attached Files: Przechwytywanie.PNG (78,772 bytes) 2021-07-19 10:36
https://bugs.bareos.org/file_download.php?file_id=472&type=bug
png
Notes
(0004182)
jarek_herisz   
2021-07-19 10:36   
System:
root@backup:~# cat /etc/debian_version
10.10
(0004206)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1324 [bareos-core] webui major always 2021-03-02 10:26 2021-12-21 13:57
Reporter: Emmanuel Garette Platform: Linux Ubuntu  
Assigned To: frank OS: 20.04  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Infinite loop when trying to log with invalid account
Description: I'm using this community version of Webui: http://download.bareos.org/bareos/release/20/xUbuntu_20.04/

When I try to log in with an invalid account, the webui returns nothing and Apache seems to run in an infinite loop. The log file grows rapidly.

I think the problem is in these two lines:

          $send = fwrite($this->socket, $msg, $str_length);
          if($send === 0) {

The fwrite function returns false when an error occurs (see: https://www.php.net/manual/en/function.fwrite.php ).

If I replace 0 with false, everything is OK.

Attached is a patch that solves this issue.
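The pitfall generalizes: a write-retry loop that tests only one sentinel spins forever when the call reports failure with a different one. Here is a hedged Python sketch of the same shape, using None as the stand-in for PHP's false:

```python
def send_all(write, data):
    """write(chunk) returns the number of bytes written, or None on error
    (our stand-in for PHP fwrite() returning false)."""
    sent_total = 0
    while sent_total < len(data):
        sent = write(data[sent_total:])
        if sent is None:   # error sentinel: must be checked, or we loop forever
            raise IOError("socket write failed")
        if sent == 0:      # nothing written: treat as a closed connection
            raise IOError("connection closed")
        sent_total += sent
    return sent_total

# A well-behaved writer completes:
assert send_all(lambda chunk: len(chunk), b"hello") == 5
```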
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: webui.patch (483 bytes) 2021-03-02 10:26
https://bugs.bareos.org/file_download.php?file_id=458&type=bug
Notes
(0004163)
frank   
2021-06-28 15:22   
Fix committed to bareos master branch with changesetid 15006.
(0004165)
frank   
2021-06-29 14:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15017.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1316 [bareos-core] storage daemon major always 2021-01-30 10:01 2021-12-21 13:57
Reporter: kardel Platform:  
Assigned To: franku OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: storage daemon loses a configured device instance causing major confusion in device handling
Description: After startup, "status storage=<name>" shows the device as not open, or with its parameters - that is expected.

After the first backup with spooling, "status storage=<name>" shows the device as "not open or does not exist" - that is a hint that
the configured "device_resource->dev" value is nullptr.

The follow-up effect is that the reservation code is unable to match the same active device and the same volume in all cases.
When the match fails (the log shows "<name> (/dev/<tapename>)" and "<name> (/dev/<tapename>)" with no differences), it attempts to allocate new volumes, possibly with operator intervention, even though the expected volume is available in the drive.

The root cause is a temporary device created in spool.cc:295 => auto rdev(std::make_unique<SpoolDevice>());
Line 302 sets the device resource: rdev->device_resource = dcr->dev->device_resource;
When rdev leaves scope, the Device::~Device() dtor is called, which happily sets this.device_resource->dev = nullptr in
dev.cc:1281: if (device_resource) { device_resource->dev = nullptr; } (=> potential memory leak)

At this point the configured device_resource is lost (even though it might still be known by active volume reservations).
After that the reservation code is completely confused due to new default allocations of devices (see additional info).

A fix is provided as patch against 20.0.0. It only clears this.device_resource->dev when
this.device_resource->dev references this instance.
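The failure and the proposed guard can be modeled in a few lines. This is a hedged Python stand-in for the C++ (names mirror the report; it is an illustration, not the Bareos code):

```python
# A temporary spool Device shares the configured DeviceResource; its
# destructor unconditionally cleared resource.dev, orphaning the configured
# device. The fix clears the back-pointer only when it points at the dying
# device itself.

class DeviceResource:
    def __init__(self):
        self.dev = None  # back-pointer to the configured Device

class Device:
    def __init__(self, resource):
        self.device_resource = resource

    def destroy(self):  # stand-in for Device::~Device()
        # buggy version: self.device_resource.dev = None  (unconditional)
        if self.device_resource.dev is self:  # the proposed guard
            self.device_resource.dev = None

res = DeviceResource()
configured = Device(res)
res.dev = configured          # configuration wires resource -> device

spool = Device(res)           # temporary spool device sharing the resource
spool.destroy()               # tearing down the temporary...
assert res.dev is configured  # ...no longer wipes the configured device
```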
Tags:
Steps To Reproduce: start bareos system.
observe "status storage=..."
run a spooling job
observer "status storage=..."

If you want to see the confusion, it requires a more elaborate test setup with multiple jobs, where a spooling job finishes before
another job for the same volume and device begins to run.
Additional Information: It might be worthwhile to check the validity of creating a device in dir_cmd.cc:932. During testing,
a difference in device pointers was seen in vol_mgr.cc:916 although the device parameters were the same.
This is most likely caused by Device::this.device_resource->dev being a nullptr and the device creation
in dir_cmd.cc:932. The normal expected lifetime of a device is from reading the configuration until
program termination. Autochanger support might change that rule though - I didn't analyze that far.
Attached Files: dev.cc.patch (568 bytes) 2021-01-30 10:01
https://bugs.bareos.org/file_download.php?file_id=455&type=bug
Notes
(0004088)
franku   
2021-02-12 12:15   
Thank you for your deep analysis and the proposed fix which solves the issue.

See github PR https://github.com/bareos/bareos/pull/724/commits for more information on the fix and systemtests (which is draft at the time of adding this note).
(0004089)
franku   
2021-02-15 11:38   
Experimental binaries with the proposed bugfix can be found here: http://download.bareos.org/bareos/experimental/CD/PR-724/
(0004091)
franku   
2021-02-24 13:22   
Fix committed to bareos master branch with changesetid 14543.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1300 [bareos-core] webui minor always 2021-01-11 16:27 2021-12-21 13:57
Reporter: fapg Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: some job status are not categorized properly
Description: in the dashboard, when we click in waiting jobs, the url is:

https://bareos-server/bareos-webui/job//?period=1&status=Waiting
but should be:
https://bareos-server/bareos-webui/job//?period=1&status=Queued

Best Regards,
Fernando Gomes
Tags:
Steps To Reproduce:
Additional Information: affects table column filter
System Description
Attached Files:
Notes
(0004168)
frank   
2021-06-29 18:45   
It's not a query parameter issue. WebUI categorizes all the different job status flags into groups. I had a look into it and some job status are not categorized properly so the column filter on the table does not work as expected in those cases. A fix will follow.
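The grouping described in the note can be sketched like this (Bareos stores one-letter job status codes in the catalog; the letter-to-group mapping below is illustrative, not the WebUI's exact table):

```python
# If a status letter is missing from its bucket, the table column filter
# silently skips jobs with that status -- the bug described above.
STATUS_GROUPS = {
    "Running": {"R", "l"},
    "Waiting": {"F", "S", "m", "M", "s", "j", "c", "d", "t", "p", "q", "C"},
    "Success": {"T"},
    "Warning": {"W", "A"},
    "Failed":  {"E", "e", "f"},
}

def group_of(status_letter):
    """Map a one-letter job status to its display group."""
    for group, letters in STATUS_GROUPS.items():
        if status_letter in letters:
            return group
    return "Unknown"

assert group_of("T") == "Success"
assert group_of("m") == "Waiting"   # waiting for new media
```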
(0004175)
frank   
2021-07-06 11:22   
Fix committed to bareos master branch with changesetid 15053.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1251 [bareos-core] webui tweak always 2020-06-11 09:13 2021-12-21 13:57
Reporter: juanpebalsa Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: low OS Version: 7  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error when displaying pool detail
Description: When I try to see the details of a pool under Storage -> Pools -> 15-Days (one of my pools), I get an error message because the page cannot be found.

http://xxxxxxxxx.com/bareos-webui/pool/details/15-Days:
|A 404 error occurred
|Page not found.
|The requested URL could not be matched by routing.
|
|No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Captura de pantalla 2020-06-11 a las 9.13.02.png (20,870 bytes) 2020-06-11 09:13
https://bugs.bareos.org/file_download.php?file_id=442&type=bug
png
Notes
(0004207)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15094.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1232 [bareos-core] installer / packages minor always 2020-04-21 09:26 2021-12-21 13:57
Reporter: rogern Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 8  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos logrotate errors
Description: Problem with logrotate seems to be back (previously addressed and fixed in 0000417) due to missing

su bareos bareos

in /etc/logrotate.d/bareos-dir

Logrotate gives "error: skipping "/var/log/bareos/bareos.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation."
Also the same for bareos-audit.log
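For reference, a sketch of what the fixed /etc/logrotate.d/bareos-dir entry would look like (the `su` line is the one the report says is missing; the other options are illustrative, not the packaged defaults):

```
/var/log/bareos/bareos.log /var/log/bareos/bareos-audit.log {
    su bareos bareos
    weekly
    rotate 5
    missingok
    compress
}
```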
Tags:
Steps To Reproduce: Two fresh installs of 19.2.7 produce the same error from logrotate; "su bareos bareos" is missing from /etc/logrotate.d/bareos-dir
Additional Information:
Attached Files:
Notes
(0004256)
bruno-at-bareos   
2021-09-08 13:46   
PR is now proposed with also backport to supported version
https://github.com/bareos/bareos/pull/918
(0004259)
bruno-at-bareos   
2021-09-09 15:07   
PR#918 has been merged, and backport will be made to 20,19,18 Monday 13th. and will be available on next minor release.
(0004260)
bruno-at-bareos   
2021-09-09 15:22   
Fix committed to bareos master branch with changesetid 15139.
(0004261)
bruno-at-bareos   
2021-09-09 16:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15141.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1205 [bareos-core] webui minor always 2020-02-28 09:42 2021-12-21 13:57
Reporter: Ryushin Platform: Linux  
Assigned To: frank OS: Devuan (Debian)  
Priority: normal OS Version: 10  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: HeadLink.php error with PHP 7.3
Description: I received this error when trying to connect to the webui:
Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403

Seems to be related to this issue:
https://github.com/zendframework/zend-view/issues/172#issue-388080603
Though the line numbers in their fix are not the same.
Tags:
Steps To Reproduce:
Additional Information: I solved the issue by replacing the HeadLink.php file with an updated version from here:
https://raw.githubusercontent.com/zendframework/zend-view/f7242f7d5ccec2b8c319634b4098595382ef651c/src/Helper/HeadLink.php
Attached Files:
Notes
(0004144)
frank   
2021-06-08 12:22   
Fix committed to bareos bareos-19.2 branch with changesetid 14922.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1191 [bareos-core] webui crash always 2020-02-12 15:40 2021-12-21 13:57
Reporter: khvalera Platform: Linux  
Assigned To: frank OS: Arch Linux  
Priority: high OS Version: x64  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: The web interface runs under any login and password
Description: The web interface lets you log in with any arbitrary username and password.
How can this be fixed?
Tags: webui
Steps To Reproduce: /etc/bareos/bareos-dir.d/console/web-admin.conf

Console {
  Name = web-admin
  Password = "123"
  Profile = "webui-admin"
}

/etc/bareos/bareos-dir.d/profile/webui-admin.conf

Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

/etc/bareos-webui/directors.ini

[bareos_dir]
enabled = "yes"
diraddress = "localhost"
dirport = 9101
;UsePamAuthentication = yes
pam_console_name = "web-admin"
pam_console_password = "123"
Additional Information:
Attached Files:
Notes
(0003936)
khvalera   
2020-04-10 00:10   
UsePamAuthentication = yes
#pam_console_name = "web-admin"
#pam_console_password = "123"
(0004289)
frank   
2021-09-29 18:22   
Fix committed to bareos master branch with changesetid 15252.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1020 [bareos-core] webui major always 2018-10-10 09:37 2021-12-21 13:56
Reporter: linkstat Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 18.2.4-rc1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Can not restore a client with spaces in its name
Description: All my clients have names with spaces in them, like "client-fd using Catalog-XXX". Correctly handled (i.e. enclosing the name in quotation marks, or escaping the space with \), this has never been a problem... until now. Webui can even perform backup jobs (previously defined in the configuration files) without problems with the spaces. But when it came time to restore something... it just does not seem to be able to properly handle strings that contain spaces. Apparently, it cuts the string at the first space found (as you can see in the attached image).
Tags:
Steps To Reproduce: Define a client whose name contains spaces inside, such as "hostname-fd Testing Client".
Try to restore a backup from Webui to that client (it does not matter that the backup was originally made in that client or that the newly defined client is a new destination for the restoration of a backup previously made in another client).
Webui will fail, saying "invalid client argument: hostname-fd". As you can see, Webui "cuts" the client's name at the first space, and since there is no client named hostname-fd, the job will fail; or worse, if there is a client whose name matches the part before the first space, Webui will restore to the wrong client.
Additional Information: bconsole does not have any problem when client names contain spaces (provided, of course, the spaces are correctly handled by the human operator writing the commands, either by enclosing the name in quotation marks or by escaping the spaces with a backslash).
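The quoting that bconsole users apply by hand can be sketched as follows (a Python stand-in, not the WebUI code; `shlex` gives shell-style quoting and splitting):

```python
import shlex

# Naive whitespace splitting truncates the client name at the first space,
# while shell-style quoting keeps it intact.
client = "hostname-fd Testing Client"

naive = ("restore client=" + client).split()
assert naive[1] == "client=hostname-fd"          # name truncated: the bug

quoted = "restore client=" + shlex.quote(client)
assert shlex.split(quoted)[1] == "client=" + client  # name preserved
```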
System Description
Attached Files: Bareos - Can not restore when a client name has spaces in their name.jpg (139,884 bytes) 2018-10-10 09:37
https://bugs.bareos.org/file_download.php?file_id=311&type=bug
jpg
Notes
(0003546)
linkstat   
2019-07-31 18:03   
Hello!

Any news regarding this problem? (or any ideas about how to patch it temporarily so that you can use webui for the case described)?
Sometimes it is tedious to use bconsole all the time instead of webui ...

Regards!
(0004185)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15068.
(0004188)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15079.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
971 [bareos-core] webui major always 2018-06-25 11:54 2021-12-21 13:56
Reporter: Masanetz Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 17.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Error building tree for filenames with backslashes
Description: WebUI restore fails building the tree if a directory contains filenames with backslashes.

Some time ago the adobe reader plugin created a file named "C:\nppdf32Log\debuglog.txt" in the working dir.
Building the restore tree in WebUI fails with popup "Oops, something went wrong, probably too many files.".

Filename handling should be adapted for backslashes (e.g. like https://github.com/bareos/bareos-webui/commit/ee232a6f04eaf2a7c1084fee981f011ede000e8a)
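The failure mode can be reproduced outside PHP. In this Python sketch, splicing a backslash-laden name into a JSON payload without escaping yields invalid escape sequences, while a proper encoder escapes the backslashes:

```python
import json

# A Unix file literally named "C:\nppdf32Log\debuglog.txt" contains
# backslashes that, spliced unescaped into a JSON/JS payload, become bogus
# escape sequences (\n, \d) and break the restore tree.
name = r"C:\nppdf32Log\debuglog.txt"

broken = '{"name": "%s"}' % name        # naive string splicing
try:
    json.loads(broken)
except json.JSONDecodeError:
    print("naive payload is invalid JSON")

good = json.dumps({"name": name})        # escapes the backslashes correctly
assert json.loads(good)["name"] == name
```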
Tags:
Steps To Reproduce: 1. Put an empty file with a filename with backslashes (e.g. C:\nppdf32Log\debuglog.txt) in your home directory
2. Backup
3. Try to restore any file from your home directory from this backup via WebUI
Additional Information: Attached diff of my "workaround"
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: RestoreController.php.diff (1,669 bytes) 2018-06-25 11:54
https://bugs.bareos.org/file_download.php?file_id=299&type=bug
Notes
(0004184)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15067.
(0004189)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15080.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
871 [bareos-core] webui block always 2017-11-04 16:10 2021-12-21 13:56
Reporter: tuxmaster Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.4.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: UI will not load complete
Description: After the login, the website does not load completely.
Only the spinner is shown (see picture).

The php error log will flooded with:
PHP Notice: Undefined index: meta in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 120

The Bareos director runs version 16.2.7.
Tags:
Steps To Reproduce:
Additional Information: PHP 7.1 via fpm
System Description
Attached Files: Bildschirmfoto von »2017-11-04 16-06-19«.png (50,705 bytes) 2017-11-04 16:10
https://bugs.bareos.org/file_download.php?file_id=270&type=bug
png
Notes
(0002812)
frank   
2017-11-09 15:35   
DIRD and WebUI need to have the same version currently.

WebUI 17.2.4 is not compatible to a 16.2.7 director yet, which may change in future.
(0002813)
tuxmaster   
2017-11-09 17:36   
Thanks for the information.
But this should be noted in the release notes, or better, result in an error message about an unsupported version.
(0004169)
frank   
2021-06-30 11:49   
There is a note in the installation chapter, see https://docs.bareos.org/master/IntroductionAndTutorial/InstallingBareosWebui.html#system-requirements .
Nevertheless, I'm going to have a look at whether we can somehow improve the error handling regarding version compatibility.
(0004176)
frank   
2021-07-06 17:22   
Fix committed to bareos master branch with changesetid 15057.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
579 [bareos-core] webui block always 2015-12-06 12:41 2021-12-21 13:56
Reporter: tuxmaster Platform: x86_64  
Assigned To: frank OS: Fedora  
Priority: normal OS Version: 22  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Unable to connect to the director from webui via ipv6
Description: The web UI and the director are running on the same system.
After entering the password, the error message "Error: , director seems to be down or blocking our request." is presented.
Tags:
Steps To Reproduce: Open the website enter the credentials and try to log in.
Additional Information: getsebool httpd_can_network_connect
httpd_can_network_connect --> on

Error from the apache log file:
[Sun Dec 06 12:37:32.658104 2015] [:error] [pid 2642] [client ABC] PHP Warning: stream_socket_client(): unable to connect to tcp://[XXX]:9101 (Unknown error) in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 521, referer: http://CDE/bareos-webui/

XXX=ip6addr of the director.

Connecting (from the web server) via telnet ip6addr 9101 works.
bconsole also works.
Attached Files:
Notes
(0002323)
frank   
2016-07-15 16:07   
Note: When specifying a numerical IPv6 address (e.g. fe80::1), you must enclose the IP in square brackets—for example, tcp://[fe80::1]:80.

http://php.net/manual/en/function.stream-socket-client.php

You could try setting your IPv6 address in your directors.ini into square brackets until we provide a fix, that might already work.
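The bracketing rule can be sketched as a small helper (Python; `director_url` is a hypothetical name for illustration, not part of the WebUI):

```python
import ipaddress

def director_url(host, port):
    """Build a tcp:// endpoint, bracketing numeric IPv6 literals as URL
    syntax (and PHP's stream_socket_client) requires."""
    try:
        if ipaddress.ip_address(host).version == 6:
            host = f"[{host}]"
    except ValueError:
        pass  # hostname or IPv4 literal: leave as-is
    return f"tcp://{host}:{port}"

assert director_url("fe80::1", 9101) == "tcp://[fe80::1]:9101"
assert director_url("bareos-server", 9101) == "tcp://bareos-server:9101"
```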
(0002324)
tuxmaster   
2016-07-15 17:04   
I have tried to set it to:
diraddress = "[XXX]"
where XXX is the IPv6 address.

But the error is the same.
(0002439)
tuxmaster   
2016-11-06 12:09   
Same on Fedora 24 using php 7.0
(0004159)
pete   
2021-06-23 12:41   
(Last edited: 2021-06-23 12:55)
This is still present in version 20 of the Bareos WebUI, on all RHEL variants I tested (CentOS 8, AlmaLinux 8).

It results from a totally unnecessary "bindto" configuration in line 473 of /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php:

      $opts = array(
          'socket' => array(
              'bindto' => '0:0',
          ),
      );

This unnecessarily limits PHP socket binding to IPv4 interfaces as documented in https://www.php.net/manual/en/context.socket.php. The simplest solution is to just comment out the "bindto" line:

      $opts = array(
          'socket' => array(
              // 'bindto' => '0:0',
          ),
      );

Restart php-fpm, and IPv6 works perfectly.

(0004167)
frank   
2021-06-29 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15043.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
565 [bareos-core] file daemon feature N/A 2015-11-16 08:33 2021-12-07 14:24
Reporter: joergs Platform: Linux  
Assigned To: OS: SLES  
Priority: none OS Version: 12  
Status: acknowledged Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: use btrfs features to efficiently detect changed files
Description: btrfs (a default filesystem on SLES 12) provides features to generate a list of changed files between snapshots. This can be useful for Bareos to efficiently generate the file lists for Incremental and Differential backups.
Tags:
Steps To Reproduce:
Additional Information: btrfs send operation compares two subvolumes and writes a description of how to convert one subvolume (the parent subvolume) into the other (the sent subvolume).
btrfs receive does the opposite.

SLES 12 comes with the tool snapper, which provides an abstraction for this functionality (and should work in the same way for LVM and ext4?).
System Description
Attached Files:
Notes
(0004378)
colttt   
2021-12-03 11:08   
Six years later, any news or plans for this?
The same can be done with ZFS and bcachefs (in the near future).

So it would be great if Bareos could support zfs/btrfs/bcachefs send/receive/snapshot.
(0004382)
bruno-at-bareos   
2021-12-07 14:24   
Note:
+ For this to work, snapshots have to be configured and run on time for the desired FS (uses a subvolume + disk space).
+ This needs to be benchmarked on a heavily used FS against the traditional Accurate module.
+ Keep in mind that Accurate is needed for AI (Always Incremental).
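The send/receive idea above can be sketched as a small pipeline. This is a hypothetical sketch only: the snapshot paths and the exact `btrfs receive --dump` output format are assumptions, so the dump output is simulated with a heredoc here to make the parsing step concrete.

```shell
# Hypothetical sketch: derive a changed-file list from two btrfs snapshots with
#   btrfs send --no-data -p /snap/prev /snap/curr | btrfs receive --dump
# The dump output is simulated below so the parsing is visible; real output
# may use additional commands and different formatting.
awk '$1 ~ /^(mkfile|write|link|unlink|rename)$/ { print $2 }' <<'EOF' | sort -u
snapshot    ./curr uuid=abc transid=123
write       ./curr/etc/hosts offset=0 len=4096
mkfile      ./curr/var/new.txt
unlink      ./curr/tmp/old.txt
EOF
```

A deduplicated list like this could then feed the file selection of an Incremental job instead of a full filesystem walk.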

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1388 [bareos-core] regression testing block always 2021-09-20 12:02 2021-11-29 09:12
Reporter: mschiff Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: /.../sd_reservation.cc: error: sleep_for is not a member of std::this_thread (maybe gcc-11 related)
Description: It seems like the tests do not build when using gcc-11:


/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc: In function ‘void WaitThenUnreserve(std::unique_ptr<TestJob>&)’:
/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc:147:21: error: ‘sleep_for’ is not a member of ‘std::this_thread’
  147 | std::this_thread::sleep_for(std::chrono::milliseconds(10));
      | ^~~~~~~~~
ninja: build stopped: subcommand failed.
Tags:
Steps To Reproduce:
Additional Information: Please see: https://bugs.gentoo.org/786789
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004288)
bruno-at-bareos   
2021-09-29 13:58   
Would you mind testing with the new 19.2.11 release, or even with the not-yet-released branch-12.2?
Building here with gcc-11 under openSUSE Tumbleweed works as expected.
(0004328)
bruno-at-bareos   
2021-11-10 10:05   
Hello, did you make any progress on this?

As 19.2 will soon be obsolete, did you try to compile version 20?
(0004368)
mschiff   
2021-11-27 12:23   
Hi!

Sorry for the late answer. All current versions build fine with gcc-11 here:
 - 18.2.12
 - 19.2.11
 - 20.0.3

Thanks!
(0004369)
bruno-at-bareos   
2021-11-29 09:11   
OK, thanks. I will close this then.
(0004370)
bruno-at-bareos   
2021-11-29 09:12   
Gentoo got gcc-11 fixed.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1151 [bareos-core] webui feature always 2019-12-12 09:25 2021-11-26 13:22
Reporter: DanielB Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: bareos webui does not show the inchanger flag for volumes
Description: The bareos webui does not show the inchanger flag for tape volumes. The flag is visible in the bconsole.
The flag should be visible as additional column to help volume management with tape changers.
Tags: volume, webui
Steps To Reproduce: Log into the webgui.
Select Storage -> Volumes
Additional Information:
Attached Files:
Notes
(0004367)
frank   
2021-11-26 13:22   
Fix committed to bareos master branch with changesetid 15491.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1396 [bareos-core] bconsole minor always 2021-10-31 21:36 2021-11-24 14:43
Reporter: nelson.gonzalez6 Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: HOW TO REMOVE FROM DB ALL ERROR, CANCELED, FAILED JOBS
Description: I have noticed when listing media in bconsole that pools contain volumes with jobs from 3 to 4 years ago, and that clients which are no longer in use or have been deleted are still listed in the catalog DB. How can I remove these volumes whose VolStatus is Error, Expired, or Canceled?

Any suggestions are welcome.

Thanks.


-----------------------------

| MediaId | VolumeName           | VolStatus | Enabled | VolBytes   | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType   | LastWritten         | Storage |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
| 698     | Differentialvpn-0698 | Error     | 1       | 35,437,993 | 0        | 864,000      | 1       | 0    | 0         | GlusterFile | 2021-07-23 04:42:41 | Gluster |
| 900     | Differentialvpn-0900 | Error     | 1       | 3,246,132  | 0        | 864,000      | 1       | 0    | 0         | GlusterFile | 2021-08-29 14:56:06 | Gluster |
| 1,000   | Differentialvpn-1000 | Append    | 1       | 226,375    | 0        | 864,000      | 1       | 0    | 0         | GlusterFile | 2021-10-31 06:11:56 | Gluster |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004313)
bruno-at-bareos   
2021-11-02 13:21   
Did you already try dbcheck? This tool is normally the right one to clean up orphaned records stored in the DB.
(0004320)
nelson.gonzalez6   
2021-11-08 13:01   
Hi, yes. I ran dbcheck -fv and it took a long time due to lots of orphans, but when listing volumes in bconsole the Error volumes are still there.
(0004325)
bruno-at-bareos   
2021-11-10 10:01   
So in that case, the best thing to do is to remove them manually with the delete volume command in bconsole.
You will normally also have to remove the volume file from the filesystem manually.

This is because volumes in the Error state are locked and therefore not pruned, as we can't touch them anymore.
(0004341)
bruno-at-bareos   
2021-11-16 15:42   
Besides removing those records manually, in bconsole or by scripting the delete volume command, remember to also delete the corresponding file if it still exists in your storage location.
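Scripting the delete volume command could look like the sketch below. This is a hypothetical example: the awk column positions match the `list volumes` output pasted in this report (adjust if yours differs), and the final pipe back into bconsole is shown only as a comment to verify against your setup first.

```shell
# Hypothetical sketch: turn `list volumes` output into `delete volume=... yes`
# commands for every volume in Error state.
list_error_volume_deletes() {
  # Field 3 is VolumeName, field 4 is VolStatus in the pipe-separated listing.
  awk -F'|' '$4 ~ /Error/ { gsub(/ /, "", $3); print "delete volume=" $3 " yes" }'
}

# Dry run against the listing from this report; in production you would use:
#   echo "list volumes" | bconsole | list_error_volume_deletes | bconsole
list_error_volume_deletes <<'EOF'
| 698 | Differentialvpn-0698 | Error | 1 | 35,437,993 |
| 900 | Differentialvpn-0900 | Error | 1 | 3,246,132 |
| 1,000 | Differentialvpn-1000 | Append | 1 | 226,375 |
EOF
```

This only prints the delete commands, so you can review them (and remember the matching volume files on disk) before piping anything into bconsole.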
(0004357)
bruno-at-bareos   
2021-11-24 14:42   
Final note before closing. The manual process is required.
(0004358)
bruno-at-bareos   
2021-11-24 14:43   
A manual process is needed, as described, to remove volumes in the Error state.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1401 [bareos-core] webui minor always 2021-11-16 15:14 2021-11-18 09:50
Reporter: Armand Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: No data display on /bareos-webui/media/details/_volume_name_ for french translation
Description: When logged in to the webui with the French language selected, the volume details are not displayed, due to an issue with the French translation.

French translations can contain quotes ('), since in French the letters a/e are sometimes elided with a single quote. Example: "The application" translates to "L'application", not "La application".
Unfortunately, the quote is also a string delimiter in JavaScript, and we cannot have an unescaped quote inside a single-quoted string. To solve this we either need to put the string inside double quotes (") or escape the quote: 'this is a string with a single quote \' escaped to be valid'.

In the file /usr/share/bareos-webui/module/Media/view/media/media/details.phtml we have this issue between lines 315 and 445, inside function detailFormatterVol(index, row) {...}

One solution could be:
  between lines 315 and 445,
    replace all: <?php echo $this->translate("XXXXXX"); ?>
    with: <?php echo str_replace("'","\'",$this->translate("XXXXXX")); ?>
  This will replace each single quote with an escaped single quote.

PS: see the attached file, which includes this solution.
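The effect of that escaping can be demonstrated outside PHP. In this sketch, sed stands in for the proposed str_replace call (an assumption for illustration); it shows the JS line that would be generated once the apostrophe is escaped.

```shell
# Sketch: escape apostrophes before embedding a translation in a
# single-quoted JavaScript string literal (sed stands in for PHP's str_replace).
translated="Nombre d'écritures sur le volume"
escaped=$(printf '%s' "$translated" | sed "s/'/\\\\'/g")
# The generated line now parses as valid JavaScript:
printf "html.push('<th>%s</th>');\n" "$escaped"
```

Without the escaping step, the apostrophe in the translation terminates the JS string literal early, which is exactly the failure described above.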
Tags:
Steps To Reproduce: log to webui with language = French
go to menu : STOCKAGES
go in tab : Volumes
select a volume in the list (My volumes are all DLT tapes)

we expect to see the volume information + the jobs saved on this volume
 
Additional Information:
Attached Files: details.phtml (17,122 bytes) 2021-11-16 15:14
https://bugs.bareos.org/file_download.php?file_id=484&type=bug
Notes
(0004340)
Armand   
2021-11-16 15:22   
Just saw that this is the same as issue 0001235 ;-/ sorry
(0004345)
bruno-at-bareos   
2021-11-18 09:50   
I will close this as a duplicate of 1235.
It is fixed and published in 20.0.3 (available for customers with an affordable subscription), and you can cherry-pick the commit which fixes this.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1235 [bareos-core] webui major always 2020-04-26 18:43 2021-11-18 09:50
Reporter: kabassanov Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Special characters not escaped in translations
Description: Hi,

I observed errors in the generation of some (at least one) web pages when special characters are present in translation strings.

Here is an example:
module/Media/view/media/media/details.phtml: html.push('<th><?php echo $this->translate("Volume writes"); ?></th>');
with the French translation containing '. I customized the translations a little, so I'm not sure the original translation of this string had this character, but it is a general issue.

Thanks.
Tags:
Steps To Reproduce: Just take a translation containing an apostrophe and observe that the webpage is not completely displayed. In the debug window you'll see:

      html.push('<th>Nombre d'écritures sur le volume</th>');

where the apostrophe in "Nombre d'écritures" is treated as the closing quote of the pushed string.
Additional Information:
System Description
Attached Files:
Notes
(0004208)
frank   
2021-08-10 11:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15101.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1348 [bareos-core] storage daemon major have not tried 2021-05-07 08:50 2021-11-17 09:49
Reporter: RobertF. Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
Summary: Fatal error: stored/askdir.cc:295 NULL Volume name. This shouldn't happen!!!
Description: I have the following problem: we do daily backups to tape, and yesterday this error appeared for the first time.
Before the update to version 20.0.1, the tape backup was running without any errors.

The error has only appeared once so far. Do I have to change something in the settings, or what could it be?


Backup is done to a LTO-6 library with 4 tape drives.
Here is the job log that shows what is happening:
Tags:
Steps To Reproduce:
Additional Information:
2021-05-04 09:02:59	bareos-dir JobId 220137: Error: Bareos bareos-dir 20.0.1 (02Mar21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 220137
Job: daily-DbBck-ELVMCDB57.2021-05-04_09.00.00_19
Backup Level: Full
Client: "elbct3" 20.0.1 (02Mar21) Debian GNU/Linux 10 (buster),debian
FileSet: "DbBck-ELVMCDB57" 2020-02-25 08:25:00
Pool: "DailyDbShare" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "elbct3-msl4048" (From Pool resource)
Scheduled time: 04-May-2021 09:00:00
Start time: 04-May-2021 09:00:02
End time: 04-May-2021 09:02:59
Elapsed time: 2 mins 57 secs
Priority: 10
FD Files Written: 7
SD Files Written: 7
FD Bytes Written: 28,688,204,350 (28.68 GB)
SD Bytes Written: 28,688,205,321 (28.68 GB)
Rate: 162080.3 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s): 1000062
Volume Session Id: 441
Volume Session Time: 1619188643
Last Volume Bytes: 2,327,573,436,416 (2.327 TB)
Non-fatal FD errors: 0
SD Errors: 1
FD termination status: OK
SD termination status: Fatal Error
Bareos binary info: official Bareos subscription
Job triggered by: Scheduler
Termination: *** Backup Error ***

2021-05-04 09:02:59	elbct3-sd JobId 220137: Elapsed time=00:02:57, Transfer rate=162.0 M Bytes/second
2021-05-04 09:02:59	elbct3-sd JobId 220137: Fatal error: stored/askdir.cc:295 NULL Volume name. This shouldn't happen!!!
2021-05-04 09:02:56	elbct3-sd JobId 220137: Releasing device "Drive3" (/dev/tape/by-id/scsi-35001438016033618-nst).
2021-05-04 09:00:00	elbct3-sd JobId 220137: Connected File Daemon at 192.168.219.133:9102, encryption: None
2021-05-04 09:00:02	elbct3-fd JobId 220137: ACL support is enabled
2021-05-04 09:00:02	elbct3-fd JobId 220137: Extended attribute support is enabled
2021-05-04 09:00:00	bareos-dir JobId 220137: FD compression disabled for this Job because AllowCompress=No in Storage resource.
2021-05-04 09:00:00	bareos-dir JobId 220137: Encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Handshake: Cleartext
2021-05-04 09:00:00	bareos-dir JobId 220137: Connected Client: elbct3 at 192.168.219.133:9102, encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Using Device "Drive3" to write.
2021-05-04 09:00:00	bareos-dir JobId 220137: Connected Storage daemon at 192.168.219.133:9103, encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Start Backup JobId 220137, Job=daily-DbBck-ELVMCDB57.2021-05-04_09.00.00_19
System Description
Attached Files:
Notes
(0004342)
bruno-at-bareos   
2021-11-17 09:48   
The root cause has been found and a PR has been merged:
https://github.com/bareos/bareos/pull/975

This will appear as a fix in the next 21 release, and for our customers under subscription in 20.0.4.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1311 [bareos-core] director feature always 2021-01-19 03:02 2021-11-08 09:21
Reporter: Ruth Ivimey-Cook Platform: amd64  
Assigned To: OS: Linux  
Priority: normal OS Version: Ubuntu 20.04  
Status: new Product Version: 19.2.9  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Continue spooling from client while previous spool being written
Description: I need to use the spool feature, because the tape is faster than the clients, and that is fine. However, the backups take much longer because the client data is not pulled while the spool file is being written.

Permitting two spool files (or N files) for a single client would enable the client to be backed up independently of the speed of the tape, at the cost, obviously, of spool file space.

In general, there could be N files per client and M clients being backed up; whether it is worth supporting this general case I don't know, but there would be benefits to many sites from just a two-file option. If done, I would expect settings to control both N and M, both as numbers of spool files and as total size of spooled data. Once the spool partition or the configured size limit is reached, the client is just paused, as is true now.
Tags:
Steps To Reproduce: A setup with a client spooling to bareos at a speed slower than tape.
A spool file on an SSD.
A backup larger than the spool file.
The client is read, then waits, then is read, then waits, ...

If double buffered spool is permitted, the client is read, tape write starts, is read again (during which tape completes), is read again .... and no client waiting.
Additional Information: There could be a case for starting to write to tape as soon as the spool file exists, but I think that would not help; it merely ties up the drive sooner.
Attached Files:
Notes
(0004318)
bruno-at-bareos   
2021-11-08 09:21   
Related to https://github.com/bareos/bareos/pull/886

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
816 [bareos-core] webui major always 2017-05-02 14:48 2021-10-07 10:22
Reporter: Kvazyman Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 8  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Incorrect display value of the item Retention/Expiration depending on the selected localization
Description: Incorrect display value of the item Retention/Expiration depending on the selected localization
Tags:
Steps To Reproduce: Log in with the English localization. Go to https://YOUR_SITE_BAREOS/bareos-webui/media
Select some volume and look at its Retention/Expiration.

Log in with the Russian localization. Go to https://YOUR_SITE_BAREOS/bareos-webui/media
Select the same volume and look at its Retention/Expiration (Задержка/Окончание).

Compare the values. The values differ by 5 days.
Additional Information:
System Description
Attached Files:
Notes
(0004294)
frank   
2021-10-07 10:22   
Fix committed to bareos master branch with changesetid 15298.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1381 [bareos-core] webui major always 2021-08-26 10:50 2021-09-17 12:55
Reporter: jens Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: acknowledged Product Version: 19.2.10  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Webui File selection list shows error when trying to restore
Description: BareOS version: 19.2.7-2

When selecting a backup client with lots of (millions of) files and folders, the file selection area shows the following error.

{"id":"#","xhr":{"readyState":4,"responseText":"\n\n\n \n \n \n \n\n \n \n\n\n \n \n\n\n\n\n \n\n \n\n
\n \n\n
An error occurred
\n
An error occurred during execution; please try again later.
\n\n\n
\n
Additional information:
\n
Zend\\Json\\Exception\\RuntimeException
\n
\n
File:
\n
\n
/usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68
\n
\n
Message:
\n
\n
Decoding failed: Syntax error
\n
\n
Stack trace:
\n
\n
#0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\\Json\\Json::decode('', 1)\n#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\\Model\\RestoreModel->getDirectories(Object(Bareos\\BSock\\BareosBSock), '67', '#')\n#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\\Controller\\RestoreController->getDirectories()\n#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\\Controller\\RestoreController->buildSubtree()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\\Controller\\RestoreController->filebrowserAction()\n#5 [internal function]: Zend\\Mvc\\Controller\\AbstractActionController->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\\Mvc\\Controller\\AbstractController->dispatch(Object(Zend\\Http\\PhpEnvironment\\Request), Object(Zend\\Http\\PhpEnvironment\\Response))\n#10 [internal function]: Zend\\Mvc\\DispatchListener->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): 
Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#14 /usr/share/bareos-webui/public/index.php(24): Zend\\Mvc\\Application->run()\n#15 {main}
\n
\n
\n\n\n
\n\n \n \n\n\n","status":500,"statusText":"Internal Server Error"}}
Tags:
Steps To Reproduce: In restore tab select a client with a lot of files and folders ( File Server )
Additional Information: From Apache error log:

PHP Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zend
framework/zend-view/src/Helper/HeadLink.php on line 403, referer: http://xxx.xxx.xxx.xxx/bareos-webui/restore/?jobid=&client=<client>&restoreclient=&restorejob=&where=&files
et=&mergefilesets=0&mergejobs=0&limit=2000

PHP Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zend
framework/zend-view/src/Helper/HeadLink.php on line 403, referer: http://xxx.xxx.xxx.xxx/bareos-webui/storage//

System Description
Attached Files: bareos_webui.png (84,832 bytes) 2021-08-26 10:50
https://bugs.bareos.org/file_download.php?file_id=479&type=bug
png

bareos-webui-nightly.png (9,043 bytes) 2021-08-27 12:03
https://bugs.bareos.org/file_download.php?file_id=480&type=bug
png

bconsole_api2_test_query_output.txt (53,111 bytes) 2021-09-17 12:55
https://bugs.bareos.org/file_download.php?file_id=482&type=bug
Notes
(0004223)
arogge   
2021-08-26 10:59   
thanks for the report.
Could you try and reproduce this with the latest webui from the nightly-build? It can be installed on a different host or vm and will be able to talk to your 19.2 director.
Also if you can still reproduce the issue there it would really help if you could tell us how many jobs were merged here (i.e. how many incrementals are on top of your full) and how many files are in each of them. This would probably improve our chances to reproduce the problem.

Having said that, the workaround (that you probably already knew) is to restore from within bconsole.
(0004224)
jens   
2021-08-26 11:09   
Thank you for the ultra fast response.
I will try my best to give the nightly build a try, but it is going to take me some time to arrange in our environment.

Regarding your question this is the only single backup we took from that machine into a long term tape archive pool.
There are no incrementals on top.
(0004225)
arogge   
2021-08-26 11:35   
Alright! Would you still share the exact number of files in that backup job, so we can produce a test case with the same number of files?
(0004226)
jens   
2021-08-26 11:37   
FD Files Written: 155,482,903
SD Files Written: 155,482,903
FD Bytes Written: 26,776,974,356,682 (26.77 TB)
SD Bytes Written: 26,805,737,848,974 (26.80 TB)
(0004228)
jens   
2021-08-27 12:03   
(Last edited: 2021-08-27 12:20)
So I tried the latest nightly build from here: http://download.bareos.org/bareos/experimental/nightly/Debian_10/all/
Unfortunately it does not want to connect to my 19.2 director.

(0004229)
jens   
2021-08-27 12:19   
Also tried with bareos-webui_20.0.1-3.
It is able to connect to my 19.2 director but throws the exact same error as initially reported.
(0004230)
arogge   
2021-08-27 12:26   
Yes, sorry. That version check was introduced not too long ago, I simply forgot. Thanks for reproducing with 20.0.1 though, that should be recent enough.
(0004241)
frank   
2021-08-31 16:00   
jens:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue. Replace the jobid from the example below with your specific jobid, e.g. the jobid of the full backup you mentioned.

*.bvfs_lsdirs path= jobid=142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes problems by using pathid, pathids will differ at yours.

*.bvfs_lsdirs pathid=37 jobid=142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we cannot list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
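As a concrete example of that validation step, any JSON parser works in place of jq; here `python3 -m json.tool` is used, and the sample payload written to out.txt is made up for illustration (your out.txt would contain the cleaned `.bvfs_lsdirs` response).

```shell
# Sketch: validate the cleaned-up out.txt. A made-up payload stands in for
# the real .bvfs_lsdirs response; python3 -m json.tool plays the role of jq.
printf '%s' '{"jsonrpc": "2.0", "result": {"directories": []}}' > out.txt
python3 -m json.tool < out.txt > /dev/null && echo "valid JSON" || echo "invalid JSON"
```

If this prints "invalid JSON", the director's bvfs output (not the webui) is the place to look for the decoding failure.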
(0004243)
jens   
2021-09-01 10:26   
Hi Frank,

thanks for your feedback.
Please note, I receive the JSON error at the top level already,
meaning I am not able to select a folder at all yet.

I will try to follow your instructions and see how far I can get.
Will keep you posted.

Thank you once again for your support.
Much appreciated.
(0004267)
jens   
2021-09-17 12:55   
Hi Frank,

I found some time to go over your instructions and did some intensive testing.


First I queried the top 2 folder levels for the jobid in question.
--------------------------------------------------------------------------------------------
*.bvfs_lsdirs path= jobid=67
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
32 0 0 A A A A A A A A A A A A A A .
31 0 0 A A A A A A A A A A A A A A /

*.bvfs_lsdirs pathid=31 jobid=67
31 0 0 A A A A A A A A A A A A A A .
32 0 0 A A A A A A A A A A A A A A ..
3037697 0 0 A A A A A A A A A A A A A A imgrep/
3037699 0 0 A A A A A A A A A A A A A A storage/


Since the issue in the webui already occurs at this level, I switched to API level 2 right away,
but couldn't find anything obviously malformed in the output.
----------------------------------------------------------------------------------------------------------------------------
*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}
*.bvfs_lsdirs pathid=31 jobid=67
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 31,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 32,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 3037697,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "imgrep/",
        "fullpath": "/imgrep/",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 3037699,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "storage/",
        "fullpath": "/storage/",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      }
    ]
  }
}*

So I went one level deeper into the imgrep folder, but still everything seemed to work fine and look valid.
-------------------------------------------------------------------------------------------------------------------------------------------------
-> see attachment (please note I've shortened the output to a few thousand lines and anonymized all the folder names)


Interesting to me is that this is the only machine we are having trouble with.
Maybe it is something about the filesystem layout there.
Therefore, and to give you a better picture, this is what the backup client looks like:

OS:
Distributor ID: Debian
Description: Debian GNU/Linux 8.7 (jessie)
Release: 8.7
Codename: jessie

Storage mounts:
-------------------------
/dev/vdc1 on /storage/bucket-00 type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
/dev/vdb1 on /imgrep type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)

Disk Free status:
------------------------
/dev/vdc1 5.0T 4.8T 293G 95% /storage/bucket-00
/dev/vdb1 23T 23T 348G 99% /imgrep


Client Fileset:
-------------------
FileSet {
  Name = "xxxxxxxxxxxxxxxxx"
  Include {
    Options {
      Signature = SHA1
      One FS = No
      Checkfilechanges = yes
    }
    File = /imgrep/images
    File = /storage/bucket-00/images
  }
}
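
The JSON above matches what the director's .bvfs API returns, which is also what the WebUI renders. Querying the same tree directly from bconsole may help narrow down whether the problem is in the catalog or in the WebUI; a sketch (the jobid is a placeholder):

```
* .bvfs_update jobid=12345
* .bvfs_lsdirs jobid=12345 path=/imgrep/
* .bvfs_lsfiles jobid=12345 path=/imgrep/images/
```

If bconsole returns the expected entries here while the WebUI shows nothing, the issue is likely on the WebUI side rather than in the catalog.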



Please let me know if you need any additional information or contribution.

Best, Jens

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1188 [bareos-core] webui major always 2020-02-11 21:02 2021-08-10 23:27
Reporter: hostedpower Platform: Linux  
Assigned To: frank OS: Debian  
Priority: urgent OS Version: 9  
Status: assigned Product Version: 19.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Cannot restore at all after upgrade to 19.2.6 (php error)
Description: Hi,


Not sure what happened, but after upgrading from 19.2.5 to 19.2.6 the restore screen no longer works at all.

We see php errors as well:

[Tue Feb 11 21:02:07.597134 2020] [proxy_fcgi:error] [pid 762:tid 140551107876608] [client 178.117.59.204:49903] AH01071: Got error 'PHP message: PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91\n', referer: https://xxxxx.hosted-power.com/restore/?jobid=134785&client=xxx.xxxx.com&restoreclient=&restorejob=&where=&fileset=&mergefilesets=0&mergejobs=0&limit=2000
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0003792)
hostedpower   
2020-02-11 21:17   
PS: I just found out that it's only in Internet Explorer; Chrome and Edge work fine.

However, the issue of slow loading persists: the initial load of the /restore URL is slow; once a client has been selected, it is finally faster.
(0003797)
hostedpower   
2020-02-12 10:00   
So to conclude: we can restore with Chrome, but Internet Explorer gives the weird PHP error :)
(0004173)
frank   
2021-06-30 17:04   
I'm not able to reproduce the IE issue. Tried it with Microsoft Edge Version 91.0.864.59 (Win10) and Bareos 19.2.10. Does the problem still exist for you?
(0004209)
hostedpower   
2021-08-10 23:27   
Ok, this still seems to be present in Bareos 20; however, we have ditched IE now :)

(Just open IE, go to a server and try to restore it: it doesn't seem to open the file tree. You can see the tree, but not browse it.)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1365 [bareos-core] webui minor have not tried 2021-06-23 02:11 2021-07-30 10:53
Reporter: grizly Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: Ubuntu 18.04.5 L  
Status: acknowledged Product Version: 19.2.10  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: apt postinstall is activating
Description: Essentially https://github.com/bareos/bareos-webui/blob/890997a2f6c836beaa7a36160c1e1e737b66df1e/debian/postinst#L4 caused a minor outage this morning.

We run PHP 7.2, which works fine for bareos-webui. However, after apt-update ran this morning, the postinst script enabled php5, which is not installed; not sure how it did that, but it did. /etc/apache2/mods-enabled/php5.load and /etc/apache2/mods-enabled/php5.conf were created, and since the php5 packages weren't installed, this prevented apache2 from restarting.


Deleting those two erroneous files allowed apache2 to restart and work just fine.
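
The cleanup described above can be sketched as a short shell session. The paths are the real ones from the report; here a temporary directory stands in for /etc/apache2/mods-enabled so the commands are safe to show:

```shell
#!/bin/sh
# Simulate mods-enabled with the two stale php5 links the postinst created.
MODS=$(mktemp -d)
touch "$MODS/php5.load" "$MODS/php5.conf"

# The actual fix the reporter applied: delete the erroneous files.
rm -f "$MODS/php5.load" "$MODS/php5.conf"

# With the stale links gone, Apache can start again:
#   systemctl restart apache2   (not run in this sketch)
ls -A "$MODS"    # prints nothing: the directory is empty again
```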
Tags:
Steps To Reproduce: install apache2 and php7-fpm
install bareos-webui
check apache2
Additional Information: From term.log

Setting up bareos-webui (19.2.7-2) ...
Installing new version of config file /etc/bareos-webui/configuration.ini ...
/usr/sbin/a2enmod
Module rewrite already enabled
/usr/sbin/a2enmod
Module rewrite already enabled
Enabling module php5.
To activate the new configuration, you need to run:
  systemctl restart apache2
/usr/sbin/a2enconf
Conf bareos-webui already enabled


...

Jun 23 06:56:06 server-name systemd[1]: Starting The Apache HTTP Server...
Jun 23 06:56:06 server-name apachectl[21560]: apache2: Syntax error on line 146 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: cannot open shared object file: No such file or directory
Jun 23 06:56:06 server-name apachectl[21560]: Action 'start' failed.
Jun 23 06:56:06 server-name apachectl[21560]: The Apache error log may have more information.
Jun 23 06:56:06 server-name systemd[1]: apache2.service: Control process exited, code=exited status=1
Jun 23 06:56:06 server-name systemd[1]: apache2.service: Failed with result 'exit-code'.
Jun 23 06:56:06 server-name systemd[1]: Failed to start The Apache HTTP Server.


Attached Files:
Notes
(0004158)
grizly   
2021-06-23 02:13   
Well, that submitted before I was finished, but it's mostly there.

Not sure what the fix would be, but I would guess that if the script can detect an existing PHP, it should just leave the current PHP alone?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
981 [bareos-core] webui minor always 2018-07-10 03:43 2021-07-26 10:54
Reporter: NTANMA Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: can not see file names when restoring
Description: If files with cp866-encoded names get into a backup, empty fields are displayed in the WebUI instead of their names; the restore itself works correctly. Sample files are in the attachment.

Tags:
Steps To Reproduce: make a backup of the files from the attachment
Additional Information:
System Description
Attached Files: webUI.png (97,031 bytes) 2018-07-10 03:43
https://bugs.bareos.org/file_download.php?file_id=304&type=bug
png

cp866.zip (476 bytes) 2018-07-10 03:44
https://bugs.bareos.org/file_download.php?file_id=305&type=bug
Notes
(0003065)
NTANMA   
2018-07-10 04:16   
on the host the files look like this:
Зарплатный проект.egp
Справочник точек.egp
прив зка н.п..xlsx
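
The blank fields are consistent with the names being stored in cp866 while the WebUI expects UTF-8. A small round-trip with iconv illustrates this (a sketch, using one of the names above):

```shell
#!/bin/sh
# A name encoded in cp866 is not valid UTF-8, so a UTF-8-only viewer
# has nothing sensible to display; decoding with the right codec
# recovers it.
TMP=$(mktemp)
printf 'Зарплатный проект.egp' | iconv -f UTF-8 -t CP866 > "$TMP"

# Interpreting the cp866 bytes as UTF-8 fails ...
iconv -f UTF-8 -t UTF-8 < "$TMP" >/dev/null 2>&1 || echo "not valid UTF-8"

# ... while decoding them as CP866 restores the original name.
iconv -f CP866 -t UTF-8 < "$TMP"
```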

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1025 [bareos-core] webui major always 2018-10-26 14:32 2021-07-26 10:37
Reporter: Gordon Klimm Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: restore doesn't display certain files and folders
Description: A certain combination of spaces (and minuses?) leads to files not being displayed for restore in the WebUI
Tags:
Steps To Reproduce: mkdir d1
touch "d1/f1" "d1/f 2" "d1/f-f"

=> backup
==> listing files using bconsole works fine
===> go to webui -> restore, select the job: no files visible.
Additional Information:
System Description
Attached Files: bareos_restore.jpg (113,423 bytes) 2018-10-26 14:32
https://bugs.bareos.org/file_download.php?file_id=315&type=bug
jpg
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
963 [bareos-core] file daemon major always 2018-06-12 20:44 2021-07-26 00:03
Reporter: assafin Platform:  
Assigned To: arogge OS: Freebsd  
Priority: normal OS Version: 11.0  
Status: new Product Version: 16.2.4  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Windows Backup using VSS
Description: We have a Bareos server running on FreeBSD. We are trying to back up a Windows Server on which an Exchange Server is installed.

Everything is running OK. But after some time we noticed that the Exchange Server transaction logs are not being rolled and that the backup size increases substantially.

After digging around on the Windows Server, we saw that the volume shadow copies are not being deleted (yes, we are using Enable VSS = yes). We think this may be the cause of the growth of the transaction logs.

Here is an example (on the Windows Server via CMD):

diskshadow
add volume f:
begin backup
create
end backup

The "end backup" command deletes the shadow copy, and we assume this also leads to truncation of the transaction log.

So the question is: does Bareos (the Windows client) end the Windows VSS session correctly, so that the shadow copy is deleted?

If you need more info, we can gladly help.

-- Alaa
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0003548)
arogge   
2019-08-01 10:18   
Sorry, but Bareos is not able to back up an Exchange database using VSS at this time.
As it doesn't handle Exchange correctly, the logs are not truncated and you'll experience growth in the transaction logs.
(0004193)
Ryushin   
2021-07-26 00:03   
Is there any progress towards backing up Exchange 2016/2019 and having it commit the logs? Windows Server Backup has two options for VSS, Full and Copy; the Full method truncates the logs. Is there any way to pass the Full method for VSS with Bareos?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1342 [bareos-core] webui crash always 2021-04-23 23:07 2021-06-28 17:27
Reporter: bluecmd Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Invalid login on webui causes apache2 error log to fill disk
Description: Hi,

I am setting up Bareos 20.0.1, coming from Bacula.
I followed the instructions to set it up on my Debian 11 (testing) system by using the Debian 10 packages.

When I have configured the webui to talk to my director and I login with the right credentials, things work fine.
When I try to login with the *wrong* credentials however the PHP process seems to go haywire and output an unending loop of the following:

[Fri Apr 23 22:58:18.627635 2021] [php7:notice] [pid 50019] [client 2a07:redacted:51371] PHP Notice: fwrite(): send of 26 bytes failed with errno=104 Connection reset by peer in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: https://debian.redacted/bareos-webui/
[Fri Apr 23 22:58:18.627740 2021] [php7:notice] [pid 50019] [client 2a07:redacted:51371] PHP Notice: fwrite(): send of 26 bytes failed with errno=32 Broken pipe in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: https://debian.redacted/bareos-webui/
[Fri Apr 23 22:58:18.627768 2021] [php7:notice] [pid 50019] [client 2a07:redacted:51371] PHP Notice: fwrite(): send of 26 bytes failed with errno=32 Broken pipe in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 219, referer: https://debian.redacted/bareos-webui/
[this line repeats indefinitely]

Within seconds I have many hundreds of megabytes in the log.
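
Until the loop itself is fixed, one common mitigation for a runaway error log is a size-triggered logrotate policy. This is only a sketch; the file name and limits are examples, not part of the report:

```
# /etc/logrotate.d/apache2-size (hypothetical file name)
/var/log/apache2/error.log {
    size 100M        # rotate as soon as the log exceeds 100 MB
    rotate 2         # keep only two old copies
    compress
    missingok
    copytruncate     # truncate in place so Apache keeps its file handle
}
```

Running logrotate frequently (e.g. from cron) then caps the disk usage while the underlying PHP loop is investigated.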
Tags:
Steps To Reproduce: 1. Install 20.0.1 on Debian testing with webui
2. Login using wrong credentials
Additional Information:
System Description
Attached Files:
Notes
(0004116)
bluecmd   
2021-04-23 23:16   
Looking closer on existing bugs, this seems to be the same as https://bugs.bareos.org/view.php?id=1324

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1096 [bareos-core] webui minor always 2019-07-02 21:42 2021-06-28 15:22
Reporter: joergs Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 18.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: webui: when logging in as a console user without the "TLS Enable = false" setting, a misleading error message is shown.
Description: When trying to log in to the WebUI as a console user without the "TLS Enable = false" setting, the following misleading error message is shown:

Sorry, can not authenticate. Wrong username and/or password.

In this case, the user can be expected to retry the username and password.
However, they will never succeed, as the "TLS Enable = false" setting is missing.

A better error message would be:

Sorry, can not authenticate. TLS Handshake failed.
Tags:
Steps To Reproduce: Create a user without the "TLS Enable = false" setting using the bconsole:
* configure add console=test1 password=secret profile=webui-admin

Login to the WebUI as user test1
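
For reference, the console resource created by the step above would need the missing directive, roughly like this (a sketch of the director-side configuration; the name, password, and profile come from the steps above):

```
# bareos-dir.d/console/test1.conf (sketch)
Console {
  Name = "test1"
  Password = "secret"
  Profile = "webui-admin"
  TLS Enable = false
}
```

It is the absence of the last directive that triggers the misleading "wrong username and/or password" message.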
Additional Information:
Attached Files:
Notes
(0004162)
frank   
2021-06-28 15:22   
Fix committed to bareos master branch with changesetid 15004.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1362 [bareos-core] documentation minor always 2021-06-14 09:46 2021-06-14 09:46
Reporter: Int Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Documentation of job type "Archive" is missing
Description: The documentation of the "Always Incremental Concept"
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#always-incremental-concept
mentions a jobtype "archive":
"Therefore, the jobtype of the longterm job is updated to “archive”, so that it is not taken as base for then next incrementals and the always incremental job will stand alone."

But the documentation of job type
https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_Type
does not mention or explain this jobtype "archive".
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
912 [bareos-core] documentation tweak always 2018-02-15 12:20 2021-06-11 11:08
Reporter: colttt Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: wrong configuration path for apache
Description: Hi,
in http://doc.bareos.org/master/html/bareos-manual-main-reference.html#x1-580003.3.4
the documentation says "/etc/apache2/available-conf/bareos-webui.conf", but "/etc/apache2/conf-enabled/bareos-webui.conf" is correct
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004133)
frank   
2021-05-27 13:22   
Fix committed to bareos bareos-19.2 branch with changesetid 14854.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1330 [bareos-core] webui text always 2021-03-22 18:56 2021-06-10 16:24
Reporter: sknust Platform: Linux  
Assigned To: frank OS: any  
Priority: low OS Version: 3  
Status: resolved Product Version: 19.2.9  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Wrong German translation of "Terminated normally" job status in tooltip
Description: The tooltip of the status text of jobs which are in status T (terminated normally) in the German localization is wrong. It says "nicht erfolgreich beendet", which translates to "not finished successfully", the "nicht" corresponding to "not".

Correct translation is either "normal beendet" (literally "terminated normally") or "erfolgreich beendet" (literally "terminated successfully"). Typical German use would be "erfolgreich beendet" IMHO, although the translation for status W ("terminated normally with warnings") is "Normal beendet (mit Warnungen)", so the "normal beendet" would be more consistent with existing translation.
Tags: webui
Steps To Reproduce: 1) Login to a bareos-webui connected to a director with at least one job in status T with German locale
2) List that job
3) Hover over the White-on-green "Erfolgreich" status text in the job listing or the job details
Additional Information: Sole source seems to be line 1383 (master branch) in /webui/module/Application/language/de_DE.po:
https://github.com/bareos/bareos/blob/85a2c521c845327d0de525363b394f28ce65bb62/webui/module/Application/language/de_DE.po#L1383

In branch bareos-19.2 it's line 1327 (https://github.com/bareos/bareos/blob/a78cda310412daf06186614c582e5ff4ac29c384/webui/module/Application/language/de_DE.po#L1327)
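
The fix then amounts to changing the msgstr of the affected entry in de_DE.po, roughly as follows (a sketch; the exact msgid is on the lines linked above):

```
# sketch of the corrected entry; previously "nicht erfolgreich beendet"
msgid "Terminated normally"
msgstr "Normal beendet"
```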
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: bareos-webui_de-DE_translation-error.png (81,293 bytes) 2021-03-22 18:56
https://bugs.bareos.org/file_download.php?file_id=461&type=bug
png
Notes
(0004154)
frank   
2021-06-10 16:24   
Will be fixed in the upcoming maintenance releases 20,19 and 18.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
896 [bareos-core] webui major have not tried 2018-01-26 14:30 2021-06-10 13:01
Reporter: rightmirem Platform: Intel  
Assigned To: frank OS: Debian GNU/Linux 8 (jessie)  
Priority: high OS Version: 8  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 18.2.10  
    Target Version:  
Summary: Argument 0000001 is not an array in JobModel.php
Description: After updating to webui 17.2.4, this error repeating in apache2 error.log

[Fri Jan 26 14:12:30.038395 2018] [:error] [pid 11793] [client 129.xxx.xxx.70:61978] PHP Notice: Undefined index: result in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 126, referer: http://129.xxx.xxx.186:xxx/bareos-webui/dashboard/
[Fri Jan 26 14:12:30.038412 2018] [:error] [pid 11793] [client 129.xxx.xxx.70:61978] PHP Warning: array_merge(): Argument 0000001 is not an array in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 126, referer: http://129.xxx.xxx.186:xxx/bareos-webui/dashboard/
Tags:
Steps To Reproduce: Attempting to connect via bareos webui
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
747 [bareos-core] documentation feature always 2016-12-30 17:33 2021-06-09 17:44
Reporter: michelv Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 16.4.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Consider moving away from one long page
Description: The whole manual is now one document/page, making it very hard to navigate and to find specific parts.
A lot of projects use easy-to-read documentation built with tools such as MkDocs. I would argue it would help the project to convert the single long page into a layout with separate parts.
Tags:
Steps To Reproduce:
Additional Information: MkDocs http://www.mkdocs.org/ (I'm not linked)
Example 1: https://gluster.readthedocs.io/en/latest/
Example 2: https://docs.syncthing.net/
Attached Files:
Notes
(0002495)
joergs   
2017-01-03 13:09   
Bareos Main Manual:

While I fully agree that there is a lot of room for improvement in the Bareos manual, and we have also discussed moving from the current LaTeX backend to some other format, I don't see us changing to this MkDocs Markdown format soon.
That would require a lot of effort, and some features would be lost.
Additionally hosting it as separate sections could be done; this is already on the TODO list. However, I fear this will not greatly improve readability.

Bareos Developer Guide:
The Bareos Developer Guide (http://doc.bareos.org/master/html/bareos-developer-guide.html) was migrated to Markdown a while ago. This could be enhanced with MkDocs.
(0002496)
michelv   
2017-01-03 18:02   
Of course, the way of writing is up to you.
Converting from LaTeX to Markdown can be done with tools, e.g. http://pandoc.org/demos.html

Another consideration might be making it easier to help out with the docs.
I have personally seen this work nicely on GitHub, where people can relatively easily add to or adjust the md files via pull requests.
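
The pandoc route mentioned here is roughly a one-liner per file (a sketch; `bareos-manual.tex` is a placeholder name, and the output usually still needs manual post-editing):

```
pandoc -f latex -t markdown bareos-manual.tex -o bareos-manual.md
```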
(0002529)
joergs   
2017-01-25 14:41   
We used pandoc to migrate the Developer Guide from LaTeX to Markdown. However, even this relatively simple document caused a lot of trouble, so this is not an option for the Bareos Main Manual.
(0004147)
arogge   
2021-06-09 17:44   
The documentation has been migrated to ReST and is now split into smaller individual files.
I guess that fixes the issue. If it doesn't and you have additional requirements, feel free to reopen.

Thank you!

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1344 [bareos-core] webui minor always 2021-04-27 11:39 2021-04-29 11:00
Reporter: blokhinaleks Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: low OS Version: 7  
Status: assigned Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: Last run after WebUI timeout
Description: If the WebUI is open on a restore or job page and the session times out, I can still press start/restart on any job before the system redirects me to the login page.
Tags:
Steps To Reproduce: http://servername/bareos-webui/job//
Wait for the session timeout
Press restart on a job
Log in
See the running/queued job
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1298 [bareos-core] webui block always 2021-01-04 14:43 2021-04-29 10:58
Reporter: Dragon Platform: Linux  
Assigned To: frank OS: GenToo  
Priority: normal OS Version:  
Status: assigned Product Version: 20.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
Summary: WebUI Login shows PHP-Notices then 404 (Dashboard)
Description: In development mode various PHP warnings/notices are shown on the login page

Notice: compact(): Undefined variable: extras in <webroot>/admin/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403

Deprecated: array_key_exists(): Using array_key_exists() on objects is deprecated. Use isset() or property_exists() instead in <webroot>/admin/bareos-webui/vendor/zendframework/zend-i18n/src/Translator/Loader/Gettext.php on line 142

When logging in, I end up with "404 page not found". It looks like "<baseurl>/admin/bareos-webui/public/dashboard/" is requested after login and Apache returns 404; the file really does not exist. PHP is running as an Apache module, so something is wrong.

Apache 2.4 contains this config (besides others):
"""
Alias /admin/bareos-webui <webroot>/admin/bareos-webui/public
<Directory <webroot>/admin/bareos-webui/public>
  AllowOverride None
  Options FollowSymLinks
  # ... authorization parameters for restricted internal access only ...
  RewriteEngine on
  RewriteBase /admin/bareos-webui
  RewriteCond %{REQUEST_FILENAME} -s [OR]
  RewriteCond %{REQUEST_FILENAME} -l [OR]
  RewriteCond %{REQUEST_FILENAME} -d
  RewriteRule ^.*$ - [NC,L]
  RewriteRule ^.*$ index.php [NC,L]
</Directory>
"""

WebUI is located under "<webroot>/admin/bareos-webui"

Apache logs:
XXX - xxx [04/Jan/2021:14:34:27 +0100] "GET /admin/bareos-webui/public/dashboard/ HTTP/1.1" 404 254
Tags: webui
Steps To Reproduce: Fresh install
Additional Information:
Attached Files: webui-update-dep.patch (2,388 bytes) 2021-02-25 10:09
https://bugs.bareos.org/file_download.php?file_id=456&type=bug
Notes
(0004092)
Skylord   
2021-02-25 10:09   
Got the same errors on PHP 7.4; it seems the ZendFramework dependencies are too old. I updated composer.json in the webui root and ran "composer install"; it is working fine now. There are a few notices on some pages, but nothing too annoying.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1280 [bareos-core] webui minor always 2020-11-09 18:07 2021-04-29 10:53
Reporter: TEKrantz Platform: aarch64  
Assigned To: frank OS: Linux  
Priority: low OS Version: Fedora 33  
Status: assigned Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none