View Issue Details

ID: 0000740
Project: bareos-core
Category: General
View Status: public
Last Update: 2017-06-08 13:03
Reporter: seitan
Assigned To:
Priority: normal
Severity: major
Reproducibility: always
Status: closed
Resolution: no change required
Platform: Linux
OS: CentOS
OS Version: 6
Summary: 0000740: No suitable device found to read Volume
Description
Hello,
I'm trying to use the new Always Incremental feature of Bareos.
The problem is that the consolidation job always gives me an error:

2016-12-14 10:27:06 bareos-dir JobId 4278: Start Virtual Backup JobId 4278, Job=server_root.2016-12-14_10.27.04_46
2016-12-14 10:27:06 bareos-dir JobId 4278: Consolidating JobIds 4264,4195
2016-12-14 10:27:09 bareos-dir JobId 4278: Bootstrap records written to /var/lib/bareos/bareos-dir.restore.3.bsr
2016-12-14 10:27:09 bareos-dir JobId 4278: Using Device "DevInc" to read.
2016-12-14 10:27:09 bareos-dir JobId 4278: Using Device "DevConsolidated" to write.
2016-12-14 10:27:09 bareos-sd JobId 4278: acquire.c:114 Changing read device. Want Media Type="File-Consolidated" have="File-Inc"
  device="DevInc" (/bareos/inc/)
2016-12-14 10:27:09 bareos-sd JobId 4278: Fatal error: acquire.c:168 No suitable device found to read Volume "Consolidated-0816"
2016-12-14 10:27:09 bareos-sd JobId 4278: Elapsed time=411584:27:09, Transfer rate=0 Bytes/second
2016-12-14 10:27:09 bareos-dir JobId 4278: Error: Bareos bareos-dir 16.2.4 (01Jul16):


Could you please explain what is wrong here? Thank you.
Additional Information
Configuration:

Device {
  Name = DevInc
  Media Type = File-Inc
  Archive Device = /bareos/inc/
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}
Device {
  Name = DevConsolidated
  Media Type = File-Consolidated
  Archive Device = /bareos/cons/
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Storage {
  Name = StorageInc
# Do not use "localhost" here
  Address = bareos # N.B. Use a fully qualified name here
  Password = "pass"
  Device = DevInc
  Media Type = File-Inc
  Maximum Concurrent Jobs = 7
}

Storage {
  Name = StorageConsolidated
# Do not use "localhost" here
  Address = bareos # N.B. Use a fully qualified name here
  Password = "pass"
  Device = DevConsolidated
  Media Type = File-Consolidated
  Maximum Concurrent Jobs = 7
}


Pool {
  Name = Incremental
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 10 days # How long should the Incremental Backups be kept? (#12)
  Maximum Volume Bytes = 5G # Limit Volume size to something reasonable
  Maximum Volumes = 100 # Limit number of Volumes in Pool
  Label Format = "Incremental-" # Volumes will be labeled "Incremental-<volume-id>"
  Storage = StorageInc
  Next Pool = Consolidated
}
Pool {
  Name = Consolidated
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 50 days # How long should the Consolidated Backups be kept? (#12)
  Maximum Volume Bytes = 5G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "Consolidated-" # Volumes will be labeled "Incremental-<volume-id>"
  Storage = StorageConsolidated
}

Job {
 Name = "server_root"
 Client = "server"
 JobDefs = "root_cons"
 FileSet = "root_server"
 AlwaysIncremental = yes
 Accurate = yes
 AlwaysIncrementalJobRetention = 7 days
 AlwaysIncrementalMaxFullAge = 38 days
 Pool = Incremental
 Full Backup Pool = Consolidated
 RunScript {
    RunsWhen = Before
    FailJobOnError = No
    Command = "/usr/local/sbin/postgres_dump.sh"
    }
}

Job {
    Enabled=yes
    Name = "server-consolidate"
    Type = "Consolidate"
    JobDefs = "root_cons"
    Accurate = "yes"
    Client = "server"
    Fileset = "root"
    Priority = 10
}

JobDefs {
  Name = "root_cons"
  Type = Backup
  Level = Incremental
  FileSet = "root"
  Schedule = "WeeklyCycleCons"
  Messages = Standard
  Priority = 9
  Pool = Incremental
  Full Backup Pool = Full # write Full Backups into "Full" Pool (#05)
  Incremental Backup Pool = Incremental # write Incr Backups into "Incremental" Pool (#11)
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Reschedule On Error = yes
  Reschedule Interval = 45 minutes
  Reschedule Times = 3
  Rerun Failed Levels = yes
}

Schedule {
  Name = "WeeklyCycleCons"
  Run = Incremental mon-sun at 22:00 # (#10)
}

Tags: No tags attached.

Activities

seitan (reporter) 2016-12-15 07:57 ~0002470

Okay, I finally got things working. It seems there was an old incremental job that had been written to the wrong storage. After deleting that incremental, everything went okay.
hostedpower (reporter) 2016-12-15 16:27 ~0002474

Interesting. How did you fix this?

What I also wonder: is there a quick way to reset all the backups for a single client?

Often disk space gets exceeded for a client, and all backup jobs for it seem to go bad afterwards :|
seitan (reporter) 2016-12-15 16:49 ~0002475

The problem was that I had an old client with the same name, which used ordinary full/diff/inc backups, before I moved to "Always Incremental".
The consolidation job tried to merge job 4264 from that old client. It was located in the wrong storage (which happened to be the same storage where the consolidation jobs are stored), so Bareos was unable to read from and write to the same device at the same time.
`delete jobid=4264` fixed the issue.
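For reference, a minimal bconsole sketch of how such a stray job can be located before deleting it; the client name and job id are the ones from this report, and `list jobmedia` simply shows which volumes (and therefore which media type) a job was written to:

*list jobs client=server
*list jobmedia jobid=4264
*delete jobid=4264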
seitan (reporter) 2016-12-19 07:37 ~0002478
Last edited: 2016-12-19 07:38

It seems that the problem still persists. It looks like the consolidation job tries to merge jobs from older consolidations:

2016-12-19 06:56:07 bareos-dir JobId 4334: Start Virtual Backup JobId 4334, Job=server_root.2016-12-19_06.56.06_56
2016-12-19 06:56:07 bareos-dir JobId 4334: Consolidating JobIds 4318,4241,4251
2016-12-19 06:56:33 bareos-dir JobId 4334: Bootstrap records written to /var/lib/bareos/bareos-dir.restore.6.bsr
2016-12-19 06:56:34 bareos-dir JobId 4334: Using Device "DevInc" to read.
2016-12-19 06:56:34 bareos-dir JobId 4334: Using Device "DevConsolidated" to write.
2016-12-19 06:56:34 bareos-sd JobId 4334: acquire.c:114 Changing read device. Want Media Type="File-Consolidated" have="File-Inc"
  device="DevInc" (/bareos/inc/)
2016-12-19 06:56:34 bareos-sd JobId 4334: Fatal error: acquire.c:168 No suitable device found to read Volume "Consolidated-0778"
2016-12-19 06:56:34 bareos-sd JobId 4334: Elapsed time=411700:56:34, Transfer rate=0 Bytes/second
2016-12-19 06:56:34 bareos-dir JobId 4334: Error: Bareos bareos-dir 16.2.4 (01Jul16):


JobId 4318 itself was also a consolidation job:


2016-12-17 07:21:10 bareos-dir JobId 4318: Start Virtual Backup JobId 4318, Job=server_root.2016-12-17_07.21.08_40
2016-12-17 07:21:10 bareos-dir JobId 4318: Consolidating JobIds 4217,4228
2016-12-17 07:21:57 bareos-dir JobId 4318: Bootstrap records written to /var/lib/bareos/bareos-dir.restore.1.bsr
2016-12-17 07:21:57 bareos-dir JobId 4318: Using Device "DevInc" to read.
2016-12-17 07:21:57 bareos-dir JobId 4318: Using Device "DevConsolidated" to write.
2016-12-17 07:21:57 bareos-sd JobId 4318: Ready to read from volume "Incremental-0082" on device "DevInc" (/bareos/inc/).
2016-12-17 07:21:57 bareos-sd JobId 4318: Volume "Consolidated-0778" previously written, moving to end of data.
2016-12-17 07:21:57 bareos-sd JobId 4318: Ready to append to end of Volume "Consolidated-0778" size=1042555089
2016-12-17 07:21:57 bareos-sd JobId 4318: Forward spacing Volume "Incremental-0082" to file:block 1:704841942.
2016-12-17 07:22:11 bareos-sd JobId 4318: End of Volume at file 1 on device "DevInc" (/bareos/inc/), Volume "Incremental-0082"

hostedpower (reporter) 2016-12-19 10:07 ~0002479

I have the feeling that it's not very reliable. It doesn't seem to reliably check whether previous backups exist, etc.

It should check, and if all else fails, fall back to a new full backup in the worst case (or at least have an option to enable something like that).
seitan (reporter) 2016-12-19 10:15 ~0002480

But in my case of "Always Incremental" it somehow tries to consolidate already consolidated incremental backups.
Consolidation from the "Incremental" pool to the "Consolidated" pool works fine, but when it tries to consolidate the "Consolidated" pool into itself, it fails. I'm not sure why it tries to consolidate already consolidated jobs.
seitan (reporter) 2016-12-20 07:37 ~0002481

Is there any chance that the problem is in the "Next Pool = Consolidated" definition of the "Incremental" pool?
Maybe the incrementals are moved from the Incremental to the Consolidated pool, and when consolidation occurs, Bareos cannot read from and write to the same "Consolidated" pool simultaneously?
Though this is what is given in the example in the Bareos documentation.
seitan (reporter) 2016-12-30 06:57 ~0002490

Okay, I retried building the always incremental backups from scratch. It still fails on the second consolidation with:

Using Device "DevConsolidated" to write.
2016-12-30 07:52:55 bareos-sd JobId 4459: acquire.c:114 Changing read device. Want Media Type="File-Consolidated" have="File-Inc"
  device="DevInc" (/bareos/inc/)
2016-12-30 07:52:55 bareos-sd JobId 4459: Fatal error: acquire.c:168 No suitable device found to read Volume "Consolidated-0789"

Moving back to the usual full/diff/inc until this bug is fixed, or until the documentation on "Always Incremental" backups gives a working example.
Ectopod (reporter) 2017-01-04 21:52 ~0002498

We've run into this issue, too.

This seems to be caused by Bareos being unable to read from and write to a single storage device at the same time. Per the documentation, each job is set up to store its Full backups in the AI-Consolidated pool, like so:

Job {
    Name = "BackupClient1"
    JobDefs = "DefaultJob"
 
    # Always incremental settings
    AlwaysIncremental = yes
    AlwaysIncrementalJobRetention = 7 days
 
    Accurate = yes
 
    Pool = AI-Incremental
    Full Backup Pool = AI-Consolidated # <-----
}

However, the AI-Incremental pool is set to also use the AI-Consolidated pool when a consolidation job is run:

Pool {
  Name = AI-Incremental
  Pool Type = Backup
  Recycle = yes
  Auto Prune = no
  Maximum Volume Bytes = 50G
  Label Format = "AI-Incremental-"
  Volume Use Duration = 23h
  Storage = File1
  Next Pool = AI-Consolidated # consolidated jobs go to this pool
}

When the first Incremental job is run, it is upgraded to a Full job and stored in the AI-Consolidated pool. The Incremental backups then proceed until the retention period is reached, after which the first consolidation job is triggered. This attempts to write a new Virtual Full backup to the same storage device from which it also has to read the original Full backup, and the job fails.

At the very least, this means the documentation provides examples that cannot possibly work. However, I can't see how the always-incremental backup scheme could ever work if Bareos is incapable of reading from the same device to which it is writing. Doesn't it need to do that every time it consolidates?
joergs (developer) 2017-01-25 14:31 ~0002528

Instead of using multiple storages, you should be able to use one Storage with multiple devices as described here: http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
When doing so, Bareos can read and write volumes by using multiple devices.
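A minimal sketch of that approach, adapted to the configuration from this report. The second device name DevConsolidated2 is assumed for illustration and would have to be defined in the storage daemon with the same Media Type and Archive Device as DevConsolidated; see dpcushing's note below for a complete working example:

Storage {
  Name = StorageConsolidated
  Address = bareos
  Password = "pass"
  Device = DevConsolidated
  Device = DevConsolidated2   # second device with the same Media Type, so the SD can read one Consolidated volume while writing another
  Media Type = File-Consolidated
  Maximum Concurrent Jobs = 7
}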
dpcushing (reporter) 2017-01-26 17:30 ~0002533

One storage with multiple devices is what I've been running, and it works great. Follow the link provided by joergs for the documentation. I'm pasting an example device file below. Also required to make this work is 'Device Reserve By Media Type = yes' in your SD config file, also shown below.

/etc/bareos/bareos.sd/device/FileStorageCons.conf:
Device {
  Name = FileStorageCons1
  Media Type = FileCons
  Archive Device = /BareosBackups/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
Device {
  Name = FileStorageCons2
  Media Type = FileCons
  Archive Device = /BareosBackups/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
.
.
.

/etc/bareos/bareos.sd/storage/bareos-sd.conf:
Storage {
  Name = bareos-sd
  Maximum Concurrent Jobs = 5

  Device Reserve By Media Type = yes
}

/etc/bareos/bareos-dir.d/storage/FileCons.conf:
Storage {
  Name = FileCons
  Address = qco-util
  Password = "PASSWORD"
  Device = FileStorageCons1
  Device = FileStorageCons2
  Device = FileStorageCons3
  Device = FileStorageCons4
  Device = FileStorageCons5
  Media Type = FileCons
  Maximum Concurrent Jobs = 5
}
seitan (reporter) 2017-03-13 09:17 ~0002605

I can confirm that the above configuration works as intended. This issue can now be marked as solved.

Issue History

Date Modified Username Field Change
2016-12-15 07:26 seitan New Issue
2016-12-15 07:57 seitan Note Added: 0002470
2016-12-15 16:27 hostedpower Note Added: 0002474
2016-12-15 16:49 seitan Note Added: 0002475
2016-12-19 07:37 seitan Note Added: 0002478
2016-12-19 07:38 seitan Note Edited: 0002478
2016-12-19 07:38 seitan Note Edited: 0002478
2016-12-19 07:38 seitan Note Edited: 0002478
2016-12-19 10:07 hostedpower Note Added: 0002479
2016-12-19 10:15 seitan Note Added: 0002480
2016-12-20 07:37 seitan Note Added: 0002481
2016-12-30 06:57 seitan Note Added: 0002490
2017-01-04 21:52 Ectopod Note Added: 0002498
2017-01-25 14:31 joergs Note Added: 0002528
2017-01-26 17:30 dpcushing Note Added: 0002533
2017-03-13 09:17 seitan Note Added: 0002605
2017-06-08 13:02 joergs Status new => resolved
2017-06-08 13:02 joergs Resolution open => no change required
2017-06-08 13:02 joergs Assigned To => joergs
2017-06-08 13:03 joergs Status resolved => closed
2017-06-08 13:03 joergs Assigned To joergs =>