View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0001023 | bareos-core | director | public | 2018-10-24 22:21 | 2023-07-17 16:14 |
| Reporter | hostedpower | Assigned To | bruno-at-bareos | | |
| Priority | normal | Severity | major | Reproducibility | always |
| Status | closed | Resolution | unable to reproduce | | |
| Product Version | 17.2.7 | | | | |
Summary

0001023: Cleanup of volumes not happening anymore in always incremental schedule
Description

Hi,

This worked properly for a long time, but now we see the volumes building up and building up until the disk gets full :(

Has anything changed in a newer Bareos version? Please see the attachment. I don't think we changed the config at all regarding this, so we're puzzled by what we see.

In this case it's almost every day; we have other cases where there are big gaps now and then. Why is it happening? It creates an overload of volumes and our disks fill up for nothing :|

Your feedback would be greatly appreciated!! :)
Steps To Reproduce

cat jobdefs/HPJobInc.conf

```
JobDefs {
  Name = "HPJobInc"
  Type = Backup
  Level = Full
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Accurate = yes
  Level = Full
  Messages = Standard
  Always Incremental = yes
  Always Incremental Job Retention = 20 days
  Always Incremental Max Full Age = 35 days
  Always Incremental Keep Number = 5
  Maximum Concurrent Jobs = 5
  Run Script {
    RunsWhen = After
    RunsOnFailure = No
    FailJobOnError = No
    RunsOnClient = No
    console = ".bvfs_update jobid=%i"
  }
}
```

cat /etc/bareos/pool-defaults.conf

```
# pool defaults used for always incremental scheme
Pool Type = Backup
# Maximum Volume Jobs = 1     # a new file for each backup that is done
Recycle = yes                 # Bareos can automatically recycle Volumes
Auto Prune = yes              # Prune expired volumes
Volume Use Duration = 23h
Action On Purge = Truncate
```

Storage, pool, job, and client definitions:

```
Storage {
  Name = customerx-incr
  Device = customerx-incr      # bareos-sd Device
  Media Type = customerx
  Address = backup06.xxxx      # backup server fqdn > sent to client sd
  Password = "xxx"             # password for Storage daemon
  Maximum Concurrent Jobs = 1  # required for virtual full
  # @/etc/bareos/storage-defaults.conf
}

Storage {
  Name = customerx-cons
  Device = customerx-cons      # bareos-sd Device
  Media Type = customerx
  Address = backup06.xxxx      # backup server fqdn > sent to client sd
  Password = "xxx"             # password for Storage daemon
  Maximum Concurrent Jobs = 1  # required for virtual full
  # @/etc/bareos/storage-defaults.conf
}

Pool {
  Name = customerx-incr
  Storage = customerx-incr
  LabelFormat = "vol-incr-"
  Next Pool = customerx-cons
  @/etc/bareos/pool-defaults.conf
}

Pool {
  Name = customerx-cons
  Storage = customerx-cons
  LabelFormat = "vol-cons-"
  @/etc/bareos/pool-defaults.conf
}

# web.customer.com
Job {
  Name = "backup-web.customer.com"
  Client = web.customer.com
  Pool = customerx-incr
  Full Backup Pool = customerx-cons
  FileSet = "linux-all-mysql"
  Schedule = "always-inc-cycle-3"
  # Defaults
  JobDefs = "HPJobInc"
}

Client {
  Name = web.customer.com
  Address = web.customer.com
  Password = "ourpass"         # password for FileDaemon
  AutoPrune = yes
}
```
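With an always-incremental setup like the one above, the first thing worth checking is whether pruning runs at all. The following is a minimal sketch, run from a shell on the director host; only the pool name is taken from the configuration above, and the volume name vol-incr-0001 is a placeholder for a name returned by `list volumes`:

```
# Inspect the incremental pool and one of its volumes from bconsole.
# customerx-incr comes from the configuration above; vol-incr-0001 is
# a placeholder volume name.
bconsole <<'EOF'
list volumes pool=customerx-incr
show pool=customerx-incr
llist volume=vol-incr-0001
prune volume=vol-incr-0001 yes
EOF
```

The `llist` output shows the volume's status and retention-related fields, which can then be compared against the pool settings printed by `show`.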
Tags

No tags attached.

Notes

(0003149) hostedpower, 2018-11-07 13:58

Any idea perhaps? We now need to purge tons of volumes manually almost every day, which is getting to be a real hassle at the moment :( As far as I know the system had been working fine for months with a certain version of Bareos :)
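One possible way to avoid the manual purging described in the note above: since pool-defaults.conf already sets `Action On Purge = Truncate`, truncation of volumes that are already purged can be triggered from bconsole and scheduled. This is only a sketch, assuming the pool and storage names from the configuration above:

```
# Truncate volumes in the incremental pool that are already Purged,
# releasing their disk space. Names come from the configuration above.
bconsole <<'EOF'
purge volume action=truncate pool=customerx-incr storage=customerx-incr
EOF
```

The same console command could also be attached to an existing job via a `Run Script { console = "..." }` block, similar to the `.bvfs_update` call already used in HPJobInc.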
(0005173) bruno-at-bareos, 2023-07-06 16:25

Is this still reproducible with a supported OS and the current Bareos version 22.x?

(0005195) bruno-at-bareos, 2023-07-17 16:14

Please retry and report back with a recent OS and Bareos version.
Issue History

Date Modified | Username | Field | Change |
---|---|---|---|
2018-10-24 22:21 | hostedpower | New Issue | |
2018-10-24 22:21 | hostedpower | File Added: purging-of-volumes.png | |
2018-11-07 13:58 | hostedpower | Note Added: 0003149 | |
2023-07-06 16:25 | bruno-at-bareos | Assigned To | => bruno-at-bareos |
2023-07-06 16:25 | bruno-at-bareos | Status | new => feedback |
2023-07-06 16:25 | bruno-at-bareos | Note Added: 0005173 | |
2023-07-17 16:14 | bruno-at-bareos | Status | feedback => closed |
2023-07-17 16:14 | bruno-at-bareos | Resolution | open => unable to reproduce |
2023-07-17 16:14 | bruno-at-bareos | Note Added: 0005195 |