View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0001378 | bareos-core | director | public | 2021-08-11 00:59 | 2023-07-27 15:53 |
Reporter | progserega | Assigned To | bruno-at-bareos | ||
Priority | high | Severity | major | Reproducibility | always |
Status | closed | Resolution | fixed | ||
Platform | Linux | OS | Debian | OS Version | 10 |
Product Version | 20.0.2 | ||||
Summary | 0001378: director shows different status for job |
Description | Job 76069 is reported as failed ('f') by `list jobs`, but `status dir` shows the same job as still running:

    *list jobs client=rsk40srv018-fd
    +--------+---------------------------------+----------------+---------------------+------+-------+----------+----------+-----------+
    | jobid  | name                            | client         | starttime           | type | level | jobfiles | jobbytes | jobstatus |
    +--------+---------------------------------+----------------+---------------------+------+-------+----------+----------+-----------+
    | 76,090 | backup-rsk40srv018-fsDocs-obmen | rsk40srv018-fd | 2021-08-10 21:00:01 | B    | I     | 0        | 0        | f         |
    | 76,069 | backup-rsk40srv018-fsDocs-upr   | rsk40srv018-fd | 2021-08-11 05:01:30 | B    | I     | 0        | 0        | f         |
    +--------+---------------------------------+----------------+---------------------+------+-------+----------+----------+-----------+
    * status dir
    Running Jobs:
    Console connected at 11-aug-2021 08:33
     JobId Level   Name                                                      Status
    ======================================================================
     76052 Increme backup-rsk40srv035-1cAttachments.2021-08-10_21.00.00_43   is running
     76063 Increme backup-rsk40srv035-1cLogsObr.2021-08-10_21.00.00_54       is waiting on max Storage jobs
     76069 Increme backup-rsk40srv018-fsDocs-upr.2021-08-10_21.00.01_00      is running
     76081 Increme backup-rsk40srv018-fsDocs-pues.2021-08-10_21.00.01_13     is waiting on max Storage jobs
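The status printed by `list jobs` comes from the catalog database, while `status dir` reflects the director's in-memory list of running jobs. A minimal sketch of a cross-check against the catalog itself, assuming the PostgreSQL catalog named `bareos` that is queried later in this report and its standard `job` table:

```sql
-- Sketch: show the status the catalog has actually recorded for the two
-- JobIds above, independent of the director's in-memory "Running Jobs" list.
-- Table and column names follow the standard Bareos PostgreSQL schema.
SELECT jobid, name, jobstatus, starttime, endtime
FROM job
WHERE jobid IN (76069, 76090);
```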
Additional Information |

    *list joblog jobid=76069
    2021-08-11 05:01:29 bareos-dir JobId 76069: shell command: run BeforeJob "ssh backup@bareos-fs-sd.prim.corp.com /scripts/backup/bacula_sd_free_space_check /dev/sda"
    2021-08-11 05:01:29 bareos-dir JobId 76069: BeforeJob: on device /dev/sda (mount point: /mnt/msa2040/backup) free: 22635 Gb.
    2021-08-11 05:01:29 bareos-dir JobId 76069: Start Backup JobId 76069, Job=backup-rsk40srv018-fsDocs-upr.2021-08-10_21.00.01_00
    2021-08-11 05:01:29 bareos-dir JobId 76069: Connected Storage daemon at bareos-fs-sd.prim.corp.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
    2021-08-11 05:01:29 bareos-dir JobId 76069: Using Device "ObmenStorage" to write.
    2021-08-11 05:01:29 bareos-dir JobId 76069: Connected Client: rsk40srv018-fd at rsk40srv018.corp.com:9104, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
    2021-08-11 05:01:29 bareos-dir JobId 76069: Handshake: Immediate TLS
    2021-08-11 05:01:29 bareos-dir JobId 76069: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
    2021-08-11 05:01:30 bareos-sd JobId 76069: Volume "rsk40srv018-fsPool-upr-2020.12.19-28" previously written, moving to end of data.
    2021-08-11 05:01:30 bareos-sd JobId 76069: Ready to append to end of Volume "rsk40srv018-fsPool-upr-2020.12.19-28" size=4418893262
    2021-08-11 05:01:29 rsk40srv018-fd JobId 76069: Created 18 wildcard excludes from FilesNotToBackup Registry key
    2021-08-11 05:01:30 rsk40srv018-fd JobId 76069: Connected Storage daemon at bareos-fs-sd.prim.corp.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
    2021-08-11 05:01:32 rsk40srv018-fd JobId 76069: Generate VSS snapshots. Driver="Win64 VSS", Drive(s)="J"
    2021-08-11 05:01:32 rsk40srv018-fd JobId 76069: VolumeMountpoints are not processed as onefs = yes.
    2021-08-11 07:01:01 bareos-sd JobId 76069: User defined maximum volume capacity 4,650,000,000 exceeded on device "ObmenStorage" (/mnt/msa2040/backup/bareos//ObmenStorage).
    2021-08-11 07:01:01 bareos-sd JobId 76069: End of medium on Volume "rsk40srv018-fsPool-upr-2020.12.19-28" Bytes=4,649,975,246 Blocks=72,080 at 11-aug-2021 07:01.
    2021-08-11 07:01:02 bareos-sd JobId 76069: Recycled volume "rsk40srv018-fsPool-upr-2020.12.19-29" on device "ObmenStorage" (/mnt/msa2040/backup/bareos//ObmenStorage), all previous data lost.
    2021-08-11 07:01:02 bareos-sd JobId 76069: New volume "rsk40srv018-fsPool-upr-2020.12.19-29" mounted on device "ObmenStorage" (/mnt/msa2040/backup/bareos//ObmenStorage) at 11-aug-2021 07:01.
    *
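The joblog above stops after the storage daemon mounts the recycled volume at 07:01 and contains no termination message, even though the catalog already marks the job as failed. A hedged sketch of how the remaining catalog messages for this job could be inspected directly, assuming the standard Bareos PostgreSQL schema in which job messages live in a `log` table keyed by `jobid` (verify the table and column names against your catalog version):

```sql
-- Sketch: list every message the catalog recorded for JobId 76069, in order,
-- to see whether a termination message was ever written for this job.
-- Assumes the standard Bareos "log" table (logid, jobid, time, logtext).
SELECT time, logtext
FROM log
WHERE jobid = 76069
ORDER BY time;
```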
Tags | No tags attached. | ||||
Note 0004210 (progserega, 2021-08-11 01:00):

    *version
    bareos-dir Version: 20.0.1 (02 March 2021) Debian GNU/Linux 10 (buster) debian Debian GNU/Linux 10 (buster)
    *

    root@bareos:/var/log/bareos# dpkg-query -l|grep bareos
    ii  bareos-bconsole             20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - text console
    ii  bareos-client               20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - client metapackage
    ii  bareos-common               20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - common files
    ii  bareos-database-common      20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - common catalog files
    ii  bareos-database-postgresql  20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - PostgreSQL backend
    ii  bareos-database-tools       20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - database tools
    ii  bareos-devel                20.0.0-1                      amd64  Backup Archiving Recovery Open Sourced - development files
    ii  bareos-director             20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - director daemon
    ii  bareos-filedaemon           20.0.1-3                      amd64  Backup Archiving Recovery Open Sourced - file daemon
    ii  bareos-fuse                 0.1.1498568444.fb2539c-13.10  all    Backup Archiving Recovery Open Sourced - FUSE
    ii  bareos-webui                20.0.1-3                      all    Backup Archiving Recovery Open Sourced - webui
    ii  python-bareos               20.0.0-1                      all    Backup Archiving REcovery Open Sourced - python module (Python 2)
    root@bareos:/var/log/bareos#
Note 0004216 (progserega, 2021-08-23 14:55):

For example: job 77560 is waiting on storage, but in `list jobs` and in the database it has status fail:

    *status dir
    Running Jobs:
    Console connected at 23-aug-2021 22:47
     JobId Level   Name                                                  Status
    ======================================================================
     77558 Increme backup-im.rs.int-LinuxRoot.2021-08-23_21.00.02_22     is running
     77560 Increme backup-vs07.rs.int-LinuxRoot.2021-08-23_21.00.02_24   is waiting on max Storage jobs

    *list jobs client=vs07.rs.int-fd
    +--------+------------------------------+----------------+---------------------+------+-------+----------+------------+-----------+
    | jobid  | name                         | client         | starttime           | type | level | jobfiles | jobbytes   | jobstatus |
    +--------+------------------------------+----------------+---------------------+------+-------+----------+------------+-----------+
    | 77,277 | backup-vs07.rs.int-LinuxRoot | vs07.rs.int-fd | 2021-08-20 22:42:11 | B    | I     | 276      | 40,821,418 | T         |
    | 77,385 | backup-vs07.rs.int-LinuxRoot | vs07.rs.int-fd | 2021-08-22 00:53:57 | B    | D     | 585      | 47,029,348 | T         |
    | 77,560 | backup-vs07.rs.int-LinuxRoot | vs07.rs.int-fd | 2021-08-23 21:00:02 | B    | I     | 0        | 0          | f         |
    +--------+------------------------------+----------------+---------------------+------+-------+----------+------------+-----------+

In the database:

    bareos=# select * from job where jobid=77560;
    -[ RECORD 1 ]---+----------------------------------------------------
    jobid           | 77560
    job             | backup-vs07.rs.int-LinuxRoot.2021-08-23_21.00.02_24
    name            | backup-vs07.rs.int-LinuxRoot
    type            | B
    level           | I
    clientid        | 63
    jobstatus       | f
    schedtime       | 2021-08-23 21:00:02
    starttime       | 2021-08-23 21:00:02
    endtime         | 2021-08-23 21:00:02
    realendtime     |
    jobtdate        | 1629716402
    volsessionid    | 0
    volsessiontime  | 0
    jobfiles        | 0
    jobbytes        | 0
    readbytes       | 0
    joberrors       | 0
    jobmissingfiles | 0
    poolid          | 0
    filesetid       | 0
    priorjobid      | 0
    purgedfiles     | 0
    hasbase         | 0
    hascache        | 0
    reviewed        | 0
    comment         |
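A hedged sketch of a query that would surface every occurrence of this mismatch in one pass, using the same `bareos` catalog and `job` table queried above: it lists all jobs the catalog already marks as failed for the current day, and any JobId that also still appears under "Running Jobs" in `status dir` reproduces the problem.

```sql
-- Sketch: jobs recorded as failed ('f') in the catalog for today.
-- Compare the returned JobIds against the "Running Jobs" section of
-- `status dir`; any overlap is the inconsistency described in this report.
SELECT jobid, name, jobstatus, schedtime, starttime, endtime
FROM job
WHERE jobstatus = 'f'
  AND schedtime >= CURRENT_DATE
ORDER BY jobid;
```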
Note 0005269 (bruno-at-bareos, 2023-07-27 15:53):

Please retry with the current Bareos 22.1.0 version.
Date Modified | Username | Field | Change |
---|---|---|---|
2021-08-11 00:59 | progserega | New Issue | |
2021-08-11 01:00 | progserega | Note Added: 0004210 | |
2021-08-23 14:55 | progserega | Note Added: 0004216 | |
2023-07-27 15:53 | bruno-at-bareos | Assigned To | => bruno-at-bareos |
2023-07-27 15:53 | bruno-at-bareos | Status | new => closed |
2023-07-27 15:53 | bruno-at-bareos | Resolution | open => fixed |
2023-07-27 15:53 | bruno-at-bareos | Note Added: 0005269 | |