View Issue Details
ID: 1382   Category: [bareos-core] file daemon   Severity: minor   Reproducibility: always   Date Submitted: 2021-09-01 16:58   Last Update: 2022-11-24 16:17
Reporter: Shodan Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: assigned Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Allow to block TLS protocol versions
Description: Allow to block specified TLS protocol versions in config. For example "TLS Protocols = !SSLv2, !SSLv3, !TLSv1"
The insecure TLSv1.0 protocol is enabled by default in the Bareos filedaemon and cannot be disabled, even with "TLS Enable = no" in bareos-fd.conf.
For example "TLS Protocols = !SSLv2, !SSLv3, !TLSv1"

Nmap partial output for client with "TLS Enable = no"
nmap --script ssl-enum-ciphers -p 9102 10.156.103.1
PORT STATE SERVICE
9102/tcp open jetdirect
| ssl-enum-ciphers:
| TLSv1.0:
| ciphers:
Tags:
Steps To Reproduce: Run the bareos filedaemon with "TLS Enable = no"
Run nmap --script ssl-enum-ciphers -p 9102 bareos_fd_host
Additional Information:
System Description
Attached Files:
Notes
(0004245)
arogge   
2021-09-02 14:40   
Would you please share the configuration of the FD, so we can try to reproduce it?
You might also have fallen for the (unpleasant) fact that you need to quote the parameter to TLS Protocols if it contains multiple values, as it needs to be one single string that is passed to OpenSSL.
(0004246)
arogge   
2021-09-02 15:43   
I just tried it myself again.
I can confirm that setting "TLS Enable = no" does not disable TLS support entirely.
However, setting the following forces the use of at least TLS v1.2 (i.e. nmap doesn't show anything besides TLS 1.2 and 1.3 anymore):

TLS Protocol = "!TLSv1, !TLSv1.1"

I understand that we should improve the documentation concerning this and we should probably support multiple values when using TLS Protocol unquoted or at least warn about it.
(0004247)
Shodan   
2021-09-03 09:01   
Thank you!
Please close the ticket, if required.
(0004248)
arogge   
2021-09-03 10:54   
Right now there is no way you could have known how to configure this correctly.
So I think I'll keep the bug open till the documentation has been improved.
(0004258)
Shodan   
2021-09-09 13:29   
FYI
bareos-filedaemon throws an error with TLS Protocol = "!TLSv1, !TLSv1.1"
"SSL routines:SSL_CONF_cmd:bad value"

This line works for me
TLS Protocol = "-TLSv1, -TLSv1.1"
(0004372)
db10   
2021-11-30 14:49   
I have been trying to disable TLS < 1.2 on CentOS 7 hosts.
None of these settings on 20.0.4 work:

I have tried
TLS Protocol = "-TLSv1, -TLSv1.1"
TLS Protocol = "!TLSv1, !TLSv1.1"
TLS Protocol = "TLSv1.2"

No errors. But never any changes.

This also applies to TLS Cipher List on CentOS 8 (C8 seems to use TLSv1.2 by default?); no matter what I fill in, it doesn't error or change the outcome, e.g.

TLS Cipher List = "TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA,TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384,TLS_ECDHE_PSK_WITH_CAMELLIA_128_CBC_SHA256,TLS_ECDHE_PSK_WITH_CAMELLIA_256_CBC_SHA384,TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256"


CENTOS 7
nmap --script ssl-enum-ciphers -p 9102 MyHost

Starting Nmap 7.70 ( https://nmap.org ) at 2021-11-30 13:14 GMT
Nmap scan report for xxxx (172.16.167.8)
Host is up (0.00023s latency).

PORT STATE SERVICE
9102/tcp open jetdirect
| ssl-enum-ciphers:
| TLSv1.0:
| ciphers:
| TLS_PSK_WITH_3DES_EDE_CBC_SHA - unknown
| TLS_PSK_WITH_AES_128_CBC_SHA - unknown
| TLS_PSK_WITH_AES_256_CBC_SHA - unknown
| TLS_PSK_WITH_RC4_128_SHA - unknown
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| Broken cipher RC4 is deprecated by RFC 7465
| TLSv1.1:
| ciphers:
| TLS_PSK_WITH_3DES_EDE_CBC_SHA - unknown
| TLS_PSK_WITH_AES_128_CBC_SHA - unknown
| TLS_PSK_WITH_AES_256_CBC_SHA - unknown
| TLS_PSK_WITH_RC4_128_SHA - unknown
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| Broken cipher RC4 is deprecated by RFC 7465
| TLSv1.2:
| ciphers:
| TLS_PSK_WITH_3DES_EDE_CBC_SHA - unknown
| TLS_PSK_WITH_AES_128_CBC_SHA - unknown
| TLS_PSK_WITH_AES_256_CBC_SHA - unknown
| TLS_PSK_WITH_RC4_128_SHA - unknown
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| Broken cipher RC4 is deprecated by RFC 7465
|_ least strength: unknown

Nmap done: 1 IP address (1 host up) scanned in 0.70 seconds

CENTOS8
nmap --script ssl-enum-ciphers -p 9102 MyHost2
Starting Nmap 7.70 ( https://nmap.org ) at 2021-11-30 13:48 GMT
Nmap scan report for (172.16.167.24)
Host is up (0.00025s latency).

PORT STATE SERVICE
9102/tcp open jetdirect
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_PSK_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
| TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256 (secp256r1) - A
| TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384 (secp256r1) - A
| TLS_ECDHE_PSK_WITH_CAMELLIA_128_CBC_SHA256 (secp256r1) - A
| TLS_ECDHE_PSK_WITH_CAMELLIA_256_CBC_SHA384 (secp256r1) - A
| TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 (secp256r1) - A
| TLS_PSK_WITH_3DES_EDE_CBC_SHA - unknown
| TLS_PSK_WITH_AES_128_CBC_SHA - unknown
| TLS_PSK_WITH_AES_128_CBC_SHA256 - unknown
| TLS_PSK_WITH_AES_128_CCM - unknown
| TLS_PSK_WITH_AES_128_CCM_8 - unknown
| TLS_PSK_WITH_AES_128_GCM_SHA256 - unknown
| TLS_PSK_WITH_AES_256_CBC_SHA - unknown
| TLS_PSK_WITH_AES_256_CBC_SHA384 - unknown
| TLS_PSK_WITH_AES_256_CCM - unknown
| TLS_PSK_WITH_AES_256_CCM_8 - unknown
| TLS_PSK_WITH_AES_256_GCM_SHA384 - unknown
| TLS_PSK_WITH_ARIA_128_GCM_SHA256 - unknown
| TLS_PSK_WITH_ARIA_256_GCM_SHA384 - unknown
| TLS_PSK_WITH_CAMELLIA_128_CBC_SHA256 - unknown
| TLS_PSK_WITH_CAMELLIA_256_CBC_SHA384 - unknown
| TLS_PSK_WITH_CHACHA20_POLY1305_SHA256 - unknown
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
|_ least strength: unknown

Nmap done: 1 IP address (1 host up) scanned in 0.66 seconds
(0004373)
arogge   
2021-11-30 15:04   
I just tested it again and it works as I expected (and found out when I first tested it).
Where exactly do you set the parameters? They need to be in the FD's Client resource (usually in /etc/bareos/bareos-fd.d/client/myself.conf)
(0004374)
db10   
2021-11-30 15:44   
Many thanks arogge!

I had the directives in
/etc/bareos/bareos-fd.d/director/bareos-dir.conf

in Director {} where I have "TLS Require" and "TLS Enable"

However, after your message, I have placed them in

/etc/bareos/bareos-fd.d/client/myself.conf

Client {}

TLS Protocol = "-TLSv1,-TLSv1.1"
does indeed work.

TLS Protocol = "TLSv1.2"
does not however.

Schoolboy error on my part.
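For reference, a minimal sketch combining what worked in this thread (the file path and directive value are taken from the notes above; the client resource name is an assumption and should match your existing configuration):

# /etc/bareos/bareos-fd.d/client/myself.conf
Client {
  Name = bareos-fd                     # keep your existing name
  TLS Protocol = "-TLSv1,-TLSv1.1"     # one quoted string, passed through to OpenSSL
}

After restarting the filedaemon, nmap --script ssl-enum-ciphers -p 9102 <host> should no longer list TLSv1.0 or TLSv1.1.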
(0004375)
arogge   
2021-11-30 15:49   
Never mind. As I noted earlier: the documentation on this is *really* lacking.
(0004834)
bruno-at-bareos   
2022-11-24 16:17   
Hello, just to let you know that we are trying to improve the situation; to make everybody's life easier, we want to document all the steps needed.

So PR1319 https://github.com/bareos/bareos/pull/1319 is an ongoing effort, and if you want to read the built documentation you can go to the link below in a little while, once the PR has finished its first build.

https://download.bareos.org/bareos/experimental/CD/PR-1319/BareosMainReference/TasksAndConcepts/TransportEncryption.html#tls-restricting-protocol-and-cipher

I would appreciate it if some of you would proofread and test the procedure before we merge the PR.

View Issue Details
ID: 1433   Category: [bareos-core] General   Severity: trivial   Reproducibility: have not tried   Date Submitted: 2022-03-14 14:52   Last Update: 2022-11-10 16:54
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 22.0.0
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 22.0.0
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1450   Category: [bareos-core] documentation   Severity: tweak   Reproducibility: always   Date Submitted: 2022-04-20 10:12   Last Update: 2022-11-10 16:53
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Wrong link to git hub
Description: The GH link in
https://docs.bareos.org/TasksAndConcepts/Plugins.html#python-fd-plugin
points to:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/options-plugin-sample
But the correct link is:
https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/bareos_option_example
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004572)
bruno-at-bareos   
2022-04-20 11:44   
Thanks for your report.
We have a fix in progress for that in https://github.com/bareos/bareos/pull/1165
(0004573)
bruno-at-bareos   
2022-04-21 10:21   
PR1165 merged (master), PR1167 Bareos-21 in progress
(0004576)
bruno-at-bareos   
2022-04-21 15:16   
Fix for bareos-21 (default) documentation has been merged too.

View Issue Details
ID: 1445   Category: [bareos-core] bconsole   Severity: minor   Reproducibility: always   Date Submitted: 2022-03-31 08:35   Last Update: 2022-11-10 16:52
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.1.3  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact: yes
bareos-19.2: action: none
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Quotes are missing at the director name on export
Description: When calling configure export client="Foo" in the console, the quotes around the director name are missing in the exported file.
Instead of:
Director {
  Name = "Bareos Director"
}
this is exported:
Director {
  Name = Bareos Director
}

As written in the documentation, quotes must be used when the string contains a space.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004562)
bruno-at-bareos   
2022-03-31 10:06   
Hello, I have just confirmed the missing quotes on export.
But even if spaces are allowed in such resource names, we really advise you to avoid them; they will hurt you in a lot of situations.
Spaces in names, for example, also don't work well with autocompletion in bconsole, etc.

It is safer to treat the resource Name like an FQDN, using only ASCII alphanumeric characters and .-_ as special characters.


Regards
(0004577)
bruno-at-bareos   
2022-04-25 16:49   
PR1171 in progress.
(0004590)
bruno-at-bareos   
2022-05-04 17:10   
PR-1171 merged + backport for 21 (PR-1173) merged;
will appear in the next 21.1.3 release.

View Issue Details
ID: 1429   Category: [bareos-core] documentation   Severity: major   Reproducibility: have not tried   Date Submitted: 2022-02-14 16:29   Last Update: 2022-11-10 16:52
Reporter: abaguinski Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Mysql to Postgres migration howto doesn't explain how to initialise the postgres database
Description: I'm trying to figure out how to migrate the catalog from mysql to postgres but I think I'm missing something. The howto (https://docs.bareos.org/Appendix/Howtos.html#prepare-the-new-database) suggests: "Firstly, create a new PostgreSQL database as described in Prepare Bareos database" and links to this document: "https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#prepare-bareos-database", which in turn instructs to run a series of commands that would initialize the database (https://docs.bareos.org/IntroductionAndTutorial/InstallingBareos.html#id9):

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
su postgres -c /usr/lib/bareos/scripts/make_bareos_tables
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges

However, these commands assume that I mean the currently configured MySQL catalog and fail because the MySQL catalog backend is deprecated:

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
Creating mysql database
The MySQL database backend is deprecated. Please use PostgreSQL instead.
Creating of bareos database failed.

Does that mean I first have to "Add the new PostgreSQL database to the current Bareos Director configuration" (second sentence of the Howto section) and only then go back to the first sentence? Shouldn't the sentences be swapped then (except for "Firstly, ")? And will the create_bareos_database understand which catalog I mean when I configure two catalogs at the same time?

Tags:
Steps To Reproduce: 1. Install bareos 19 with mysql catalog
2. upgrade to bareos 20
3. try to follow the howto exactly as it is written
Additional Information:
Attached Files:
Notes
(0004527)
bruno-at-bareos   
2022-02-24 15:56   
I've been able to reproduce the problem, which is due to missing keywords in the documentation (passing the dbdriver to the scripts).

At the PostgreSQL create database stage, could you retry using these commands:

  su - postgres /usr/lib/bareos/scripts/create_bareos_database postgresql
  su - postgres /usr/lib/bareos/scripts/make_bareos_tables postgresql
  su - postgres /usr/lib/bareos/scripts/grant_bareos_privileges postgresql

After that you should be able to use bareos-dbcopy as documented.
Please confirm this works for you; I will then propose an update to the documentation.
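For reference, a sketch of the copy step that follows (the catalog resource names are placeholders; bareos-dbcopy takes the source and destination catalog names as defined in the director configuration, and is typically run while no jobs are writing to the catalog):

  bareos-dbcopy MyCatalogMysql MyCatalogPsql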
(0004528)
abaguinski   
2022-02-25 08:51   
Hi

Thanks for your reaction!

In the meantime we were able to migrate to postgres with a slight difference in the order of steps: 1) added the new catalog resource to the director configuration, 2) created and initialized the postgres database using these scripts. Indeed we found that the 'postgresql' argument was necessary.

Since we have done it already in this order I unfortunately cannot confirm if only adding the argument was enough (i.e. would the scripts with extra argument work without the catalog resource)

Greetings,
Artem
(0004529)
bruno-at-bareos   
2022-02-28 09:29   
Thanks for your feedback,
Yes the script would have worked without the second catalog resource, when you give them the dbtype.

I will update the documentation to be more precise in that respect.
(0004530)
bruno-at-bareos   
2022-03-01 15:33   
PR#1093 and PR#1094 are currently in review.
(0004543)
bruno-at-bareos   
2022-03-21 10:56   
PR1094 for updating documentation has been merged.

View Issue Details
ID: 1480   Category: [bareos-core] documentation   Severity: minor   Reproducibility: always   Date Submitted: 2022-08-30 12:33   Last Update: 2022-11-10 16:51
Reporter: crameleon Platform: Bareos 21.1.3  
Assigned To: frank OS: SUSE Linux Enterprise Server  
Priority: low OS Version: 15 SP4  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: password string length limitation
Description: Hi,

if I try to log into the web console with the following configuration snippet active:

Console {
  Name = "mygreatusername"
  Password = "SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq"
  Profile = "mygreatwebuiprofile"
  TLS Enable = No
}

The web UI prints the following message:

"Please provide a director, username and password."

If I change the password line to something more simple:

Console {
  Name = "suse-superuser"
  Password = "12345"
  Profile = "webui-superadmin"
  TLS Enable = No
}

Login works as expected.

Since the system does not seem to print any error messages about invalid passwords in its configuration, it would be nice if the allowed characters and lengths (and possibly a sample `pwgen -r <forbidden characters> <length> 1` command) were documented.

Best,
Georg
Tags:
Steps To Reproduce: 1. Configure a web UI user with a complex password such as SX~E5eMw21shy%z!!B!cZ0PiQ)ex+FOn$Q-A&iv~B3,x|dSGqxsP&4}Zm6iF;[6c6#>LcAvFArcL%d|J}Ae*NB.g8S?$}gJ4mqUH:6aS+Jh6Vtv^Qhno7$>FW24|t2gq
2. Copy paste username and password into the browser
3. Try to log in
Additional Information:
Attached Files:
Notes
(0004737)
bruno-at-bareos   
2022-08-31 11:16   
Thanks for your report; the title is a bit misleading, as the problem seems to be present only with the webui.
Having a strong password like the one described works perfectly with dir<->bconsole, for example.

We are now checking where the problem really occurs.
(0004738)
bruno-at-bareos   
2022-08-31 11:17   
Long or complicated passwords are truncated during the POST operation of the login form.
Those passwords work well with bconsole, for example.
(0004739)
crameleon   
2022-08-31 11:28   
Apologies, I did not consider it to be specific to the webui. Thanks for looking into this! Maybe the POST truncation could be adjusted in my Apache web server?
(0004740)
bruno-at-bareos   
2022-08-31 11:38   
Further research has shown that the length is what matters: the password for the webui console should be no longer than 64 characters.
Maybe you can confirm this on your installation as well, so that when our devs check this the symptoms are described more precisely.
(0004741)
crameleon   
2022-09-02 19:00   
Can confirm, with 64 characters it works fine!
(0004742)
crameleon   
2022-09-02 19:02   
And I can also confirm, with one more character, so 65 in total, it returns the "Please provide a director, username and password." message.
(0004744)
frank   
2022-09-08 15:23   
(Last edited: 2022-09-08 16:33)
The form data input filter for the password input is set to validate for a password length between 1 and 64. We can simply remove the max value from the filter to avoid problems like this, or set it to a value corresponding to what is allowed in configuration files.
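In the meantime, a sketch of generating a password that stays within the 64-character limit confirmed above (plain pwgen invocation; adjust the length once the filter is changed):

  pwgen -s 64 1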
(0004747)
frank   
2022-09-13 18:11   
Fix committed to bareos master branch with changesetid 16581.

View Issue Details
ID: 1489   Category: [bareos-core] webui   Severity: minor   Reproducibility: always   Date Submitted: 2022-11-02 06:23   Last Update: 2022-11-09 14:11
Reporter: dimmko Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 22.04  
Status: resolved Product Version: 21.1.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: broken storage pool link
Description: Hello!
Sorry for my very bad English!

I get an error when I go to see the details at
bareos-webui/pool/details/Diff

Tags:
Steps To Reproduce: 1) login in webui
2) click on jobid
3) click on "+"
4) click on pool - Full (for example).
Additional Information: Error:

An error occurred
An error occurred during execution; please try again later.
Additional information:
Exception
File:
/usr/share/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php:94
Message:
Missing argument.
Stack trace:
#0 /usr/share/bareos-webui/module/Pool/src/Pool/Controller/PoolController.php(137): Pool\Model\PoolModel->getPool()
#1 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Pool\Controller\PoolController->detailsAction()
#2 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch()
#3 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
#4 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
#5 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger()
#6 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch()
#7 [internal function]: Zend\Mvc\DispatchListener->onDispatch()
#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()
#9 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners()
#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger()
#11 /usr/share/bareos-webui/public/index.php(46): Zend\Mvc\Application->run()
#12 {main}
Attached Files: bareos_webui_error.png (63,510 bytes) 2022-11-02 06:23
https://bugs.bareos.org/file_download.php?file_id=538&type=bug
Notes
(0004821)
bruno-at-bareos   
2022-11-03 10:18   
What is needed to understand the error is your pool configuration; also, maybe you can use your browser console to log the POST and GET responses and headers.
If possible, please also check the php-fpm log (if used) and the Apache logs (access and error) when the problem occurs.

Thanks.
(0004824)
dimmko   
2022-11-07 09:01   
(Last edited: 2022-11-07 09:05)
bruno-at-bareos, thanks for your comment.

1) my pool - Diff
Pool {
  Name = Diff
  Pool Type = Backup
  RecyclePool = Diff
  Purge Oldest Volume = yes
  Recycle = no
  Recycle Oldest Volume = no
  AutoPrune = no
  Volume Retention = 21 days
  ActionOnPurge = Truncate
  Maximum Volume Jobs = 1
  Label Format = "${Client}_${Level}_${Pool}.${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}-${Minute:p/2/0/r}_${JobId}"
}

apache2 access.log
[07/Nov/2022:10:40:58 +0300] "GET /pool/details/Diff HTTP/1.1" 500 3225 "http://192.168.5.16/job/?period=1&status=Success" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"

apache error.log
[Mon Nov 07 10:50:09.844798 2022] [php:warn] [pid 1340] [client 192.168.1.13:61800] PHP Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /usr/share/bareos-webui/vendor/zendframework/zend-stdlib/src/ArrayObject.php on line 426, referer: http://192.168.5.16/job/?period=1&status=Success


In Chrome (103):
General:
Request URL: http://192.168.5.16/pool/details/Diff
Request Method: GET
Status Code: 500 Internal Server Error
Remote Address: 192.168.5.16:80
Referrer Policy: strict-origin-when-cross-origin

Response Headers:
HTTP/1.1 500 Internal Server Error
Date: Mon, 07 Nov 2022 07:59:54 GMT
Server: Apache/2.4.52 (Ubuntu)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Length: 2927
Connection: close
Content-Type: text/html; charset=UTF-8

Request Headers:
GET /pool/details/Diff HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7
Cache-Control: no-cache
Connection: keep-alive
Cookie: bareos=o87i7ftkdsf2r160k2j0g5vic2
DNT: 1
Host: 192.168.5.16
Pragma: no-cache
Referer: http://192.168.5.16/job/?period=1&status=Success
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
(0004825)
dimmko   
2022-11-07 09:18   
Enable display_error in php

[Mon Nov 07 11:17:57.573002 2022] [php:error] [pid 1545] [client 192.168.1.13:63174] PHP Fatal error: Uncaught Zend\\Session\\Exception\\InvalidArgumentException: 'session.name' is not a valid sessions-related ini setting. in /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/SessionConfig.php:90\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(266): Zend\\Session\\Config\\SessionConfig->setStorageOption()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-session/src/Config/StandardConfig.php(114): Zend\\Session\\Config\\StandardConfig->setName()\n#2 /usr/share/bareos-webui/module/Application/Module.php(154): Zend\\Session\\Config\\StandardConfig->setOptions()\n#3 [internal function]: Application\\Module->Application\\{closure}()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(939): call_user_func()\n#5 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#9 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#10 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#11 [internal function]: Application\\Module->onBootstrap()\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#14 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#15 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#16 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#17 {main}\n\nNext Zend\\ServiceManager\\Exception\\ServiceNotCreatedException: An exception was raised while creating "Zend\\Session\\SessionManager"; no instance returned in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php:946\nStack trace:\n#0 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(1099): Zend\\ServiceManager\\ServiceManager->createServiceViaCallback()\n#1 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(638): Zend\\ServiceManager\\ServiceManager->createFromFactory()\n#2 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(598): Zend\\ServiceManager\\ServiceManager->doCreate()\n#3 /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php(530): Zend\\ServiceManager\\ServiceManager->create()\n#4 /usr/share/bareos-webui/module/Application/Module.php(82): Zend\\ServiceManager\\ServiceManager->get()\n#5 /usr/share/bareos-webui/module/Application/Module.php(42): Application\\Module->initSession()\n#6 [internal function]: 
Application\\Module->onBootstrap()\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func()\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners()\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(157): Zend\\EventManager\\EventManager->trigger()\n#10 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(261): Zend\\Mvc\\Application->bootstrap()\n#11 /usr/share/bareos-webui/public/index.php(46): Zend\\Mvc\\Application::init()\n#12 {main}\n thrown in /usr/share/bareos-webui/vendor/zendframework/zend-servicemanager/src/ServiceManager.php on line 946, referer: http://192.168.5.16/job/?period=1&status=Success
(0004826)
bruno-at-bareos   
2022-11-07 09:55   
I was able to reproduce it; what is funny is that if you go to storage -> pool tab -> pool name, it works.
We will transfer that to the developers.
(0004827)
bruno-at-bareos   
2022-11-07 09:57   
There's a subtle difference in the URL being called:

via storage -> pool -> poolname the URL is bareos-webui/pool/details/?pool=Full
via jobid -> "+" -> details -> pool it is bareos-webui/pool/details/Full
-> which triggers the "Missing argument" error
(0004828)
dimmko   
2022-11-07 10:34   
bruno-at-bareos, your method works, thanks.
(0004830)
frank   
2022-11-08 15:11   
Fix committed to bareos master branch with changesetid 16853.

View Issue Details
ID: 1357   Category: [bareos-core] director   Severity: crash   Reproducibility: have not tried   Date Submitted: 2021-05-18 10:53   Last Update: 2022-11-09 14:09
Reporter: harm Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-dir: ERROR in lib/mem_pool.cc:215 Failed ASSERT: obuf
Description: Hello folks,

when I try to make a long term copy of an always incremental backup the Bareos director crashes.

Version: 20.0.1 (02 March 2021) Ubuntu 20.04.1 LTS

Please let me know what more information you need.

Best regards
Harm
Tags:
Steps To Reproduce: Follow the instructions of https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html
Additional Information:
Attached Files:
Notes
(0004130)
harm   
2021-05-19 15:15   
The problem seems to occur when a client is selected. I don't seem to have quite grasped the concept yet, but the error should be handled?
(0004149)
arogge   
2021-06-09 17:48   
We need a meaningful backtrace to debug that. Please install a debugger and the debug packages (or tell me what system your director runs on so I can provide the commands) and reproduce the issue, so we can see what goes wrong.
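For illustration, a rough sketch of how such a backtrace could be captured on Ubuntu 20.04 (the debug package name is an assumption and may differ depending on the repository in use):

  # install gdb and the Bareos debug symbols
  apt-get install gdb bareos-dbg
  # attach to the running director, then trigger the long term copy job
  gdb -p $(pidof bareos-dir)
  (gdb) continue
  # ... once the failed ASSERT aborts the director, gdb stops ...
  (gdb) thread apply all bt full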

View Issue Details
ID: 1490   Category: [bareos-core] General   Severity: trivial   Reproducibility: have not tried   Date Submitted: 2022-11-09 12:01   Last Update: 2022-11-09 12:05
Reporter: arogge_adm Platform:  
Assigned To: arogge OS:  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 20.0.8
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 20.0.8
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1492   Category: [bareos-core] General   Severity: trivial   Reproducibility: have not tried   Date Submitted: 2022-11-09 12:01   Last Update: 2022-11-09 12:01
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 20.0.9
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 20.0.9
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1491   Category: [bareos-core] General   Severity: trivial   Reproducibility: have not tried   Date Submitted: 2022-11-09 12:01   Last Update: 2022-11-09 12:01
Reporter: arogge_adm Platform:  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Release Bareos 19.2.14
Description: This ticket acts as a tracker ticket to collect information about releasing Bareos 19.2.14
Tags: release
Steps To Reproduce:
Additional Information:
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: 1487   Category: [bareos-core] webui   Severity: text   Reproducibility: have not tried   Date Submitted: 2022-10-13 14:21   Last Update: 2022-11-04 11:54
Reporter: fcolista Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: feedback Product Version: 21.1.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: webui support for PHP 8.1
Description: Hello. I'm the maintainer of BareOS for Alpine Linux.
We are almost ready to go ahead with the new release of Alpine 3.17, where we are going to drop PHP 8 support in favor of PHP 8.1.
Can you please confirm that in BareOS webui 21.1.4 PHP 8.1 is fully supported?
Thank you.

.: Francesco
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004820)
fcolista   
2022-10-31 14:57   
Any update on this, please?
(0004823)
frank   
2022-11-04 11:54   
> Can you please confirm that in BareOS webui 21.1.4 PHP 8.1 is fully supported?

Yes, currently there are no known issues that break functionality.

View Issue Details
ID: 1488   Category: [bareos-core] file daemon   Severity: block   Reproducibility: always   Date Submitted: 2022-10-22 19:29   Last Update: 2022-11-03 10:19
Reporter: tuxmaster Platform: x86_64  
Assigned To: bruno-at-bareos OS: RHEL  
Priority: urgent OS Version: 9  
Status: acknowledged Product Version: 21.1.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: PostgreSQL Plugin fails on PostgreSQL 15
Description: Because the backup functions were renamed, the plugin can't back up anything.
See:
https://www.postgresql.org/docs/15/functions-admin.html
https://www.postgresql.org/docs/current/release-15.html

Functions pg_start_backup()/pg_stop_backup() have been renamed to pg_backup_start()/pg_backup_stop(),
and the functions pg_backup_start_time() and pg_is_in_backup() have been removed.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004818)
tuxmaster   
2022-10-22 21:12   
Also, the file backup_label is not created any more.
It looks like the backup process has changed.
https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-BASE-BACKUP
See section 26.3.3:
pg_backup_stop returns two fields that have to be saved into files by the backup tool (field 2 -> backup_label, field 3 -> tablespace_map).
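For reference, a sketch of the renamed calls in non-exclusive mode (arguments as in the PostgreSQL documentation linked above):

  -- PostgreSQL 14 and earlier
  SELECT pg_start_backup('bareos', false, false);
  -- ... copy the data directory ...
  SELECT * FROM pg_stop_backup(false);

  -- PostgreSQL 15 and later
  SELECT pg_backup_start('bareos', false);
  -- ... copy the data directory ...
  SELECT * FROM pg_backup_stop(true);
  -- the returned labelfile and spcmapfile columns must be written out
  -- as backup_label and tablespace_map by the backup tool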
(0004822)
bruno-at-bareos   
2022-11-03 10:19   
Seen by a developer, but a PR from outside is welcome too ;-)

View Issue Details
ID: 854   Category: [bareos-core] director   Severity: tweak   Reproducibility: have not tried   Date Submitted: 2017-09-21 10:21   Last Update: 2022-10-11 09:43
Reporter: hostedpower Platform: Linux  
Assigned To: stephand OS: Debian  
Priority: normal OS Version: 8  
Status: assigned Product Version: 16.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Problem with always incremental virtual full
Description: Hi,


All of a sudden we have issues with the virtual full (for consolidation) no longer working.

We have 2 pools for each customer. One is for the full (consolidate) and the other for the incremental.

We used to have the option to limit a single job to a single volume; we removed that a while ago, so maybe there is a relation.

We also had to downgrade from 16.2.6 to 16.2.5 because of the MySQL slowness issues; that happened recently, so that's also a possibility.

We have the feeling this software is not very reliable or at least very complex to get it somewhat stable.

PS: The volume it asks for is there, but it doesn't want to use it :(

We used this config for a long time without this particular issue; the only recent changes were adding

        Action On Purge = Truncate
and commenting out:
# Maximum Volume Jobs = 1 # a new file for each backup that is done
Tags:
Steps To Reproduce: Config:



Storage {
    Name = customerx-incr
    Device = customerx-incr # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

Storage {
    Name = customerx-cons
    Device = customerx-cons # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}

cat /etc/bareos/pool-defaults.conf
#pool defaults used for always incremental scheme
        Pool Type = Backup
# Maximum Volume Jobs = 1 # a new file for each backup that is done
        Recycle = yes # Bareos can automatically recycle Volumes
        Auto Prune = yes # Prune expired volumes
        Volume Use Duration = 23h
        Action On Purge = Truncate
#end


Pool {
    Name = customerx-incr
    Storage = customerx-incr
    LabelFormat = "vol-incr-"
    Next Pool = customerx-cons
    @/etc/bareos/pool-defaults.conf
}

Pool {
    Name = customerx-cons
    Storage = customerx-cons
    LabelFormat = "vol-cons-"
    @/etc/bareos/pool-defaults.conf
}



# lb1.customerx.com
Job {
    Name = "backup-lb1.customerx.com"
    Client = lb1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb1.customerx.com
    Address = lb1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# lb2.customerx.com
Job {
    Name = "backup-lb2.customerx.com"
    Client = lb2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = lb2.customerx.com
    Address = lb2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app1.customerx.com
Job {
    Name = "backup-app1.customerx.com"
    Client = app1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app1.customerx.com
    Address = app1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}



# app2.customerx.com
Job {
    Name = "backup-app2.customerx.com"
    Client = app2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app2.customerx.com
    Address = app2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# app3.customerx.com
Job {
    Name = "backup-app3.customerx.com"
    Client = app3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = app3.customerx.com
    Address = app3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# db1.customerx.com
Job {
    Name = "backup-db1.customerx.com"
    Client = db1.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db1.customerx.com
    Address = db1.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}


# db2.customerx.com
Job {
    Name = "backup-db2.customerx.com"
    Client = db2.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db2.customerx.com
    Address = db2.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}

# db3.customerx.com
Job {
    Name = "backup-db3.customerx.com"
    Client = db3.customerx.com
    Pool = customerx-incr
    Full Backup Pool = customerx-cons
    FileSet = "linux-all-mysql"
    Schedule = "always-inc-cycle-4"
    #Defaults
    JobDefs = "HPJobInc"
    Maximum Concurrent Jobs = 8 # Let up to 8 jobs run
}

Client {
    Name = db3.customerx.com
    Address = db3.customerx.com
# Catalog = MyCatalog
    Password = "xxx" # password for Storage daemon
    AutoPrune = yes
}




# Backup for customerx
Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

Device {
  Name = customerx-cons
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}
Additional Information: 2017-09-21 10:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 10:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:57:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:52:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:47:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:42:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:37:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:32:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:27:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:22:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:17:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:12:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:07:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.107.bsr
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-incr" to read.
 
2017-09-21 09:02:35 hostedpower-dir JobId 2467: Using Device "customerx-cons" to write.
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0378 from dev="customerx-cons" (/home/customerx/bareos) to "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0378 on "customerx-incr" (/home/customerx/bareos)
 
2017-09-21 09:02:35 bareos-sd JobId 2467: Please mount read Volume "vol-cons-0378" for:
 Job: backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 Storage: "customerx-incr" (/home/customerx/bareos)
 Pool: customerx-incr
 Media type: customerx
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Start Virtual Backup JobId 2467, Job=backup-db3.customerx.cloud.2017-09-21_09.00.04_05
 
2017-09-21 09:02:34 hostedpower-dir JobId 2467: Consolidating JobIds 2392,971
System Description
Attached Files:
Notes
(0002744)
joergs   
2017-09-21 16:08   
I see some problems in this configuration.

You should check section http://doc.bareos.org/master/html/bareos-manual-main-reference.html#ConcurrentDiskJobs from the manual.

Each device can only read/write one volume at a time. VirtualFull requires multiple volumes.

Basically, you need multiple devices pointing to the same "Storage Directory", each with "Maximum Concurrent Jobs = 1", to make it work.

Your setting

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8
}

will use interleaving instead, which could cause performance issues on restore.
(0002745)
hostedpower   
2017-09-21 16:14   
So you suggest just making the device:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Would this fix all issues?

Before, we had Maximum Volume Jobs = 1 and I think that worked too, but it seems discouraged...
(0002746)
joergs   
2017-09-21 16:26   
No, this will not fix the issue, but it prevents interleaving, which might cause other problems.

I suggest, by pointing to the documentation, that you set up multiple Devices, all pointing to the same Archive Device. Then attach them all to a Director Storage, like

Storage {
    Name = customerx
    Device = customerx-dev1, customerx-dev2, ... # bareos-sd Device
    Media Type = customerx
    Address = xxx # backup server fqdn > sent to client sd
    Password = "xxx" # password for Storage daemon
    Maximum Concurrent Jobs = 8 # required for virtual full
}
(0002747)
joergs   
2017-09-21 16:26   
With Maximum Concurrent Jobs = 8 you should use 8 Devices.
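For illustration, a sketch of what that looks like for the configuration above (device names are placeholders; with Maximum Concurrent Jobs = 8 you would define eight such Device resources and list them all in the Storage):

Device {
  Name = customerx-dev1
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Device {
  Name = customerx-dev2
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1
}

Storage {
  Name = customerx
  Device = customerx-dev1, customerx-dev2   # list all devices here
  Media Type = customerx
  Address = xxx
  Password = "xxx"
  Maximum Concurrent Jobs = 8
}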
(0002748)
hostedpower   
2017-09-21 16:37   
Hi Joergs,

Thanks a lot, that documentation makes sense and seems to have improved since I last read it (or I missed it somehow).

Will test it like that, but it looks very promising :)
(0002754)
hostedpower   
2017-09-21 22:57   
I've read it, but it would mean I need to create multiple storage devices in this case? That's a lot of extra definitions just to back up one customer. It would be nice if you could simply declare the device object and say there should be 8 of them, for example, all in just one definition. That shouldn't be too hard, I suppose?

Something like:

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 8 --> this automatically creates customerx-incr1 --> customerx-incr8, probably with some extra setting to allow it.
}



For now, would it be a solution to set

Device {
  Name = customerx-incr
  Media Type = customerx
  Archive Device = /home/customerx/bareos
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # just use one at the same time
}

Since we have so many clients, it would still run multiple backups at the same time for different clients, I suppose? (Each client has its own media, device and storage.)


PS: We want to keep about 20 days of data; is the following config OK together with the above scenario?


JobDefs {
    Name = "HPJobInc"
    Type = Backup
    Level = Full
    Write Bootstrap = "/var/lib/bareos/%c.bsr"
    Accurate=yes
    Level=Full
    Messages = Standard
    Always Incremental = yes
    Always Incremental Job Retention = 20 days
    # The resulting interval between full consolidations when running daily backups and daily consolidations is Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir Job.
    Always Incremental Max Full Age = 35 days # should NEVER be less then Always Incremental Job Retention -> Every 15 days the full backup is also consolidated ( Always Incremental Max Full Age Dir Job - Always Incremental Job Retention Dir )
    Always Incremental Keep Number = 5 # Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than "Always Incremental Job Retention".
    Maximum Concurrent Jobs = 5 # Let up to 5 jobs run
}
(0002759)
hostedpower   
2017-09-25 09:50   
We used this now:

  Maximum Concurrent Jobs = 1 # just use one at the same time

But some jobs also have the same error:

2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 
2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)


It looks almost like multiple jobs can't exist together in one volume (well, they can, but then issues like this start to occur).

Before, probably because of "Maximum Volume Jobs = 1", we never encountered these problems.
(0002760)
joergs   
2017-09-25 11:17   
Only one access per volume is possible at a time.
So reading and writing to the same volume is not possible.

I thought you covered this with "Maximum Use Duration = 23h". Have you disabled it, or did you run multiple jobs during one day?

However, this is a bug tracker. I think further questions about Always Incrementals are best handled using the bareos-users mailing list or a bareos.com support ticket.
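For reference, the separation being described usually looks roughly like this on the pool side. The names follow this thread and the values are illustrative, not a verified configuration:

Pool {
  Name = hostedpower-incr
  Pool Type = Backup
  Storage = hostedpower-incr        # device the incremental jobs write to
  Maximum Use Duration = 23h        # stop writing to a volume after a day, so the next day's jobs start a new one
  Label Format = "vol-incr-"
  Next Pool = hostedpower-cons      # the virtual full / consolidation jobs write into this pool
}

Pool {
  Name = hostedpower-cons
  Pool Type = Backup
  Storage = hostedpower-cons        # separate device, so reading a consolidated volume and writing new data do not compete for the same drive
  Label Format = "vol-cons-"
}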
(0002761)
hostedpower   
2017-09-25 11:37   
Yes, I was wondering why we encounter it now and never before.

It wants to swap the consolidate pool for an incremental pool (or vice versa). I don't understand why.
(0002763)
joergs   
2017-09-25 12:15   
Please attach the output of

list joblog jobid=2668
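
Besides the job log, these bconsole commands can also show why the storage daemon is waiting (the storage resource name below is a placeholder; use the one defined in your Director configuration):

*status storage=hostedpower
*list volumes pool=hostedpower-cons
*messages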
(0002764)
hostedpower   
2017-09-25 12:22   
Enter a period to cancel a command.
*list joblog jobid=2668
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Start Virtual Backup JobId 2668, Job=backup-web.hosted-power.com.2017-09-24_09.00.03_27
 2017-09-24 09:00:13 hostedpower-dir JobId 2668: Consolidating JobIds 2593,1173
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.87.bsr
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-incr" to read.
 2017-09-24 09:00:18 hostedpower-dir JobId 2668: Using Device "hostedpower-cons" to write.
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Warning: acquire.c:294 Read acquire: label.c:254 Could not reserve volume vol-cons-0344 on "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-24 09:00:18 bareos-sd JobId 2668: Please mount read Volume "vol-cons-0344" for:
    Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
    Storage: "hostedpower-incr" (/home/hostedpower/bareos)
    Pool: hostedpower-incr
    Media type: hostedpower
 [... the identical vol_mgr.c:557 "Need volume from other drive, but swap not possible" warning for vol=vol-cons-0344 (from dev="hostedpower-cons" to "hostedpower-incr") is then logged every 5 minutes; the "Please mount read Volume "vol-cons-0344"" prompt for Job backup-web.hosted-power.com.2017-09-24_09.00.03_27 (Storage "hostedpower-incr", Pool hostedpower-incr, Media type hostedpower) reappears at 10:00, 12:00 and 16:00 on 2017-09-24 and at 00:00 on 2017-09-25, with the identical warning repeated every 5 minutes in between and afterwards ...]
 2017-09-25 01:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 01:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 02:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:30:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:35:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:40:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:45:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:50:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 03:55:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:00:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:05:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:10:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:15:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:20:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:25:18 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 04:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 05:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 06:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 07:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:20:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:25:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:30:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:35:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:40:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:45:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:50:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 08:55:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:00:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:05:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:10:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:15:19 bareos-sd JobId 2668: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0344 from dev="hostedpower-cons" (/home/hostedpower/bareos) to "hostedpower-incr" (/home/hostedpower/bareos)
 2017-09-25 09:18:14 bareos-sd JobId 2668: acquire.c:221 Job 2668 canceled.
 2017-09-25 09:18:14 bareos-sd JobId 2668: Elapsed time=418423:18:14, Transfer rate=0 Bytes/second
 2017-09-25 09:18:14 hostedpower-dir JobId 2668: Bareos hostedpower-dir 16.2.5 (03Mar17):
  Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
  JobId: 2668
  Job: backup-web.hosted-power.com.2017-09-24_09.00.03_27
  Backup Level: Virtual Full
  Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
  FileSet: "linux-all-mysql" 2017-08-09 00:15:00
  Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
  Scheduled time: 24-Sep-2017 09:00:03
  Start time: 04-Sep-2017 02:15:02
  End time: 04-Sep-2017 02:42:30
  Elapsed time: 27 mins 28 secs
  Priority: 10
  SD Files Written: 0
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Volume name(s):
  Volume Session Id: 194
  Volume Session Time: 1506027627
  Last Volume Bytes: 0 (0 B)
  SD Errors: 0
  SD termination status: Canceled
  Accurate: yes
  Termination: Backup Canceled

You have messages.
*
(0002765)
hostedpower   
2017-09-27 11:57   
Hi,

For the moment, jobs now seem to succeed.

However, they also seem to be set to Incremental all the time, while before they were set to Full after the consolidation.

Example of such a job:

2017-09-27 09:02:07 hostedpower-dir JobId 2892: Joblevel was set to joblevel of first consolidated job: Incremental
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2892
 Job: backup-web.hosted-power.com.2017-09-27_09.00.02_47
 Backup Level: Virtual Full
 Client: "web.hosted-power.com" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,debian,Debian GNU/Linux 7.0 (wheezy),Debian_7.0,x86_64
 FileSet: "linux-all-mysql" 2017-08-09 00:15:00
 Pool: "hostedpower-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "hostedpower-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 27-Sep-2017 09:00:02
 Start time: 07-Sep-2017 02:15:03
 End time: 07-Sep-2017 02:42:52
 Elapsed time: 27 mins 49 secs
 Priority: 10
 SD Files Written: 2,803
 SD Bytes Written: 8,487,235,164 (8.487 GB)
 Rate: 5085.2 KB/s
 Volume name(s): vol-cons-0010
 Volume Session Id: 121
 Volume Session Time: 1506368550
 Last Volume Bytes: 8,495,713,067 (8.495 GB)
 SD Errors: 0
 SD termination status: OK
 Accurate: yes
 Termination: Backup OK

 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: purged JobIds 2817,1399 as they were consolidated into Job 2892
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Jobs older than 6 months .
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Jobs found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: Begin pruning Files.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: No Files found to prune.
 
2017-09-27 09:02:07 hostedpower-dir JobId 2892: End auto prune.

 
2017-09-27 09:02:03 bareos-sd JobId 2892: Elapsed time=00:01:46, Transfer rate=80.06 M Bytes/second
 
2017-09-27 09:00:32 bareos-sd JobId 2892: End of Volume at file 1 on device "hostedpower-incr" (/home/hostedpower/bareos), Volume "vol-cons-0344"
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Ready to read from volume "vol-incr-0097" on device "hostedpower-incr" (/home/hostedpower/bareos).
 
2017-09-27 09:00:32 bareos-sd JobId 2892: Forward spacing Volume "vol-incr-0097" to file:block 0:4709591.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Recycled volume "vol-cons-0010" on device "hostedpower-cons" (/home/hostedpower/bareos), all previous data lost.
 
2017-09-27 09:00:17 bareos-sd JobId 2892: Forward spacing Volume "vol-cons-0344" to file:block 0:215.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Start Virtual Backup JobId 2892, Job=backup-web.hosted-power.com.2017-09-27_09.00.02_47
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Consolidating JobIds 2817,1399
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Bootstrap records written to /var/lib/bareos/hostedpower-dir.restore.48.bsr
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-incr" to read.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Max configured use duration=82,800 sec. exceeded. Marking Volume "vol-cons-0344" as Used.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: There are no more Jobs associated with Volume "vol-cons-0010". Marking it purged.
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: All records pruned from Volume "vol-cons-0010"; marking it "Purged"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Recycled volume "vol-cons-0010"
 
2017-09-27 09:00:14 hostedpower-dir JobId 2892: Using Device "hostedpower-cons" to write.
 
2017-09-27 09:00:14 bareos-sd JobId 2892: Ready to read from volume "vol-cons-0344" on device "hostedpower-incr" (/home/hostedpower/bareos).
(0002766)
hostedpower   
2017-09-28 12:39   
Very strange: most jobs seem to work for the moment (but for how long?).

Some show Full, others Incremental.


hostedpower-dir JobId 2956: Joblevel was set to joblevel of first consolidated job: Full
 
2017-09-28 09:05:03 hostedpower-dir JobId 2956: Bareos hostedpower-dir 16.2.5 (03Mar17):
 Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0 (jessie)
 JobId: 2956
 Job: backup-vps53404.2017-09-28_09.00.02_20
 Backup Level: Virtual Full
 Client: "vps53404" 16.2.4 (01Jul16) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit,Cross-compile,Win64
 FileSet: "windows-all" 2017-08-08 22:15:00
 Pool: "vps53404-cons" (From Job Pool's NextPool resource)
 Catalog: "MyCatalog" (From Client resource)
 Storage: "vps53404-cons" (From Storage from Pool's NextPool resource)
 Scheduled time: 28-Sep-2017 09:00:02

Before, they all showed Full.
(0002802)
hostedpower   
2017-10-19 09:43   
Well, I downgraded to 16.2.5 and, guess what, the issue was gone for a few weeks.

Now I tried out 16.2.7 and the issue is back again ...

2017-10-19 09:40:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:35:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:30:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:25:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
 
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)

Coincidence? I'm starting to doubt it.
(0002830)
hostedpower   
2017-12-02 12:19   
Hi,


I'm still on 17.2.7 and have to say it's an intermittent error. It goes fine for days and then, all of a sudden, one or more jobs suffer from it.

We never had it in the past until a certain version; I'm pretty sure of that.
(0002949)
hostedpower   
2018-03-26 20:22   
The problem was gone for a long time, but now it's back in full force. Any idea what the cause could be?
(0002991)
stephand   
2018-05-04 10:57   
With larger MySQL databases and Bareos 17.2, for incremental jobs with accurate=yes, it seems to help to add the following index:

CREATE INDEX jobtdate_idx ON Job (JobTDate);
ANALYZE TABLE Job;

Could you please check if that works for you?
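
To verify that the index was created and that the optimizer actually picks it up, something like this should work (MySQL syntax; the JobTDate value is just an illustrative epoch timestamp, not the exact query Bareos runs):

SHOW INDEX FROM Job;
EXPLAIN SELECT JobId FROM Job WHERE JobTDate > 1520000000;

If the EXPLAIN output shows jobtdate_idx in the key column, the new index is being used.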
(0002992)
hostedpower   
2018-05-04 11:16   
OK, thanks. We added the index, but it only took 0.5 seconds to create. Usually this means there wasn't an issue :)

When creating an index is slow, it usually means there is a (serious) performance gain to be had.
(0002994)
stephand   
2018-05-04 17:56   
It certainly depends on the size of the Job table. I measured it about 25% faster with this index and 10,000 records in the Job table.

However, looking at log lines like
2017-10-19 09:20:03 bareos-sd JobId 4445: Warning: vol_mgr.c:557 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=vol-cons-0340 from dev="vps52320-cons" (/home/vps52320/bareos) to "vps52320-incr" (/home/vps52320/bareos)
that's a different problem; it has nothing to do with that index.
As Joerg already suggested using multiple storage devices, I'd propose increasing their number. This is meanwhile documented at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
(0003047)
stephand   
2018-06-21 07:59   
Were you able to solve the problem by using multiple storage devices?
(0003053)
hostedpower   
2018-06-27 23:56   
While this might work, we use at least 50-60 devices at the moment, so it would be a lot of extra work to add extra storage devices.

Why not create a device of type 'disk volume' that automatically gets the right properties? It would make things SO MUCH EASIER and bring Bareos into 2018 ...

Is there anything that can be done to get this supported? We would be willing to sponsor this if needed...
(0003145)
stephand   
2018-10-22 15:54   
Hi,

When you say "a lot of extra work to add extra storage devices" are you talking about the Storage Daemon configuration mentioned at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices
which is a always

Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

and only the number in Name = FileStorage1 is increased for each Device?

Are you already using configuration management tools like Ansible, Puppet, etc.? Then it shouldn't be too hard to get this done.

Or what exactly do you mean by "Why not create a device of type 'disk volume' and automatically give it the right properties? It would make things SO MUCH EASIER and bring bareos into the 2018 ..."?

Let me guess: let's imagine we had a MultiDevice resource in the SD configuration; then the example config from http://doc.bareos.org/master/html/bareos-manual-main-reference.html#UsingMultipleStorageDevices could be written like this:

The Bareos Director config:

Director {
  Name = bareos-dir.example.com
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "<secret>"
}
 
Storage {
  Name = File
  Address = bareos-sd.bareos.com
  Password = "<sd-secret>"
  Device = MultiFileStorage
  # number of devices = Maximum Concurrent Jobs
  Maximum Concurrent Jobs = 4
  Media Type = File
}

The Bareos Storagedaemon config:

Storage {
  Name = bareos-sd.example.com
  # any number >= 4
  Maximum Concurrent Jobs = 20
}
 
Director {
  Name = bareos-dir.example.com
  Password = "<sd-secret>"
}
 
MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}
 
Do you mean that?

Or if not, please give an example of how the config should look like to make things easier for you.
(0003146)
hostedpower   
2018-10-22 16:25   
We're indeed looking into Ansible to automate this, but still, something like:

MultiDevice {
  Name = MultiFileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Multi Device Count = 4
}

would be more than fantastic!!

Just a single device supporting concurrent access in an easy fashion!

Probably we could then also set "Maximum Concurrent Jobs = 4" pretty safely?


I can imagine that if you're used to Bareos (and tapes), this maybe seems like a strange way of working.

However, if you're used to (most) other backup software that supports hard drives by original design, the way it's designed now for disks is just way too complicated :(
(0003768)
hostedpower   
2020-02-10 21:08   
Hi,


I think this feature has been implemented: https://docs.bareos.org/Configuration/StorageDaemon.html#storageresourcemultiplieddevice

Does it require an autochanger to work as discussed in this thread, or would simply having more devices via the Count parameter be sufficient?

I ask because lately we are seeing a lot of the errors reported here again :| :)
(0003774)
arogge   
2020-02-11 09:34   
The autochanger is optional, but the feature won't help you if you don't configure an autochanger.
With the autochanger you only need to configure one storage device in your Director. Otherwise you'd need to configure each of the multiplied devices in your Director separately.

This is - of course - not a physical existing autochanger, it is just an autochanger configuration in the storage daemon to group the different storage devices together.
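
A minimal sketch of what that can look like (device and resource names, the path, and the count are placeholders; hostedpower posts a full variant of this layout in note 0004217 below). In the storage daemon, one multiplied Device is grouped by an Autochanger resource, and the Director only references the autochanger:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
  Count = 4
}

Autochanger {
  Name = FileAutochanger
  Device = FileStorage
  Changer Device = /dev/null
  Changer Command = ""
}

And in the Director:

Storage {
  Name = File
  Address = bareos-sd.example.com
  Password = "<sd-secret>"
  Device = FileAutochanger
  Media Type = File
  Maximum Concurrent Jobs = 4
  Autochanger = yes
}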
(0003775)
hostedpower   
2020-02-11 10:00   
ok, but in our case we ha
(0003776)
hostedpower   
2020-02-11 10:02   
OK, but in our case we have more than 100 storages configured with different names.

Do we need multiple autochangers as well, or just one autochanger for all these devices? I'm afraid we'd need one per storage, so we'd have to add tons of autochangers as well, right? :)
(0003777)
arogge   
2020-02-11 10:30   
If you have more than 100 storages, I hope you're generating your configuration with something like Puppet or Ansible.
Why exactly do you need such a large number of individual storages?
Usually, if you're using only file-based storage, a single storage (or file autochanger) per RAID array is enough. Everything else usually just overcomplicates things in the end.
(0003798)
hostedpower   
2020-02-12 10:04   
Well, it's because we allocate storage space for each customer, so each customer pays for their own storage. Putting everything into one large storage wouldn't show us anymore who is using exactly what.

Is there a better way to "allocate" storage for individual customers while at the same time using one large storage as you suggest?

PS: Yes, we generate the config, but updating it now to include autochangers would still be quite some work, since we generate this config only once.

Just adding a device count is easy since we use an include file, so adding autochangers now isn't really what we hoped for :)
(0004060)
hostedpower   
2020-12-01 12:43   
Hi,

We still have this: Need volume from other drive, but swap not possible

The strange thing is that it works 99% of the time, but then we have periods where we see this error a lot. I don't understand why it mostly works so well, only to work so badly at other times.

It's one of the primary reasons we're now looking at other backup solutions. Besides that, we have many storage servers, and Bareos currently has no way to let "x number of tasks" run on a per-storage-server basis.
(0004217)
hostedpower   
2021-08-25 11:27   
(Last edited: 2021-08-26 23:41)
OK, we finally re-architected our whole backup infrastructure, only to find that this problem/bug hits us hard again.

We use the latest Bareos 20.2 version.

We now use one large folder for all backups with at most 10 concurrent consolidations. We use PostgreSQL as our database engine (so it cannot be because of MySQL). We tried to follow all best practices; I don't understand what is wrong with it :(


Storage {
        Name = AI-Incremental
        Device = AI-Incremental-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Storage {
        Name = AI-Consolidated
        Device = AI-Consolidated-Autochanger
        Media Type = AI
        Address = xxx
        Password = "xxx"
        Maximum Concurrent Jobs = 10
        Autochanger = yes
}

Device {
  Name = AI-Incremental
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
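  # Count multiplies this device; the copies show up under numbered names such as "AI-Incremental0007" in the log excerpt below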
}

Autochanger {
  Name = "AI-Incremental-Autochanger"
  Device = AI-Incremental

  Changer Device = /dev/null
  Changer Command = ""
}


Device {
  Name = AI-Consolidated
  Media Type = AI
  Archive Device = /var/lib/bareos/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 1 # this has to be 1! (1 for each device)
  Count = 10
}

Autochanger {
  Name = "AI-Consolidated-Autochanger"
  Device = AI-Consolidated

  Changer Device = /dev/null
  Changer Command = ""
}


I suppose the error must be easy to spot? Otherwise everyone would have this problem :(

(0004218)
hostedpower   
2021-08-25 11:32   
3838 machine.example.com-files machine.example.com Backup VirtualFull 0 0.00 B 0 Running

Messages:
52 2021-08-25 11:25:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
[messages 12-51 omitted: the same vol_mgr.cc:548 warning repeats every five minutes, interrupted only by further "Please mount read Volume "vol-cons-0287"" requests at 09:00 and 11:00]
11 2021-08-25 08:00:25 backup1-sd JobId 3838: Please mount read Volume "vol-cons-0287" for:
Job: machine.example.com-files.2021-08-25_08.00.19_36
Storage: "AI-Incremental0007" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
10 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0007" (/var/lib/bareos/storage)
9 2021-08-25 08:00:25 backup1-sd JobId 3838: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0007" (/var/lib/bareos/storage)
8 2021-08-25 08:00:24 backup1-sd JobId 3838: Ready to append to end of Volume "vol-cons-0287" size=12609080131
7 2021-08-25 08:00:24 backup1-sd JobId 3838: Volume "vol-cons-0287" previously written, moving to end of data.
6 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Consolidated0007" to write.
5 2021-08-25 08:00:24 backup1-dir JobId 3838: Using Device "AI-Incremental0007" to read.
4 2021-08-25 08:00:23 backup1-dir JobId 3838: Connected Storage daemon at backupx.xxxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-25 08:00:23 backup1-dir JobId 3838: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.379.bsr
(0004219)
hostedpower   
2021-08-25 11:48   
I see now that it tries to mount consolidated volumes on the incremental devices; you can see it in the sample above, but also below:
25-Aug 08:02 backup1-dir JobId 3860: Start Virtual Backup JobId 3860, Job=machine.example.com-files.2021-08-25_08.00.31_02
25-Aug 08:02 backup1-dir JobId 3860: Consolidating JobIds 3563,724
25-Aug 08:02 backup1-dir JobId 3860: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.394.bsr
25-Aug 08:02 backup1-dir JobId 3860: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Incremental0005" to read.
25-Aug 08:02 backup1-dir JobId 3860: Using Device "AI-Consolidated0005" to write.
25-Aug 08:02 backup1-sd JobId 3860: Volume "vol-cons-0292" previously written, moving to end of data.
25-Aug 08:02 backup1-sd JobId 3860: Ready to append to end of Volume "vol-cons-0292" size=26118365623
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0287 from dev="AI-Consolidated0007" (/var/lib/bareos/storage) to "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0287 on "AI-Incremental0005" (/var/lib/bareos/storage)
25-Aug 08:02 backup1-sd JobId 3860: Please mount read Volume "vol-cons-0287" for:
    Job: machine.example.com-files.2021-08-25_08.00.31_02
    Storage: "AI-Incremental0005" (/var/lib/bareos/storage)
    Pool: AI-Incremental
    Media type: AI

Might this be the cause? And what could be causing it?
(0004220)
hostedpower   
2021-08-25 11:51   
This is the first job today with the messages, but it succeeded anyway; maybe you could see what is going wrong here?

2021-08-24 15:53:54 backup1-dir JobId 3549: console command: run AfterJob ".bvfs_update JobId=3549"
30 2021-08-24 15:53:54 backup1-dir JobId 3549: End auto prune.

29 2021-08-24 15:53:54 backup1-dir JobId 3549: No Files found to prune.
28 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Files.
27 2021-08-24 15:53:54 backup1-dir JobId 3549: No Jobs found to prune.
26 2021-08-24 15:53:54 backup1-dir JobId 3549: Begin pruning Jobs older than 6 months .
25 2021-08-24 15:53:54 backup1-dir JobId 3549: purged JobIds 3237,648 as they were consolidated into Job 3549
24 2021-08-24 15:53:54 backup1-dir JobId 3549: Bareos backup1-dir 20.0.2 (10Jun21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 3549
Job: another.xxx-files.2021-08-24_15.48.51_46
Backup Level: Virtual Full
Client: "another.xxx" 20.0.2 (10Jun21) Debian GNU/Linux 10 (buster),debian
FileSet: "linux-files" 2021-07-20 16:03:24
Pool: "AI-Consolidated" (From Job Pool's NextPool resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "AI-Consolidated" (From Storage from Pool's NextPool resource)
Scheduled time: 24-Aug-2021 15:48:51
Start time: 03-Aug-2021 23:08:50
End time: 03-Aug-2021 23:09:30
Elapsed time: 40 secs
Priority: 10
SD Files Written: 653
SD Bytes Written: 55,510,558 (55.51 MB)
Rate: 1387.8 KB/s
Volume name(s): vol-cons-0288
Volume Session Id: 2056
Volume Session Time: 1628888564
Last Volume Bytes: 55,596,662 (55.59 MB)
SD Errors: 0
SD termination status: OK
Accurate: yes
Bareos binary info: official Bareos subscription
Job triggered by: User
Termination: Backup OK

23 2021-08-24 15:53:54 backup1-dir JobId 3549: Joblevel was set to joblevel of first consolidated job: Incremental
22 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table done
21 2021-08-24 15:53:54 backup1-dir JobId 3549: Insert of attributes batch table with 653 entries start
20 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Incremental0008" (/var/lib/bareos/storage).
19 2021-08-24 15:53:54 backup1-sd JobId 3549: Releasing device "AI-Consolidated0008" (/var/lib/bareos/storage).
18 2021-08-24 15:53:54 backup1-sd JobId 3549: Elapsed time=00:00:01, Transfer rate=55.51 M Bytes/second
17 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-incr-0135" to file:block 0:2909195921.
16 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-incr-0135" on device "AI-Incremental0008" (/var/lib/bareos/storage).
15 2021-08-24 15:53:54 backup1-sd JobId 3549: End of Volume at file 0 on device "AI-Incremental0008" (/var/lib/bareos/storage), Volume "vol-cons-0284"
14 2021-08-24 15:53:54 backup1-sd JobId 3549: Forward spacing Volume "vol-cons-0284" to file:block 0:307710024.
13 2021-08-24 15:53:54 backup1-sd JobId 3549: Ready to read from volume "vol-cons-0284" on device "AI-Incremental0008" (/var/lib/bareos/storage).
12 2021-08-24 15:48:54 backup1-sd JobId 3549: Please mount read Volume "vol-cons-0284" for:
Job: another.xxx-files.2021-08-24_15.48.51_46
Storage: "AI-Incremental0008" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
11 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0284 on "AI-Incremental0008" (/var/lib/bareos/storage)
10 2021-08-24 15:48:54 backup1-sd JobId 3549: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-cons-0284 from dev="AI-Incremental0005" (/var/lib/bareos/storage) to "AI-Incremental0008" (/var/lib/bareos/storage)
9 2021-08-24 15:48:54 backup1-sd JobId 3549: Wrote label to prelabeled Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage)
8 2021-08-24 15:48:54 backup1-sd JobId 3549: Labeled new Volume "vol-cons-0288" on device "AI-Consolidated0008" (/var/lib/bareos/storage).
7 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Consolidated0008" to write.
6 2021-08-24 15:48:53 backup1-dir JobId 3549: Created new Volume "vol-cons-0288" in catalog.
5 2021-08-24 15:48:53 backup1-dir JobId 3549: Using Device "AI-Incremental0008" to read.
4 2021-08-24 15:48:52 backup1-dir JobId 3549: Connected Storage daemon at xxx.xxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
3 2021-08-24 15:48:52 backup1-dir JobId 3549: Bootstrap records written to /var/lib/bareos/backup1-dir.restore.331.bsr
2 2021-08-24 15:48:52 backup1-dir JobId 3549: Consolidating JobIds 3237,648
1 2021-08-24 15:48:52 backup1-dir JobId 3549: Start Virtual Backup JobId 3549, Job=another.xxx-files.2021-08-24_15.48.51_46
(0004221)
hostedpower   
2021-08-25 11:54   
I just saw this "swap not possible" error also happen sometimes when the same device/storage/pool was used:

5 2021-08-24 15:54:03 backup1-sd JobId 3553: Ready to read from volume "vol-incr-0136" on device "AI-Incremental0002" (/var/lib/bareos/storage).
14 2021-08-24 15:49:03 backup1-sd JobId 3553: Please mount read Volume "vol-incr-0136" for:
Job: xxx.xxx.bxxe-files.2021-08-24_15.48.52_50
Storage: "AI-Incremental0002" (/var/lib/bareos/storage)
Pool: AI-Incremental
Media type: AI
13 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-incr-0136 on "AI-Incremental0002" (/var/lib/bareos/storage)
12 2021-08-24 15:49:03 backup1-sd JobId 3553: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=vol-incr-0136 from dev="AI-Incremental0006" (/var/lib/bareos/storage) to "AI-Incremental0002" (/var/lib/bareos/storage)
11 2021-08-24 15:49:03 backup1-sd JobId 3553: End of Volume at file 0 on device "AI-Incremental0002" (/var/lib/bareos/storage), Volume "vol-cons-0285"
(0004222)
hostedpower   
2021-08-25 12:20   
PS: The Consolidate job was missing in the config posted above:

Job {
        Name = "Consolidate"
        Type = "Consolidate"
        Accurate = "yes"

        JobDefs = "DefaultJob"

        Schedule = "ConsolidateCycle"
        Max Full Consolidations = 200

        Maximum Concurrent Jobs = 1
        Prune Volumes = yes

        Priority = 11
}
(0004227)
hostedpower   
2021-08-26 13:11   
2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:269 Could not reserve volume vol-cons-0282 on "AI-Incremental0004" (/var/lib/bareos/storage) <-------
10 2021-08-26 11:34:12 backup1-sd JobId 4151: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=1 num_reserve=0 swap=0 vol=vol-cons-0282 from dev="AI-Consolidated0010" (/var/lib/bareos/storage) to "AI-Incremental0004" (/var/lib/bareos/storage)
9 2021-08-26 11:34:12 backup1-sd JobId 4151: Wrote label to prelabeled Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage)
8 2021-08-26 11:34:12 backup1-sd JobId 4151: Labeled new Volume "vol-cons-0298" on device "AI-Incremental0001" (/var/lib/bareos/storage).
7 2021-08-26 11:34:12 backup1-dir JobId 4151: Using Device "AI-Incremental0001" to write.
6 2021-08-26 11:34:12 backup1-dir JobId 4151: Created new Volume "vol-cons-0298" in catalog.


None of the jobs even continue after this "event"...
(0004494)
hostedpower   
2022-01-31 08:33   
This still happens, even after using separate devices and labels etc.
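One knob that is different from "separate devices and labels" is the Media Type: the job logs above show the same Media type "AI" for both the AI-Incremental and the AI-Consolidated devices, so the SD treats their volumes as interchangeable between those drives. The configuration in the next ticket below (0001474) uses a distinct Media Type per device group (File vs. FileCons). Whether that split fits this Always Incremental setup is not confirmed in this ticket; a minimal sketch of the idea (directive values are illustrative, not the reporter's actual configuration):

Device {
  Name = AI-Incremental0007
  Media Type = AI-Incr                  # distinct from the consolidated group
  Archive Device = /var/lib/bareos/storage
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}

Device {
  Name = AI-Consolidated0007
  Media Type = AI-Cons                  # consolidated volumes would only mount here
  Archive Device = /var/lib/bareos/storage
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}

The matching Director Storage resources would need the same Media Type values; note that already labeled volumes keep the Media Type recorded in the catalog.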

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1474 [bareos-core] storage daemon crash always 2022-07-27 16:12 2022-10-04 10:28
Reporter: jens Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.12  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos-sd crashing on VirtualFull with SIGSEGV ../src/lib/serial.cc file not found
Description: When running 'always incremental' backup scheme the storage daemon crashes with segmentation fault
on VirtualFull backup triggered by consolidation.

Job error:
bareos-dir JobId 1267: Fatal error: Director's comm line to SD dropped.

GDB debug:
bareos-sd (200): stored/mac.cc:159-154 joblevel from SOS_LABEL is now F
bareos-sd (130): stored/label.cc:672-154 session_label record=ec015288
bareos-sd (150): stored/label.cc:718-154 Write sesson_label record JobId=154 FI=SOS_LABEL SessId=1 Strm=154 len=165 remainder=0
bareos-sd (150): stored/label.cc:722-154 Leave WriteSessionLabel Block=1351364161d File=0d
bareos-sd (200): stored/mac.cc:221-154 before write JobId=154 FI=1 SessId=1 Strm=UNIX-Attributes-EX len=123
Thread 4 "bareos-sd" received signal SIGSEGV, Segmentation fault.

[Switching to Thread 0x7ffff4c5b700 (LWP 2271)]
serial_uint32 (ptr=ptr@entry=0x7ffff4c5aa70, v=<optimized out>) at ../../../src/lib/serial.cc:76
76 ../../../src/lib/serial.cc: No such file or directory.


I am running daily incrementals into the 'File' pool, consolidating every 4 days into the 'FileCons' pool, a virtual full every 1st Monday of a month into the 'LongTerm-Disk' pool,
and finally a migration to tape every 2nd Monday of a month from the 'LongTerm-Disk' pool into the 'LongTerm-Tape' pool.

BareOS version: 19.2.7
BareOS director and storage daemon on the same machine.
Disk storage on CEPH mount
Tape storage on Fujitsu Eternus LT2 tape library with 1 LTO-7 drive

---------------------------------------------------------------------------------------------------
Storage Device config:

FileStorage with 10 devices, all into same 1st folder:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /storage/backup/bareos_Incremental # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

FileStorageCons with 10 devices, all into same 2nd folder

Device {
  Name = FileStorageCons
  Media Type = FileCons
  Archive Device = /storage/backup/bareos_Consolidate # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
...

FileStorageVault with 10 devices, all into same 3rd folder

Device {
  Name = FileStorageVault
  Media Type = FileVLT
  Archive Device = /storage/backup/bareos_LongTermDisk # storage location
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes # allow this device to be used by any job
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no # fixed media ( no tape, no usb )
  AlwaysOpen = no
  Auto Inflate = both # auto-decompress in- out- stream
  Auto Deflate = both # auto-compress in- out- stream ( backup server side compression )
  Auto Deflate Algorithm = LZ4HC # compression algorithm
}
....

Tape Device:

Device {
  Name = IBM-ULTRIUM-HH7
  Device Type = Tape
  DriveIndex = 0
  ArchiveDevice = /dev/nst0
  Media Type = IBM-LTO-7
  AutoChanger = yes
  AutomaticMount = yes
  LabelMedia = yes
  RemovableMedia = yes
  Autoselect = yes
  MaximumFileSize = 10GB
  Spool Directory = /storage/scratch
  Maximum Spool Size = 2199023255552 # maximum total spool size in bytes (2Tbyte)
}

---------------------------------------------------------------------------------------------------
Pool Config:

Pool {
  Name = AI-Incremental # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # Bareos can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 72 days
  Storage = File # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 500 # maximum allowed total number of volumes in pool
  Label Format = "AI-Incremental_" # volumes will be labeled "AI-Incremental_-<volume-id>"
  Volume Use Duration = 36 days # volume will be no longer used than
  Next Pool = AI-Consolidate # next pool for consolidation
  Job Retention = 72 days
  File Retention = 36 days
}

Pool {
  Name = AI-Consolidate # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # Bareos can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 366 days
  Job Retention = 180 days
  File Retention = 93 days
  Storage = FileCons # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "AI-Consolidate_" # volumes will be labeled "AI-Consolidate_-<volume-id>"
  Volume Use Duration = 2 days # volume will be no longer used than
  Next Pool = LongTerm-Disk # next pool for long term backups to disk
}

Pool {
  Name = LongTerm-Disk # name of the media pool
  Pool Type = Backup # pool type
  Recycle = yes # Bareos can automatically recycle volumes from that pool
  AutoPrune = yes # automatically prune expired volumes
  Volume Retention = 732 days
  Job Retention = 732 days
  File Retention = 366 days
  Storage = FileVLT # storage device to be used
  Maximum Volume Bytes = 10G # maximum file size per volume
  Maximum Volumes = 1000 # maximum allowed total number of volumes in pool
  Label Format = "LongTerm-Disk_" # volumes will be labeled "LongTerm-Disk_<volume-id>"
  Volume Use Duration = 2 days # volume will be no longer used than
  Next Pool = LongTerm-Tape # next pool for long term backups to disk
  Migration Time = 2 days # Jobs older than 2 days in this pool will be migrated to 'Next Pool'
}

Pool {
  Name = LongTerm-Tape
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 732 days # How long should the Backups be kept? (0000012)
  Job Retention = 732 days
  File Retention = 366 days
  Storage = TapeLibrary # Physical Media
  Maximum Block Size = 1048576
  Recycle Pool = Scratch
  Cleaning Prefix = "CLN"
}

---------------------------------------------------------------------------------------------------
JobDefs:

JobDefs {
  Name = AI-Incremental
  Type = Backup
  Level = Incremental
  Storage = File
  Messages = Standard
  Pool = AI-Incremental
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Accurate = yes
  Allow Mixed Priority = yes
  Always Incremental = yes
  Always Incremental Job Retention = 36 days
  Always Incremental Keep Number = 14
  Always Incremental Max Full Age = 31 days
}

JobDefs {
  Name = AI-Consolidate
  Type = Consolidate
  Storage = File
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 25
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Incremental Backup Pool = AI-Incremental
  Full Backup Pool = AI-Consolidate
  Max Full Consolidations = 1
  Prune Volumes = yes
  Accurate = yes
}

JobDefs {
  Name = LongTermDisk
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Pool = AI-Consolidate
  Priority = 30
  Write Bootstrap = "/storage/bootstrap/%c.bsr"
  Accurate = yes
  Run Script {
    console = "update jobid=%1 jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
}

JobDefs {
  Name = "LongTermTape"
  Pool = LongTerm-Disk
  Messages = Standard
  Type = Migrate
}


---------------------------------------------------------------------------------------------------
Job Config ( per client )

Job {
  Name = "Incr-<client>"
  Description = "<client> always incremental 36d retention"
  Client = <client>
  Jobdefs = AI-Incremental
  FileSet="fileset-<client>"
  Schedule = "daily_incremental_<client>"
  # Write Bootstrap file for disaster recovery.
  Write Bootstrap = "/storage/bootstrap/%j.bsr"
  # The higher the number the lower the job priority
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "AI-Consolidate"
  Description = "consolidation of 'always incremental' jobs"
  Client = backup.mgmt.drs
  FileSet = SelfTest
  Jobdefs = AI-Consolidate
  Schedule = consolidate

  # The higher the number the lower the job priority
  Priority = 25
}

Job {
  Name = "VFull-<client>"
  Description = "<client> monthly virtual full"
  Messages = Standard
  Client = <client>
  Type = Backup
  Level = VirtualFull
  Jobdefs = LongTermDisk
  FileSet=fileset-<client>
  Pool = AI-Consolidate
  Schedule = virtual-full_<client>
  Priority = 30
  Run Script {
    Console = ".bvfs_update"
    RunsWhen = After
    RunsOnClient = No
  }
}

Job {
  Name = "migrate-2-tape"
  Description = "monthly migration of virtual full backups from LongTerm-Disk to LongTerm-Tape pool"
  Jobdefs = LongTermTape
  Selection Type = PoolTime
  Schedule = "migrate-2-tape"
  Priority = 15
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}

---------------------------------------------------------------------------------------------------
Schedule config:

Schedule {
  Name = "daily_incremental_<client>"
  Run = daily at 02:00
}

Schedule {
  Name = "consolidate"
  Run = Incremental 3/4 at 00:00
}

Schedule {
  Name = "virtual-full_<client>"
  Run = 1st monday at 10:00
}

Schedule {
  Name = "migrate-2-tape"
  Run = 2nd monday at 8:00
}

---------------------------------------------------------------------------------------------------
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos_sd_debug.zip (3,771 bytes) 2022-07-27 16:59
https://bugs.bareos.org/file_download.php?file_id=530&type=bug
Notes
(0004688)
bruno-at-bareos   
2022-07-27 16:43   
Could you check in the working directory (/var/lib/bareos) of the SD for any other trace, backtrace, and debug files?
If you have them, please attach them (compressed if needed).
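For example, a generic listing of candidate files (exact file names vary per daemon and crash):

ls -ltr /var/lib/bareos/ | grep -i trace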
(0004690)
jens   
2022-07-27 17:00   
debug files attached in private note
(0004697)
bruno-at-bareos   
2022-07-28 09:34   
What is the reason behind running 19.2 instead of upgrading to 21?
(0004699)
jens   
2022-07-28 13:06   
1. missing comprehensive and easy-to-follow step-by-step guide on how to upgrade
2. lack of confidence in a flawless upgrade procedure that will not render backup data unusable
3. lack of experience and skilled personnel, resulting in major effort to roll out a new version
4. limited access to online repositories to update local mirrors -> very long lead time to get new versions
(0004700)
jens   
2022-07-28 13:09   
(Last edited: 2022-07-28 13:12)
For the above reasons I am a little hesitant to take on the effort of upgrading.
Currently I am considering an update only if it is the only way to get the issue resolved.
I need confirmation from your end first.
My hope is that there is just something wrong in my configuration, or that I am running an adverse setup, and that changing either one might resolve the issue.
(0004701)
bruno-at-bareos   
2022-08-01 11:59   
Hi Jens,

Thanks for the additional information.

Does this crash happen each time a consolidation VirtualFull is created?
(0004702)
bruno-at-bareos   
2022-08-01 12:04   
Maybe related to an issue fixed in 19.2.9 (available with subscription):
 - fix a memory corruption when autolabeling with increased maximum block size
https://docs.bareos.org/bareos-19.2/Appendix/ReleaseNotes.html#id12
(0004703)
jens   
2022-08-01 12:05   
Hi Bruno,

So far, yes, that is my experience.
It always fails, also when repeating or manually rescheduling the failed job through the web UI during idle hours when nothing else is running on the director.
(0004704)
jens   
2022-08-01 12:14   
The "- fix a memory corruption when autolabeling with increased maximum block size" indeed could be a lead
as I see the following in the job logs ?

Warning: For Volume "AI-Consolidate_0118": The sizes do not match!
Volume=64574484 Catalog=32964717
Correcting Catalog
(0004705)
bruno-at-bareos   
2022-08-02 13:42   
Hi Jens, a quick note about the "sizes do not match" warning: it is unrelated. Aborted or failed jobs can have this effect.

This fix was introduced with commit https://github.com/bareos/bareos/commit/0086b852d and 19.2.9 contains the fix.
(0004800)
bruno-at-bareos   
2022-10-04 10:27   
Closing as a fix already exists.
(0004801)
bruno-at-bareos   
2022-10-04 10:28   
The fix is present in the source code and in the published subscription binaries.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1372 [bareos-core] General major always 2021-07-20 09:27 2022-09-29 16:06
Reporter: Int Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: assigned Product Version: 19.2.10  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restore/Migrate of job distributed on two tapes fails with ERR=Cannot allocate memory
Description: When I try to Restore/Migrate a specific job distributed on two tapes, the first tape is read successfully but when mounting the second tape the job fails with error

```
20-Jul 08:02 bareos-sd JobId 21833: Ready to read from volume "NIX417L6" on device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst).
20-Jul 08:02 bareos-sd JobId 21833: Forward spacing Volume "NIX417L6" to file:block 0:1.
20-Jul 08:02 bareos-sd JobId 21833: Error: stored/block.cc:1057 Read error on fd=7 at file:blk 0:1 on device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst). ERR=Cannot allocate memory.
```

This only happens on this specific tape. Other jobs distributed on multiple tapes work correctly.

I did configure the Bareos Tape device /etc/bareos/bareos-sd.d/device/tapedrive-0.conf with "Maximum Block Size = 1048576" from the very beginning and all tapes were written using this configuration.
I started using Bareos after Version 14.2.0 so the new block size handling (label block 64k, data blocks 1M) was used on all tapes.

But the error suggests that there is a problem with the block size.
This is seconded by /var/log/messages
```
Jul 20 08:02:14 igms07 kernel: st 3:0:0:0: [st0] Failed to read 1048576 byte block with 64512 byte transfer.
```

So it seems that Bareos is trying to read with 64k block size although it is configured otherwise?


Further tests showed that I can read the tape label correctly with btape:

```
]$ btape /dev/tape/by-id/scsi-350050763121a063c-nst
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:290-0 Using device: "/dev/tape/by-id/scsi-350050763121a063c-nst" for writing.
btape: stored/btape.cc:485-0 open device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst): OK
*status
 Bareos status: file=0 block=0
 Device status: BOT ONLINE IM_REP_EN file=0 block=0
Device status: TAPE BOT ONLINE IMMREPORT. ERR=
*readlabel
btape: stored/btape.cc:532-0 Volume label read correctly.

Volume Label:
Id : Bareos 2.0 immortal
VerNo : 20
VolName : NIX417L6
PrevVolName :
VolFile : 0
LabelType : VOL_LABEL
LabelSize : 167
PoolName : Scratch
MediaType : LTO
PoolType : Backup
HostName : igms07.vision.local
Date label written: 05-Nov-2018 11:42
```


I can read the tape with dd with a blocksize of 1M:
dd if=/dev/tape/by-id/scsi-350050763121a063c-nst bs=1M


btape scan fails with the same error as Bareos:

```
]$ btape /dev/tape/by-id/scsi-350050763121a063c-nst
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:290-0 Using device: "/dev/tape/by-id/scsi-350050763121a063c-nst" for writing.
btape: stored/btape.cc:485-0 open device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst): OK
*rewind
btape: stored/btape.cc:581-0 Rewound "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst)
*scan
btape: stored/btape.cc:1901-0 Starting scan at file 0
btape: stored/btape.cc:1909-0 Bad status from read -1. ERR=stored/btape.cc:1907 read error on /dev/tape/by-id/scsi-350050763121a063c-nst. ERR=Cannot allocate memory.
```

but btape scanblocks works fine:

```
]$ btape /dev/tape/by-id/scsi-350050763121a063c-nst
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:290-0 Using device: "/dev/tape/by-id/scsi-350050763121a063c-nst" for writing.
btape: stored/btape.cc:485-0 open device "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst): OK
*rewind
btape: stored/btape.cc:581-0 Rewound "tapedrive-0" (/dev/tape/by-id/scsi-350050763121a063c-nst)
*scanblocks
1 block of 203 bytes in file 0
2213 blocks of 1048576 bytes in file 0
1 block of 1048566 bytes in file 0
1072 blocks of 1048576 bytes in file 0
1 block of 1048568 bytes in file 0
```


The question is, what is the issue here?
And how do I fix it?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Screen Shot 2022-08-04 at 1.27.33 PM.png (256,578 bytes) 2022-08-04 22:28
https://bugs.bareos.org/file_download.php?file_id=531&type=bug

joblog_21833.txt (4,077 bytes) 2022-09-21 16:12
https://bugs.bareos.org/file_download.php?file_id=532&type=bug
joblog_5822.txt (6,372 bytes) 2022-09-21 16:22
https://bugs.bareos.org/file_download.php?file_id=533&type=bug
bareos-sd.trace (57,480 bytes) 2022-09-26 15:42
https://bugs.bareos.org/file_download.php?file_id=535&type=bug
Notes
(0004183)
Int   
2021-07-20 10:02   
PS: my device configuration:

Device {

    Name = "tapedrive-0"
    DeviceType = tape
    Maximum Concurrent Jobs = 1

    # default:0, only required if the autoloader have multiple drives.
    DriveIndex = 0

    ArchiveDevice = /dev/tape/by-id/scsi-350050763121a063c-nst # Quantum LTO-7 standalone drive

    MediaType = LTO

    AutoChanger = no # default: no
    AutomaticMount = yes # default: no

    MaximumFileSize = 40GB
    MaximumBlockSize = 1048576

    Maximum Spool Size = 40GB
    Spool Directory = /var/lib/bareos/spool
}
(0004195)
Int   
2021-07-31 09:13   
PPS:
When I read other jobs that are also on this tape, the restore/migration works just fine.
(0004714)
aleigh   
2022-08-04 22:28   
I am having this same problem on a restore which spans 2 tapes. The file is particularly large, so probably the case here is that a single file spans two tapes.

bareos-dir Version: 21.0.0 (21 December 2021) Rocky Linux release 8.4 (Green Obsidian) redhat Rocky Linux release 8.4 (Green Obsidian)

[358442.558093] st 9:0:40:0: [st0] Failed to read 262144 byte block with 64512 byte transfer.

bareos-sd JobId 81: Error: stored/block.cc:1004 Read error on fd=8 at file:blk 0:1 on device "tapedrive-0" (/dev/nst0). ERR=Cannot allocate memory.

See attached job log. Note that despite everything it says, the correct media (LC0004) was inserted and mounted at the correct time. Shortly after mounting LC0004 the job failed.
(0004721)
bruno-at-bareos   
2022-08-10 11:00   
Hello aleigh.

We are interested in getting more debugging information, to be able to reproduce the problem 100% of the time (if we can, then getting a fix is almost certain).

So if you are able to rerun that restore, would you be able to run it with a higher debug level?
You can consult this page to understand how to do this.
https://servicedesk.bareos.com/help/en-us/2-support/2-how-do-i-set-the-debug-level

Here a debug level of 1000 would be nice.
It should be activated on the director, the storage, and the concerned client.

We would like to have it attached here in text form, not as a picture,
mainly the output of the following command in bconsole:

list joblog jobid=81

(or the new jobid of the restore attempt with debug enabled) and also the initial backup jobid.

The whole configuration and logs are a plus to debug the situation.
Maybe our support-info tool can be used and run on the director;
see instructions here:
https://servicedesk.bareos.com/help/en-us/2-support/3-how-to-use-the-bareos-support-info-sh-tool

You can attach the result, if not too big, as private (all credentials are already removed by the tool).
(0004751)
bruno-at-bareos   
2022-09-21 15:07   
Ping, are none of you able to run it with a higher debug level?
(0004756)
Int   
2022-09-21 16:12   
As a starting point I attached the joblog of the original failing job.

My configuration is not the same as back then when I filed this bug a year ago. In the meantime I updated my Bareos to version 21, but the storage device used is still the same.
I created support-info of my current config with your support-info tool but the generated tgz-file is too large (78MB) to be uploaded here.
Let me know how you want to receive the tgz-file instead.

Next I will enable debug level of 1000 and will try to reproduce the error with bareos 21.
(0004757)
Int   
2022-09-21 16:22   
Here is the joblog of the initial backup job.
(0004758)
bruno-at-bareos   
2022-09-21 16:26   
You can upload the tgz to https://cloud.dass-it.de/index.php/s/Xf9ZH79737iastj with password mantis1372

Before creating the report, check if you have any *trace* files in /var/lib/bareos; they are often from old crashes that just took place.
(0004759)
Int   
2022-09-21 16:35   
I uploaded the support-info.
(0004760)
bruno-at-bareos   
2022-09-21 16:58   
Thanks. Do you have other jobs stored on both tapes NIX417L6 and NIX416L6? If yes, are you able to read from them?
(0004762)
Int   
2022-09-22 08:59   
The failing job 5822 is the only job on tape NIX416L6 and fills it completely; the rest of job 5822 is on tape NIX417L6.
There are other jobs on tape NIX417L6 which can be restored successfully.

I tried to reproduce the issue with a debug level of 1000 set on the director, storage, and client,
but since job 5822 is large (4.5 TB of data) the log files got huge and filled up the free disk space on the director and the client. This caused the director to crash.

I will retry and enable the debugging only shortly before the restore process switches from tape NIX416L6 to tape NIX417L6.
Hopefully this will catch the issue while leaving enough free disk space.

Or do you have other advice on how to proceed?
(0004764)
bruno-at-bareos   
2022-09-22 10:00   
I think you can set the debug level to 500 on the director (none on the FD) but 1000 on the SD (which is the failing part).
I also have to prepare some instructions to extract block information from the database for those 2 volumes, especially around the split.
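A sketch of how those levels could be set from bconsole (the storage resource name is a placeholder, not taken from this configuration; trace=1 writes the output to a trace file in the daemon's working directory):

setdebug level=500 dir
setdebug level=1000 storage=<your-storage-name> trace=1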
(0004775)
Int   
2022-09-26 15:42   
The good news is that the error is reproducible with Bareos 21.

I enabled the debug level of 1000 on the SD only after the restore of the first tape was done and the restore job was requesting to load the second tape.
I hope this was sufficient since the error happens when Bareos tries to read the first sector of the second tape. If not, let me know and I will repeat with earlier tracing.
I attached the trace file of the SD.

The trace file for the director is quite large (3 GB uncompressed). Shall I upload it to https://cloud.dass-it.de/index.php/s/Xf9ZH79737iastj with password mantis1372 again?
(0004776)
bruno-at-bareos   
2022-09-26 16:01   
Yes, please upload it there. It is of less interest than the SD trace, but still good to have somewhere until we can reproduce that problem automatically.
I just don't know when I will be able to parse them this week.
(0004777)
Int   
2022-09-26 17:41   
I uploaded the director trace file.
(0004787)
bruno-at-bareos   
2022-09-29 15:48   
Just to exclude a documented case: was the tape labelled with 1M or 64 kB blocks?

If you check here
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#direct-access-to-volumes-with-with-non-default-block-sizes

You will see the memory allocation error.

And then maybe you need to adapt the Label Block Size for those tapes ?
Thanks for your confirmation.
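For reference, the block-size directives involved sit in the SD device resource; a minimal sketch built from the device configuration posted earlier in this ticket (the Label Block Size value shown is the default and, per the documentation linked above, only needs changing if the volumes were labelled with a non-default size):

Device {
  Name = tapedrive-0
  Device Type = tape
  Archive Device = /dev/tape/by-id/scsi-350050763121a063c-nst
  Media Type = LTO
  Maximum Block Size = 1048576
  Label Block Size = 64512   # default label block size
}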
(0004788)
Int   
2022-09-29 16:06   
I know about that.

My device configuration has
MaximumBlockSize = 1048576
set (see my post https://bugs.bareos.org/view.php?id=1372#c4183)
and has been configured like this from the beginning, when I started using Bareos.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1465 [bareos-core] file daemon feature always 2022-05-23 13:31 2022-09-21 16:02
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.3  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Use the virtual file path + prefix when no writer is set
Description: Creating the backup woks fine, but on restore there exists often the problem, that the restore must be done to an normal file like in the PostgreSQL or MariaDB add on.
But when using the an dd or the demo code that is in the documentation, the file can only created in an absolute path.
I think it will be better, when no writer command is set(or :writer=none), that the file restore is done to the file prefix (Where setting of the restore job) + path of the virtual file like it is done in the both python add-on's.

Sample:
bpipe:file=/_mongobackups_/foo_db.archive:reader=sh -c 'cat /foo/backup_pw_file | mongodump <OPT ... > --archive':writer=none"
Then the file will be written to <where>/_mongobackups_/foo_db.archive
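For comparison, the workaround available today is to give an explicit writer that streams the restored data to an absolute path; a hypothetical example (the writer command is illustrative only, not taken from the plugin documentation):

bpipe:file=/_mongobackups_/foo_db.archive:reader=sh -c 'cat /foo/backup_pw_file | mongodump <OPT ... > --archive':writer=sh -c 'cat > /var/tmp/foo_db.archive'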
Tags: bpipe
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004755)
bruno-at-bareos   
2022-09-21 16:02   
As this looks more like a feature request, why not propose a PR for it? ;-)

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1476 [bareos-core] file daemon major always 2022-08-03 16:01 2022-08-23 12:08
Reporter: support@ingenium.trading Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version:  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backups for Full and Incremental are approx 10 times bigger than the server used
Description: Whenever a backup job is running, it takes very long without errors and the file size is 10 times bigger than the server usual uses.

Earlier we had connectivity issues, so I enabled the heartbeat in the client/myself.conf => Heartbeat Interval = 1 min
Tags:
Steps To Reproduce: Manually start the job via webui or bconsole.
Additional Information: Backup Server:
OS: Fedora 35
Bareos Version: 22.0.0~pre613.d7109f123

Client Server:
OS: Alma Linux 9 / CentOS7
Bareos Version: 22.0.0~pre613.d7109f123

Backup job:
03-Aug 09:48 bareos-dir JobId 565: No prior Full backup Job record found.
03-Aug 09:48 bareos-dir JobId 565: No prior or suitable Full backup found in catalog. Doing FULL backup.
03-Aug 09:48 bareos-dir JobId 565: Start Backup JobId 565, Job=td02.example.com.2022-08-03_09.48.28_03
03-Aug 09:48 bareos-dir JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Max configured use duration=82,800 sec. exceeded. Marking Volume "AI-Example-Consolidated-0490" as Used.
03-Aug 09:48 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0584" in catalog.
03-Aug 09:48 bareos-dir JobId 565: Using Device "FileStorage01" to write.
03-Aug 09:48 bareos-dir JobId 565: Probing client protocol... (result will be saved until config reload)
03-Aug 09:48 bareos-dir JobId 565: Connected Client: td02.example.com at td02.example.com:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 bareos-dir JobId 565: Handshake: Immediate TLS
03-Aug 09:48 bareos-dir JobId 565: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Connected Storage daemon at backup01.example.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
03-Aug 09:48 trade02-fd JobId 565: Extended attribute support is enabled
03-Aug 09:48 trade02-fd JobId 565: ACL support is enabled
03-Aug 09:48 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 09:48 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0584" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 09:48 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /dev
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /run
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /sys
03-Aug 09:51 trade02-fd JobId 565: Disallowed filesystem. Will not descend from / into /var/lib/nfs/rpc_pipefs
03-Aug 10:27 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0584" Bytes=107,374,159,911 Blocks=1,664,406 at 03-Aug-2022 10:27.
03-Aug 10:27 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0585" in catalog.
03-Aug 10:27 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 10:27 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0585" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 10:27 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0585" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 10:27.
03-Aug 11:07 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0585" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:07.
03-Aug 11:07 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0586" in catalog.
03-Aug 11:07 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:07 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0586" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:07 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0586" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:07.
03-Aug 11:46 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0586" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 11:46.
03-Aug 11:46 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0587" in catalog.
03-Aug 11:46 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 11:46 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0587" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 11:46 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0587" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 11:46.
03-Aug 12:25 bareos-sd JobId 565: User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: End of medium on Volume "AI-Example-Consolidated-0587" Bytes=107,374,160,137 Blocks=1,664,406 at 03-Aug-2022 12:25.
03-Aug 12:25 bareos-dir JobId 565: Created new Volume "AI-Example-Consolidated-0588" in catalog.
03-Aug 12:25 bareos-sd JobId 565: Labeled new Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:25 bareos-sd JobId 565: Wrote label to prelabeled Volume "AI-Example-Consolidated-0588" on device "FileStorage01" (/backup/bareos/storage)
03-Aug 12:25 bareos-sd JobId 565: New volume "AI-Example-Consolidated-0588" mounted on device "FileStorage01" (/backup/bareos/storage) at 03-Aug-2022 12:25.
03-Aug 12:56 bareos-sd JobId 565: Releasing device "FileStorage01" (/backup/bareos/storage).
03-Aug 12:56 bareos-sd JobId 565: Elapsed time=03:08:04, Transfer rate=45.57 M Bytes/second
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table with 188627 entries start
03-Aug 12:56 bareos-dir JobId 565: Insert of attributes batch table done
03-Aug 12:56 bareos-dir JobId 565: Bareos bareos-dir 22.0.0~pre613.d7109f123 (01Aug22):
  Build OS: Fedora release 35 (Thirty Five)
  JobId: 565
  Job: td02.example.com.2022-08-03_09.48.28_03
  Backup Level: Full (upgraded from Incremental)
  Client: "td02.example.com" 22.0.0~pre553.6a41db3f7 (07Jul22) CentOS Stream release 9,redhat
  FileSet: "ExampleLinux" 2022-08-03 09:48:28
  Pool: "AI-Example-Consolidated" (From Job FullPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Pool resource)
  Scheduled time: 03-Aug-2022 09:48:27
  Start time: 03-Aug-2022 09:48:31
  End time: 03-Aug-2022 12:56:50
  Elapsed time: 3 hours 8 mins 19 secs
  Priority: 10
  FD Files Written: 188,627
  SD Files Written: 188,627
  FD Bytes Written: 514,227,307,623 (514.2 GB)
  SD Bytes Written: 514,258,382,470 (514.2 GB)
  Rate: 45510.9 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: yes
  Volume name(s): AI-Example-Consolidated-0584|AI-Example-Consolidated-0585|AI-Example-Consolidated-0586|AI-Example-Consolidated-0587|AI-Example-Consolidated-0588
  Volume Session Id: 4
  Volume Session Time: 1659428963
  Last Volume Bytes: 85,150,808,401 (85.15 GB)
  Non-fatal FD errors: 0
  SD Errors: 0
  FD termination status: OK
  SD termination status: OK
  Bareos binary info: pre-release version: Get official binaries and vendor support on bareos.com
  Job triggered by: User
  Termination: Backup OK

03-Aug 12:56 bareos-dir JobId 565: shell command: run AfterJob "echo '.bvfs_update jobid=565' | bconsole"
03-Aug 12:56 bareos-dir JobId 565: AfterJob: .bvfs_update jobid=565 | bconsole

Client Alma Linux 9:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 196K 47G 1% /dev/shm
tmpfs 19G 2.3M 19G 1% /run
/dev/mapper/almalinux-root 12G 3.5G 8.6G 29% /
/dev/sda1 2.0G 237M 1.6G 13% /boot
/dev/mapper/almalinux-opt 8.0G 90M 7.9G 2% /opt
/dev/mapper/almalinux-home 12G 543M 12G 5% /home
/dev/mapper/almalinux-var 8.0G 309M 7.7G 4% /var
/dev/mapper/almalinux-opt_ExampleAd 8.0G 373M 7.7G 5% /opt/ExampleAd
/dev/mapper/almalinux-opt_ExampleEn 32G 7.5G 25G 24% /opt/ExampleEn
/dev/mapper/almalinux-var_log 20G 8.1G 12G 41% /var/log
/dev/mapper/almalinux-var_lib 12G 259M 12G 3% /var/lib
tmpfs 9.3G 0 9.3G 0% /run/user/1703000011
tmpfs 9.3G 0 9.3G 0% /run/user/1703000004



Server JobDefs:
JobDefs {
  Name = "ExampleLinux"
  Type = Backup
  Client = bareos-fd
  FileSet = "ExampleLinux"
  Storage = File
  Messages = Standard
  Schedule = "BasicBackup"
  Pool = AI-Example-Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Full Backup Pool = AI-Example-Consolidated # write Full Backups into "Full" Pool (0000005)
  Incremental Backup Pool = AI-Example-Incremental # write Incr Backups into "Incremental" Pool (0000011)
}
System Description
Attached Files:
Notes
(0004710)
bruno-at-bareos   
2022-08-03 18:15   
With bconsole's show FileSet="ExampleLinux" we will better understand what you've tried to do.

In bconsole,
estimate job=td02.example.com listing
will show you all the files included.
(0004731)
bruno-at-bareos   
2022-08-23 12:08   
No information given to go further.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
835 [bareos-core] director major always 2017-07-18 02:34 2022-08-08 16:32
Reporter: divanikus Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 8  
Status: acknowledged Product Version: 16.2.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact: yes
bareos-17.2: action: will care
bareos-16.2: impact: yes
bareos-16.2: action: will care
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restore job picks wrong storage
Description: I'm using version 16.2.6 compiled from source package (https://packages.debian.org/buster/bareos)

When I try to restore a fileset I get the following:

*restore client=zabbix-int-fd select current all done yes
Using Catalog "Metahouse"
Automatically selected FileSet: ZabbixServer
+-------+-------+----------+---------------+---------------------+------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+----------+---------------+---------------------+------------------+
| 4,949 | F | 557 | 5,332,479,991 | 2017-07-02 01:01:08 | Full-0138 |
| 4,949 | F | 557 | 5,332,479,991 | 2017-07-02 01:01:08 | Full-0140 |
| 5,178 | I | 536 | 5,359,064,324 | 2017-07-03 01:01:15 | Incremental-0139 |
| 5,200 | I | 536 | 5,387,606,276 | 2017-07-04 01:01:13 | Incremental-0007 |
| 5,240 | I | 536 | 5,715,316,484 | 2017-07-17 18:10:47 | Incremental-0012 |
| 5,240 | I | 536 | 5,715,316,484 | 2017-07-17 18:10:47 | Incremental-0014 |
| 5,248 | I | 536 | 5,716,501,252 | 2017-07-18 01:01:00 | Incremental-0012 |
+-------+-------+----------+---------------+---------------------+------------------+
You have selected the following JobIds: 4949,5178,5200,5240,5248

Building directory tree for JobId(s) 4949,5178,5200,5240,5248 ... +++++++++++++++++++++++++++++++++++++++++++++
543 files inserted into the tree and marked for extraction.
Bootstrap records written to /var/lib/bareos/bareos-dir.restore.1.bsr

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================

    Full-0140 bs-00-full FullFileStorage
    Incremental-0012 bs-00-inc IncFileStorage

Volumes marked with "*" are online.


557 files selected to be restored.

Using Catalog "Metahouse"
Job queued. JobId=5270
You have messages.
*m
18-Jul 02:53 bareos-dir JobId 5269: Start Restore Job RestoreFiles.2017-07-18_02.53.36_33
18-Jul 02:53 bareos-dir JobId 5269: Using Device "FullFileStorage" to read.
18-Jul 02:53 bs-00 JobId 5269: Ready to read from volume "Full-0140" on device "FullFileStorage" (/opt/storage/full/).
18-Jul 02:53 bs-00 JobId 5269: Forward spacing Volume "Full-0140" to file:block 0:2999808192.
18-Jul 02:53 bs-00 JobId 5269: End of Volume at file 0 on device "FullFileStorage" (/opt/storage/full/), Volume "Full-0140"
18-Jul 02:53 bs-00 JobId 5269: Warning: acquire.c:239 Read open device "FullFileStorage" (/opt/storage/full/) Volume "Incremental-0012" failed: ERR=dev.c:661 Could not open: /opt/storage/full/Incremental-0012, ERR=No such file or directory

18-Jul 02:53 bs-00 JobId 5269: Please mount read Volume "Incremental-0012" for:
    Job: RestoreFiles.2017-07-18_02.53.36_33
    Storage: "FullFileStorage" (/opt/storage/full/)
    Pool: Incremental
    Media type: File

It just tries to read Incremental-0012 from the wrong storage (FullFileStorage instead of IncFileStorage).
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0002872)
divanikus   
2018-01-12 14:22   
Problem is still here, Bareos 17.2 from official repo.

bs-00 JobId 11502: Warning: acquire.c:241 Read open device "FullFileStorage" (/opt/storage/full/) Volume "Incremental-0043" failed: ERR=dev.c:664 Could not open: /opt/storage/full/Incremental-0043, ERR=No such file or directory
(0003056)
stephand   
2018-06-28 17:14   
The Documentation for Media Type points out:

"...
If you are writing to disk Volumes, you must make doubly sure that each Device resource defined in the Storage daemon (and hence in the Director’s conf file) has a unique media type. Otherwise Bareos may assume, these Volumes can be mounted and read by any Storage daemon File device.
..."

See http://doc.bareos.org/master/html/bareos-manual-main-reference.html#directiveDirStorageMedia%20Type

It is admittedly not very obvious why this is necessary. However, the documentation contains some more detailed explanations.

We should decide if we really want to consider this a bug, as it can be fixed by configuring different media types.
(0003988)
embareossed   
2020-05-15 06:18   
On the other hand, the documentation for the storage daemon says re device:

"...
If a directory is specified, it is used as file storage. The directory must be existing and be specified as absolute path. Bareos will write to file storage in the specified directory and the filename used will be the Volume name as specified in the Catalog. If you want to write into more than one directory (i.e. to spread the load to different disk drives), you will need to define two Device resources, each containing an Archive Device with a different directory.
..."

I am using file storage, and I have my "tape" files in separate directories. I have performed many restores under this configuration, but today when I tried a restore, I get the type of error reported in this bug.

What are you referring to when you say there are "more detailed explanations" so I can understand how to correct my configuration? Thank you for your response.
(0003989)
embareossed   
2020-05-18 00:58   
(Last edited: 2020-05-23 03:21)
For the time being -- in particular, to perform an urgent restore at this moment -- I have decided to combine the tape (type Disk, in Bareos parlance) directories into one, create one storage daemon Device, and just have all my director Storages point to the one Storage I defined for the storage daemon.

It works now. I can both backup and restore as before. I still feel that this is either a bug, or that there needs to be additional documentation explaining how to use multiple directories in the storage daemon for Disk-type Storages.

(0004717)
sedlmeier   
2022-08-08 16:32   
Always use a different Media Type if the Device writes/reads from a different directory. Bareos uses the Media Type as an indicator of which volumes can be mounted in which device.

If you are using more than one Catalog, some devices get mixed up in the databases, even if they have different Media Types.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1471 [bareos-core] installer / packages tweak N/A 2022-07-13 11:39 2022-07-28 09:27
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Shell example script for Bareos installation on Debian / Ubuntu uses deprecated "apt-key"
Description: Using the script raises the warning: "apt-key is deprecated". In order to correct this, it is suggested to change
---
# add package key
wget -q $URL/Release.key -O- | apt-key add -
---
to
+++
# add package key
wget -q $URL/Release.key -O- | gpg --dearmor -o /usr/share/keyrings/bareos.gpg
sed -i -e 's#deb #deb [signed-by=/usr/share/keyrings/bareos.gpg] #' /etc/apt/sources.list.d/bareos.list
+++
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004665)
bruno-at-bareos   
2022-07-14 10:09   
Would this be valid for any version of Debian/Ubuntu in use (Debian 9 and Ubuntu 18.04)?
(0004666)
bruno-at-bareos   
2022-07-14 10:44   
We appreciate any effort made to make our software better.
This would be a nice improvement.
Testing on old systems seems OK; we are checking how much effort it takes to change the code, handle the update/upgrade process on user installations, and adapt the documentation.
(0004667)
bruno-at-bareos   
2022-07-14 11:01   
Adding public reference of the why apt-key should be changed and how,
https://askubuntu.com/questions/1286545/what-commands-exactly-should-replace-the-deprecated-apt-key/1307181#1307181

Maybe changing to Deb822 .sources files is the way to go.
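For reference, a Deb822-style source entry could look roughly like this (a sketch only; <URL> stands for the same repository URL the script already uses as $URL, and the Signed-By path matches the gpg --dearmor suggestion above):

# /etc/apt/sources.list.d/bareos.sources
Types: deb
URIs: <URL>
Suites: /
Signed-By: /usr/share/keyrings/bareos.gpg

Assuming the existing bareos.list uses a flat repository entry of the form "deb $URL /", the suite is the exact path "/" and the Components field is omitted.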
(0004669)
amodia   
2022-07-14 13:06   
I ran into this issue on the update from Bareos 20 to 21. So I can't comment on earlier versions.
My "solution" was the first that worked. Any solution that is better, more compatible and/or requires less effort is appreciated.
(0004695)
bruno-at-bareos   
2022-07-28 09:26   
Changes applied to future documentation
commit c08b56c1a
PR1203
(0004696)
bruno-at-bareos   
2022-07-28 09:27   
Follow status in PR1203 https://github.com/bareos/bareos/pull/1203

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1253 [bareos-core] webui major always 2020-06-17 09:58 2022-07-20 14:09
Reporter: tagort214 Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: normal OS Version: 16.04  
Status: acknowledged Product Version: 19.2.7  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Can't restore files from Webui
Description: When I try to restore files from Webui it returns this error:

There was an error while loading data for this tree.

Error: ajax

Plugin: core

Reason: Could not load node

Data:

{"id":"#","xhr":{"readyState":4,"responseText":"\n\n\n \n \n \n \n\n \n \n\n\n \n \n\n\n\n\n \n\n \n\n
\n \n\n
Произошла ошибка
\n
An error occurred during execution; please try again later.
\n\n\n\n
Дополнительная информация:
\n
Zend\\Json\\Exception\\RuntimeException
\n

\n
Файл:
\n
    \n

    /usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68

    \n \n
Сообщение:
\n
    \n

    Decoding failed: Syntax error

    \n \n
Трассировки стека:
\n
    \n

    #0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\\Json\\Json::decode('', 1)\n#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\\Model\\RestoreModel->getDirectories(Object(Bareos\\BSock\\BareosBSock), '207685', '#')\n#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\\Controller\\RestoreController->getDirectories()\n#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\\Controller\\RestoreController->buildSubtree()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\\Controller\\RestoreController->filebrowserAction()\n#5 [internal function]: Zend\\Mvc\\Controller\\AbstractActionController->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\\Mvc\\Controller\\AbstractController->dispatch(Object(Zend\\Http\\PhpEnvironment\\Request), Object(Zend\\Http\\PhpEnvironment\\Response))\n#10 [internal function]: Zend\\Mvc\\DispatchListener->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#14 /usr/share/bareos-webui/public/index.php(24): Zend\\Mvc\\Application->run()\n#15 {main}

    \n \n

\n\n\n
\n\n \n \n\n\n","status":500,"statusText":"Internal Server Error"}}

Also, in the apache2 error log I see these lines:
[:error] [pid 13597] [client 172.32.1.51:56276] PHP Notice: Undefined index: type in /usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php on line 91, referer: http://bareos.ivt.lan/bareos-webui/client/details/clientname
 [:error] [pid 14367] [client 172.32.1.51:56278] PHP Warning: unpack(): Type N: not enough input, need 4, have 0 in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 172, referer: http://bareos.ivt.lan/bareos-webui/restore/?mergefilesets=1&mergejobs=1&client=clientname&jobid=207728


Tags:
Steps To Reproduce: 1) Login to webui
2) Select job and click show files (or select client from restore tab)
Additional Information:
System Description
Attached Files: Снимок экрана_2020-06-17_10-57-42.png (37,854 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=443&type=bug
png

Снимок экрана_2020-06-17_10-57-24.png (47,279 bytes) 2020-06-17 09:58
https://bugs.bareos.org/file_download.php?file_id=444&type=bug
png
Notes
(0004242)
frank   
2021-08-31 16:14   
tagort214:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue, 142 in this example. Replace the jobid from the example below with your specific jobid(s).

*.bvfs_get_jobids jobid=142 all
1,55,142
*.bvfs_lsdirs path= jobid=1,55,142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes problems by using the pathid; pathids will differ on your system.

*.bvfs_lsdirs pathid=37 jobid=1,55,142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=1,55,142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=1,55,142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we can not list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=1,55,142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=1,55,142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
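As a quick check, assuming the cleaned-up result is in out.txt and jq is installed, the following one-liner reports whether the file parses as JSON:

jq -e . out.txt > /dev/null && echo "JSON ok" || echo "JSON invalid"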
(0004681)
khvalera   
2022-07-20 14:09   
Try increasing the following value in configuration.ini:
[restore]
; Restore filetree refresh timeout after n milliseconds
; Default: 120000 milliseconds
filetree_refresh_timeout=220000

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1472 [bareos-core] General tweak have not tried 2022-07-13 11:53 2022-07-19 14:55
Reporter: amodia Platform:  
Assigned To: bruno-at-bareos OS: Debian 11  
Priority: normal OS Version:  
Status: resolved Product Version:  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: No explanation on "delcandidates"
Description: After an upgrade to Bareos v21 the following message appeared in the status of bareos-director:

HINWEIS: Tabelle »delcandidates« existiert nicht, wird übersprungen
(engl.: NOTE: Table »delcandidates« does not exist, will be skipped)

Searching the Bareos website for "delcandidates" does not return any matching page!

It would be nice to give a hint to update the tables in the database by running:

su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: log_dbcheck_2022-07-18.log (35,702 bytes) 2022-07-18 15:42
https://bugs.bareos.org/file_download.php?file_id=528&type=bug
bareos_dbcheck_debian11.log.xz (30,660 bytes) 2022-07-19 14:55
https://bugs.bareos.org/file_download.php?file_id=529&type=bug
Notes
(0004664)
bruno-at-bareos   
2022-07-14 10:08   
From which version did you update?
It is clearly stated in the documentation to run update_table and grant on any update (especially major versions).
(0004668)
amodia   
2022-07-14 12:51   
The update was from 20 to 21.
I missed the "run update_table" statement in the documentation.
The documentation regarding "run grant" is misleading:

"Warning:
When using the PostgreSQL backend and updating to Bareos < 14.2.3, it is necessary to manually grant database permissions (grant_bareos_privileges), normally by

su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
"
Because I wondered who might want to upgrade "to Bareos < 14.2.3" when version 21 is available, I thought what was meant is "updating from Bareos < 14.2.3 to a later version". So I skipped the "run grant" step for my update, and it worked.
(0004670)
bruno-at-bareos   
2022-07-14 14:06   
I don't know which documentation part you are talking about.

The updating Bareos chapter has the following on database updates:
https://docs.bareos.org/bareos-21/IntroductionAndTutorial/UpdatingBareos.html#other-platforms which talks about update & grant.

Maybe you can share a link here ?
(0004671)
amodia   
2022-07-14 14:15   
https://docs.bareos.org/TasksAndConcepts/CatalogMaintenance.html

First warning, just before the "Manual Configuration" section.
(0004672)
bruno-at-bareos   
2022-07-14 14:25   
Ah, OK, I understand; that's related to dbconfig.
Are you using dbconfig for your installation (for Bareos 20 and 21)?
(0004673)
amodia   
2022-07-14 16:34   
Well ...
During the update from Bareos 16 to 20 I selected "Yes" for the dbconfig-common option. Unfortunately the database got lost.
This time (Bareos 20 to 21) I selected "No", hoping that a manual update would be more successful. So I have a backup of the database just before the update, but unfortunately I had no success with the manual update. So the "old" data is lost, and the 'bareos' database (bareos-db) gets filled with "new" data since the update.

In the meantime I am able to get some commands working from the command line, at least for user 'bareos':
- bareos-dbcheck *)
- bareos-dir -t -f -d 500

*): selecting test no. 12 "Check for orphaned storage records" crashes bareos-dbcheck with a "memory access error".

The next experiment is to
- create a new database (bareos2-db) from the backup before the update
- run update_table & grant & bareos-dbcheck on this db
- change the MyCatalog.conf accordingly (dbname = bareos2)
- test, if everything is working again

The hope is to "merge" this bareos2-db (data before the update) with the bareos-db (see above), which collects the data since the update.
Is this possible?
(0004674)
bruno-at-bareos   
2022-07-14 17:34   
Not sure what happened in your case; the upgrade process is quite well tested here, both manual and with dbconfig. (Maybe the switch from MySQL to PostgreSQL?)

Did you run bareos-dbcheck or Bareos in a container? (Beware: by default containers have a low memory limit, which often is not enough.)

As you have the dump, I would have simply restored it, run the manual update & grant, and then bareos-dir -t should work with all the previous data preserved.
(To restore, of course, you first create the database.)

Then run dbcheck against it (advice: next time run dbcheck before the dump, so you save time and space by not dumping orphan records).
If it fails again, we would be interested in a copy of the storage definition and the output of
bareos-dbcheck -v -dt -d1000
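A minimal sketch of that restore-and-update sequence, assuming a plain-SQL dump at /var/tmp/bareos_pre21.sql and the default catalog name bareos (both assumptions; adapt paths, names and ownership to your installation):

su - postgres -c 'createdb bareos'
su - postgres -c 'psql bareos < /var/tmp/bareos_pre21.sql'
su - postgres -c /usr/lib/bareos/scripts/update_bareos_tables
su - postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
bareos-dir -t -f -d 100   # verify the director can open the restored catalog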
(0004675)
amodia   
2022-07-15 09:22   
Here Bareos runs on a virtual machine (KVM, no container) with limited resources (total memory: 473MiB, swap: 703MiB, storage: 374GiB). The files are stored on an external NAS (6TB) mounted with autofs. This seemed to be enough for "normal" operations.

Appendix "Hardware sizing" has no recommendation on memory. What do you recommend?
(0004676)
bruno-at-bareos   
2022-07-18 10:14   
The Hardware Sizing chapter has quite a number of recommendations for the database (which is what the director uses); it of course depends highly on the number of files backed up. PostgreSQL should get about 1/4 of the RAM, and/or at least enough to hold the file index. If the FD also runs here with Accurate, it needs enough memory to keep track of the Accurate file list.
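As an illustration of that 1/4-of-RAM suggestion for a VM with around 2 GiB of memory, the relevant postgresql.conf entries might look roughly like this (values are examples, not official Bareos recommendations):

# postgresql.conf excerpt (illustrative values for ~2 GiB of RAM)
shared_buffers = 512MB            # roughly 1/4 of RAM, as suggested above
work_mem = 16MB
maintenance_work_mem = 128MB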
(0004677)
amodia   
2022-07-18 12:33   
Update:
bareos-dbcheck (Interactive mode) runs only with the following command:
su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ...'

Every test runs smoothly EXCEPT test no.12: "Check for orphaned storage records".
Test no. 12 fails regardless of the memory size (original: 473 MiB, increased: 1.9 GiB).
Failure ("Memory Access Error") occurs immediately (no gradual filling of memory and then failure).
The database to check is only a few days old, so there seems to be an issue other than the DB size.

All tests but no. 12 run even with the low-memory setup.
Here the Director and both Daemons (Storage and File) are on the same virtual machine.
(0004678)
bruno-at-bareos   
2022-07-18 13:36   
Without the requested log, we won't be able to check what happens.
(0004679)
amodia   
2022-07-18 15:42   
Please find the log file attached of

su - bareos -s /bin/bash -c '/usr/sbin/bareos-dbcheck ... -v -dt -d1000' 2>&1 |tee log_dbcheck_2022-07-18.log
(0004680)
bruno-at-bareos   
2022-07-19 14:55   
Unfortunately the problem you are seeing on your installation can't be reproduced on several installations here. Tested: RHEL 8, Xubuntu 22.04, Debian 11.

See the full log attached.
Maybe you have some extra tools restricting the normal workflow too much (AppArmor, SELinux, or similar).

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1464 [bareos-core] director major always 2022-05-23 10:52 2022-07-05 14:53
Reporter: meilihao Platform: linux  
Assigned To: bruno-at-bareos OS: oracle linux  
Priority: urgent OS Version: 7.9  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: director can't connect filedaemon
Description: The director can't connect to the filedaemon; it gets an SSL error.
Tags:
Steps To Reproduce: env:
- filedaemon: v21.0.0 on win10, x64
- director: v21.1.2, x64

bconsole run: `status client=xxx`, get error:
```bash
# tail -f /var/log/bareos.log
Network error during CRAM MD5 with 192.168.0.130
Unable to authenticate with File daemon at "192.168.0.130:9102"
```

filedaemon error: `TLS negotiation failed` and `error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac`
Additional Information:
Attached Files:
Notes
(0004630)
meilihao   
2022-05-31 04:12   
Has anyone encountered this?
(0004656)
bruno-at-bareos   
2022-07-05 14:53   
After restarting both the director and the client, do you still see the problem?
I'm not able to reproduce it here with Win10 64-bit and CentOS 8 Bareos binaries from download.bareos.org.

Where does your director come from, then?
- director: v21.1.2, x64
(0004657)
bruno-at-bareos   
2022-07-05 14:53   
Can't be reproduced

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
874 [bareos-core] director minor always 2017-11-07 12:12 2022-07-04 17:12
Reporter: chaos_prevails Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 16.04 amd64  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: VirtualFull fails when read and write devices are on different storage daemons (different machines)
Description: The Virtual full backup fails with

...
2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269, Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
 2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds 254,251,252,255,256,257
 2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
 2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
 2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4 (01Jul16):
...

When changing the storage daemon for the VirtualFull backup to the same machine as the always-incremental and consolidate backups, the VirtualFull backup works:
...
 2017-11-07 10:43:20 pavlov-dir JobId 320: Start Virtual Backup JobId 320, Job=pavlov_sys_ai_vf.2017-11-07_10.43.18_05
 2017-11-07 10:43:20 pavlov-dir JobId 320: Consolidating JobIds 317,314,315
 2017-11-07 10:43:20 pavlov-dir JobId 320: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file-consolidate" to read.
 2017-11-07 10:43:21 pavlov-dir JobId 320: Using Device "pavlov-file" to write.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to read from volume "ai_consolidate-0023" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:21 pavlov-sd JobId 320: Volume "Full-0032" previously written, moving to end of data.
 2017-11-07 10:43:21 pavlov-sd JobId 320: Ready to append to end of Volume "Full-0032" size=97835302
 2017-11-07 10:43:21 pavlov-sd JobId 320: Forward spacing Volume "ai_consolidate-0023" to file:block 0:7046364.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_consolidate-0023"
 2017-11-07 10:43:22 pavlov-sd JobId 320: Ready to read from volume "ai_inc-0033" on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos).
 2017-11-07 10:43:22 pavlov-sd JobId 320: Forward spacing Volume "ai_inc-0033" to file:block 0:1148147.
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of Volume at file 0 on device "pavlov-file-consolidate" (/mnt/XXX/var_backup_bareos), Volume "ai_inc-0033"
 2017-11-07 10:43:22 pavlov-sd JobId 320: End of all volumes.
 2017-11-07 10:43:22 pavlov-sd JobId 320: Elapsed time=00:00:01, Transfer rate=7.029 M Bytes/second
 2017-11-07 10:43:22 pavlov-dir JobId 320: Joblevel was set to joblevel of first consolidated job: Full
 2017-11-07 10:43:23 pavlov-dir JobId 320: Bareos pavlov-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 320
  Job: pavlov_sys_ai_vf.2017-11-07_10.43.18_05
  Backup Level: Virtual Full
  Client: "pavlov-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
  FileSet: "linux_system" 2017-10-19 16:11:21
  Pool: "Full" (From Job's NextPool resource)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "pavlov-file" (From Storage from Job's NextPool resource)
  Scheduled time: 07-Nov-2017 10:43:18
  Start time: 07-Nov-2017 10:29:38
  End time: 07-Nov-2017 10:29:39
  Elapsed time: 1 sec
  Priority: 13
  SD Files Written: 148
  SD Bytes Written: 7,029,430 (7.029 MB)
  Rate: 7029.4 KB/s
  Volume name(s): Full-0032
  Volume Session Id: 1
  Volume Session Time: 1510047788
  Last Volume Bytes: 104,883,188 (104.8 MB)
  SD Errors: 0
  SD termination status: OK
  Accurate: yes
  Termination: Backup OK

 2017-11-07 10:43:23 pavlov-dir JobId 320: console command: run AfterJob "update jobid=320 jobtype=A"
Tags:
Steps To Reproduce: 1. create always incremental, consolidate jobs, pools, and make sure they are working. Use storage daemon A (pavlov in my example)
2. create VirtualFull Level backup with Storage attribute pointing to a device on a different storage daemon B (delaunay in my example)
3. start always incremental and consolidate job and verify that they are working as expected
4. start VirtualFull Level backup
5. fails with error message:
...
delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
     Storage daemon didn't accept Device "pavlov-file-consolidate" because:
     3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
...
Additional Information: A) configuration with working always incremental and consolidate jobs, but failing virtualFull level backup:

A) director pavlov (to disk storage daemon + director)
1) template for always incremental jobs
JobDefs {
  Name = "default_ai"
  Type = Backup
  Level = Incremental
  Client = pavlov-fd
  Storage = pavlov-file
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
  Maximum Concurrent Jobs = 7

  #always incremental config
  Pool = disk_ai
  Incremental Backup Pool = disk_ai
  Full Backup Pool = disk_ai_consolidate
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 20 seconds 0000007 days
  Always Incremental Keep Number = 2 0000007
  Always Incremental Max Full Age = 1 minutes # 14 days
}


2) template for virtual full jobs, should run on read storage pavlov and write storage delaunay:
JobDefs {
  Name = "default_ai_vf"
  Type = Backup
  Level = VirtualFull
  Messages = Standard
  Priority = 13
  Accurate = yes
 
  Storage = delaunay_HP_G2_Autochanger
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
  

  # run after Consolidate
  Run Script {
   console = "update jobid=%i jobtype=A"
   Runs When = After
   Runs On Client = No
   Runs On Failure = No
  }

  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
}

3) consolidate job
Job {
  Name = ai_consolidate
  Type = Consolidate
  Accurate = yes
  Max Full Consolidations = 1
  Client = pavlov-fd #value which should be ignored by Consolidate job
  FileSet = "none" #value which should be ignored by Consolidate job
  Pool = disk_ai_consolidate #value which should be ignored by Consolidate job
  Incremental Backup Pool = disk_ai_consolidate
  Full Backup Pool = disk_ai_consolidate
# JobDefs = DefaultJob
# Level = Incremental
  Schedule = "ai_consolidate"
  # Storage = pavlov-file-consolidate #commented out for VirtualFull-Tape testing
  Messages = Standard
  Priority = 10
  Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

4) always incremental job for client pavlov (works)
Job {
  Name = "pavlov_sys_ai"
  JobDefs = "default_ai"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
}


5) virtualfull job for pavlov (doesn't work)
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Storage = delaunay_HP_G2_Autochanger
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

6) pool always incremental
Pool {
  Name = disk_ai
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_inc-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file
  Next Pool = disk_ai_consolidate
}

7) pool always incremental consolidate
Pool {
  Name = disk_ai_consolidate
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 4 weeks
  Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
  Maximum Volumes = 200 # Limit number of Volumes in Pool
  Label Format = "ai_consolidate-" # Volumes will be labeled "Full-<volume-id>"
  Volume Use Duration = 23h
  Storage = pavlov-file-consolidate
  Next Pool = tape_automated
}

8) pool tape_automated (for virtualfull jobs to tape)
Pool {
  Name = tape_automated
  Pool Type = Backup
  Storage = delaunay_HP_G2_Autochanger
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
}

9) 1st storage device for disk backup (writes always incremental jobs + other normal jobs)
Storage {
  Name = pavlov-file
  Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = X
  TLS Require = X
  TLS Verify Peer = X
  TLS Allowed CN = pavlov.X
}

10) 2nd storage device for disk backup (consolidates AI jobs)
Storage {
  Name = pavlov-file-consolidate
  Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "X"
  Maximum Concurrent Jobs = 1
  Device = pavlov-file-consolidate
  Media Type = File
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}

11) 3rd storage device for tape backup
Storage {
  Name = delaunay_HP_G2_Autochanger
  Address = "delaunay.XX"
  Password = "X"
  Device = "HP_G2_Autochanger"
  Media Type = LTO
  Autochanger = yes
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = delaunay.X
}


B) storage daemon pavlov (to disk)
1) to disk storage daemon

Storage {
  Name = pavlov-sd
  Maximum Concurrent Jobs = 20

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
  TLS Allowed CN = delaunay.X
}

2) to disk device (AI + others)
Device {
  Name = pavlov-file
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

3) consolidate to disk
Device {
  Name = pavlov-file-consolidate
  Media Type = File
  Maximum Open Volumes = 1
  Maximum Concurrent Jobs = 1
  Archive Device = /mnt/xyz #(same for both)
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

C) to tape storage daemon (different server)
1) allowed director
Director {
  Name = pavlov-dir
  Password = "[md5]X"
  Description = "Director, who is permitted to contact this storage daemon."
  TLS Certificate = X
  TLS Key = /X
  TLS CA Certificate File = X
  TLS DH File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
}


2) storage daemon config
Storage {
  Name = delaunay-sd
  Maximum Concurrent Jobs = 20
  Maximum Network Buffer Size = 32768
# Maximum Network Buffer Size = 65536

  # remove comment from "Plugin Directory" to load plugins from specified directory.
  # if "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = /usr/lib/bareos/plugins
  # Plugin Names = ""
  TLS Certificate = X
  TLS Key = X
  TLS DH File = X
  TLS CA Certificate File = X
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = pavlov.X
  TLS Allowed CN = edite.X
}


3) autochanger config
Autochanger {
  Name = "HP_G2_Autochanger"
  Device = Ultrium920
  Changer Device = /dev/sg5
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

4) device config
Device {
  Name = "Ultrium920"
  Media Type = LTO
  Archive Device = /dev/st2
  Autochanger = yes
  LabelMedia = no
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Maximum Spool Size = 50G
  Spool Directory = /var/lib/bareos/spool
  Maximum Block Size = 2097152
# Maximum Block Size = 4194304
  Maximum Network Buffer Size = 32768
  Maximum File Size = 50G
}


B) changes to make the VirtualFull level backup work (using a device on the same storage daemon as the always incremental and consolidate jobs), in both Job and Pool definitions.

1) change virtualfull job's storage
Job {
  Name = "pavlov_sys_ai_vf"
  JobDefs = "default_ai_vf"
  Client = "pavlov-fd"
  FileSet = linux_system
  Schedule = manual
  Pool = disk_ai_consolidate
  Incremental Backup Pool = disk_ai
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
  Next Pool = tape_automated
  Virtual Full Backup Pool = tape_automated
}

1) change virtualfull pool's storage
Pool {
  Name = tape_automated
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Recycle Oldest Volume = yes
  RecyclePool = Scratch
  Maximum Volume Bytes = 0
  Volume Retention = 4 weeks
  Cleaning Prefix = "CLN"
  Catalog Files = yes
  Label Format = "Full-" # <-- !! add this because now we write to disk, not to tape
  Storage = pavlov-file # <-- !! change to make VirtualFull work !!
}
Attached Files:
Notes
(0002815)
chaos_prevails   
2017-11-15 11:08   
Thanks to a comment on the bareos-users Google group I found out that this is a feature that is not implemented, not a bug.

I think it would be important to mention this in the documentation. I think VirtualFull would be a good solution for offsite backup (e.g. in another building or another server room). This involves another storage daemon.

I looked at different ways to export the tape drive on the offsite-backup machine to the local machine (e.g. iSCSI, ...). However, this adds extra complexity and might cause shoe-shining (the connection to the offsite-backup machine has to be really fast, because spooling would happen on the local machine). In my case (~10 MB/s) tape and drive would definitely suffer from shoe-shining. So currently, besides always incremental, I do another full backup to the offsite-backup machine.
(0004651)
sven.compositiv   
2022-07-04 16:48   
> Thanks to a comment on the bareos-users google-group I found out that this is a feature not implemented, not a bug.

If it is an unimplemented feature, I'd expect that no backups are chosen from other storages. We have the problem that we copy jobs from AI-Consolidated to a tape. After doing that, all VirtualFull jobs fail once backups from our tape storage have been selected.
(0004652)
bruno-at-bareos   
2022-07-04 17:02   
Could you explain a bit more (a configuration example, maybe)?

Having an Always Incremental rotation using one storage like File, and then creating a VirtualFull archive to another storage resource (on the same SD daemon), works very well, as documented.
Maybe you forgot to update your VirtualFull to be an archive job. Then yes, the next AI will use the most recent VF.
But this is also documented.
(0004655)
bruno-at-bareos   
2022-07-04 17:12   
Not implemented.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1459 [bareos-core] installer / packages major always 2022-05-09 16:37 2022-07-04 17:11
Reporter: khvalera Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fails to build ceph plugin on Archlinux
Description: Ceph plugin cannot be built on Archlinux with ceph 15.2.14

Build report:

```
[ 73%] Building CXX object core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc.o
In file included from /data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.cc:33:
/data/builds-pkg/bareos/src/bareos/core/src/stored/backends/cephfs_device.h:31:10: fatal error: cephfs/libcephfs.h: No such file or directory
    31 | #include <cephfs/libcephfs.h>
       | ^~~~~~~~~~~~~~~~~~~~
compilation aborted.
make[2]: *** [core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/build.make:76: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/cephfs_device.cc .o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3157: core/src/stored/backends/CMakeFiles/bareossd-cephfs.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
```
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: 009-fix-timer_thread.patch (551 bytes) 2022-05-27 23:58
https://bugs.bareos.org/file_download.php?file_id=518&type=bug
Notes
(0004605)
bruno-at-bareos   
2022-05-10 13:03   
Maybe you can describe your setup a bit more: where does cephfs come from?
Maybe the result of a find for libcephfs.h would be useful.
(0004606)
khvalera   
2022-05-10 15:12   
You can fix this error by installing ceph-libs, but the build still fails:

[ 97%] Building CXX object core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs-fd.cc.o
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc: In the "bRC filedaemon::get_next_file_to_backup(PluginContext*)" function:
/data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:421:33: error: cannot convert "stat*" to "ceph_statx*"
  421 | &p_ctx->statp, &stmask);
      | ^~~~~~~~~~~~~
      | |
      | stat*
In file included from /data/builds-pkg/bareos/src/bareos/core/src/plugins/filed/cephfs/cephfs-fd.cc:35:
/usr/include/cephfs/libcephfs.h:564:43: note: when initializing the 4th argument "int ceph_readdirplus_r(ceph_mount_info*, ceph_dir_result*, dirent*, ceph_statx*, unsigned int, unsigned int, Inode**)"
  564 | struct ceph_statx *stx, unsigned want, unsigned flags, struct Inode **out);
      | ~~~~~~~~~~~~~~~~~~^~~
make[2]: *** [core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/build.make:76: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/cephfs/cephfs -fd.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3908: core/src/plugins/filed/CMakeFiles/cephfs-fd.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
(0004610)
bruno-at-bareos   
2022-05-10 17:31   
When we ask for a bit more information about your setup, please make the effort to provide useful information such as the compiler used, cmake output, etc.
Otherwise we can close this, noting that it works well with newer cephfs versions such as 15.2.15 or 16.2.7.
(0004628)
khvalera   
2022-05-27 23:58   
After updating the system and applying the attached patch, Bareos builds again.
(0004653)
bruno-at-bareos   
2022-07-04 17:10   
I will mark this as closed, fixed by

commit ce3339d28
Author: Andreas Rogge <andreas.rogge@bareos.com>
Date: Wed Feb 2 19:41:25 2022 +0100

    lib: fix use-after-free in timer_thread

diff --git a/core/src/lib/timer_thread.cc b/core/src/lib/timer_thread.cc
index 7ec802198..1624ddd4f 100644
--- a/core/src/lib/timer_thread.cc
+++ b/core/src/lib/timer_thread.cc
@@ -2,7 +2,7 @@
    BAREOS® - Backup Archiving REcovery Open Sourced

    Copyright (C) 2002-2011 Free Software Foundation Europe e.V.
- Copyright (C) 2019-2019 Bareos GmbH & Co. KG
+ Copyright (C) 2019-2022 Bareos GmbH & Co. KG

    This program is Free Software; you can redistribute it and/or
    modify it under the terms of version three of the GNU Affero General Public
@@ -204,6 +204,7 @@ static bool RunOneItem(TimerThread::Timer* p,
       = std::chrono::steady_clock::now();

   bool remove_from_list = false;
+ next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   if (p->is_active && last_timer_run_timepoint > p->scheduled_run_timepoint) {
     LogMessage(p);
     p->user_callback(p);
@@ -215,7 +216,6 @@ static bool RunOneItem(TimerThread::Timer* p,
       p->scheduled_run_timepoint = last_timer_run_timepoint + p->interval;
     }
   }
- next_timer_run = min(p->scheduled_run_timepoint, next_timer_run);
   return remove_from_list;
 }
(0004654)
bruno-at-bareos   
2022-07-04 17:11   
Fixed with https://github.com/bareos/bareos/pull/1060

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1470 [bareos-core] webui minor always 2022-06-28 09:16 2022-06-30 13:41
Reporter: ffrants Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: low OS Version: 20.04  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Update information could not be retrieved
Description: Update information could not be retrieved, and the update status on clients is shown as unknown.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: Снимок экрана 2022-06-28 в 10.11.01.png (22,345 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=524&type=bug
png

Снимок экрана 2022-06-28 в 10.15.12.png (28,921 bytes) 2022-06-28 09:16
https://bugs.bareos.org/file_download.php?file_id=525&type=bug
png

Снимок экрана 2022-06-30 в 14.04.09.png (14,330 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=526&type=bug
png

Снимок экрана 2022-06-30 в 14.06.27.png (21,387 bytes) 2022-06-30 13:07
https://bugs.bareos.org/file_download.php?file_id=527&type=bug
png
Notes
(0004648)
bruno-at-bareos   
2022-06-29 17:03   
Works here (maybe a transient certificate error); could you recheck, please?
(0004649)
ffrants   
2022-06-30 13:07   
Here's what I found out:
My IP is blocked by bareos.com (I can't open www.bareos.com). If I open the web UI via VPN, it doesn't show the red exclamation mark near the version.
But the problem on the "Clients" tab persists, though not for all versions (see attachment).
(0004650)
bruno-at-bareos   
2022-06-30 13:41   
Only Russian authority will create a fix, so blacklisting will be dropped

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1460 [bareos-core] storage daemon block always 2022-05-10 17:46 2022-05-11 13:08
Reporter: alistair Platform: Linux  
Assigned To: bruno-at-bareos OS: Ubuntu  
Priority: normal OS Version: 21.10  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to Install bareos-storage-droplet
Description: Apt returns the following

The following packages have unmet dependencies:
bareos-storage-droplet : Depends: libjson-c4 (>= 0.13.1) but it is not installable

libjson-c4 seems to have been superseded by libjson-c5 in newer versions of Ubuntu.
Tags: droplet, s3;droplet;aws;storage, storage
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004615)
bruno-at-bareos   
2022-05-11 13:07   
I don't know what you are expecting here; Ubuntu 21.10 is not a supported build distribution.
As such, we don't know which package you are trying to install.

The subscription channel will soon offer Ubuntu 22.04; you can contact sales if you want more information.
(0004616)
bruno-at-bareos   
2022-05-11 13:08   
Not a supported distribution.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1458 [bareos-core] webui major always 2022-05-09 13:32 2022-05-10 13:01
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.3  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to view the pool details.
Description: With the last update the pool page is completely broken.
When the pool name contains a space, a 404 error is returned.
On a pool without a space in the name, the error shown in the attached screenshot happens.
Before 21.1.3, only pools with a space in the name were broken.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Screenshot 2022-05-09 at 13-29-52 Bareos.png (152,604 bytes) 2022-05-09 13:32
https://bugs.bareos.org/file_download.php?file_id=512&type=bug
png
Notes
(0004600)
mdc   
2022-05-09 13:40   
It looks like a caching problem. Open the webui in a private session, then it will work.
A re-login or a new tab will not help.
(0004601)
bruno-at-bareos   
2022-05-09 14:30   
Did you restart the webserver (and/or php-fpm, if used)? Browsers have recently had a tendency to not clean up their disk cache correctly; it may be necessary to manually clear the cached content for the webui site.
(0004603)
mdc   
2022-05-10 11:43   
Yes, that was my first idea: restarting the web server and the backend PHP service.
Now, after approximately 48 hours, the correct page is loaded.
(0004604)
bruno-at-bareos   
2022-05-10 13:01   
The personal browser cache needs to be cleaned up.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1454 [bareos-core] director feature always 2022-05-03 07:02 2022-05-05 14:22
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Add config option to let run scripts only on an specific job level
Description: Until now, a script can only run before or after a job. But for some jobs, like backing up a PostgreSQL database using your add-on, a third situation occurs.
A script that runs only before a full backup job is needed, to remove the old WAL archive files.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004586)
bruno-at-bareos   
2022-05-03 10:42   
Maybe you want to use the character substitution flag and use it to determine which kind of job you are running:
%l = Job Level

https://docs.bareos.org/bareos-21/Configuration/Director.html?highlight=run%20script#config-Dir_Job_RunScript
(0004588)
mdc   
2022-05-04 07:12   
I have done this as a workaround, but it needs a wrapper script to call the "real" script/application.
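For reference, such a wrapper could be sketched like this (the script path, the %l argument and the WAL archive directory are illustrative assumptions, not part of the PostgreSQL plugin):

# Director, Job resource: pass the job level to the wrapper on the client
Run Script {
  Runs When = Before
  Runs On Client = Yes
  Runs On Failure = No
  Command = "/usr/local/bin/pre_backup.sh %l"
}

and the wrapper itself:

#!/bin/sh
# /usr/local/bin/pre_backup.sh -- hypothetical wrapper
# $1 is the job level substituted via %l; only clean up before Full backups
if [ "$1" = "Full" ]; then
    rm -f /var/lib/postgresql/wal_archive/*   # WAL archive path is an assumption
fi
exit 0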
(0004596)
bruno-at-bareos   
2022-05-05 14:00   
Maybe you are inclined to propose a PR for that option?
(0004598)
mdc   
2022-05-05 14:22   
I think it would need deeper internal changes, and I don't have deep enough knowledge of the code.
So my idea is, since a workaround exists for me, that the ticket can be kept with low priority on the "wish list".

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1453 [bareos-core] director major always 2022-05-02 15:56 2022-05-02 15:56
Reporter: inazo Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 20.0.6  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Rate issue when prune and truncate volume
Description: Hi,

Every day of the month I run a task with a pool that is limited to 31 volumes; each volume is used once and the maximum retention is 30 days. Every day since I reached the 31 volumes in my pool, the rate has decreased from 14150 KB/s to 141 KB/s... So my backup, which initially took 5 minutes to run, now takes 30 minutes... I think it happens when the job truncates / recycles / prunes the volume.

On the first day of the month I run another pool in full mode. It is not affected by the rate decrease because, for the moment, that job doesn't have to recycle/prune/truncate a volume.

Other information: I use S3 storage.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: bareos-dir.conf (893 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=503&type=bug
client.conf (79 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=504&type=bug
fileset.conf (333 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=505&type=bug
job.conf (435 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=506&type=bug
jobdefs.conf (655 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=507&type=bug
pool.conf (2,697 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=508&type=bug
schedule.conf (167 bytes) 2022-05-02 15:56
https://bugs.bareos.org/file_download.php?file_id=509&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1447 [bareos-core] file daemon tweak always 2022-04-06 14:12 2022-04-06 14:12
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Restore of unencrypted files on an encrypted fd throws an error, but works.
Description: When restoring files from a client that stores its files unencrypted onto a client that normally only runs encrypted backups, the restore works, but an error is thrown.
Tags:
Steps To Reproduce: Sample config:
Client A:
Client {
...
PKI Signatures = Yes
PKI Encryption = Yes
PKI Cipher = aes256
PKI Master Key = ".../master.key"
PKI Keypair = ".../all.keys"
}
Client B:
Client {
...
# without the cryptor config
}

Both clients can back up and restore their own files to the storage. But when a restore of files from client B is done on client A, the files are restored as requested, but for every file an error is logged:
clienta JobId 72: Error: filed/crypto.cc:168 Missing cryptographic signature for /var/tmp/bareos/var/log/journal/e882cedd07af40b386b29cfa9c88466f/user-70255@bdb4fa2d506c45ba8f8163f7e4ee7dac-0000000000b6f8c1-0005d99dd2d23d5a.journal
and the whole job is marked as failed.
Additional Information: Because the restore itself works, I think the job should only be marked as "OK with warnings" and the "Missing cryptographic signature ..." message should be logged as a warning instead of an error.
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1446 [bareos-core] bconsole crash always 2022-04-01 13:24 2022-04-01 13:24
Reporter: mdc Platform: x86_64  
Assigned To: OS: CentOS  
Priority: normal OS Version: stream 8  
Status: new Product Version: 21.1.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bconsole crashes when an unneeded password is missing
Description: When using PAM authentication, only the console password is needed for the connection.
But when it is configured that way, bconsole crashes with:
 bconsole -c /etc/bareos/bconsole-tui.conf
bconsole: ABORTING due to ERROR in console/console_conf.cc:181
Password item is required in Director resource, but not found.
BAREOS interrupted by signal 6: IOT trap
bconsole, bconsole got signal 6 - IOT trap. Attempting traceback.

So an empty and unneeded Password entry must be added as a workaround.
Tags:
Steps To Reproduce:
Additional Information: Sample crash config:
Director {
  Name = bareos-dir
  Address = localhost
}
Console {
  Name = "PamConsole"
  @/etc/bareos/pam.pw
}
Sample working config:
Director {
  Name = bareos-dir
  Address = localhost
  Password = ""
}
Console {
  Name = "PamConsole"
  @/etc/bareos/pam.pw
}

System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1421 [bareos-core] storage daemon minor always 2022-01-17 17:06 2022-03-30 12:14
Reporter: DemoFreak Platform: x86_64  
Assigned To: bruno-at-bareos OS: Opensuse  
Priority: normal OS Version: Leap 15.3  
Status: new Product Version: 21.0.0  
Product Build: Resolution: reopened  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: MTEOM on LTO-3 fails with Bareos 21, but works on older Bacula
Description: After migrating a file server, the backup was switched from Bacula 5.2 to Bareos 21.0. Transferring the configuration worked flawlessly; everything works as desired except for the tape drive.

Appending another backup to an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found
and the tape is marked as "Error" in the catalog.

The test with btape consequently reveals a problem with EOD (MTEOM). After completing the storage configuration with
Hardware End of Medium = no
Fast Forward Space File = no
appending works, but is extremely slow, as also mentioned in the documentation.

Hardware:
- Fibre Channel: QLogic Corp. ISP2312-based 2Gb Fibre Channel to PCI-X HBA
- Drive 'HP Ultrium 3-SCSI Rev. L63S'

The drive and HBA were transferred from the old system to the new system without any changes.

How can I further isolate the problem?
Does Bareos work differently than Bacula 5.2 regarding EOD?
Tags: storage MTEOM
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004478)
DemoFreak   
2022-01-18 18:38   
(Last edited: 2022-01-18 23:15)
It seems that even the slow (software) method sometimes fails. Here is the corresponding excerpt from the log.

First job on the tape:
17-Jan 11:00 bareos-sd JobId 81: Wrote label to prelabeled Volume "Band4" on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst)
...
17-Jan 13:51 bareos-sd JobId 81: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 344,018,607,484 (344.0 GB)

Second job:
17-Jan 13:54 bareos-sd JobId 83: Volume "Band4" previously written, moving to end of data.
17-Jan 14:39 bareos-sd JobId 83: Ready to append to end of Volume "Band4" at file=65.
...
17-Jan 14:39 bareos-sd JobId 83: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 140,473,627 (140.4 MB)

Third job:
17-Jan 14:42 bareos-sd JobId 85: Volume "Band4" previously written, moving to end of data.
17-Jan 15:27 bareos-sd JobId 85: Ready to append to end of Volume "Band4" at file=66.
...
17-Jan 15:32 bareos-sd JobId 84: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 9,954,169,360 (9.954 GB)

Fourth job:
17-Jan 15:33 bareos-sd JobId 87: Volume "Band4" previously written, moving to end of data.
17-Jan 16:20 bareos-sd JobId 87: Ready to append to end of Volume "Band4" at file=68.
...
17-Jan 16:20 bareos-sd JobId 87: Releasing device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst).
  SD Bytes Written: 141,727,215 (141.7 MB)

Everything works fine up to this point.
The file size on the tape is 5 GB (Maximum File Size = 5G), so the next job should be appended at file number 69.

Fifth job:
18-Jan 11:00 bareos-sd JobId 92: Volume "Band4" previously written, moving to end of data.
18-Jan 12:03 bareos-sd JobId 92: Error: Unable to position to end of data on device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): ERR=backends/generic_tape_device.cc:496 read error on "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst). ERR=Eingabe-/Ausgabefehler.
18-Jan 12:03 bareos-sd JobId 92: Marking Volume "Band4" in Error in Catalog.

This fails with an input/output error. Possibly no EOD marker was written during the fourth job.

Neither "mtst -f /dev/nst0 eod" nor "echo eod | btape" find EOD, they abort with error and the tape is read to the physical end.
Complete reading of the tape with "echo scanblocks | btape" works absolutely correct up to file number 68, different groups of blocks and one EOF marker each are read. In file number 69 no EOF is read, instead the drive keeps reading until the end of the medium.

...
1 block of 64508 bytes in file 66
2 blocks of 64512 bytes in file 66
1 block of 64508 bytes in file 66
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
(At this point, nothing more happens until the end of the tape. Please note that in the log of btape for whatever reason apparently the first line of a new file and the EOF marker of the previous file are swapped, so the last EOF marker here belongs to file number 68).

Any ideas?

(0004479)
DemoFreak   
2022-01-18 19:25   
(Last edited: 2022-01-18 23:14)
As an attempt to narrow down the problem, I wrote an EOF marker to file number 69 with mtst:

miraculix:~ # mtst -f /dev/nst0 rewind
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (41010000):
 BOT ONLINE IM_REP_EN
miraculix:~ # time mtst -f /dev/nst0 fsf 69

real 0m29.927s
user 0m0.002s
sys 0m0.001s
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=69, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 weof
miraculix:~ # mtst -f /dev/nst0 status
SCSI 2 tape drive:
File number=70, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (81010000):
 EOF ONLINE IM_REP_EN
miraculix:~ # mtst -f /dev/nst0 rewind

Note the extreme difference in required time for spacing forward to file number 69:

miraculix:~ # time echo -e "status\nfsf 69\nstatus\n" | btape TapeStorageLTO3
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:306-0 Using device: "TapeStorageLTO3" for writing.
btape: stored/btape.cc:490-0 open device "TapeStorageLTO3" (/dev/tape/by-id/scsi-350060b00002e85de-nst): OK
* Bareos status: file=0 block=0
 Device status: BOT ONLINE IM_REP_EN file=0 block=0
Device status: TAPE BOT ONLINE IMMREPORT. ERR=
*btape: stored/btape.cc:1774-0 Forward spaced 69 files.
* EOF Bareos status: file=69 block=0
 Device status: EOF ONLINE IM_REP_EN file=69 block=0
Device status: TAPE EOF ONLINE IMMREPORT. ERR=
**
real 48m8.811s
user 0m0.006s
sys 0m0.014s
miraculix:~ #

After writing the EOF marker, btape "scanblocks" works as expected:
...
23166 blocks of 64512 bytes in file 67
End of File mark.
43547 blocks of 64512 bytes in file 67
1 block of 64504 bytes in file 67
14277 blocks of 64512 bytes in file 67
1 block of 64506 bytes in file 67
3209 blocks of 64512 bytes in file 67
1 block of 64509 bytes in file 67
8889 blocks of 64512 bytes in file 67
1 block of 64510 bytes in file 67
222 blocks of 64512 bytes in file 67
1 block of 64502 bytes in file 67
1046 blocks of 64512 bytes in file 67
1 block of 33330 bytes in file 68
End of File mark.
2198 blocks of 64512 bytes in file 68
1 block of 35367 bytes in file 69
End of File mark.
Total files=69, blocks=5495758, bytes = 354,542,114,821

btape "eod" works as well:

*eod
btape: stored/btape.cc:619-0 Moved to end of medium.



All in all, it seems to me that under circumstances that are not yet clear to me, sometimes no EOF is written on the tape.

Where am I wrong here?

(0004480)
DemoFreak   
2022-01-19 01:35   
Starting a migration job on this "repaired" tape triggers two migration worker jobs, the first of them works well, the second fails, and I don't understand why.

First job:
18-Jan 23:29 bareos-sd JobId 98: Volume "Band4" previously written, moving to end of data.
19-Jan 00:17 bareos-sd JobId 98: Ready to append to end of Volume "Band4" at file=69.
19-Jan 00:17 bareos-sd JobId 97: Releasing device "FileStorage" (/home/.bareos/backup).
  SD Bytes Written: 247,515,896 (247.5 MB)


Second job:
19-Jan 00:18 bareos-sd JobId 100: Volume "Band4" previously written, moving to end of data.
19-Jan 01:06 bareos-sd JobId 100: Error: Bareos cannot write on tape Volume "Band4" because:
The number of files mismatch! Volume=69 Catalog=70
19-Jan 01:06 bareos-sd JobId 100: Marking Volume "Band4" in Error in Catalog.

Why does the second job still find the end of the tape at file number 69, although this file was already written in the first job? EOD should be at file number 70, as it is also noted in the catalog.

Where is my error?
(0004481)
bruno-at-bareos   
2022-01-20 17:14   
Just a quick note: having

Appending another backup on an already used medium fails with
kernel: [586704.090320] st 8:0:0:0: [st0] Sense Key : Medium Error [current].
kernel: [586704.090327] st 8:0:0:0: [st0] Add. Sense: Recorded entity not found

means hardware trouble, be it the medium (tape), the drive, or some other component in the SCSI chain.
Such problems are never fun to debug.
(0004482)
DemoFreak   
2022-01-20 17:48   
The hardware is completely unchanged. HBA, drive and tapes are the same. They are even still in the same place, only the HBA is now in a different computer.

To be on the safe side, I will rebuild everything and run a test on the old system. That setup worked for several years completely without problems until a week ago, but with Bacula.

I am surprised by the lack of an EOF marker after some migration jobs.
(0004519)
DemoFreak   
2022-02-19 04:21   
(Last edited: 2022-02-19 04:23)
Sorry, I was unfortunately busy in the meantime, therefore the long response time.

I have just done the test and rebuilt everything on the old system; there it runs as expected, completely without problems.

After switching back to the new system, it now runs perfectly here as well.

So it was probably really a problem with the LC cabling.

So this can be closed, thanks for the help.
(0004520)
bruno-at-bareos   
2022-02-21 09:40   
Hardware problem.
(0004556)
DemoFreak   
2022-03-30 12:14   
I think I have found the real cause.

I use an after-job script which shuts down the tape drive after the migration. It waits 30 seconds, then checks whether there are more jobs in the queue; only if there are no more waiting or running jobs is the drive switched off (see the excerpt below).

echo "Checking for pending bacula jobs..."

sleep 30

if echo "status dir" | /usr/sbin/bconsole | /usr/bin/grep "^ " | /usr/bin/egrep -q "(is waiting|is running)"; then
        echo "Pending bacula jobs found, leaving tape device alone!"
else
        echo "Switching off tape device..."
        $DEBUG $SISPMCTLBIN -qf 1
fi

Apparently Bareos processes jobs more concurrently than Bacula: since I temporarily suspended the shutdown of the drive, no more MTEOM errors have occurred. So I suspect that sometimes the drive was already powered off while the storage daemon was still writing the last data to it. Of course, this also meant that no EOF was written.

Is it possible that the Director reports jobs as finished while the SD is still writing?

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1441 [bareos-core] webui minor always 2022-03-22 13:59 2022-03-29 14:13
Reporter: mdc Platform: x86_64  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to view the pool details when the pool name contains a space character.
Description: The resulting URL will be, for example,
"https:/XXXX/pool/details/Bareos database" when the pool is named "Bareos database",
and the call will fail with:

A 404 error occurred
Page not found.

The requested URL could not be matched by routing.
No Exception available
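
For illustration only (this is not the fix that was committed for this issue): the 404 occurs because the raw space inside the path segment cannot be matched by the webui routing. A minimal Python sketch of how such a segment would normally be percent-encoded:

from urllib.parse import quote

pool_name = "Bareos database"
print(quote(pool_name))  # -> "Bareos%20database", the percent-encoded form of the path segment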
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004555)
frank   
2022-03-29 14:13   
Fix committed to bareos master branch with changesetid 16093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1440 [bareos-core] director minor always 2022-03-22 13:42 2022-03-23 15:09
Reporter: mdc Platform: x86_64  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: stream 8  
Status: resolved Product Version: 21.1.2  
Product Build: Resolution: won't fix  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Only 127.0.0.1 is logged in the audit log when the access comes from the webui
Description: Instead of the real IP of the user's device, only 127.0.0.1 is logged.
22-Mar 13:31 Bareos Director: Console [foo] from [127.0.0.1] cmdline list jobtotals

I think the director sees only the source IP of the webui server; the real IP is not forwarded to the director.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004549)
bruno-at-bareos   
2022-03-23 15:08   
The audit log is used to log the remote (here local) IP of the initiator of the command.
Think about remote bconsole access, etc.
So here localhost is the correct agent.

You are of course welcome to propose an enhanced version of the code by making a PR on our GitHub project.
(0004550)
bruno-at-bareos   
2022-03-23 15:09   
Won't be fixed without an external code proposal.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1134 [bareos-core] vmware plugin feature always 2019-11-06 15:32 2022-03-14 15:42
Reporter: ratacorbo Platform: Linux  
Assigned To: stephand OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Installing bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 python-pyvmomi error
Description: When trying to install bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 on CentOS 7, yum gives an error that python-pyvmomi does not exist.
Tags:
Steps To Reproduce: yum install bareos-vmware-plugin
Additional Information: Error: Package: bareos-vmware-plugin-18.2.5-124.1.el7.x86_64 (bareos_bareos-18.2)
           Requires: python-pyvmomi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
System Description
Attached Files:
Notes
(0003856)
stephand   
2020-02-26 00:23   
The python-pyvmomi package for CentOS 7/RHEL 7 is available in EPEL.
On CentOS 7 the EPEL repo can be added by running

yum install epel-release

For RHEL 7 see https://fedoraproject.org/wiki/EPEL

Does it work once the EPEL repo has been added to your system?
(0003997)
Rotnam   
2020-06-02 18:05   
I installed a fresh Red Hat 8.1 to test the bareos vmware plugin. I ran into the same issue running
yum install bareos-vmware-plugin
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides python-pyvmomi needed by bareos-vmware-plugin-19.2.7-2.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

So far I have tried to install:
python-pyvmomi with pip3.6 install pyvmomi -> installed successfully, no luck
Downloaded the GitHub package and did a python3.6 setup.py install, which installs version 7.0, no luck
Adding EPEL -> yum install python3-pyvmomi, which installs version 6.7.3-3, no luck with yum either

Downloading the rpm (19.2.7-2) and trying manually, I satisfied its requirements with:
yum install python2
yum install bareos-filedaemon-python-plugin
yum install bareos-vadp-dumper
Did a pip2 install pyvmomi, still no luck
python2 setup.py install installed a bunch of files under python2.7, still no luck for the rpm

At this point, I will just do a --nodeps install and see if it works; I hope this helps resolve the package issue.
(0004039)
stephand   
2020-09-16 13:10   
You are right, we have a problem here for RHEL/CentOS 8 because EPEL 8 does not provide a python2-pyvmomi package.
It's also not easily possible to build a python2-pyvmomi package for el8 due to its missing python2 package dependencies.

Currently indeed the only way is to ignore dependencies for the package installation and use pip2 install pyvmomi.
Does that work for you?

I think we should remove the dependency on python-pyvmomi and add a hint in the documentation to use pip2 install pyvmomi.

For the upcoming Bareos version 20, we are already working on Python 3 plugins; this will also fix the dependency problem.
(0004040)
Rotnam   
2020-09-16 15:22   
For the test I did, it worked fine, so I assume you can do it that way with --nodeps. I ended up putting this on hold; backing up just the disks and not the VM was a bit strange. Restoring locally worked, but not directly on vCenter (can't remember which one I tried). I will revisit this solution later.
(0004536)
stephand   
2022-03-14 15:42   
Since Bareos Version >= 21.0.0 the package bareos-vmware-plugin no longer includes a dependency on a pyVmomi package, because some Linux distributions don’t provide current versions. Consequently, pyVmomi must be either installed by using pip install pyvmomi or by manually installing a distribution provided pyVmomi package.
See https://docs.bareos.org/TasksAndConcepts/Plugins.html#vmware-plugin
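
As a quick, illustrative way to verify such a manual pyVmomi installation (this snippet is not part of Bareos; run it with the same Python interpreter the file daemon uses):

import sys

try:
    from pyVmomi import vim  # noqa: F401 -- the main pyVmomi API namespace
    print("pyVmomi import OK for", sys.executable)
except ImportError as err:
    print("pyVmomi is missing for", sys.executable, "->", err)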

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1431 [bareos-core] General major always 2022-03-08 20:37 2022-03-11 03:32
Reporter: backup1 Platform: Linux  
Assigned To: OS:  
Priority: normal OS Version:  
Status: new Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Newline characters stripped from configuration strings
Description:  Hi,

I'm trying to set a config value that includes a newline character (a.k.a. \n). This worked in Bareos 19.2, but the same config is not working in 21. It seems that the newlines are stripped when loading the config. I note that the docs say that strings can now be entered using a multi-line quoted format (for Bareos 20+).

The actual config setting is for an NDMP fileset, specifying the NDMP environment variable MULTI_SUBTREE_NAMES.

This is what the config looks like:

FileSet {
  Name = "user_01"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA
userB
userC
userD"
    }
    File = "/vol0/user"
  }
}

The correctly formatted value will have newlines between the "userA", "userB", "userC" subdir names.

In bconsole "show filesets" has the names all concatenated together and the (NetApp) filer rejects the job saying "no directory userAuserBUserCUserD".
Tags:
Steps To Reproduce: Configure fileset with options string including newlines.

Load configuration.

Review configuration using "show filesets" and observe that newlines have been stripped.

I've also reviewed the NDMP commands sent to the NetApp (with Wireshark) and observed that the newlines are missing.
Additional Information: I believe the use-case for config file strings to include newlines was not considered in parser changes for multi-line quoted format. I'm no longer able to use MULTI_SUBTREE_NAMES for NDMP and have reverted to just doing full volume backups, which limits flexibility, but is working reliably.

Thanks,
Tom Rockwell
Attached Files:
Notes
(0004533)
bruno-at-bareos   
2022-03-09 11:40   
Inconsistencies between documentation / expectation / behaviour,
and a loss of functionality between versions.

The documentation https://docs.bareos.org/master/Configuration/CustomizingTheConfiguration.html?highlight=multiline#quotes shows multi-line strings in an example, which leads to the expectation that they are kept as multiple lines.

Having a fileset configured with the new multi-line syntax

FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userA"
             "userB"
             "userC"
             "userD"
    }
    File = "/vol0/user"
  }
}

when displayed in bconsole
*show fileset=NDMP_test
FileSet {
  Name = "NDMP_test"
  Include {
    Options {
      AclSupport = Yes
      XattrSupport = Yes
      Meta = "DMP_NAME=user_01"
      Meta = "MULTI_SUBTREE_NAMES=userAuserBuserCuserD"
    }
    File = "/vol0/user"
  }
}
(0004534)
backup1   
2022-03-11 03:32   
Hi,

Thanks for looking at this. For reference, the newlines are needed to use the MULTI_SUBTREE_NAMES functionality on NetApp. https://library.netapp.com/ecmdocs/ECMP1196992/html/GUID-DE8BF53F-706A-48CA-A6FD-ACFDC2D0FE8A.html

From the linked doc, "Multiple subtrees are specified in the string which is a newline-separated, null-terminated list of subtree names."
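
For illustration only (assuming nothing beyond the NetApp documentation quoted above and the bconsole output shown in the previous note), a short Python sketch of the difference between the value the filer expects and the value that currently reaches it:

# value the filer expects: subtree names separated by newlines
expected = "\n".join(["userA", "userB", "userC", "userD"])
# value shown by "show filesets" after the newlines are stripped
delivered = "".join(["userA", "userB", "userC", "userD"])
print(repr(expected))   # 'userA\nuserB\nuserC\nuserD'
print(repr(delivered))  # 'userAuserBuserCuserD'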

I looked for other use-cases to put newlines into strings in Bareos config, but didn't find any, so I realize this is a bit of a corner-case. Still, NDMP is useful for NetApp, and it would be unfortunate to lose this functionality.

Thanks again,
Tom Rockwell

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1430 [bareos-core] webui major always 2022-02-23 20:19 2022-03-03 15:11
Reporter: jason.agilitypr Platform: Linux  
Assigned To: frank OS: Ubuntu  
Priority: high OS Version: 20.04  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Version of Jquery is old and vulnerable
Description: The version of jQuery that bareos webui ships is old and out of date and has known security vulnerabilities (XSS attacks).

/*! jQuery v3.2.0 | (c) JS Foundation and other contributors | jquery.org/license */
v3.2.0 was released on March 16, 2017.

https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/
"The HTML parser in jQuery <=3.4.1 usually did the right thing, but there were edge cases where parsing would have unintended consequences. "

The current version of jQuery is 3.6.0.


Tags:
Steps To Reproduce: check version of jquery loaded in bareos webui via browser right click -> view source
Additional Information: The related libraries, including moment and excanvas, may also need updating.
Attached Files:
Notes
(0004531)
frank   
2022-03-03 11:11   
Fix committed to bareos master branch with changesetid 15977.
(0004532)
frank   
2022-03-03 15:11   
Fix committed to bareos bareos-19.2 branch with changesetid 15981.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1426 [bareos-core] director minor always 2022-02-07 10:37 2022-02-24 11:46
Reporter: mschiff Platform: Linux  
Assigned To: stephand OS: any  
Priority: normal OS Version: 3  
Status: acknowledged Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos send useless operator "mount" messages
Description: The default configuration has messages/Standard.conf which contains:

operator = <email> = mount

which should send an email if an operator is required for a job to continue.

But these mails will also be triggered on a busy bareos-sd with multiple virtual drives and multiple jobs running, when a job just needs to wait a bit for a volume to become available.
Every month, when our systems are doing virtual full backups at night, we get lots of mails like:

06-Feb 23:37 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0034" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File

But in the morning, all jobs have finished successfully.

So when one job is reading a volume and another job is waiting for the same volume, an email is triggered. But after waiting a couple of minutes, this "issue" solves itself.

It should be possible to set a timeout after which such messages are sent, so that they are only sent for jobs that are really hanging.

This is part of the joblog:

 2022-02-06 23:25:38 kilo-dir JobId 58793: Start Virtual Backup JobId 58793, Job=BackupIndia.2022-02-06_23.15.01_31
 2022-02-06 23:25:38 kilo-dir JobId 58793: Consolidating JobIds 58147,58164,58182,58200,58218,58236,58254,58272,58290,58308,58326,58344,58362,58380,58398,58416,58434,58452,58470,58488,58506
,58524,58542,58560,58578,58596,58614,58632,58650,58668,58686,58704,58722,58740,58758,58764
 2022-02-06 23:25:40 kilo-dir JobId 58793: Bootstrap records written to /var/lib/bareos/kilo-dir.restore.16.bsr
 2022-02-06 23:25:40 kilo-dir JobId 58793: Connected Storage daemon at kilo.sys4.de:9103, encryption: TLS_AES_256_GCM_SHA384 TLSv1.3
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0001" to read.
 2022-02-06 23:26:42 kilo-dir JobId 58793: Using Device "MultiFileStorage0002" to write.
 2022-02-06 23:26:42 kilo-sd JobId 58793: Ready to read from volume "VolFull-0165" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:42 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0165" to file:block 0:3367481982.
 2022-02-06 23:26:53 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0165"
 2022-02-06 23:26:53 kilo-sd JobId 58793: Ready to read from volume "VolFull-0168" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:53 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0168" to file:block 2:1033779909.
 2022-02-06 23:26:54 kilo-sd JobId 58793: End of Volume at file 2 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0168"
 2022-02-06 23:26:54 kilo-sd JobId 58793: Ready to read from volume "VolFull-0169" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:26:54 kilo-sd JobId 58793: Forward spacing Volume "VolFull-0169" to file:block 0:64702.
 2022-02-06 23:27:03 kilo-sd JobId 58793: End of Volume at file 1 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolFull-0169"
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/vol_mgr.cc:542 Need volume from other drive, but swap not possible. Status: read=1 num_writers=0 num_reserve=0 swap=0 vol=VolIncr-0
022 from dev="MultiFileStorage0004" (/srv/backup/bareos) to "MultiFileStorage0001" (/srv/backup/bareos)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Warning: stored/acquire.cc:348 Read acquire: stored/label.cc:268 Could not reserve volume VolIncr-0022 on "MultiFileStorage0001" (/srv/backup/bareo
s)
 2022-02-06 23:27:03 kilo-sd JobId 58793: Please mount read Volume "VolIncr-0022" for:
    Job: BackupIndia.2022-02-06_23.15.01_31
    Storage: "MultiFileStorage0001" (/srv/backup/bareos)
    Pool: Incr
    Media type: File
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0022" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0022" to file:block 0:3331542115.
 2022-02-06 23:32:03 kilo-sd JobId 58793: End of Volume at file 0 on device "MultiFileStorage0001" (/srv/backup/bareos), Volume "VolIncr-0022"
 2022-02-06 23:32:03 kilo-sd JobId 58793: Ready to read from volume "VolIncr-0023" on device "MultiFileStorage0001" (/srv/backup/bareos).
 2022-02-06 23:32:03 kilo-sd JobId 58793: Forward spacing Volume "VolIncr-0023" to file:block 0:750086502.
Tags:
Steps To Reproduce:
Additional Information:
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004526)
stephand   
2022-02-24 11:46   
Thanks for reporting this issue. I have also already noticed this problem.
It will be very hard to fix this properly without a complete redesign of the whole reservation logic, which would be a huge effort.
But meanwhile we could think about a workaround to mitigate this somehow.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1349 [bareos-core] file daemon major always 2021-05-07 18:29 2022-02-02 10:47
Reporter: oskarsr Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: urgent OS Version: 9  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: unable to reproduce  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error in bareos-fd on the backup following a successful backup of a PostgreSQL database using the PostgreSQL plugin.
Description: A fatal error occurs in bareos-fd on the backup following a successful backup of a PostgreSQL database using the PostgreSQL plugin.
When the client daemon is restarted, the backup of the PostgreSQL database runs without the error, but only once. On the second attempt, the error appears again.

it-fd JobId 118: Fatal error: bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception
Tags:
Steps To Reproduce: When the backup is executed right after the client daemon restart, the debug log is as follows:

it-fd (100): filed/fileset.cc:271-150 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-150 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-150 plugin_ctx=7f3964015250 JobId=150
it-fd (150): filed/fd_plugins.cc:229-150 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-150 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-150 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1006-150 python3-fd: Successfully loaded module with name bareos-fd-postgres
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginPostgres with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginLocalFilesBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (100): module/bareosfd.cc:1442-150 python3-fd-mod: Constructor called in module BareosFdPluginBaseclass with plugindef=postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos:
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 2
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=2
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 4
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=4
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 16
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=16
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 19
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=19
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 3
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=3
it-fd (150): module/bareosfd.cc:1495-150 python3-fd-mod: PyBareosRegisterEvents registering event 5
it-fd (150): filed/fd_plugins.cc:2266-150 fd-plugin: Plugin registered event=5


But when the backup is started again for the same client, the log contains the following:

it-fd (100): filed/fileset.cc:271-151 P python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:542-151 plugin cmd=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos
it-fd (150): filed/fd_plugins.cc:441-151 plugin_ctx=7f39641d1b60 JobId=151
it-fd (150): filed/fd_plugins.cc:229-151 IsEventForThisPlugin? name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos len=6 plugin=python3-fd.so plen=7
it-fd (150): filed/fd_plugins.cc:261-151 IsEventForThisPlugin: yes, without last character: (plugin=python3-fd.so, name=python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-postgres:postgresDataDir=/var/lib/postgresql/10/main:walArchive=/var/lib/postgresql/10/wal_archive/:dbuser=bareos:dbname=bareos)
it-fd (150): python/python-fd.cc:992-151 python3-fd: Trying to load module with name bareos-fd-postgres
it-fd (150): python/python-fd.cc:1000-151 python3-fd: Failed to load module with name bareos-fd-postgres
it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

it-fd (150): filed/fd_plugins.cc:480-151 Cancel return from GeneratePluginEvent
it-fd (100): filed/fileset.cc:271-151 N
it-fd (100): filed/dir_cmd.cc:462-151 <dird: getSecureEraseCmd
Additional Information:
System Description
Attached Files:
Notes
(0004129)
oskarsr   
2021-05-12 17:33   
(Last edited: 2021-05-12 17:34)
Has anybody tried to back up a PostgreSQL DB using the bareos-fd-postgres Python plugin?

(0004263)
perkons   
2021-09-13 15:38   
We are experiencing exactly the same issue on Ubuntu 18.04.
(0004297)
bruno-at-bareos   
2021-10-11 13:31   
To both of you: could you share the installed bareos packages (and confirm they come from bareos.org), the python3 version used,
and also the related python packages (and where they come from) (main core + psycopg)?
(0004298)
perkons   
2021-10-11 14:52   
We installed the bareos-filedaemon from https://download.bareos.org
The python modules are installed from the Ubuntu repositories. The reason we use both python and python3 modules is that if one is missing, the backups fail. This seems pretty wrong to me, but as I understand it, there is active work to migrate to python3.
We also have both of these python modules (python2 and python3) on our RHEL based hosts and have not had any problems with the PostgreSQL Plugin.

# dpkg -l | grep psycopg
ii python-psycopg2 2.8.4-1~pgdg18.04+1 amd64 Python module for PostgreSQL
ii python3-psycopg2 2.8.6-2~pgdg18.04+1 amd64 Python 3 module for PostgreSQL
# dpkg -l | grep dateutil
ii python-dateutil 2.6.1-1 all powerful extensions to the standard Python datetime module
ii python3-dateutil 2.6.1-1 all powerful extensions to the standard Python 3 datetime module
# dpkg -l | grep bareos
ii bareos-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-filedaemon 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-filedaemon-postgresql-python-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon PostgreSQL plugin
ii bareos-filedaemon-python-plugins-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin common files
ii bareos-filedaemon-python3-plugin 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon Python plugin
# dpkg -s bareos-filedaemon
Package: bareos-filedaemon
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 384
Maintainer: Joerg Steffens <joerg.steffens@bareos.com>
Architecture: amd64
Source: bareos
Version: 20.0.1-3
Replaces: bacula-fd
Depends: bareos-common (= 20.0.1-3), lsb-base (>= 3.2-13), lsof, libc6 (>= 2.14), libgcc1 (>= 1:3.0), libjansson4 (>= 2.0.1), libstdc++6 (>= 5.2), zlib1g (>= 1:1.1.4)
Pre-Depends: debconf (>= 1.4.30) | debconf-2.0, adduser
Conflicts: bacula-fd
Conffiles:
 /etc/init.d/bareos-fd bcc61ad57fde8a771a5002365130c3ec
Description: Backup Archiving Recovery Open Sourced - file daemon
 Bareos is a set of programs to manage backup, recovery and verification of
 data across a network of computers of different kinds.
 .
 The file daemon has to be installed on the machine to be backed up. It is
 responsible for providing the file attributes and data when requested by
 the Director, and also for the file system-dependent part of restoration.
 .
 This package contains the Bareos File daemon.
Homepage: http://www.bareos.org/
# cat /etc/apt/sources.list.d/bareos-20.list
deb https://download.bareos.org/bareos/release/20/xUbuntu_18.04 /
#
(0004299)
bruno-at-bareos   
2021-10-11 15:48   
Thanks for your report. As you stated, the python/python3 situation is far from ideal, but PRs are progressing and the end of the tunnel is near.
Also, as you mentioned, there's no trouble on RHEL systems; I'm aware of that too.

I would have tried to use only python2 code on such a version.
I have made a note about testing that with the future new code on Ubuntu 18... but I just can't say when.
(0004497)
bruno-at-bareos   
2022-02-02 10:46   
For the issue reported, there's something that looks wrong:

File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in
import psycopg2
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 51, in
from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

here it is /usr/local/lib/python3.5

And then

it-fd (150): include/python_plugins_common.inc:124-151 bareosfd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/bareos-fd-postgres.py", line 40, in <module>
    import BareosFdPluginPostgres
  File "/usr/lib/bareos/plugins/BareosFdPluginPostgres.py", line 30, in <module>
    import psycopg2
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import ( # noqa
SystemError: initialization of _psycopg raised unreported exception

/usr/lib/python3

So it seems you have a mixed Python environment, which creates strange behaviour, because the module loaded is not always the same.
Our best advice would be to clean up the global environment and make sure only one consistent version is used for bareos.

Also, python3 support has been greatly improved in Bareos 21.
Closing, as we are not able to reproduce such an environment.

BTW, the postgresql plugin is tested each time the code is updated.
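
A quick, illustrative way to check which psycopg2 copy a given interpreter actually loads (this snippet is not part of Bareos; run it with the interpreter used by the file daemon):

import psycopg2

print(psycopg2.__file__)     # path of the installation that was imported
print(psycopg2.__version__)  # e.g. "2.8.6 (dt dec pq3 ext lo64)"

If the printed path differs from the one you expect, the global environment still mixes installations.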
(0004498)
bruno-at-bareos   
2022-02-02 10:47   
Mixed Python versions used with different psycopg2 installations: /usr/local/lib/python3.5 and /usr/lib/python3.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1418 [bareos-core] storage daemon major always 2022-01-04 14:23 2022-01-31 09:34
Reporter: Scorpionking83 Platform: Linux  
Assigned To: bruno-at-bareos OS: RHEL  
Priority: immediate OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Still autoprune and recycle not working in Bareos 19.2.7
Description: Dear developers,

I still have a problem with autoprune and recycling of tapes:
1. Everything works, but when it reaches the maximum number of volumes with a retention of 90 days, it cannot create any backups any more. Then I update the Incremental pool:
update --> Option 2 "Pool from resource" --> Option 3 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool --> Option 1 Incremental
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 2 Full
update --> Option 1 "Volumes parameters" --> Option 13: All volumes from pool -->Option 3 Incremental

2. I get the following error:
Volume "Incrementail-0001" has Volume Retention of 7776000 sec. and has 0 jobs that will be pruned

The maximum number of volumes is set to 400.

But why do autoprune and recycle not work when the maximum number of volumes has been reached and the retention period has not yet expired?
Is it also possible to delete old volumes from disk and from the database?

I need an answer soon.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004451)
Scorpionking83   
2022-01-04 14:36   
Why close this? The issue is not resolved.
(0004452)
bruno-at-bareos   
2022-01-04 14:39   
This issue is the same as the report 001318 made by the same user.
This is clearly a duplicate case.
(0004493)
Scorpionking83   
2022-01-29 17:14   
Can someone please check my other bug report 0001318?
I am still looking for a solution.
(0004496)
bruno-at-bareos   
2022-01-31 09:34   
duplicate of 0001318

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1422 [bareos-core] General major always 2022-01-20 11:58 2022-01-27 11:49
Reporter: niklas.skog Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 11  
Status: confirmed Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos Libcloud Plugin incompatibe
Description: The goal is to back up S3 buckets using Bareos.

Situation:

Installed the Bareos 21.0.0-4 and "bareos-filedaemon-libcloud-python-plugin" on Debian 11 from "https://download.bareos.org/bareos/release/21/Debian_11"

Installed the "python3-libcloud" package on which the Plugin "bareos-filedaemon-libcloud-python-plugin" depends.

Configured the plugin according https://docs.bareos.org/TasksAndConcepts/Plugins.html

Trying to start a job that should back up the data from S3, I get the following error in the bconsole output:
---
20-Jan 08:27 bareos-dir JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Using Device "FileStorage" to write.
20-Jan 08:27 bareos-dir JobId 13: Connected Client: backup-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 bareos-dir JobId 13: Handshake: Immediate TLS
20-Jan 08:27 backup-fd JobId 13: Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)
20-Jan 08:27 backup-fd JobId 13: Connected Storage daemon at backup:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
20-Jan 08:27 backup-fd JobId 13: Fatal error: TwoWayAuthenticate failed, because job was canceled.
20-Jan 08:27 backup-fd JobId 13: Fatal error: Failed to authenticate Storage daemon.
20-Jan 08:27 bareos-dir JobId 13: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage
[...]
---

and the job fails.

Thus, the main message is:

"Fatal error: python3-fd-mod: BareosFdPluginLibcloud [50986]: Need Python version < 3.8 for Bareos Libcloud (current version: 3.9.2)"

which is understandable, because Debian 11 ships Python 3.9.*:

---
root@backup:/etc/bareos/bareos-dir.d/fileset# apt-cache policy python3
python3:
  Installed: 3.9.2-3
  Candidate: 3.9.2-3
  Version table:
 *** 3.9.2-3 500
        500 http://cdn-aws.deb.debian.org/debian bullseye/main amd64 Packages
        100 /var/lib/dpkg/status
root@backup:/etc/bareos/bareos-dir.d/fileset#
---


Accordingly, the plugin is incompatible with the current Debian version.
Tags: libcloud, plugin, s3
Steps To Reproduce: * install stock debian 11
* install & configure bareos 21, "python3-libcloud" and "bareos-filedaemon-libcloud-python-plugin"
* configure the plugin according https://docs.bareos.org/TasksAndConcepts/Plugins.html
* try to run a job that is backing up an S3-bucket
* this will fail
Additional Information:
Attached Files:
Notes
(0004487)
arogge   
2022-01-27 11:49   
You cannot use Python 3.9 or newer with the Python libcloud plugin due to a limitation in Python 3.9.
We're looking into this, but it isn't that easy to work around that limitation.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1414 [bareos-core] director minor random 2021-12-28 02:59 2022-01-16 23:19
Reporter: embareossed Platform: Linux  
Assigned To: OS: Devuan (Debian)  
Priority: normal OS Version: Chimaera (11)  
Status: new Product Version: 21.0.0  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Some jobs not followed up by email
Description: On random days for random jobs, emails are not sent out indicating final disposition. The GUI dashboard shows that a job has completed, even successfully, but no email for the job arrives in my inbox.

The jobs I am referring to are kicked off on a schedule. All of the jobs run -- or at least attempt to run -- according to the schedule, and I normally receive an email for each job as they complete.

I've checked the mail log files, searching around the time the job with the missing email finished, but there is no record of an email being sent. The log does show, however, all the other emails that were sent successfully for the rest of the jobs with the times corresponding correctly to each respective job finishing.

(I experienced this problem on 20.0.0 also, just for comparison. bareos 18 sends me emails for each job, regardless of final disposition, failed or OK).
Tags:
Steps To Reproduce: Set up several backup jobs, add them all to a schedule, and set them to run at some time.

Each job should send an email indicating when and how it finished.
Additional Information: I did not modify schedules or jobs for bareos 20 or 21. They were taken verbatim from my bareos 18 configuration, which works correctly.

System Description
Attached Files:
Notes
(0004424)
bruno-at-bareos   
2021-12-28 09:32   
Hello, could you share your Messages and Daemon configuration, please?
Also, check whether you have any XXXXX.mail files left in the /var/lib/bareos/ directory.

Maybe your backup server sends too many messages from time to time and hits a rate-limit parameter on one of the servers in the mail path?
Or the message is dropped by some software in the mail delivery chain, thinking it is junk/spam/something else...
These are the only reasons I can think of, and of course they are not related to bareos: I have received 100% of my bareos mail for more than a decade.

Let's see if your configuration shows something that can hurt messages.
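
A tiny, illustrative check for the leftover spool files mentioned above (the path comes from this note; this is not an official Bareos tool):

import glob

leftover = glob.glob("/var/lib/bareos/*.mail")
print(leftover if leftover else "no leftover .mail spool files")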
(0004430)
embareossed   
2021-12-28 18:19   
Yes, there are indeed many *.mail files left in the /var/lib/bareos directory!

Here are my messages *.conf files (not modified from bareos 18, but maybe need to be?):

Messages {
  Name = Daemon
  Description = "Message delivery for daemon messages (no job)."
  mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
  mail = root = all, !skipped, !audit # (0000002)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !audit
  append = "/var/log/bareos/bareos-audit.log" = audit
}
Messages {
  Name = Standard
  Description = "Reasonable message delivery -- send most everything to email address and to the console."
  operatorcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: Intervention needed for %j\" %r"
  mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %n %l\" %r"
  operator = root = mount # (0000003)
  mail = root = all, !skipped, !saved, !audit # (0000002)
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  catalog = all, !skipped, !saved, !audit
}

As for my mail config, I am using postfix (and dovecot, to retrieve the messages on an admin host) and have not observed any errors in the mail logs. There are no additional mail systems (MTAs?) in the mail "path" -- it is simply one host running the director, storage, postfix, and dovecot, and another host running Thunderbird to read the messages. This same arrangement works in my bareos 18 setup, which is where I got most of my config files for bareos 21.
(0004447)
embareossed   
2022-01-03 22:41   
I probably need to remark that I have bareos 18 running on devuan ascii (debian 9) and bareos 21 on devuan chimaera (debian 11).

I suppose there could be differences in the mail systems between the two, but I have mostly implemented both mail systems from the defaults out-of-the-box. Since those *.mail files are sitting in /var/lib/bareos, I am assuming bareos director still has not sent them? Does it not retry after some period of time?
(0004448)
bruno-at-bareos   
2022-01-04 11:45   
Nothing special in your config (you already reported the change from %c to %n between 18 and newer versions).

Usually, if .mail files are left behind, that should mean the job is still running.
If the job has terminated and the .mail files are still there, they are just orphans.
This can be the case when the director crashes and is then restarted (depending on how the systemd unit service is set up),
but a crash is normally visible in the machine logs and in systemctl status.
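As an aside, a minimal sketch of how one could check for and manually resend such an orphaned .mail file with bsmtp (the flags mirror the mailcommand in the Standard resource above; the recipient address and file name are only illustrative, and whether a given spool file can be fed back to bsmtp as-is may depend on the version):

```
# list spool files the director left behind
ls -l /var/lib/bareos/*.mail

# resend one by hand; bsmtp reads the message body from stdin
/usr/bin/bsmtp -h localhost -f "(Bareos) <root@localhost>" \
    -s "Bareos: resent job report" root@localhost \
    < /var/lib/bareos/example-job.mail
```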
(0004453)
embareossed   
2022-01-05 01:51   
I had to change %c to %n due to a change in the behavior in bareos 21. But that is a superficial change anyway.

The jobs were not running, afaik. But if I notice missing emails again, I'll double check.

I am not aware of crashes -- these jobs run overnight, automatically, per the schedule object set up for them.

Devuan does not use systemd/systemctl, but I can still look at the system logs if/when this occurs again.

I have not observed missing emails for several days now.
(0004477)
embareossed   
2022-01-16 23:15   
(Last edited: 2022-01-16 23:19)
1 missing email today. This time, though, I do not see any *.mail messages in /var/lib/bareos.

I looked at the source to the mail messages that did make it to the other system, and mapped them to the mail.log messages on the bareos system. It appears that there was another message that did get processed by the mail server on the bareos system that does not map to any message received by my mail reader.

It seems likely that this is not a bareos problem after all. I will continue looking into it on my side. I'd say this issue can be closed for now.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1409 [bareos-core] director tweak always 2021-12-19 00:33 2022-01-13 14:22
Reporter: jalseos Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: low OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: DB error on restore with ExitOnFatal=true
Description: I was trying to use ExitOnFatal=true in director and noticed a persistent error when trying to initiate a restore:

bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error

The error does not happen with unset/default ExitOnFatal=false

The postgresql (11) log reveals:
STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist
STATEMENT: DROP TABLE temp1
ERROR: table "temp1" does not exist

I found the SQL statements in these files in the code:
/core/src/cats/dml/0018_uar_del_temp
/core/src/cats/dml/0019_uar_del_temp1

I am wondering if something like this might be in order: (like 0012_drop_deltabs.postgresql)
/core/src/cats/dml/0018_uar_del_temp.postgres
DROP TABLE IF EXISTS temp
Tags:
Steps To Reproduce: $ bconsole
* restore
9
bareos-dir: ERROR TERMINATION at cats/postgresql.cc:675
Fatal database error
Additional Information:
System Description
Attached Files:
Notes
(0004400)
bruno-at-bareos   
2021-12-21 15:58   
The behavior is to exit in case of error when ExitOnFatal = true.

STATEMENT: DROP TABLE temp
ERROR: table "temp1" does not exist

This is an error, and the product strictly obeys the parameter Exit On Fatal.

Now, for future versions, where only PostgreSQL will be kept as the catalog database and older PostgreSQL releases will no longer be installed, the code can be reviewed to chase every DROP TABLE that lacks an IF EXISTS.

Files to change

```
core/src/cats/dml/0018_uar_del_temp:DROP TABLE temp
core/src/cats/dml/0019_uar_del_temp1:DROP TABLE temp1
core/src/cats/mysql_queries.inc:"DROP TABLE temp "
core/src/cats/mysql_queries.inc:"DROP TABLE temp1 "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp "
core/src/cats/postgresql_queries.inc:"DROP TABLE temp1 "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp "
core/src/cats/sqlite_queries.inc:"DROP TABLE temp1 "
core/src/dird/query.sql:!DROP TABLE temp;
core/src/dird/query.sql:!DROP TABLE temp2;
```
Do you want to propose a PR for it?
(0004405)
bruno-at-bareos   
2021-12-21 16:50   
PR proposed
https://github.com/bareos/bareos/pull/1035

Once the PR has been built, there will be some testing packages available; would you like to test them?
(0004443)
jalseos   
2022-01-02 16:52   
Hi, thank you for looking into this issue! I will try to test the built package (deb preferred) if a subsequent code/package "downgrade" (ie. no Catalog DB changes, ...) to a published Community Edition release remains possible afterwards.
(0004473)
bruno-at-bareos   
2022-01-13 14:22   
Fix committed to bareos master branch with changesetid 15753.
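For reference, the shape of the change the reporter proposed, shown here as a sketch for the two dml files (the same pattern would apply to the other locations listed above; the exact content of the merged PR may differ):

```
core/src/cats/dml/0018_uar_del_temp:DROP TABLE IF EXISTS temp
core/src/cats/dml/0019_uar_del_temp1:DROP TABLE IF EXISTS temp1
```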

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1161 [bareos-core] director minor always 2019-12-15 22:43 2022-01-11 23:26
Reporter: embareossed Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: new Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: JobId 0: Prior failed job found in catalog. Upgrading to Differential.
Description: Differential backups always produce this error. It is confusing on two counts: 1) scheduled differential backups should not generate this message, since the backup is already scheduled to be differential, and 2) the message does not indicate which backup is being (falsely) "upgraded." Since I have 5 differential jobs every week (except once a month when full backups are run), there are exactly 5 of these messages, so apparently such a message is generated for each differential job.
Tags:
Steps To Reproduce: Schedule differential jobs. One such message will be generated for each job.

NB: My backup schedule is identical for all backup jobs. I have it set to perform monthly full backups, then differentials once a week, and incrementals all other days.
Additional Information: I am simply following the model offered in the documentation, so I assumed that this was a valid, workable approach.

These messages show up in separate emails from the backup emails.
System Description
Attached Files: daemon_message.eml (562 bytes) 2022-01-06 08:22
https://bugs.bareos.org/file_download.php?file_id=494&type=bug
Notes
(0004399)
bruno-at-bareos   
2021-12-21 15:30   
Are you still using those components?
Debian 9 is out of maintenance, and Bareos 18 will go out of support soon, with the release of 21.

To help debugging, it would be nice to add at least the joblog and the job definition. In bconsole:
show job=YOURJOB

list joblog jobid=9999

Thanks
(0004413)
embareossed   
2021-12-24 17:40   
I am migrating to devuan chimaera (debian buster) with bareos 20. I am not seeing this message in this version. Apparently it has been fixed, so it would be OK to close it.
(0004417)
embareossed   
2021-12-26 22:37   
I spoke too soon. This morning I was greeted by these messages again, from jobs run on bareos 20. However, I will be upgrading my bareos 20 director (and storage daemon) to bareos 21 ASAP. So it remains to be seen whether this problem is corrected in bareos 21.

As to the job definition/job log, that is precisely the problem: I have many jobs, and not all of them display this behavior. I get these emails for some subset of the entire set of jobs on the schedule. I wish I were getting a real jobid like 9999 (in your example), since that would give me more information. Instead, it gives me "jobid=0" rather than an actual job number.

If this remains an issue in bareos 21, I will dig deeper to try to figure this out, maybe trying to run various backup jobs in bareos 21 repeatedly until such an error email comes.
(0004418)
embareossed   
2021-12-26 22:40   
One note aside: I see that sometimes a connection failure message also appears in the email. In recent cases, a job is failing because my wireless is down, so I can't successfully back up my (wireless) laptop. So it could be related. If so, it would be better if the "prior failed job" message came with the associated jobid.
(0004419)
embareossed   
2021-12-27 02:30   
Upgraded to bareos 21. More bad news on this issue: It still sends these emails. I think I can tell which job or jobs are causing these to be generated, but that's only because I just reset my bareos 21 catalog. This is not a deterministic way of finding the cause reliably, of course.

Also note that these are intermittent and somewhat random-seeming. Some days I will have successful runs of all of my backups, yet a few of them will spout these messages. And, as I said in my last update here, occasionally I also see other messages in the same emails about connectivity or other issues.

As far as finding a solution to this... may I suggest the following? In the source code, place a trap for jobid == 0. Since that should never happen (right?), I'd think an assertion at that point would be appropriate. I realize there might be several suspect places where the message is generated, but if you are using a common routine throughout the source for the email messaging, it shouldn't be hard to implement a stack traceback to find the specific caller.

As for me, I will see what I can do to run down the source of this problem, including combing over my job defs looking for problems. I was hoping that the director's --test option would detect anything that doesn't look right.
(0004429)
bruno-at-bareos   
2021-12-28 09:50   
The information about the failing jobs would still be really appreciated.

One probable cause could be a differential looking for the full job on which it has to be based, and that full job getting pruned too early.
(0004431)
embareossed   
2021-12-28 18:27   
The summary line is the ENTIRE contents of the email message. There is no further info, per se. No indication as to which job generated the message, etc. And there are no failing jobs necessarily. Most days, all of my jobs run to completion successfully and yet I still see these messages.
(0004450)
bruno-at-bareos   
2022-01-04 14:28   
Hello, I'm sorry, but I feel completely confused by your previous notes.

If I return to the subject of this report
0001161: JobId 0: Prior failed job found in catalog. Upgrading to Differential.
            Differential backups always produce this error.

I would like to have a joblog extract like the following

```
list joblog jobid=8100
2021-12-07 01:30:00 qt-kt-dir JobId 8100: No prior Full backup Job record found.
2021-12-07 01:30:00 qt-kt-dir JobId 8100: No prior or suitable Full backup found in catalog. Doing FULL backup.
2021-12-07 01:30:00 qt-kt-dir JobId 8100: Start Backup JobId 8100, Job=ai_share.2021-12-07_01.30.00_07
```
When you tell me you don't know which job is giving the message, I can only point out that there is always a jobid on that line, so you can always tell which job it is.
Maybe with your extract we will be able to move forward.
(0004454)
embareossed   
2022-01-05 02:02   
(Last edited: 2022-01-05 02:03)
Based on an email whose contents are only

JobId 0: Prior failed job found in catalog. Upgrading to Differential.

Please tell me how to proceed to give you the extract information you need. I am more than happy to do this if you will explain how to.

(0004457)
embareossed   
2022-01-06 08:22   
Here is an example of the email I receive (this is typical).
(0004458)
embareossed   
2022-01-06 08:23   
*list joblog jobid=0
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
jobid not found in db, access to job or client denied by ACL, or client not found in db
*
(0004459)
embareossed   
2022-01-06 08:26   
I am not using ACLs, just to be clear. And I am sure that jobid 0 is not found in the database since it is not a legit job number.

As far as which client it is crowing about, there is no indication of it in the email (see example email above).
(0004462)
bruno-at-bareos   
2022-01-10 13:29   
jobid 0 doesn't exist, for obvious reasons.

Having a look at the source code, I realized you are not getting the standard upgrade message for the case where no prior job exists. You get this message when failed jobs exist.

Basically the code runs the following query and gets at least one element (there is a LIMIT 1 that I removed):

SELECT Level FROM Job
WHERE JobStatus NOT IN ('T','W')
AND Type='D'
AND Level IN ('F','D')
AND Name='%s'
AND StartTime>'%s'
ORDER BY StartTime DESC

Maybe you will be able to run that query yourself, replacing (job by job) the %s placeholders with the job name and start time?
I don't have enough elements here to give you more precise SQL, but the result should show the records that are weird.
I'm interested in seeing that report here.

Thanks.
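For illustration, one hedged way to run that query against the catalog (assuming the default catalog database name "bareos" and a local PostgreSQL instance; the job name and start time below are placeholders to be replaced with real values from your own jobs):

```
# run as a role that can read the bareos catalog, e.g. the postgres superuser
psql bareos <<'SQL'
SELECT level FROM job
 WHERE jobstatus NOT IN ('T','W')
   AND type = 'D'
   AND level IN ('F','D')
   AND name = 'MyDifferentialJob'            -- placeholder: your job name
   AND starttime > '2022-01-01 00:00:00'     -- placeholder: relevant start time
 ORDER BY starttime DESC;
SQL
```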
(0004463)
embareossed   
2022-01-10 17:58   
Please tell me exactly where to find the name of the job. I have about a dozen jobs that are started at the same time by way of a schedule object.
(0004464)
bruno-at-bareos   
2022-01-11 09:30   
At the beginning of the report you were talking about 5 differentials, so use those names one by one.
(0004465)
embareossed   
2022-01-11 10:19   
(Last edited: 2022-01-11 10:22)
The first order of business is to figure out which jobs (by job name) are generating these messages. That is what I have been unable to do.

Please take another look at the email message I attached (https://bugs.bareos.org/view.php?id=1161#c4457), as an example. Tell me, based on that little information, how to proceed. I will be happy to explore whatever files and information are needed. I simply do not know where this information would be -- I am not a bareos dev, so I am not familiar with the code base to any great extent.

(0004466)
bruno-at-bareos   
2022-01-11 10:25   
Sorry, you're confusing me again; I cannot tell you, from here, what the name of the job you are running is.
>> Since I have 5 differential jobs every week
I had understood that you know which jobs ran, are running, or will run. With a "status dir days=7" you will get an overview of the next week; then you should be able to pick the name of the next Differential.
(0004467)
embareossed   
2022-01-11 10:55   
(Last edited: 2022-01-11 10:58)
All of the jobs are run together in the schedule. They run a full backup on the first Sunday of the month, differentials every other Sunday, and incrementals daily.

BTW, since I opened this ticket, I have added more jobs. I run about a dozen every night.

Also note that I did reveal this in the "Steps to Reproduce," above (please see NB).
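For context, a hedged sketch of a Schedule resource matching that cycle (full on the first Sunday, differentials on the other Sundays, incrementals the remaining days); the resource name and times are only illustrative, not taken from the actual configuration:

```
Schedule {
  Name = "MonthlyCycle"
  Run = Full 1st sun at 21:00
  Run = Differential 2nd-5th sun at 21:00
  Run = Incremental mon-sat at 21:00
}
```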

(0004468)
bruno-at-bareos   
2022-01-11 16:49   
Hello, I think you didn't take the time to understand what I was asking. I am doing this in my free time, which will be sparse at least for the next two weeks.
Your steps to reproduce only give a working setup here, which is why we were trying to narrow down your problem.
Maybe you don't want to share information like the job name publicly, or whatever, but I am trying hard to guide you with the right commands to execute, and I would be really pleased to see their results and move forward.

Your last comment leaves me thinking I'm just wasting my precious contribution time.
(0004472)
embareossed   
2022-01-11 23:26   
You still have not given me instructions on how to get the name of the job. There are numerous jobs (about a dozen, as I said) that are kicked off by the schedule, so it can be any of them. The emails do not indicate the name of the job either; again, see my sample email message above (have you looked at it?).

I understand if you don't want to work on this, but could you pass this issue to someone else who might be able to help? Do not close it because it is an open issue. Thank you.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1416 [bareos-core] General minor have not tried 2021-12-30 11:43 2022-01-11 21:50
Reporter: hostedpower Platform: Linux  
Assigned To: joergs OS: Debian  
Priority: low OS Version: 10  
Status: assigned Product Version: 21.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Bareos python3 contrib plugin filedaemon
Description: Hi,


We used a version of the bareos contrib mysql plugin which seems to support Python 3; however, in recent builds the file seems to have regressed to being compatible only with Python 2 again.

Tags:
Steps To Reproduce:
Additional Information: Attached you can find the Python 3 compatible version, which was previously found on git under the "dev/joergs/master/contrib-build" branch; however, since this branch was updated, the older Python 2 version is back in there.
System Description
Attached Files: MySQL-Python3.zip (3,594 bytes) 2021-12-30 11:43
https://bugs.bareos.org/file_download.php?file_id=488&type=bug
Notes
(0004469)
joergs   
2022-01-11 21:32   
I just verified this. In my environment, the module is working fine with Python3.
I even added a systemtest to verify this: https://github.com/bareos/bareos/tree/dev/joergs/master/contrib-build/systemtests/tests/py3plug-fd-contrib-mysql_dump

However, I guess you have already noted that the path and the initialisation of the module have changed to the bareos_mysql_dump directory. Maybe this is not reflected in your environment?

Please be aware that we are currently in the process of finding a reasonable file and directory structure for these plugins.

Without further information, I'd judge this bug entry as invalid.
(0004470)
hostedpower   
2022-01-11 21:39   
I think you could be right, I tried the v21 one : https://github.com/bareos/bareos/blob/bareos-21/contrib/fd-plugins/mysql-python/BareosFdMySQLclass.py

So master is working, but not v21 ?
(0004471)
joergs   
2022-01-11 21:50   
Correct. v21 should be identical to v20, and both versions only work with Python 2.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1405 [bareos-core] storage daemon major sometimes 2021-12-02 12:01 2022-01-09 23:29
Reporter: gusevmk Platform: Linux  
Assigned To: OS: RHEL  
Priority: urgent OS Version: 8.2  
Status: new Product Version: 19.2.11  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Storage daemon does not answer when there are no available devices.
Description: UPD: Version is 19.2.7
I have trouble when there are no more available devices on the storage daemon.
show storage=pgbkps01.mgmt:
Storage {
  Name = "pgbkps01_loop"
  Address = "10.*.*.192"
  Password = "***"
  Device = "local-storage-0","local-storage-1","local-storage-2","local-storage-3"
  MediaType = "File"
  MaximumConcurrentJobs = 4
  SddPort = 9103
}

There are 4 jobs running at the moment.
When starting a new job, it does not queue but ends with an error:

02-Dec 13:13 bareos-dir JobId 1903: Fatal error: Authorization key rejected bareos-dir.
02-Dec 13:13 bareos-dir JobId 1903: Fatal error: Director unable to authenticate with Storage daemon at "10.*.*.192:9103". Possible causes:
Passwords or names not the same or
TLS negotiation problem or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).

*status storage=pgbkps01_loop
Connecting to Storage daemon pgbkps01_loop at 10.*.*.192:9103
Failed to connect to Storage daemon pgbkps01_loop

When one of the 4 jobs ends, there is no problem.

I did check strace of bareos-sd threads:
ps -eLo pid,tid,ppid,user:11,comm,state,wchan | grep bareos-sd
3256361 3256361 1 bareos bareos-sd S x64_sys_poll
3256361 3256363 1 bareos bareos-sd S -
3256361 1332308 1 bareos bareos-sd S x64_sys_poll
3256361 2252738 1 bareos bareos-sd S x64_sys_poll
3256361 2407428 1 bareos bareos-sd S x64_sys_poll
========================================================================
WHEN IS PROBLEM:
strace -p 3256361:
accept(3, {sa_family=AF_INET, sin_port=htons(48954), sin_addr=inet_addr("10.76.74.192")}, [128->16]) = 6
setsockopt(6, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
clone(child_stack=0x7ff13d75feb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7ff13d7609d0, tls=0x7ff13d760700, child_tidptr=0x7ff13d7609d0) = 2458575
futex(0x2487490, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442869, tv_nsec=440823368}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x2487418, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x24874c0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x2487440, FUTEX_WAKE_PRIVATE, 1) = 1

strace -p 3256363
strace: Process 3256363 attached
restart_syscall(<... resuming interrupted futex ...>
========================================================================
WHEN IS NO PROBLEM:
strace -p 3256361:
accept(3, {sa_family=AF_INET, sin_port=htons(48954), sin_addr=inet_addr("10.76.74.192")}, [128->16]) = 6
setsockopt(6, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
clone(child_stack=0x7ff13d75feb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7ff13d7609d0, tls=0x7ff13d760700, child_tidptr=0x7ff13d7609d0) = 2458575
futex(0x2487490, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442869, tv_nsec=440823368}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x2487418, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x24874c0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x2487440, FUTEX_WAKE_PRIVATE, 1) = 1

strace -p 3256363
strace: Process 3256363 attached
restart_syscall(<... resuming interrupted futex ...>
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce328, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442749, tv_nsec=490329000}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce32c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=745975000}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce328, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=747064000}, FUTEX_BITSET_MATCH_ANY) = 0
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce32c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=792002000}, FUTEX_BITSET_MATCH_ANY) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7ff1508ce340, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ff1508ce328, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, {tv_sec=1638442807, tv_nsec=792408000}, FUTEX_BITSET_MATCH_ANY
=============================================

I think there is some inter-process communication issue here.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004379)
gusevmk   
2021-12-04 06:52   
This problem happens when the director configuration has been reloaded. Steps to reproduce:
1. Set maximum concurrent jobs on the storage = 4
2. Run 4 simultaneous jobs
3. Run one more job - all OK, this job is queued
4. Reload the director configuration (add a new job/schedule)
5. Start one more job - it fails, because there are no more available devices / the maximum concurrent jobs limit is exceeded on the storage daemon
(0004380)
bruno-at-bareos   
2021-12-06 14:17   
Could you provide your bareos-sd configuration (normally bareos-sd.d/storage/bareos-sd.conf) and also the bareos-dir.d/director/bareos-dir.conf (usually)?

If you changed the default values, you have to check that they are high enough (here are the defaults):

Maximum Concurrent Jobs (Sd->Storage) = PINT32 20
Maximum Connections (Sd->Storage) = PINT32 42

As you can see, Maximum Connections is higher, so that commands (like status etc.) can still reach the SD.
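A minimal sketch of where those directives live, in the SD's own Storage resource (the file path and resource name are assumptions; the values shown are the defaults mentioned above):

```
# bareos-sd.d/storage/bareos-sd.conf
Storage {
  Name = pgbkps01-sd
  Maximum Concurrent Jobs = 20   # raise if more parallel jobs are expected
  Maximum Connections = 42       # keep above Maximum Concurrent Jobs so status etc. still work
}
```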
(0004408)
bruno-at-bareos   
2021-12-22 16:35   
Any news here ?
(0004446)
embareossed   
2022-01-03 22:35   
(Last edited: 2022-01-09 23:29)
[EDIT]: Sorry, I saw the strace output, and it appeared to be the same as my own situation. Trouble is, I am now seeing something similar to the OP's situation, but not quite the same.

I will start a separate ticket for mine. Please disregard my original post here.


View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1389 [bareos-core] installer / packages minor always 2021-09-20 12:23 2022-01-05 13:23
Reporter: colttt Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: no repository for debian 11
Description: Debian 11 (bullseye) was released on 14th august 2021 but there is no bareos repository yet.
I would appreciate if debian 11 would be supported as well.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004276)
bruno-at-bareos   
2021-09-27 13:37   
Thanks for your report.

Starting September 14th Debian 11 is available for all customers with a subscription contract.
Nightly builds are also made for Debian 11, and it will be part of the Bareos 21 release.
(0004292)
brechsteiner   
2021-10-02 22:51   
What about the Community Repository? https://download.bareos.org/bareos/release/20/
(0004293)
bruno-at-bareos   
2021-10-04 09:30   
Sorry if it wasn't clear in my previous statement: Debian 11 will be available with the next release, Bareos 21.
(0004455)
bruno-at-bareos   
2022-01-05 13:23   
community repository published https://download.bareos.org/bareos/release/21/Debian_11/

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1408 [bareos-core] director minor have not tried 2021-12-18 20:32 2021-12-28 09:44
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: "Backup OK" email message subject line no longer displays the job name
Description: In bareos 18, backups which concluded successfully would be followed up by an email with a subject line indicating the name of the specific job that ran. However, in bareos 20, the subject line now only indicates the name of the client for which the job ran.

This is a minor nuisance, but I found the more distinguishing subject line to be more useful. In a case where there are multiple backup jobs for a single client where one but not all jobs fail, it is not immediately obvious -- as it was in bareos 18 -- as to which job for that client failed.
Tags:
Steps To Reproduce: Run two jobs on a host which has more than 1 backup job associated with it.
The email subject lines will be identical even though they are for 2 different jobs.
Additional Information:
System Description
Attached Files:
Notes
(0004401)
bruno-at-bareos   
2021-12-21 16:05   
Maybe an example of the configuration file used would help.

From the code we can see that this line has not changed since 2016:
67ad14188a src/defaultconfigs/bareos-dir.d/messages/Standard.conf.in (Joerg Steffens 2016-08-01 14:03:06 +0200 5) mailcommand = "@bindir@/bsmtp -h @smtp_host@ -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"
(0004415)
embareossed   
2021-12-24 17:58   
Here is what my configs look like:
# grep mailcommand *
Daemon.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos daemon message\" %r"
Standard.conf: mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %c %l\" %r"

All references to message resources are for Standard, except for the director which uses Daemon. I copied most of my config files from the old director (bareos 18) to the setup for the new director (bareos 20); I did not make any changes to messages, afair. I'll take a deeper look at this and see what I can figure out. Maybe bsmtp semantics have changed?
(0004416)
embareossed   
2021-12-24 18:12   
OK, it appears that in bareos 20, as per the documentation, %c stands for the client, not the job name (which should be %n). However, in bareos 18 and prior, this same setup seems to be generating the job name, not the client name. So it appears that the semantics have changed to properly implement the documented purpose of the %c macro (and perhaps others; I haven't tested those).

Changing the macro to %n works as desired.
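For reference, the adjusted directive (the same line quoted above, with %c replaced by %n so the subject carries the job name):

```
mailcommand = "/usr/bin/bsmtp -h localhost -f \"\(Bareos\) \<%r\>\" -s \"Bareos: %t %e of %n %l\" %r"
```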
(0004428)
bruno-at-bareos   
2021-12-28 09:44   
Adapting configuration following documentation

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1407 [bareos-core] General minor always 2021-12-18 20:26 2021-12-28 09:43
Reporter: embareossed Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Run before script hangs unless debug tracing enabled in script
Description: A "run before job script" which has been working since bareos 18 no longer works in bareos 20 (20.0.3 or 20.0.4). This is a simple bash shell script that performs some chores before the backup. It only sends output to stdout, not stderr (I've checked this). The script works properly in bareos 18, but causes the job to hang in bareos 20.

The script is actually being run on a remote file daemon. This may be a clue to the behavior. But again, this has been working in bareos 18.

Interestingly, I found that enabling bash tracing (-xv options) inside the script itself, to try to see what was causing the hang, actually alleviated the hang!
Tags:
Steps To Reproduce: Create a bash shell script on a remote bareos 20 client.
Create a job in a bareos 20 director on a local system that calls a "run before job script" on the remote client.
Run the job.
If this is reproducible, the job will hang when it reaches the call to the remote script.

If this is reproducible, try enabling traces in the bash script.

Additional Information: I built the 20.0.3 executables from the git source code on a devuan beowulf host and distributed the packages to the bareos director server and the bareos file daemon client, both of which are also devuan beowulf.
System Description
Attached Files:
Notes
(0004403)
bruno-at-bareos   
2021-12-21 16:10   
Would you mind sharing the job definition so we can try to reproduce it?
The script would be nice too, but perhaps it does something secret.
(0004404)
bruno-at-bareos   
2021-12-21 16:17   
I can't reproduce it; it works here

with a job definition

```
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Start Backup JobId 8204, Job=yoda.2021-12-21_16.14.10_06
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Using Device "admin" to write.
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Probing client protocol... (result will be saved until config reload)
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Connected Client: yoda-fd at yoda.labaroche.ioda.net:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Handshake: Immediate TLS 2021-12-21 16:14:12 qt-kt-dir JobId 8204: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: Connected Storage daemon at qt-kt.labaroche.ioda.net:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-12-21 16:14:12 yoda-fd JobId 8204: shell command: run ClientBeforeJob "sh -c 'snapper list && snapper -c ioda list'"
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+----------------------------------+------+------------+----------+-----------------------+--------------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 1* | single | | Sun 21 Jun 2020 05:17:47 PM CEST | root | 92.00 KiB | | first root filesystem |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 4803 | single | | Fri 01 Jan 2021 12:00:23 AM CET | root | 13.97 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 10849 | single | | Wed 01 Sep 2021 12:00:02 AM CEST | root | 12.58 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 11582 | single | | Fri 01 Oct 2021 12:00:01 AM CEST | root | 7.90 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 12342 | single | | Mon 01 Nov 2021 12:00:08 AM CET | root | 8.07 GiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Wed 01 Dec 2021 12:00:07 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13272 | pre | | Wed 08 Dec 2021 06:23:04 PM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13273 | post | 13272 | Wed 08 Dec 2021 06:46:13 PM CET | root | 3.28 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13278 | pre | | Wed 08 Dec 2021 10:11:11 PM CET | root | 304.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13279 | post | 13278 | Wed 08 Dec 2021 10:11:26 PM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13447 | pre | | Wed 15 Dec 2021 09:57:35 PM CET | root | 48.00 KiB | number | zypp(zypper) | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13448 | post | 13447 | Wed 15 Dec 2021 09:57:42 PM CET | root | 48.00 KiB | number | | important=no
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13499 | single | | Sat 18 Dec 2021 12:00:06 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13523 | single | | Sun 19 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13547 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | 156.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13557 | pre | | Mon 20 Dec 2021 09:27:21 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13559 | pre | | Mon 20 Dec 2021 10:30:43 AM CET | root | 156.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13560 | post | 13559 | Mon 20 Dec 2021 10:52:01 AM CET | root | 1.76 MiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13562 | pre | | Mon 20 Dec 2021 11:53:40 AM CET | root | 352.00 KiB | number | zypp(zypper) | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13563 | post | 13562 | Mon 20 Dec 2021 11:53:56 AM CET | root | 124.00 KiB | number | | important=yes
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13576 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13585 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13586 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13587 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13588 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13589 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13590 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13591 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | 172.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13592 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | 92.00 KiB | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob:
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: # | Type | Pre # | Date | User | Cleanup | Description | Userdata
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: -------+--------+-------+---------------------------------+------+----------+-------------+---------
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 0 | single | | | root | | current |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13050 | single | | Mon 20 Dec 2021 12:00:05 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13061 | single | | Mon 20 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13062 | single | | Mon 20 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13063 | single | | Mon 20 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13064 | single | | Mon 20 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13065 | single | | Mon 20 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13066 | single | | Mon 20 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13067 | single | | Mon 20 Dec 2021 05:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13068 | single | | Mon 20 Dec 2021 06:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13069 | single | | Mon 20 Dec 2021 07:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13070 | single | | Mon 20 Dec 2021 08:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13071 | single | | Mon 20 Dec 2021 09:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13072 | single | | Mon 20 Dec 2021 10:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13073 | single | | Mon 20 Dec 2021 11:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13074 | single | | Tue 21 Dec 2021 12:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13075 | single | | Tue 21 Dec 2021 01:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13076 | single | | Tue 21 Dec 2021 02:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13077 | single | | Tue 21 Dec 2021 03:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13078 | single | | Tue 21 Dec 2021 04:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13079 | single | | Tue 21 Dec 2021 05:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13080 | single | | Tue 21 Dec 2021 06:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13081 | single | | Tue 21 Dec 2021 07:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13082 | single | | Tue 21 Dec 2021 08:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13083 | single | | Tue 21 Dec 2021 09:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13084 | single | | Tue 21 Dec 2021 10:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13085 | single | | Tue 21 Dec 2021 11:00:00 AM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13086 | single | | Tue 21 Dec 2021 12:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13087 | single | | Tue 21 Dec 2021 01:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13088 | single | | Tue 21 Dec 2021 02:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13089 | single | | Tue 21 Dec 2021 03:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: ClientBeforeJob: 13090 | single | | Tue 21 Dec 2021 04:00:00 PM CET | root | timeline | timeline |
 2021-12-21 16:14:36 yoda-fd JobId 8204: Extended attribute support is enabled
 2021-12-21 16:14:36 yoda-fd JobId 8204: ACL support is enabled
 
 RunScript {
    RunsWhen = Before
    RunsOnClient = Yes
    FailJobOnError = No
    Command = "sh -c 'snapper list && snapper -c ioda list'"
  }

```
(0004414)
embareossed   
2021-12-24 17:45   
Nothing secret really. It's just a script that runs "estimate" and parses the output for the size of the backup. Then it decides, based on a value in a config file keyed by the backup name, whether to proceed. This way, estimates can be used to decide whether to run a backup at all. This was my workaround to my request in https://bugs.bareos.org/view.php?id=1135.

I did some upgrades recently and the problem has disappeared. So you can close this.
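To make the idea concrete, here is a rough sketch of such a gating script. Everything in it is illustrative: it assumes bconsole is available and configured wherever the script runs, a hypothetical per-job limit file under /etc/bareos-limits/, and the usual "2000 OK estimate files=... bytes=..." output line, which may vary between versions:

```
#!/bin/bash
# Illustrative RunBeforeJob gate: abort the backup if the estimate is too large.
JOB="$1"                                   # job name passed as a script argument
LIMIT=$(cat "/etc/bareos-limits/${JOB}")   # hypothetical per-job byte limit

# Ask the director for an estimate and extract the byte count.
BYTES=$(echo "estimate job=${JOB}" | bconsole \
        | sed -n 's/.*bytes=\([0-9,]*\).*/\1/p' | tail -n 1 | tr -d ,)

if [ -n "$BYTES" ] && [ "$BYTES" -le "$LIMIT" ]; then
    exit 0   # proceed with the backup
else
    exit 1   # abort: no estimate found or above the configured limit
fi
```

With FailJobOnError = Yes in the RunScript block, a non-zero exit code stops the job.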
(0004427)
bruno-at-bareos   
2021-12-28 09:43   
Upgrading solved this.
estimate can take time and, from the bconsole point of view, can look as if it has stalled or blocked; if you use the "listing" instruction you'll see the file-by-file output as it proceeds.
Closing.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1413 [bareos-core] bconsole major always 2021-12-27 15:29 2021-12-28 09:38
Reporter: jcottin Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: high OS Version: 10  
Status: resolved Product Version: 21.0.0  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.
Description: I configured the Always Incremental scheme using 2 different storage (FILE) as advised in the documentation.
-----------------------------------
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html?highlight=job#storages-and-pools

While restoring a directory using always incremental scheme, Bareos looks for a volume in the wrong storage.

The job will require the following
   Volume(s) Storage(s) SD Device(s)
===========================================================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

It looks for AI-Incremental-vm-aiqi-linux-test-backup0012 in FileStorage-AI-Consolidated.
It should look for it in FileStorage-AI-Incremental.

Is there a problem with my setup ?
Tags: always incremental, storage
Steps To Reproduce: Using bconsole, I target a backup before 2021-12-27 19:00:00.
I can find 3 backups (1 Full, 2 Incremental)
=======================================================
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| jobid | level | jobfiles | jobbytes | starttime | volumename |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
| 24 | F | 108,199 | 13,145,763,765 | 2021-12-25 08:06:41 | AI-Consolidated-vm-aiqi-linux-test-backup-0006 |
| 27 | I | 95 | 68,530 | 2021-12-25 20:00:04 | AI-Incremental-vm-aiqi-linux-test-backup0008 |
| 32 | I | 40 | 1,322,314 | 2021-12-26 20:00:09 | AI-Incremental-vm-aiqi-linux-test-backup0012 |
+-------+-------+----------+----------------+---------------------+------------------------------------------------+
-----------------------------------
$ cd /var/lib/mysql.dumps/wordpressdb/
cwd is: /var/lib/mysql.dumps/wordpressdb/
-----------------------------------
$ dir
-rw-r--r-- 1 0 (root) 112 (bareos) 1830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%create.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 149 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/%tables
-rw-r--r-- 1 0 (root) 112 (bareos) 783 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_commentmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1161 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_comments.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 869 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_links.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 235966 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_options.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 830 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_postmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 3470 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_posts.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 770 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_relationships.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 838 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_term_taxonomy.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 780 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_termmeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 814 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_terms.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 1404 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_usermeta.sql.gz
-rw-r--r-- 1 0 (root) 112 (bareos) 983 2021-12-25 07:55:02 /var/lib/mysql.dumps/wordpressdb/wp_users.sql.gz
-----------------------------------
$ cd ..
cwd is: /var/lib/mysql.dumps/
-----------------------------------
I mark the folder:
$ mark /var/lib/mysql.dumps/wordpressdb
15 files marked.
$ done
-----------------------------------
The job will require the following
   Volume(s) Storage(s) SD Device(s)
============================================

    AI-Consolidated-vm-aiqi-linux-test-backup-0006 FileStorage-AI-Consolidated FileStorage-AI-Consolidated
    AI-Incremental-vm-aiqi-linux-test-backup0012 FileStorage-AI-Incremental FileStorage-AI-Incremental

Volumes marked with "*" are online.
18 files selected to be restored.

Using Catalog "MyCatalog"
Run Restore job
JobName: RestoreFiles
Bootstrap: /var/lib/bareos/bareos-dir.restore.2.bsr
Where: /tmp/bareos-restores
Replace: Always
FileSet: LinuxAll
Backup Client: vm-aiqi-linux-test-backup-fd
Restore Client: vm-aiqi-linux-test-backup-fd
Format: Native
Storage: FileStorage-AI-Consolidated
When: 2021-12-27 22:10:13
Catalog: MyCatalog
Priority: 10
Plugin Options: *None*
OK to run? (yes/mod/no): yes

I get these two messages.
============================================
27-Dec 22:15 bareos-sd JobId 43: Warning: stored/acquire.cc:286 Read open device "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated) Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" failed: ERR=stored/dev.cc:716 Could not open: /var/lib/bareos/storage-AI-Consolidated/AI-Incremental-vm-aiqi-linux-test-backup0012, ERR=No such file or directory

27-Dec 22:15 bareos-sd JobId 43: Please mount read Volume "AI-Incremental-vm-aiqi-linux-test-backup0012" for:
    Job: RestoreFiles.2021-12-27_22.15.29_31
    Storage: "FileStorage-AI-Consolidated" (/var/lib/bareos/storage-AI-Consolidated)
    Pool: Incremental-BareOS
    Media type: File
============================================

Bareos tries to find AI-Incremental-vm-aiqi-linux-test-backup0012 in the wrong storage.
Additional Information:
===========================================
Job {
  Name = vm-aiqi-linux-test-backup-job
  Client = vm-aiqi-linux-test-backup-fd

  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 30 days
  Always Incremental Keep Number = 15
  Always Incremental Max Full Age = 60 days

  Level = Incremental
  Type = Backup
  FileSet = "LinuxAll-vm-aiqi-linux-test-backup" # LinuxAll fileset (0000013)
  Schedule = "WeeklyCycleCustomers"
  Storage = FileStorage-AI-Incremental
  Messages = Standard
  Pool = AI-Incremental-vm-aiqi-linux-test-backup
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = AI-Consolidated-vm-aiqi-linux-test-backup # write Full Backups into "Full" Pool (0000005)
  Incremental Backup Pool = AI-Incremental-vm-aiqi-linux-test-backup # write Incr Backups into "Incremental" Pool (0000011)

  Enabled = yes

  RunScript {
    FailJobOnError = Yes
    RunsOnClient = Yes
    RunsWhen = Before
    Command = "sh /SCRIPTS/mysql/pre.mysql.sh"
  }

  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}
===========================================
Pool {
  Name = AI-Consolidated-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Full Backups be kept? (0000006)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Consolidated-vm-aiqi-linux-test-backup-" # Volumes will be labeled "Full-<volume-id>"
  Storage = FileStorage-AI-Consolidated
}
===========================================
Pool {
  Name = AI-Incremental-vm-aiqi-linux-test-backup
  Pool Type = Backup
  Recycle = yes # Bareos can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 360 days # How long should the Incremental Backups be kept? (0000012)
  Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
  Label Format = "AI-Incremental-vm-aiqi-linux-test-backup" # Volumes will be labeled "Incremental-<volume-id>"
  Volume Use Duration = 23h
  Storage = FileStorage-AI-Incremental
  Next Pool = AI-Consolidated-vm-aiqi-linux-test-backup
}

Both volumes are available in their respective storages.

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006
-rw-r----- 1 bareos bareos 26349467738 Dec 25 08:09 /var/lib/bareos/storage-AI-Consolidated/AI-Consolidated-vm-aiqi-linux-test-backup-0006

root@vm-aiqi-testbackup:~# ls -l /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
-rw-r----- 1 bareos bareos 1329612 Dec 26 20:00 /var/lib/bareos/storage-AI-Incremental/AI-Incremental-vm-aiqi-linux-test-backup0012
System Description
Attached Files: Bareos-always-incremental-restore-fail.txt (7,259 bytes) 2021-12-27 15:53
https://bugs.bareos.org/file_download.php?file_id=487&type=bug
Notes
(0004421)
jcottin   
2021-12-27 15:53   
Output with TXT might be easier to read.
(0004422)
jcottin   
2021-12-27 16:32   
Device {
  Name = FileStorage-AI-Consolidated
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Consolidated
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Device {
  Name = FileStorage-AI-Incremental
  Media Type = File
  Archive Device = /var/lib/bareos/storage-AI-Incremental
  LabelMedia = yes; # lets Bareos label unlabeled media
  Random Access = yes;
  AutomaticMount = yes; # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "File device. A connecting Director must have the same Name and MediaType."
}

Storage {
  Name = FileStorage-AI-Consolidated
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Consolidated
  Media Type = File
}

Storage {
  Name = FileStorage-AI-Incremental
  Address = bareos-server # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "22222222222222222222222222222222222222222222"
  Device = FileStorage-AI-Incremental
  Media Type = File
}
(0004423)
jcottin   
2021-12-27 16:43   
The documentation talks about 2 storages, but I had created 2 devices:

1 storage => 1 device.

I moved the data from one device (FILE: Directory) to the other and pointed both storages at a single device:

2 storages => 1 device.

Problem solved.
(0004425)
bruno-at-bareos   
2021-12-28 09:37   
Thanks for sharing. Yes, when the documentation talks about 2 storages, it means 2 storage resources in the director's view, not the bareos-storage daemon having 2 devices.
I will close the issue.
(0004426)
bruno-at-bareos   
2021-12-28 09:38   
AI (Always Incremental) needs 2 storage resources on the director, but one device able to read/write both Incremental and Full volumes.
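For illustration, here is a minimal sketch (hypothetical resource names, placeholder password) of that layout: two director Storage resources that both point to the same storage daemon device:

Storage {
  Name = File-AI-Incremental              # hypothetical name
  Address = bareos-server                 # use a fully qualified name, not "localhost"
  Password = "storage-daemon-password"    # placeholder
  Device = FileStorage                    # the SAME storage daemon device as below
  Media Type = File
}

Storage {
  Name = File-AI-Consolidated             # hypothetical name
  Address = bareos-server
  Password = "storage-daemon-password"    # placeholder
  Device = FileStorage                    # one device reads/writes both Incremental and Full
  Media Type = File
}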

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1339 [bareos-core] webui minor always 2021-04-19 11:49 2021-12-23 08:39
Reporter: khvalera Platform:  
Assigned To: frank OS:  
Priority: normal OS Version: archlinux  
Status: assigned Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: When going to the Run jobs tab I get an error
Description: when going to the Run jobs https://127.0.0.1/bareos-webui/job/run/ tab I get an error:
Notice: Undefined index: value in /usr/share/webapps/bareos-webui/module/Pool/src/Pool/Model/PoolModel.php on line 152
Tags: webui
Steps To Reproduce: https://127.0.0.1/bareos-webui/job/run/
Additional Information:
Attached Files: Снимок экрана_2021-04-19_12-52-56.png (110,528 bytes) 2021-04-19 11:54
https://bugs.bareos.org/file_download.php?file_id=464&type=bug
Notes
(0004112)
khvalera   
2021-04-19 11:54   
I am attaching a screenshot:
(0004156)
khvalera   
2021-06-11 22:36   
You need to correct the expression: preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches);
(0004157)
khvalera   
2021-06-11 22:39   
I temporarily patched it myself so that the error no longer appears: preg_match('/\s*Pool\s*=?(?<value>.*)(?(1)\1|)/i', $result, $matches);
But this is most likely not the right solution.
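For illustration, a defensive variant (a sketch only, assuming the $result and $matches variables used in PoolModel.php) that avoids the notice by checking the named group before reading it:

if (preg_match('/\s*Next\s*Pool\s*=\s*("|\')?(?<value>.*)(?(1)\1|)/i', $result, $matches) === 1
    && isset($matches['value'])) {
    $nextPool = trim($matches['value']);
} else {
    $nextPool = null; // no "Next Pool" line matched; avoids the "Undefined index: value" notice
}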

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1404 [bareos-core] director major have not tried 2021-11-25 11:24 2021-12-22 16:32
Reporter: Int Platform: Linux  
Assigned To: OS: CentOS  
Priority: high OS Version: 7  
Status: new Product Version: 19.2.11  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: prune deletes virtualfull job from volume although the retention period is not expired
Description: executing the command
  prune volume=Bilddaten-0408 yes
caused the virtualfull job, which was stored on this volume, to be deleted, although the retention period of "9 months 6 days" had not expired. The volume was last written to 1 day ago.

The bconsole output was:

Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
The current Volume retention period is: 9 months 6 days
There are no more Jobs associated with Volume "Bilddaten-0408". Marking it purged.
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004361)
Int   
2021-11-25 11:32   
how can I recover the deleted virtualfull job from the now purged volume?
(0004362)
Int   
2021-11-25 11:41   
The bareos daemon sent this message:

25-Nov 11:00 bareos-dir JobId 0: Volume "Bilddaten-0408" has Volume Retention of 23846400 sec. and has 0 jobs that will be pruned
25-Nov 11:00 bareos-dir JobId 0: Pruning volume Bilddaten-0408: 0 Jobs have expired and can be pruned.
25-Nov 11:00 bareos-dir JobId 0: Volume "" contains no jobs after pruning.
(0004363)
bruno-at-bareos   
2021-11-25 18:48   
What was the purpose of running prune volume=Bilddaten-0408 yes?
If the media has not been used or truncated (depending on your configuration) you can still use bscan to reimport it into the database.
(0004371)
Int   
2021-11-29 10:52   
The pool contains other volumes that had expired. My goal was to truncate the expired volumes to regain free disk space.
I ran the shell script

#!/bin/bash
for f in `echo "list volumes pool=Bilddaten" | bconsole | grep Bilddaten- | cut -d '|' -f3`; do
  echo "prune volume=$f yes" | bconsole;
done

to prune each volume in the pool.
But the prune command pruned not only the expired volumes but also all volumes that contained the virtualfull job.

My second step would have been to truncate the pruned volumes.
But since I did not execute the truncate I will try to restore the virtualfull job by using bscan as you suggested.
(0004406)
bruno-at-bareos   
2021-12-22 16:32   
Sorry busy on other tasks.

In fact, what you expected is neither what is configured nor how pruning works with virtualfull jobs.
The date used for the virtualfull job is that of the first job composing the virtualfull (say the first job ran on 1st January):
even if the volume was written yesterday, the date of the job on it is 1st January.

That's why you got the result you had, and not the one you expected ...

For Always Incremental, the documentation clearly states that Volume/File/Client retention should be set to a high value.
It is the consolidate job's task to do the cleanup, and nothing else should be used.

I hope this will help you understand better how to setup and use the product.
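For illustration, a minimal pool sketch (hypothetical names and values, not a drop-in configuration) of the high retention recommended for Always Incremental, leaving all cleanup to the consolidate job:

Pool {
  Name = AI-Incremental                # hypothetical name
  Pool Type = Backup
  # Keep retention far above "Always Incremental Job Retention",
  # so regular pruning never removes jobs the consolidation still needs.
  Volume Retention = 10 years
  Job Retention = 10 years
  File Retention = 10 years
  Next Pool = AI-Consolidated          # hypothetical name
  Storage = File                       # hypothetical storage name
}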

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1397 [bareos-core] documentation minor always 2021-11-01 16:45 2021-12-21 16:07
Reporter: Norst Platform: Linux  
Assigned To: bruno-at-bareos OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: "Tapespeed and blocksizes" chapter location is wrong
Description: "Tapespeed and blocksizes" chapter is a general topic. Therefore, it must be moved away from "Autochanger Support" page/category.
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#setblocksizes
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004360)
bruno-at-bareos   
2021-11-25 10:35   
Would you like to propose a PR changing the location? It would be really appreciated.
Are you doing backups to tape on a single drive? (Most of the use cases we actually see use an autochanger; that's why the chapter is currently located there.)
(0004376)
Norst   
2021-11-30 21:01   
(Last edited: 2021-11-30 21:03)
Yes, I use a standalone tape drive, but for infrequent, long-term archiving rather than regular backups.

PR to move "Tapespeed and blocksizes" one level up, to "Tasks and Concepts": https://github.com/bareos/bareos/pull/1009

(0004383)
bruno-at-bareos   
2021-12-09 09:42   
Did you see the comment in the PR ?
(0004402)
bruno-at-bareos   
2021-12-21 16:07   
PR#1009 merged last week.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1369 [bareos-core] webui crash always 2021-07-12 11:54 2021-12-21 13:58
Reporter: jarek_herisz Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: webui tries to load a nonexistent file
Description: When the Polish language is chosen at the login stage, webui tries to load the file:
bareos-webui/js/locale/pl_PL/LC_MESSAGES/pl_PL.po

Such a file does not exist, which results in an error:
i_gettext.js:413 iJS-gettext:'try_load_lang_po': failed. Unable to exec XMLHttpRequest for link

The remaining JavaScript is terminated and the interface becomes inoperable.
Tags: webui
Steps To Reproduce: With version 20.0.1
On the webui login page, select Polish.
Additional Information:
System Description
Attached Files: Przechwytywanie.PNG (78,772 bytes) 2021-07-19 10:36
https://bugs.bareos.org/file_download.php?file_id=472&type=bug
Notes
(0004182)
jarek_herisz   
2021-07-19 10:36   
System:
root@backup:~# cat /etc/debian_version
10.10
(0004206)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15093.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1324 [bareos-core] webui major always 2021-03-02 10:26 2021-12-21 13:57
Reporter: Emmanuel Garette Platform: Linux Ubuntu  
Assigned To: frank OS: 20.04  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Infinite loop when trying to log with invalid account
Description: I'm using this community version of Webui: http://download.bareos.org/bareos/release/20/xUbuntu_20.04/

When I try to log in with an invalid account, webui does not return anything and Apache seems to run in an infinite loop. The log file grows rapidly.

I think the problem is in these two lines:

          $send = fwrite($this->socket, $msg, $str_length);
         if($send === 0) {

The fwrite function returns false when an error occurs (see: https://www.php.net/manual/en/function.fwrite.php ).

If I replace 0 with false, everything is OK.

Attached is a patch to solve this issue.
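For illustration, a minimal sketch of that change (assuming the $this->socket, $msg and $str_length variables from BareosBSock.php):

          $send = fwrite($this->socket, $msg, $str_length);
          // fwrite() returns false on error; a strict comparison with 0 never
          // matches that case, so the error branch is skipped and the loop never ends.
          if ($send === false || $send === 0) {
              // handle the failed write here (original error handling)
          }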
Tags:
Steps To Reproduce:
Additional Information:
Attached Files: webui.patch (483 bytes) 2021-03-02 10:26
https://bugs.bareos.org/file_download.php?file_id=458&type=bug
Notes
(0004163)
frank   
2021-06-28 15:22   
Fix committed to bareos master branch with changesetid 15006.
(0004165)
frank   
2021-06-29 14:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15017.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1316 [bareos-core] storage daemon major always 2021-01-30 10:01 2021-12-21 13:57
Reporter: kardel Platform:  
Assigned To: franku OS:  
Priority: high OS Version:  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: storage daemon loses a configured device instance causing major confusion in device handling
Description: After start "status storage=<name>" show the device as not open or with it parameters - that is expected.

After the first backup with spooling "status storage=<name>" shows the device as "not open or does not exist" - that is a hint
=> the configured "device_resource->dev" value is nullptr.

The follow-up effect is that the reservation code is unable to match the same active device and the same volume in all cases.
When the match fails (log shows "<name> (/dev/<tapename>) and "<name> (/dev/<tapename>) " with no differences) it attempts to allocate new volumes possibly with operator intervention even though the expected volume is available in the drive.

The root cause is a temporary device created in spool.cc::295 => auto rdev(std::make_unique<SpoolDevice>());
Line 302 sets device resource rdev->device_resource = dcr->dev->device_resource;
When rdev leaves scope the Device::~Device() Dtor is called which happily sets this.device_resource->dev = nullptr in
dev.cc:1281 if (device_resource) { device_resource->dev = nullptr; } (=> potential memory leak)

At this point the configured device_resource is lost (even though it might still be known by active volume reservations).
After that the reservation code is completely confused due to new default allocations of devices (see additional info).

A fix is provided as patch against 20.0.0. It only clears this.device_resource->dev when
this.device_resource->dev references this instance.
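To illustrate the idea behind the patch, here is a self-contained sketch with simplified, hypothetical types (not the Bareos sources): the destructor only resets the back pointer when it still refers to the object being destroyed, so a short-lived spool device sharing the resource cannot wipe out the configured device:

#include <iostream>

struct DeviceResource;                    // forward declaration

struct Device {
  DeviceResource* device_resource = nullptr;
  ~Device();
};

struct DeviceResource {
  Device* dev = nullptr;                  // back pointer to the configured device
};

Device::~Device() {
  // Guard as described above: only clear the back pointer if it references
  // *this*; a temporary device sharing the resource must not reset it.
  if (device_resource && device_resource->dev == this) {
    device_resource->dev = nullptr;
  }
}

int main() {
  DeviceResource resource;
  Device configured;
  configured.device_resource = &resource;
  resource.dev = &configured;             // resource points at the configured device

  {
    Device temporary;                     // stands in for the temporary SpoolDevice
    temporary.device_resource = &resource;
  }                                       // temporary destroyed here

  // Without the guard, resource.dev would now be nullptr and the configured
  // device would be "lost"; with the guard it is still intact.
  std::cout << (resource.dev == &configured ? "configured device kept\n"
                                            : "configured device lost\n");
}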
Tags:
Steps To Reproduce: start bareos system.
observe "status storage=..."
run a spooling job
observe "status storage=..."

If you want to see the confusion, it requires a more elaborate test setup with multiple jobs where a spooling job finishes before
another job for the same volume and device begins to run.
Additional Information: It might be worthwhile to check the validity of creating a device in dir_cmd.cc:932. During testing
a difference in device pointers was seen in vol_mgr.cc:916 although the device parameters were the same.
This is most likely caused by Device::this.device_resource->dev being a nullptr and the device creation
in dir_cmd.cc:932. The normal expected lifetime of a device is from reading the configuration until the
program termination. Autochanger support might change that rule though - I didn't analyze that far.
Attached Files: dev.cc.patch (568 bytes) 2021-01-30 10:01
https://bugs.bareos.org/file_download.php?file_id=455&type=bug
Notes
(0004088)
franku   
2021-02-12 12:15   
Thank you for your deep analysis and the proposed fix which solves the issue.

See github PR https://github.com/bareos/bareos/pull/724/commits for more information on the fix and systemtests (which is draft at the time of adding this note).
(0004089)
franku   
2021-02-15 11:38   
Experimental binaries with the proposed bugfix can be found here: http://download.bareos.org/bareos/experimental/CD/PR-724/
(0004091)
franku   
2021-02-24 13:22   
Fix committed to bareos master branch with changesetid 14543.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1300 [bareos-core] webui minor always 2021-01-11 16:27 2021-12-21 13:57
Reporter: fapg Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 20.0.0  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: will care
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: some job status are not categorized properly
Description: in the dashboard, when we click in waiting jobs, the url is:

https://bareos-server/bareos-webui/job//?period=1&status=Waiting
but should be:
https://bareos-server/bareos-webui/job//?period=1&status=Queued

Best Regards,
Fernando Gomes
Tags:
Steps To Reproduce:
Additional Information: affects table column filter
System Description
Attached Files:
Notes
(0004168)
frank   
2021-06-29 18:45   
It's not a query parameter issue. WebUI categorizes all the different job status flags into groups. I had a look into it and some job status are not categorized properly so the column filter on the table does not work as expected in those cases. A fix will follow.
(0004175)
frank   
2021-07-06 11:22   
Fix committed to bareos master branch with changesetid 15053.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1251 [bareos-core] webui tweak always 2020-06-11 09:13 2021-12-21 13:57
Reporter: juanpebalsa Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: low OS Version: 7  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Error when displaying pool detail
Description: When I try to see the details of a pool under Storage -> Pools -> 15-Days (one of my pools), I get an error message because the page cannot be found.

http://xxxxxxxxx.com/bareos-webui/pool/details/15-Days:
|A 404 error occurred
|Page not found.
|The requested URL could not be matched by routing.
|
|No Exception available
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: Captura de pantalla 2020-06-11 a las 9.13.02.png (20,870 bytes) 2020-06-11 09:13
https://bugs.bareos.org/file_download.php?file_id=442&type=bug
Notes
(0004207)
frank   
2021-08-09 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15094.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1232 [bareos-core] installer / packages minor always 2020-04-21 09:26 2021-12-21 13:57
Reporter: rogern Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 8  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact: yes
bareos-19.2: action: fixed
bareos-18.2: impact: yes
bareos-18.2: action: fixed
bareos-17.2: impact: yes
bareos-17.2: action: none
bareos-16.2: impact: yes
bareos-16.2: action: none
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos logrotate errors
Description: The problem with logrotate seems to be back (previously addressed and fixed in 0000417) due to a missing

su bareos bareos

in /etc/logrotate.d/bareos-dir

Logrotate gives "error: skipping "/var/log/bareos/bareos.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation."
Also the same for bareos-audit.log
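For reference, a minimal sketch of what /etc/logrotate.d/bareos-dir needs (the rotation values are hypothetical; the relevant part is the su line that tells logrotate which user/group to use for the bareos-owned directory):

/var/log/bareos/bareos.log /var/log/bareos/bareos-audit.log {
        su bareos bareos
        weekly
        rotate 5
        missingok
        notifempty
        compress
}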
Tags:
Steps To Reproduce: Two fresh installs of 19.2.7 with same error from logrotate and lacking "su bareos bareos" in /etc/logrotate.d/bareos-dir
Additional Information:
Attached Files:
Notes
(0004256)
bruno-at-bareos   
2021-09-08 13:46   
PR is now proposed with also backport to supported version
https://github.com/bareos/bareos/pull/918
(0004259)
bruno-at-bareos   
2021-09-09 15:07   
PR#918 has been merged, and backport will be made to 20,19,18 Monday 13th. and will be available on next minor release.
(0004260)
bruno-at-bareos   
2021-09-09 15:22   
Fix committed to bareos master branch with changesetid 15139.
(0004261)
bruno-at-bareos   
2021-09-09 16:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15141.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1205 [bareos-core] webui minor always 2020-02-28 09:42 2021-12-21 13:57
Reporter: Ryushin Platform: Linux  
Assigned To: frank OS: Devuan (Debian)  
Priority: normal OS Version: 10  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: HeadLink.php error with PHP 7.3
Description: I received this error when trying to connect to the webui:
Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zendframework/zend-view/src/Helper/HeadLink.php on line 403

Seems to be related to this issue:
https://github.com/zendframework/zend-view/issues/172#issue-388080603
Though the line numbers for their fix is not the same.
Tags:
Steps To Reproduce:
Additional Information: I solved the issue by replacing the HeadLink.php file with an updated version from here:
https://raw.githubusercontent.com/zendframework/zend-view/f7242f7d5ccec2b8c319634b4098595382ef651c/src/Helper/HeadLink.php
Attached Files:
Notes
(0004144)
frank   
2021-06-08 12:22   
Fix committed to bareos bareos-19.2 branch with changesetid 14922.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1191 [bareos-core] webui crash always 2020-02-12 15:40 2021-12-21 13:57
Reporter: khvalera Platform: Linux  
Assigned To: frank OS: Arch Linux  
Priority: high OS Version: x64  
Status: resolved Product Version: 19.2.6  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: The web interface runs under any login and password
Description: Logging in to the web interface succeeds with any arbitrary username and password.
How can this be fixed?
Tags: webui
Steps To Reproduce: /etc/bareos/bareos-dir.d/console/web-admin.conf

Console {
  Name = web-admin
  Password = "123"
  Profile = "webui-admin"
}

/etc/bareos/bareos-dir.d/profile/webui-admin.conf

Profile {
  Name = "webui-admin"
  CommandACL = !.bvfs_clear_cache, !.exit, !.sql, !configure, !create, !delete, !purge, !prune, !sqlquery, !umount, !unmount, *all*
  Job ACL = *all*
  Schedule ACL = *all*
  Catalog ACL = *all*
  Pool ACL = *all*
  Storage ACL = *all*
  Client ACL = *all*
  FileSet ACL = *all*
  Where ACL = *all*
}

/etc/bareos-webui/directors.ini

[bareos_dir]
enabled = "yes"
diraddress = "localhost"
dirport>= 9101
;UsePamAuthentication = yes
pam_console_name = "web-admin"
pam_console_password = "123"
Additional Information:
Attached Files:
Notes
(0003936)
khvalera   
2020-04-10 00:10   
UsePamAuthentication = yes
#pam_console_name = "web-admin"
#pam_console_password = "123"
(0004289)
frank   
2021-09-29 18:22   
Fix committed to bareos master branch with changesetid 15252.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1020 [bareos-core] webui major always 2018-10-10 09:37 2021-12-21 13:56
Reporter: linkstat Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 18.2.4-rc1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Can not restore a client with spaces in its name
Description: All my clients have names with spaces in them, like "client-fd using Catalog-XXX"; correctly handled (i.e. enclosing the name in quotation marks, or escaping the space with \), this has not been a problem... until now. Webui can even perform backup jobs (previously defined in the configuration files) and has not had problems with the spaces. But when it came time to restore something... it just does not seem to handle character strings that contain spaces properly. Apparently, it cuts the string at the first space found (as you can see by looking at the attached image).
Tags:
Steps To Reproduce: Define a client whose name contains spaces inside, such as "hostname-fd Testing Client".
Try to restore a backup from Webui to that client (it does not matter that the backup was originally made in that client or that the newly defined client is a new destination for the restoration of a backup previously made in another client).
Webui will fail by saying "invalid client argument: hostname-fd". As you can see, Webui will "cut" the client's name to the first string before the first space, and since there is no hostname-fd client, the task will fail; or worse, if additionally there was a client whose name matched the first string before the first space, Webui will restore the wrong client.
Additional Information: bconsole does not present any problem when the clients contain spaces in their names (this of course, when the spaces are correctly handled by the human operator who writes the commands, either by enclosing the name with quotation marks, or escaping spaces with a backslash).
System Description
Attached Files: Bareos - Can not restore when a client name has spaces in their name.jpg (139,884 bytes) 2018-10-10 09:37
https://bugs.bareos.org/file_download.php?file_id=311&type=bug
Notes
(0003546)
linkstat   
2019-07-31 18:03   
Hello!

Any news regarding this problem? (or any ideas about how to patch it temporarily so that you can use webui for the case described)?
Sometimes it is tedious to use bconsole all the time instead of webui ...

Regards!
(0004185)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15068.
(0004188)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15079.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
971 [bareos-core] webui major always 2018-06-25 11:54 2021-12-21 13:56
Reporter: Masanetz Platform: Linux  
Assigned To: frank OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 17.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Error building tree for filenames with backslashes
Description: WebUI Restore fails building the tree if directory contains filenames with backslashes.

Some time ago the adobe reader plugin created a file named "C:\nppdf32Log\debuglog.txt" in the working dir.
Building the restore tree in WebUI fails with popup "Oops, something went wrong, probably too many files.".

Filename handling should be adapted for backslashes (e.g. like https://github.com/bareos/bareos-webui/commit/ee232a6f04eaf2a7c1084fee981f011ede000e8a)
Tags:
Steps To Reproduce: 1. Put an empty file with a filename with backslashes (e.g. C:\nppdf32Log\debuglog.txt) in your home directory
2. Backup
3. Try to restore any file from your home directory from this backup via WebUI
Additional Information: Attached diff of my "workaround"
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: RestoreController.php.diff (1,669 bytes) 2018-06-25 11:54
https://bugs.bareos.org/file_download.php?file_id=299&type=bug
Notes
(0004184)
frank   
2021-07-21 15:22   
Fix committed to bareos master branch with changesetid 15067.
(0004189)
frank   
2021-07-22 15:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15080.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
871 [bareos-core] webui block always 2017-11-04 16:10 2021-12-21 13:56
Reporter: tuxmaster Platform: Linux  
Assigned To: frank OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 17.4.2  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: UI will not load complete
Description: After the login, the website does not load completely.
Only the spinner is shown (see picture).

The PHP error log is flooded with:
PHP Notice: Undefined index: meta in /usr/share/bareos-webui/module/Job/src/Job/Model/JobModel.php on line 120

The Bareos director is running version 16.2.7.
Tags:
Steps To Reproduce:
Additional Information: PHP 7.1 via fpm
System Description
Attached Files: Bildschirmfoto von »2017-11-04 16-06-19«.png (50,705 bytes) 2017-11-04 16:10
https://bugs.bareos.org/file_download.php?file_id=270&type=bug
Notes
(0002812)
frank   
2017-11-09 15:35   
DIRD and WebUI need to have the same version currently.

WebUI 17.2.4 is not compatible to a 16.2.7 director yet, which may change in future.
(0002813)
tuxmaster   
2017-11-09 17:36   
Thanks for the information.
But this should be noted in the release notes, or better, result in an error message about an unsupported version.
(0004169)
frank   
2021-06-30 11:49   
There is a note in the installation chapter, see https://docs.bareos.org/master/IntroductionAndTutorial/InstallingBareosWebui.html#system-requirements .
Nevertheless, I'm going to have a look at whether we can somehow improve the error handling regarding version compatibility.
(0004176)
frank   
2021-07-06 17:22   
Fix committed to bareos master branch with changesetid 15057.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
579 [bareos-core] webui block always 2015-12-06 12:41 2021-12-21 13:56
Reporter: tuxmaster Platform: x86_64  
Assigned To: frank OS: Fedora  
Priority: normal OS Version: 22  
Status: resolved Product Version:  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Unable to connect to the director from webui via ipv6
Description: The web ui and the director are running on the same system.
After entering the password, the error message "Error: , director seems to be down or blocking our request." is presented.
Tags:
Steps To Reproduce: Open the website enter the credentials and try to log in.
Additional Information: getsebool httpd_can_network_connect
httpd_can_network_connect --> on

Error from the apache log file:
[Sun Dec 06 12:37:32.658104 2015] [:error] [pid 2642] [client ABC] PHP Warning: stream_socket_client(): unable to connect to tcp://[XXX]:9101 (Unknown error) in /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php on line 521, referer: http://CDE/bareos-webui/

XXX=ip6addr of the director.

Connecting (from the web server) via telnet to the IPv6 address on port 9101 works.
bconsole also works.
Attached Files:
Notes
(0002323)
frank   
2016-07-15 16:07   
Note: When specifying a numerical IPv6 address (e.g. fe80::1), you must enclose the IP in square brackets—for example, tcp://[fe80::1]:80.

http://php.net/manual/en/function.stream-socket-client.php

You could try setting your IPv6 address in your directors.ini into square brackets until we provide a fix, that might already work.
(0002324)
tuxmaster   
2016-07-15 17:04   
I have tried setting it to:
diraddress = "[XXX]"
where XXX is the IPv6 address.

But the error is the same.
(0002439)
tuxmaster   
2016-11-06 12:09   
Same on Fedora 24 using php 7.0
(0004159)
pete   
2021-06-23 12:41   
(Last edited: 2021-06-23 12:55)
This is still present in version 20 of the Bareos WebUI, on all RHEL variants I tested (CentOS 8, AlmaLinux 8).

It results from a totally unnecessary "bindto" configuration in line 473 of /usr/share/bareos-webui/vendor/Bareos/library/Bareos/BSock/BareosBSock.php:

      $opts = array(
          'socket' => array(
              'bindto' => '0:0',
          ),
      );

This unnecessarily limits PHP socket binding to IPv4 interfaces as documented in https://www.php.net/manual/en/context.socket.php. The simplest solution is to just comment out the "bindto" line:

      $opts = array(
          'socket' => array(
              // 'bindto' => '0:0',
          ),
      );

Restart php-fpm and now IPv6 works perfectly

(0004167)
frank   
2021-06-29 17:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15043.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
565 [bareos-core] file daemon feature N/A 2015-11-16 08:33 2021-12-07 14:24
Reporter: joergs Platform: Linux  
Assigned To: OS: SLES  
Priority: none OS Version: 12  
Status: acknowledged Product Version:  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: use btrfs features to efficiently detect changed files
Description: btrfs (a default filesystem on SLES 12) provides features to generate a list of changed files between snapshots. This can be useful for Bareos to efficiently generate the file lists for Incremental and Differential backups.
Tags:
Steps To Reproduce:
Additional Information: btrfs send operation compares two subvolumes and writes a description of how to convert one subvolume (the parent subvolume) into the other (the sent subvolume).
btrfs receive does the opposite.

SLES 12 comes with the tool snapper, which provides an abstraction for this functionality (and should work the same way for LVM and ext4?).
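For illustration, a minimal sketch (hypothetical subvolume paths) of the snapshot-diff mechanism referred to above:

# create read-only snapshots of the subvolume before and after the changes
btrfs subvolume snapshot -r /srv/data /srv/.snapshots/data-prev
# ... data changes happen here ...
btrfs subvolume snapshot -r /srv/data /srv/.snapshots/data-curr
# emit only the differences between the two snapshots as a stream
btrfs send -p /srv/.snapshots/data-prev /srv/.snapshots/data-curr > /tmp/data-incremental.stream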
System Description
Attached Files:
Notes
(0004378)
colttt   
2021-12-03 11:08   
6 years later, any news or plans for this?
The same can be done with zfs and with bcachefs (in the near future),

so it would be great if Bareos could support zfs/btrfs/bcachefs send/receive/snapshots.
(0004382)
bruno-at-bareos   
2021-12-07 14:24   
Note:
+ To work, snapshots have to be configured and taken on time for the desired FS (this uses a subvolume + disk space).
+ To be benchmarked on a heavily used FS compared to the traditional Accurate mechanism.
+ Keep in mind that Accurate is needed for AI.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1388 [bareos-core] regression testing block always 2021-09-20 12:02 2021-11-29 09:12
Reporter: mschiff Platform: Linux  
Assigned To: bruno-at-bareos OS: any  
Priority: normal OS Version: 3  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: /.../sd_reservation.cc: error: sleep_for is not a member of std::this_thread (maybe gcc-11 related)
Description: It seems like the tests do not build when using gcc-11:


/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc: In function ‘void WaitThenUnreserve(std::unique_ptr<TestJob>&)’:
/var/tmp/portage/app-backup/bareos-19.2.9/work/bareos-Release-19.2.9/core/src/tests/sd_reservation.cc:147:21: error: ‘sleep_for’ is not a member of ‘std::this_thread’
  147 | std::this_thread::sleep_for(std::chrono::milliseconds(10));
      | ^~~~~~~~~
ninja: build stopped: subcommand failed.
Tags:
Steps To Reproduce:
Additional Information: Please see: https://bugs.gentoo.org/786789
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files:
Notes
(0004288)
bruno-at-bareos   
2021-09-29 13:58   
Would you mind testing with the new 19.2.11 release, or even with the not-yet-released branch-12.2?
Building here with gcc-11 under openSUSE Tumbleweed works as expected.
(0004328)
bruno-at-bareos   
2021-11-10 10:05   
Hello, did you make any progress on this ?

As 19.2 will soon be obsolete, did you try to compile version 20?
(0004368)
mschiff   
2021-11-27 12:23   
Hi!

Sorry for the late answer. All current versions build fine with gcc-11 here:
 - 18.2.12
 - 19.2.11
 - 20.0.3

Thanks!
(0004369)
bruno-at-bareos   
2021-11-29 09:11   
Ok thanks I will close then,
(0004370)
bruno-at-bareos   
2021-11-29 09:12   
Gentoo got gcc-11 fixed.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1151 [bareos-core] webui feature always 2019-12-12 09:25 2021-11-26 13:22
Reporter: DanielB Platform:  
Assigned To: frank OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 18.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: bareos webui does not show the inchanger flag for volumes
Description: The bareos webui does not show the inchanger flag for tape volumes. The flag is visible in the bconsole.
The flag should be visible as an additional column to help volume management with tape changers.
Tags: volume, webui
Steps To Reproduce: Log into the webgui.
Select Storage -> Volumes
Additional Information:
Attached Files:
Notes
(0004367)
frank   
2021-11-26 13:22   
Fix committed to bareos master branch with changesetid 15491.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
936 [bareos-core] webui feature always 2018-03-28 23:28 2021-11-25 12:32
Reporter: msilveira Platform: Linux  
Assigned To: OS: any  
Priority: high OS Version: 3  
Status: resolved Product Version: 17.2.5  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: The field "Restore location on client" Should be updated after selecting "Restore Job"
Description: I've been engaged in programming this feature, because it bothered me too much to have to type the destination folder, considering that I have a real path ( Where parameter ) set for every RestoreJob.
Tags:
Steps To Reproduce:
Additional Information: I've written a patch to make the webui force a RestoreJob selection and then, the system will check for the selected RestoreJob's "Where = " parameter.

The patch has been written against the source from bareos-webui-17.2.4-15.1.el7.src.rpm .

I'm not a professional programmer; it took me almost half a day to find out how to get this working. But it was worth it. I hope it makes it into the master branch.
System Description General issues regarding all linux platforms or not specific to only one distribution
Attached Files: bareos-webui-populate-where-from-restorejob.patch (4,813 bytes) 2018-03-28 23:28
https://bugs.bareos.org/file_download.php?file_id=283&type=bug
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1396 [bareos-core] bconsole minor always 2021-10-31 21:36 2021-11-24 14:43
Reporter: nelson.gonzalez6 Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: normal OS Version: 7  
Status: resolved Product Version: 19.2.11  
Product Build: Resolution: no change required  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact: no
bareos-master: action: none
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: HOW TO REMOVE FROM DB ALL ERROR, CANCELED, FAILED JOBS
Description: I have noticed when listing in bconsole that the pools contain media IDs with jobs from 3 to 4 years ago, and that clients which are no longer in use or have been removed are still in the db catalog list. How can I remove these jobs/volumes whose volstatus is Error, Expired or Canceled?

Any suggestions are welcome.

Thanks.


-----------------------------

| MediaId | VolumeName           | VolStatus | Enabled | VolBytes   | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType   | LastWritten         | Storage |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
|     698 | Differentialvpn-0698 | Error     |       1 | 35,437,993 |        0 |      864,000 |       1 |    0 |         0 | GlusterFile | 2021-07-23 04:42:41 | Gluster |
|     900 | Differentialvpn-0900 | Error     |       1 |  3,246,132 |        0 |      864,000 |       1 |    0 |         0 | GlusterFile | 2021-08-29 14:56:06 | Gluster |
|   1,000 | Differentialvpn-1000 | Append    |       1 |    226,375 |        0 |      864,000 |       1 |    0 |         0 | GlusterFile | 2021-10-31 06:11:56 | Gluster |
+---------+----------------------+-----------+---------+------------+----------+--------------+---------+------+-----------+-------------+---------------------+---------+
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004313)
bruno-at-bareos   
2021-11-02 13:21   
Did you already try dbcheck? This tool is normally the right one to clean up orphaned records stored in the DB.
(0004320)
nelson.gonzalez6   
2021-11-08 13:01   
Hi, yes, I have run dbcheck -fv and it took a long time due to lots of orphaned records, but when listing volumes in bconsole it still shows the errors.
(0004325)
bruno-at-bareos   
2021-11-10 10:01   
So in that case, the best thing to do is to remove them manually with the delete volume command in bconsole.
You will normally also have to remove the volume file from the filesystem manually.

This is because volumes in Error state are locked and therefore not pruned, as we can't touch them anymore.
(0004341)
bruno-at-bareos   
2021-11-16 15:42   
Besides removing those records manually, in bconsole or by scripting the delete volume line, you have to remember to delete the corresponding file if it still exists in your storage location.
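For illustration, a sketch of such a script (the pool name and the storage path are placeholders; review the volume list carefully before running anything destructive):

#!/bin/bash
# print the VolumeName column (3rd '|'-separated field) of every volume in Error state
for vol in $(echo "list volumes pool=Differentialvpn" | bconsole | awk -F'|' '$4 ~ /Error/ { gsub(/ /, "", $3); print $3 }'); do
  echo "delete volume=$vol yes" | bconsole
  rm -f "/path/to/storage/$vol"   # also remove the corresponding volume file if it still exists
done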
(0004357)
bruno-at-bareos   
2021-11-24 14:42   
Final note before closing. The manual process is required.
(0004358)
bruno-at-bareos   
2021-11-24 14:43   
A manual process, as described, is needed to remove volumes in Error state.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1390 [bareos-core] installer / packages minor always 2021-10-07 16:39 2021-11-24 10:49
Reporter: tastydr Platform: Linux  
Assigned To: arogge OS: Debian  
Priority: normal OS Version: 10  
Status: acknowledged Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: debian package python-bareos and python3-bareos not co-installable
Description: Hi all,
  If I try to install python3-bareos after python-bareos is already installed there is an error showing that the two packages both contain the file '/usr/bin/bareos-fd-connect.py'.
  This makes it difficult to migrate python 2 scripts to python 3. :)

Thanks for your work!
C.

# apt install python3-bareos
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  python3-bareos
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/16.9 kB of archives.
After this operation, 112 kB of additional disk space will be used.
(Reading database ... 107524 files and directories currently installed.)
Preparing to unpack .../python3-bareos_20.0.1-3_all.deb ...
Unpacking python3-bareos (20.0.1-3) ...
dpkg: error processing archive /var/cache/apt/archives/python3-bareos_20.0.1-3_all.deb (--unpack):
 trying to overwrite '/usr/bin/bareos-fd-connect.py', which is also in package python-bareos 20.0.1-3
Errors were encountered while processing:
 /var/cache/apt/archives/python3-bareos_20.0.1-3_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004354)
arogge   
2021-11-23 16:01   
I can reproduce the issue.
If it is just for migrating your scripts, you can simply install python-bareos using pip which will also work per user or in a virtual env.

Hope that helps!
(0004356)
bruno-at-bareos   
2021-11-24 10:49   
dpkg --force-all --install /var/cache/apt/archives/python3-bareos_20.0.4-1_all.deb (or whatever version you have) can be used to force the python3 to be installed.
To add information: this problem doesn't exist in the RPM packages, as only the python3 package contains the /usr/bin/ scripts.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1401 [bareos-core] webui minor always 2021-11-16 15:14 2021-11-18 09:50
Reporter: Armand Platform:  
Assigned To: bruno-at-bareos OS:  
Priority: normal OS Version:  
Status: resolved Product Version: 20.0.2  
Product Build: Resolution: duplicate  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: No data display on /bareos-webui/media/details/_volume_name_ for french translation
Description: When logged in to webui with the French language, the volume details are not displayed due to an issue with the French translation.

The French translation can contain single quotes ('), as in French the letters a/e are sometimes replaced with an apostrophe. Example: "The application" is translated to "L'application", not "La application".
Unfortunately, the quote is also a string delimiter in JavaScript, and we cannot have a quote inside a single-quoted string. To solve this we need to put the string inside double quotes (") or escape the quote: 'this is a string with a single quote \' escaped to be valid'.

In the file /usr/share/bareos-webui/module/Media/view/media/media/details.phtml we have this issue between line 315 and 445 : inside function detailFormatterVol(index, row) {...}

One solution could be :
  between line 315 and 445,
    replace all : <?php echo $this->translate("XXXXXX"); ?>
    with : <?php echo str_replace("'","\'",$this->translate("XXXXXX")); ?>
  This will replace each single quote with an escaped single quote.

PS: see attached file, which includes this solution.
Tags:
Steps To Reproduce: log to webui with language = French
go to menu : STOCKAGES
go in tab : Volumes
select a volume in the list (My volumes are all DLT tapes)

we expect to see the volume information + the jobs saved on this volume
 
Additional Information:
Attached Files: details.phtml (17,122 bytes) 2021-11-16 15:14
https://bugs.bareos.org/file_download.php?file_id=484&type=bug
Notes
(0004340)
Armand   
2021-11-16 15:22   
Just saw, this is the same as issue 0001235 ;-/ sorry
(0004345)
bruno-at-bareos   
2021-11-18 09:50   
I will close this as a duplicate of 1235.
It is fixed and published in 20.0.3 (available for customers with an affordable subscription) and you can cherry-pick the commit which fixes this.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1235 [bareos-core] webui major always 2020-04-26 18:43 2021-11-18 09:50
Reporter: kabassanov Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 19.2.7  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Special characters not escaped in translations
Description: Hi,

I observed errors in the generation of some (at least one) web pages, in the presence of special characters in translation strings.

Here is an example:
module/Media/view/media/media/details.phtml: html.push('<th><?php echo $this->translate("Volume writes"); ?></th>');
with French translation containing ' . I customized translations a little bit, so I'm not sure the original translation of this one had this character, but it is a general issue.

Thanks.
Tags:
Steps To Reproduce: Just take a translation containing an apostrophe and observe that the webpage is not completely displayed. In the debug window you'll see:

      html.push('<th>Nombre d'écritures sur le volume</th>');

where the apostrophe in "Nombre d'écritures" is treated as the end of the string passed to push.
Additional Information:
System Description
Attached Files:
Notes
(0004208)
frank   
2021-08-10 11:22   
Fix committed to bareos bareos-19.2 branch with changesetid 15101.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1348 [bareos-core] storage daemon major have not tried 2021-05-07 08:50 2021-11-17 09:49
Reporter: RobertF. Platform: Linux  
Assigned To: bruno-at-bareos OS: CentOS  
Priority: high OS Version: 7  
Status: resolved Product Version: 20.0.1  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version: 21.0.0  
    Target Version:  
bareos-master: impact: yes
bareos-master: action: fixed
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Fatal error: stored/askdir.cc:295 NULL Volume name. This shouldn't happen!!!
Description: I have the following problem: we do daily backups to tape, and yesterday this error appeared for the first time.
Before the update to version 20.0.1, the tape backup was running without any errors.

The error has appeared only once so far. Do I have to change something in the settings, or what could the cause be?


Backup is done to an LTO-6 library with 4 tape drives.
Here is the job log that shows what is happening:
Tags:
Steps To Reproduce:
Additional Information:
2021-05-04 09:02:59	bareos-dir JobId 220137: Error: Bareos bareos-dir 20.0.1 (02Mar21):
Build OS: Debian GNU/Linux 10 (buster)
JobId: 220137
Job: daily-DbBck-ELVMCDB57.2021-05-04_09.00.00_19
Backup Level: Full
Client: "elbct3" 20.0.1 (02Mar21) Debian GNU/Linux 10 (buster),debian
FileSet: "DbBck-ELVMCDB57" 2020-02-25 08:25:00
Pool: "DailyDbShare" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "elbct3-msl4048" (From Pool resource)
Scheduled time: 04-May-2021 09:00:00
Start time: 04-May-2021 09:00:02
End time: 04-May-2021 09:02:59
Elapsed time: 2 mins 57 secs
Priority: 10
FD Files Written: 7
SD Files Written: 7
FD Bytes Written: 28,688,204,350 (28.68 GB)
SD Bytes Written: 28,688,205,321 (28.68 GB)
Rate: 162080.3 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s): 1000062
Volume Session Id: 441
Volume Session Time: 1619188643
Last Volume Bytes: 2,327,573,436,416 (2.327 TB)
Non-fatal FD errors: 0
SD Errors: 1
FD termination status: OK
SD termination status: Fatal Error
Bareos binary info: official Bareos subscription
Job triggered by: Scheduler
Termination: *** Backup Error ***

2021-05-04 09:02:59	elbct3-sd JobId 220137: Elapsed time=00:02:57, Transfer rate=162.0 M Bytes/second
2021-05-04 09:02:59	elbct3-sd JobId 220137: Fatal error: stored/askdir.cc:295 NULL Volume name. This shouldn't happen!!!
2021-05-04 09:02:56	elbct3-sd JobId 220137: Releasing device "Drive3" (/dev/tape/by-id/scsi-35001438016033618-nst).
2021-05-04 09:00:00	elbct3-sd JobId 220137: Connected File Daemon at 192.168.219.133:9102, encryption: None
2021-05-04 09:00:02	elbct3-fd JobId 220137: ACL support is enabled
2021-05-04 09:00:02	elbct3-fd JobId 220137: Extended attribute support is enabled
2021-05-04 09:00:00	bareos-dir JobId 220137: FD compression disabled for this Job because AllowCompress=No in Storage resource.
2021-05-04 09:00:00	bareos-dir JobId 220137: Encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Handshake: Cleartext
2021-05-04 09:00:00	bareos-dir JobId 220137: Connected Client: elbct3 at 192.168.219.133:9102, encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Using Device "Drive3" to write.
2021-05-04 09:00:00	bareos-dir JobId 220137: Connected Storage daemon at 192.168.219.133:9103, encryption: None
2021-05-04 09:00:00	bareos-dir JobId 220137: Start Backup JobId 220137, Job=daily-DbBck-ELVMCDB57.2021-05-04_09.00.00_19
System Description
Attached Files:
Notes
(0004342)
bruno-at-bareos   
2021-11-17 09:48   
The root cause has been found and a PR has been merged.
https://github.com/bareos/bareos/pull/975

This will appear as fix in next 21, and for our customer under subscription in 20.0.4

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1399 [bareos-core] General major always 2021-11-05 15:37 2021-11-10 09:55
Reporter: amodia Platform: Linux  
Assigned To: OS: Debian 11  
Priority: normal OS Version: 10  
Status: new Product Version: 17.2.4  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: PostgreSQL cluster version is hard-coded
Description: Upgrading Debian from 10 to 11 also upgrades the PostgreSQL cluster version from 11 to 13. Afterwards, backing up the clients still works under Bareos version 16.2.6-5 (okay, a bit old ;-) ), but backing up the BackupCatalog database does not (message: "BeforeJob: pg_dump: aborting because of server version mismatch")

The reason is that the PostgreSQL cluster version is hard-coded in two script files:

- /usr/lib/bareos/scripts/bareos-config-lib.sh (line: POSTGRESQL_BINDIR="/usr/lib/postgresql/11/bin")
- /usr/lib/bareos/scripts/make_catalog_backup.pl (line: $ENV{PATH}="/usr/lib/postgresql/11/bin:$ENV{PATH}"; )
Tags:
Steps To Reproduce: As long as the version in /usr/lib/bareos/scripts/make_catalog_backup.pl is hard-coded to "11", the error occurs. If you hard-code the version in the script to "13", everything works again.

Question:
Could the Bareos script check whether a newer version exists and then use its binaries?
Additional Information:
System Description
Attached Files:
Notes
(0004322)
bruno-at-bareos   
2021-11-10 09:55   
In a multi-version package setup, update-alternatives normally makes the newest version the default,
so /usr/bin/pg_dump should point to /etc/alternatives/pg_dump, which in turn points to where pg_dump version 13 is located.

You can use pg_dump 13 to back up an 11 cluster.
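
If /usr/bin/pg_dump does not already point at the newest version, a hedged shell sketch of the kind of auto-detection the reporter asks about (not the shipped script) could look like this:

# Pick the newest installed /usr/lib/postgresql/<version>/bin instead of hard-coding 11.
POSTGRESQL_BINDIR=$(ls -d /usr/lib/postgresql/*/bin 2>/dev/null | sort -V | tail -n 1)
export PATH="${POSTGRESQL_BINDIR}:${PATH}"
pg_dump --version    # should now report the newest installed major version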

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1311 [bareos-core] director feature always 2021-01-19 03:02 2021-11-08 09:21
Reporter: Ruth Ivimey-Cook Platform: amd64  
Assigned To: OS: Linux  
Priority: normal OS Version: Ubuntu 20.04  
Status: new Product Version: 19.2.9  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Continue spooling from client while previous spool being written
Description: I need to use the spool feature because the tape is faster than the clients, and that is fine. However, the backups take much longer because no client data is pulled while the spool file is being written to tape.

Permitting two spool files (or N files) for a single client would enable the client to be backed up independently of the speed of the tape, at the cost, obviously, of spool file space.

In general, there could be N files per client and M clients being backed up; whether it is worth supporting this general case I don't know, but there would be benefits to many sites with just a two-file option. If done, I would expect settings to control both N and M, both as numbers of spool files and as total size of spooled data. Once the spool partition or the configured size limit is reached, the client is simply paused, as is the case now.
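
For context, today's single-spool behaviour is steered by a handful of directives; a hedged sketch (directive names as in the standard Job and Device resources, values purely illustrative):

Job {
  ...
  Spool Data = yes            # spool to disk first, then despool to tape
  Spool Size = 50 GB          # despool and continue once the job has spooled this much
}

Device {
  ...
  Spool Directory = /var/lib/bareos/spool
  Maximum Spool Size = 200 GB # total spool space usable on this device
}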
Tags:
Steps To Reproduce: A setup with a client spooling to bareos at a speed slower than tape.
A spool file on an SSD.
A backup larger than the spool file.
The client is read, then waits, then is read, then waits, ...

If a double-buffered spool is permitted, the client is read, the tape write starts, the client is read again (during which the tape write completes), is read again ... and the client never waits.
Additional Information: There could be a case for starting to write to tape as soon as the spool file exists, but I think that would not help; it merely ties up the drive sooner.
Attached Files:
Notes
(0004318)
bruno-at-bareos   
2021-11-08 09:21   
Related to https://github.com/bareos/bareos/pull/886

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1331 [bareos-core] storage daemon minor always 2021-03-23 14:59 2021-10-14 20:09
Reporter: kaushalshriyan Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 7.9.2009 (Core)  
Status: new Product Version: 20.0.1  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backup data to AWS S3 bucket using the BareOS utility
Description: I am running Bareos 20.0 on CentOS Linux release 7.9.2009 (Core). I installed Bareos 20.0 from https://download.bareos.org/bareos/release/20/CentOS_7/x86_64/ and I am trying to push the backup to AWS S3 by following the document https://docs.bareos.org/TasksAndConcepts/StorageBackends.html. When I execute the run, I encounter:

23-Mar 10:28 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)

23-Mar 10:28 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok

#cat /etc/bareos/bareos-sd.d/device/droplet/aws.profile
# Generic host, but can't access buckets younger than 24h:
# Region specific host name. Can access also new buckets.
host="s3.amazonaws.com"
use_https = "true"
backend = "s3"
aws_region = "ap-south-1"
aws_auth_sign_version = "4"
access_key = "XXXXXXXXXXXXXXXXXXX"
secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
pricing_dir = ""

cat /etc/bareos/bareos-sd.d/device/AWS_S3_1-00.conf

Device {
  Name = "AWS_S3_1-00"
  Media Type = "S3_Object1"
  Archive Device = "AWS S3 Storage"
  Device Type = droplet
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.profile,bucket=droplet-bareos,chunksize=100M"
  Label Media = yes # Lets Bareos label unlabeled media
  Random Access = yes
  Automatic Mount = yes # When device opened, read it
  Removable Media = no
  Always Open = no
  Maximum Concurrent Jobs = 1
}

cat /etc/bareos/bareos-dir.d/storage/S3_Object.conf

Storage {
  Name = "S3_Object"
  Address = "krithilinux" # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  Maximum Concurrent Jobs = 1
  Collect Statistics = yes
  Allow Compression = Yes
  Device = "AWS_S3_1-00"
  Media Type = "S3_Object1"
}

#rpm -qa |grep bareos
bareos-database-postgresql-20.0.1-3.el7.x86_64
bareos-client-20.0.1-3.el7.x86_64
bareos-storage-droplet-20.0.1-3.el7.x86_64
bareos-common-20.0.1-3.el7.x86_64
bareos-database-common-20.0.1-3.el7.x86_64
bareos-database-tools-20.0.1-3.el7.x86_64
bareos-filedaemon-20.0.1-3.el7.x86_64
bareos-tools-20.0.1-3.el7.x86_64
bareos-20.0.1-3.el7.x86_64
bareos-webui-20.0.1-3.el7.x86_64
bareos-bconsole-20.0.1-3.el7.x86_64
bareos-director-20.0.1-3.el7.x86_64
bareos-storage-20.0.1-3.el7.x86_64
Tags: droplet
Steps To Reproduce: I have attached a screenshot showing how the issue is reproduced at http://192.168.0.177/bareos-webui/job/details/
Additional Information: ==> bareos.log <==
23-Mar 16:36 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:36 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:36 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:36 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:37 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:37 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:37 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:37 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:38 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:38 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:38 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:38 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:39 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:39 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:39 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:39 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:39 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:39 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:40 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:40 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:40 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:40 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:41 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:41 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:41 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:41 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:42 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:42 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:42 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:42 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:42 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
23-Mar 16:42 bareos-sd: ERROR in backends/droplet_device.cc:113 error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 0: ok
23-Mar 16:42 bareos-sd JobId 44: Warning: stored/label.cc:389 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "Full-0003" failed: ERR=stored/dev.cc:734 Could not open: AWS S3 Storage/Full-0003, ERR=Success

23-Mar 16:42 bareos-sd JobId 44: Warning: stored/mount.cc:276 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "Full-0003" failed: ERR=stored/dev.cc:734 Could not open: AWS S3 Storage/Full-0003, ERR=Success

23-Mar 16:42 apigeeapicrafterprodbackup-fd JobId 44: Fatal error: filed/dir_cmd.cc:2697 Bad response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

23-Mar 16:42 bareos-dir JobId 44: Fatal error: Director's comm line to SD dropped.
23-Mar 16:42 bareos-dir JobId 44: Error: Bareos bareos-dir 20.0.1 (02Mar21):
  Build OS: CentOS Linux release 7.6.1810 (Core)
  JobId: 44
  Job: apigeebackup.2021-03-23_16.17.19_08
  Backup Level: Full
  Client: "apigeeapicrafterprodbackup" 20.0.0 (16Dec20) CentOS Linux release 7.6.1810 (Core),redhat
  FileSet: "LinuxConfig" 2021-03-22 19:26:06
  Pool: "Full" (From command line)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "S3_Object" (From Job resource)
  Scheduled time: 23-Mar-2021 16:17:19
  Start time: 23-Mar-2021 16:17:21
  End time: 23-Mar-2021 16:42:58
  Elapsed time: 25 mins 37 secs
  Priority: 10
  FD Files Written: 0
  SD Files Written: 0
  FD Bytes Written: 0 (0 B)
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 1
  Volume Session Time: 1616496336
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Job triggered by: User
  Termination: *** Backup Error ***
System Description
Attached Files: bareossslerror.png (649,167 bytes) 2021-03-23 14:59
https://bugs.bareos.org/file_download.php?file_id=462&type=bug
steptoreproduce.png (611,432 bytes) 2021-03-23 14:59
https://bugs.bareos.org/file_download.php?file_id=463&type=bug
Notes
(0004107)
ideacloud   
2021-03-30 06:25   
I have the same problem. No AWS connections using the plugin work correctly. They all show the same error in
(0004134)
perhallenborg   
2021-06-02 11:46   
I also have the same problem.

02-Jun 10:49 bareos-sd JobId 16: Warning: stored/label.cc:389 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "S3_Full-0003" failed: ERR=stored/dev.cc:734 Could not open: AWS S3 Storage/S3_Full-0003, ERR=Success

02-Jun 10:50 bareos-sd JobId 16: Warning: stored/label.cc:389 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "S3_Full-0003" failed: ERR=stored/dev.cc:734 Could not open: AWS S3 Storage/S3_Full-0003, ERR=Success

02-Jun 10:50 bareos-sd JobId 16: Warning: stored/mount.cc:276 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "S3_Full-0003" failed: ERR=stored/dev.cc:734 Could not open: AWS S3 Storage/S3_Full-0003, ERR=Success

02-Jun 10:50 bareos-sd JobId 16: Please mount append Volume "S3_Full-0003" or label a new one for:
    Job: S3_backup-bareos-fd.2021-06-02_10.38.04_45
    Storage: "AWS_S3_1-00" (AWS S3 Storage)
    Pool: S3_Full
    Media type: S3_Object1
02-Jun 10:56 bareos-sd JobId 16: Warning: stored/mount.cc:276 Open device "AWS_S3_1-00" (AWS S3 Storage) Volume "S3_Full-0003" failed: ERR=stored/dev.cc:734 Could not open: AWS S3 Storage/S3_Full-0003, ERR=Success


I can mount via console but not label:

*mount
The defined Storage resources are:
     1: File
     2: S3_Object
Select Storage resource (1-2): 2
Connecting to Storage daemon S3_Object at bareos-test:9103 ...
3001 OK mount requested. Device="AWS_S3_1-00" (AWS S3 Storage)
*label
The defined Storage resources are:
     1: File
     2: S3_Object
Select Storage resource (1-2): 2
Enter new Volume name: S3_Full-0004
Defined Pools:
     1: Scratch
     2: S3_Incremental
     3: S3_Full
     4: S3_Differential
     5: Incremental
     6: Full
     7: Differential
Select the Pool (1-7): 3
Connecting to Storage daemon S3_Object at bareos-test:9103 ...
Sending label command for Volume "S3_Full-0004" Slot 0 ...
3910 Unable to open device ""AWS_S3_1-00" (AWS S3 Storage)": ERR=stored/dev.cc:734 Could not open: AWS S3 Storage/S3_Full-0004, ERR=Success

Label command failed for Volume S3_Full-0004.
Do not forget to mount the drive!!!


Centos 7.4, latest bareos
(0004135)
perhallenborg   
2021-06-02 17:00   
(Last edited: 2021-06-03 09:15)
Problem solved. Added AWS policy "AmazonS3FullAccess" to the bucket and it connected and tried to label and write to volumes. But no volumes were written:

105 2021-06-02 17:04:56 bareos-sd JobId 19: End of medium on Volume "S3_Full-0015" Bytes=104,832,224 Blocks=1,625 at 02-Jun-2021 17:04.
104 2021-06-02 17:04:56 bareos-sd JobId 19: Error: stored/block.cc:822 Write error on fd=0 at file:blk 0:104832223 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=No such file or directory.
103 2021-06-02 17:04:56 bareos-sd JobId 19: Error: stored/block.cc:803 Write error at 0:104832223 on device "AWS_S3_1-00" (AWS S3 Storage). ERR=No such file or directory.
102 2021-06-02 17:04:30 bareos-sd JobId 19: New volume "S3_Full-0015" mounted on device "AWS_S3_1-00" (AWS S3 Storage) at 02-Jun-2021 17:04.
101 2021-06-02 17:04:30 bareos-sd JobId 19: Wrote label to prelabeled Volume "S3_Full-0015" on device "AWS_S3_1-00" (AWS S3 Storage)
100 2021-06-02 17:04:30 bareos-sd JobId 19: Labeled new Volume "S3_Full-0015" on device "AWS_S3_1-00" (AWS S3 Storage).
99 2021-06-02 17:04:30 bareos-dir JobId 19: Created new Volume "S3_Full-0015" in catalog.

No complaints about permissions or anything similar. Just nothing written.
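
For reference, instead of the broad AmazonS3FullAccess managed policy mentioned above, a hedged sketch of a least-privilege IAM policy for the access key used in the droplet profile could look like this (bucket name taken from the original report, replace with your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::droplet-bareos"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::droplet-bareos/*"
    }
  ]
}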

(0004136)
Mc.Sim   
2021-06-03 09:53   
@perhallenborg
Have you created the bucket `droplet-bareos` before running the task?
One suggestion: try to test with these options (iothreads and retries):
`Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.profile,bucket=droplet-bareos,chunksize=100M,iothreads=0,retries=1"`
(0004137)
perhallenborg   
2021-06-03 10:03   
Bucket created, but with another name. I can connect and write to the bucket from my Mac, as a mounted device. So the keys and secrets work. And Bareos doesn't complain about not being connected, as far as I can see. Also tried with iothreads=2.
Tried the nightly build of Bareos 21, but same issue.
(0004138)
Mc.Sim   
2021-06-03 10:27   
So, the name of the bucket on the AWS side matches the bucket name in the Bareos settings. Correct?
When I debug my S3 storage, I find it useful to enable logging on the AWS side.
What does bconsole show you if you ask something like:
*show storage="AWS_S3_1-00"
*status storage="AWS_S3_1-00"
?
(0004139)
perhallenborg   
2021-06-03 10:33   
Buckets are a match. Tried logging on the AWS side but it is empty.
The config was taken directly from the Bareos wiki.

Enter a period (.) to cancel a command.
*show storage="AWS_S3_1_00"
storage resource AWS_S3_1_00 not found.
You have messages.
*status storage="AWS_S3_1_00"
Storage resource "AWS_S3_1_00": not found
The defined Storage resources are:
1: File
2: S3_Object
Select Storage resource (1-2): 2
Connecting to Storage daemon S3_Object at bareos-test.adnet.adlibris.se:9103

bareos-sd Version: 21.0.0~pre540.f3597a14d (01 June 2021) CentOS Linux release 7.9.2009 (Core)
Daemon started 03-Jun-21 10:13. Jobs: run=1, running=0, pre-release version binary
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 bwlimit=0kB/s

Running Jobs:
No Jobs running.
====

Jobs waiting to reserve a drive:
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
===================================================================
    23 Full 570 58.88 M Error 03-Jun-21 08:41 S3_backup-bareos-fd
    24 Full 570 58.88 M Cancel 03-Jun-21 08:52 S3_backup-bareos-fd
    25 Incr 0 0 Cancel 03-Jun-21 09:06 S3_backup-bareos-fd
    26 Full 0 0 Cancel 03-Jun-21 09:10 S3_backup-bareos-fd
    27 Incr 0 0 Cancel 03-Jun-21 09:20 S3_backup-bareos-fd
    28 Incr 0 0 Error 03-Jun-21 09:23 S3_backup-bareos-fd
    29 Full 54,796 2.424 G Cancel 03-Jun-21 09:26 S3_backup-bareos-fd
    30 Incr 16 2.967 M Error 03-Jun-21 10:00 S3_backup-bareos-fd
    31 Full 61,355 2.524 G Error 03-Jun-21 10:03 S3_backup-bareos-fd
    32 Full 61,355 2.524 G Error 03-Jun-21 10:26 S3_backup-bareos-fd
====

Device status:

Device "AWS_S3_1-00" (AWS S3 Storage) is not open.
Backend connection is working.
No pending IO flush requests.
==
====

Used Volume status:
====
(0004140)
Mc.Sim   
2021-06-03 10:53   
Ahhh, *show storage="AWS_S3_1-00" should be *show storage="S3_Object"

> Backend connection is working.

That tells you that the Bareos SD can connect.

This is my Device configuration:

```
#cat storage/bareos-sd.d/device/WASABI_ObjectStorage.conf
Device {
  Name = WASABI_ObjectStorage
  Media Type = S3_Object1
  Archive Device = WASABI Object Storage
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/wasabi_us-east-1.profile,bucket=i-clf1-bareos,chunksize=100M,iothreads=7,retries=0"
  Device Type = droplet
  Label Media = yes # lets Bareos label unlabeled media
  Random Access = yes
  Automatic Mount = yes # when device opened, read it
  Removable Media = no
  Always Open = no
  Maximum Concurrent Jobs = 1
}
# cat storage/bareos-sd.d/device/droplet/wasabi_us-east-1.profile
host = s3.us-east-1.wasabisys.com:443
use_https = true
backend = s3
aws_region = us-east-1
aws_auth_sign_version = 4
access_key = ""
secret_key = ""
pricing_dir = ""
#

```

I can see only one difference: the host contains the port 443.
> host = s3.us-east-1.wasabisys.com:443

Try to use it.
(0004141)
perhallenborg   
2021-06-03 13:04   
I already do, doesn't work without it.

host = <bucketname>.s3-eu-north-1.amazonaws.com:443 # This parameter is only used as baseurl and will be prepended with bucket and location set in device resource to form correct url
use_https = true
access_key = xx
secret_key = xx
pricing_dir = "/etc/bareos/pricing" # If not empty, an droplet.csv file will be created which will record all S3 operations.
backend = s3
aws_auth_sign_version = 4 # Currently, AWS S3 uses version 4. The Ceph S3 gateway uses version 2.
aws_region = eu-north-1
(0004308)
bmwiedemann   
2021-10-14 20:09   
I have also hit this issue, and one reason behind it is that the SSL certificate is valid for *.s3.amazonaws.com, but that * only matches one label and no dots. SSL certificates for *.*.fqdn are also not allowed.

curl https://some-bucket.eu-central-1.s3.amazonaws.com
curl: (60) SSL: no alternative certificate subject name matches target host name 'some-bucket.eu-central-1.s3.amazonaws.com'

OTOH, this says it is valid:
openssl s_client -connect some-bucket.eu-central-1.s3.amazonaws.com:443
so maybe one fix could be to let libdroplet do the verification the OpenSSL way.
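
Following up on the certificate observation, a hedged sketch of a droplet profile that targets the region-specific endpoint (so the host name stays within one wildcard level); endpoint and region are examples only, the bucket stays in the Device Options, and whether libdroplet then forms a matching hostname depends on its request style:

# region-specific endpoint instead of the generic s3.amazonaws.com
host = "s3.ap-south-1.amazonaws.com:443"
use_https = "true"
backend = "s3"
aws_region = "ap-south-1"
aws_auth_sign_version = "4"

A quick certificate check from the SD host:

openssl s_client -connect s3.ap-south-1.amazonaws.com:443 -servername s3.ap-south-1.amazonaws.com </dev/null 2>/dev/null | openssl x509 -noout -subject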

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
816 [bareos-core] webui major always 2017-05-02 14:48 2021-10-07 10:22
Reporter: Kvazyman Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 8  
Status: resolved Product Version: 16.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Incorrect display value of the item Retention/Expiration depending on the selected localization
Description: Incorrect display value of the item Retention/Expiration depending on the selected localization
Tags:
Steps To Reproduce: Log in with the English localization. Go to https://YOUR_SITE_BAREOS/bareos-webui/media
Select some volume and look at its Retention/Expiration.

Log in with the Russian localization. Go to https://YOUR_SITE_BAREOS/bareos-webui/media
Select the same volume and look at its Retention/Expiration (Задержка/Окончание).

Compare the values. The values differ by 5 days.
Additional Information:
System Description
Attached Files:
Notes
(0004294)
frank   
2021-10-07 10:22   
Fix committed to bareos master branch with changesetid 15298.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1358 [bareos-core] file daemon block always 2021-05-28 00:04 2021-09-17 21:31
Reporter: Dark_Angel Platform: x86  
Assigned To: OS: Windows  
Priority: normal OS Version: 10  
Status: new Product Version: 19.2.9  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Windows File Daemon VSS Snapshots are corrupt
Description: In short: Windows backups that are taken from system volumes are corrupt.

Longer version:
I have a setup with a Bareos Director and 4 Windows 10 machines of various versions. All of the Windows bareos-fd packages are 19.2.9. The Director is 19.2.9.

The common issue between them is that I cannot get a consistent Windows backup of the Windows drive.

The issues are the following:
- On an older Windows version (1709), the VSS snapshot is created and the backup is successful. However, when the snapshot is inspected during the backup using Shadow Explorer, hive files (i.e. the registry) are corrupt. The same is observed when the backup is restored: the hives are corrupt.
- On newer versions of Windows (1809+), VSS snapshots are created, but during the backup I get "Cannot open "C:/Windows/SoftwareDistribution/DataStore/DataStore.edb": ERR=The process cannot access the file because it is being used by another process." on multiple files, regardless of the settings on the director, i.e. vssenable=yes, portable=no. As a result the backup is corrupt and cannot be used for recovery.

At the same time, if I use System Restore to create a Volume Shadow Copy and analyze it via Shadow Explorer, the hives are consistent on all Windows versions.

I haven't updated to version 20.x yet, since I haven't seen anything related to changes in the VSS mechanism.
Tags:
Steps To Reproduce: Assuming you have Director (Linux, Centos 6, v. 19.2.9) and File daemon (Windows x64 10, v19.2.9) installed:

1. Put the client configuration on Director like this:
Client {
    Name = Client
    Address = 10.0.4.21
    Password = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
    # uncomment the following if using bacula
    # Catalog = "MyCatalog"
    TLS Enable = yes
    TLS Require = yes
    Maximum Concurrent Jobs = 2
    Port = 9104
    Auto Prune = yes
# File Retention =
# Enabled = no

}

FileSet {
    Name = Client-file-set

    Enable VSS = yes

    Include {

        # Exclude first
        Options {
          WildFile = "[A-Z]:/pagefile.sys"
          WildFile = "[A-Z]:/hiberfil.sys"
          WildFile = "[A-Z]:/swapfile.sys"
          WildDir = "[A-Z]:/RECYCLER"
          WildDir = "[A-Z]:/$RECYCLE.BIN"
          WildDir = "[A-Z]:/System Volume Information"

          WildFile = "[A-Z]:/VSC_Test/pagefile.sys"
          WildFile = "[A-Z]:/VSC_Test/hiberfil.sys"
          WildFile = "[A-Z]:/VSC_Test/swapfile.sys"
          WildDir = "[A-Z]:/VSC_Test/RECYCLER"
          WildDir = "[A-Z]:/VSC_Test/$RECYCLE.BIN"
          WildDir = "[A-Z]:/VSC_Test/System Volume Information"
          Exclude = yes
        }

        Options {
          RegExFile = ".*"
          Signature = MD5
          Compression = GZIP
          portable = no
          onefs = no
        }

        File = "C:/"
        File = "E:/"
        File = "F:/"

    }
}

Job {
    Name = Client-full-backup
    Type = Backup
    Level = Incremental
    Client = Client
    FileSet = Client-file-set
    Schedule = "WeeklyCycle"
    Storage = File
    Messages = Standard
    Pool = Full
    Full Backup Pool = Full
    Differential Backup Pool = Differential
    Incremental Backup Pool = Incremental
    Reschedule On Error = yes
    Reschedule Times = 120
    Reschedule Interval = 600
    # Every 40 days we force full backups. Otherwise they will be executed every 1st Saturday of the every month
    Max Full Interval = 5184000

    # Do not allow duplicates
    Allow Duplicate Jobs = no
    Cancel Lower Level Duplicates = yes

    Client Run Before Job = "C:/create_lock_file.bat"

    # There is a possibility to wake up, but if computer is asleep, then do not start it up - will backup later on
    Client Run After Job = "C:/remove_lock_file.bat"
    Write Bootstrap = "/mnt/safecopy/bareos/bootstraps/%c.bsr"
}

Job {
    Name = Client-full-restore
    Type = Restore
    Client = Client
    FileSet = Client-file-set
    Storage = File
    Pool = Full
    Full Backup Pool = Full
    Differential Backup Pool = Differential
    Incremental Backup Pool = Incremental
    Messages = Standard
    Where = "/"
}


2. Initiate backup from Director
3. Wait for VSS creation to complete on the File Daemon
4. While backup is running, Open Shadow Explorer
5. Navigate to C:\Windows\System32\config\SYSTEM
6. Copy file from Shadow Explorer on other drive than being backed up
7. Open regedit
8. Navigate to key HKEY_LOCAL_MACHINE and put the cursor there
9. Open "File -> Load Hive" and point to file copied from Shadow Copy.

Expected result:
Hive is imported correctly

Actual result:
Hive import returns error with the message that file is corrupt.

10. Wait for backup to finish
On newer (1809+) versions of Windows 10:

Expected result:
No errors related to file access during backup since VSS is used.

Actual result:
Multiple errors on files which are currently open. Example:
Cannot open "C:/Windows/SoftwareDistribution/DataStore/DataStore.edb": ERR=The process cannot access the file because it is being used by another process.
Additional Information: In all cases VSS Writers are in stable state, i.e.:

27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "Task Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "VSS Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "System Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "ASR Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "WMI Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "MSSearch Service Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "Registry Writer", State: 0x1 (VSS_WS_STABLE)
27-May 23:29 Client JobId 1509: VSS Writer (BackupComplete): "COM+ REGDB Writer", State: 0x1 (VSS_WS_STABLE)
System Description
Attached Files:
Notes
(0004268)
Dark_Angel   
2021-09-17 21:31   
Updated to latest 20.0.1 - issue persists.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1381 [bareos-core] webui major always 2021-08-26 10:50 2021-09-17 12:55
Reporter: jens Platform: Linux  
Assigned To: frank OS: Debian  
Priority: normal OS Version: 10  
Status: acknowledged Product Version: 19.2.10  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Webui File selection list shows error when trying to restore
Description: BareOS version: 19.2.7-2

When selecting a backup client with lots of (millions of) files and folders, the File selection area shows the following error.

{"id":"#","xhr":{"readyState":4,"responseText":"\n\n\n \n \n \n \n\n \n \n\n\n \n \n\n\n\n\n \n\n \n\n
\n \n\n
An error occurred
\n
An error occurred during execution; please try again later.
\n\n\n
\n
Additional information:
\n
Zend\\Json\\Exception\\RuntimeException
\n
\n
File:
\n
\n
/usr/share/bareos-webui/vendor/zendframework/zend-json/src/Json.php:68
\n
\n
Message:
\n
\n
Decoding failed: Syntax error
\n
\n
Stack trace:
\n
\n
#0 /usr/share/bareos-webui/module/Restore/src/Restore/Model/RestoreModel.php(54): Zend\\Json\\Json::decode('', 1)\n#1 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(481): Restore\\Model\\RestoreModel->getDirectories(Object(Bareos\\BSock\\BareosBSock), '67', '#')\n#2 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(555): Restore\\Controller\\RestoreController->getDirectories()\n#3 /usr/share/bareos-webui/module/Restore/src/Restore/Controller/RestoreController.php(466): Restore\\Controller\\RestoreController->buildSubtree()\n#4 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): Restore\\Controller\\RestoreController->filebrowserAction()\n#5 [internal function]: Zend\\Mvc\\Controller\\AbstractActionController->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#6 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#7 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#8 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#9 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\\Mvc\\Controller\\AbstractController->dispatch(Object(Zend\\Http\\PhpEnvironment\\Request), Object(Zend\\Http\\PhpEnvironment\\Response))\n#10 [internal function]: Zend\\Mvc\\DispatchListener->onDispatch(Object(Zend\\Mvc\\MvcEvent))\n#11 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\\Mvc\\MvcEvent))\n#12 /usr/share/bareos-webui/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\\EventManager\\EventManager->triggerListeners('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#13 /usr/share/bareos-webui/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\\EventManager\\EventManager->trigger('dispatch', Object(Zend\\Mvc\\MvcEvent), Object(Closure))\n#14 /usr/share/bareos-webui/public/index.php(24): Zend\\Mvc\\Application->run()\n#15 {main}
\n
\n
\n\n\n
\n\n \n \n\n\n","status":500,"statusText":"Internal Server Error"}}
Tags:
Steps To Reproduce: In restore tab select a client with a lot of files and folders ( File Server )
Additional Information: From Apache error log:

PHP Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zend
framework/zend-view/src/Helper/HeadLink.php on line 403, referer: http://xxx.xxx.xxx.xxx/bareos-webui/restore/?jobid=&client=<client>&restoreclient=&restorejob=&where=&files
et=&mergefilesets=0&mergejobs=0&limit=2000

PHP Notice: compact(): Undefined variable: extras in /usr/share/bareos-webui/vendor/zend
framework/zend-view/src/Helper/HeadLink.php on line 403, referer: http://xxx.xxx.xxx.xxx/bareos-webui/storage//

System Description
Attached Files: bareos_webui.png (84,832 bytes) 2021-08-26 10:50
https://bugs.bareos.org/file_download.php?file_id=479&type=bug
png

bareos-webui-nightly.png (9,043 bytes) 2021-08-27 12:03
https://bugs.bareos.org/file_download.php?file_id=480&type=bug
png

bconsole_api2_test_query_output.txt (53,111 bytes) 2021-09-17 12:55
https://bugs.bareos.org/file_download.php?file_id=482&type=bug
Notes
(0004223)
arogge   
2021-08-26 10:59   
Thanks for the report.
Could you try and reproduce this with the latest webui from the nightly build? It can be installed on a different host or VM and will be able to talk to your 19.2 director.
Also, if you can still reproduce the issue there, it would really help if you could tell us how many jobs were merged here (i.e. how many incrementals are on top of your full) and how many files are in each of them. This would probably improve our chances of reproducing the problem.

Having said that, the workaround (that you probably already knew) is to restore from within bconsole.
(0004224)
jens   
2021-08-26 11:09   
Thank you for the ultra fast response.
I will try my best to give the nightly build a try, but it's going to take me some time to arrange in our environment.

Regarding your question: this is the only backup we took from that machine, into a long-term tape archive pool.
There are no incrementals on top.
(0004225)
arogge   
2021-08-26 11:35   
Alright! Would you still share the exact number of files in that backup job, so we can produce a test case with the same number of files?
(0004226)
jens   
2021-08-26 11:37   
FD Files Written: 155,482,903
SD Files Written: 155,482,903
FD Bytes Written: 26,776,974,356,682 (26.77 TB)
SD Bytes Written: 26,805,737,848,974 (26.80 TB)
(0004228)
jens   
2021-08-27 12:03   
(Last edited: 2021-08-27 12:20)
So I tried the latest nightly build from here: http://download.bareos.org/bareos/experimental/nightly/Debian_10/all/
Unfortunately it does not want to connect to my 19.2 director.

(0004229)
jens   
2021-08-27 12:19   
Also tried with the bareos-webui_20.0.1-3
It is able to connect to my 19.2 director but throws the exact same error as initially reported
(0004230)
arogge   
2021-08-27 12:26   
Yes, sorry. That version check was introduced not too long ago, I simply forgot. Thanks for reproducing with 20.0.1 though, that should be recent enough.
(0004241)
frank   
2021-08-31 16:00   
jens:

It seems we are receiving malformed JSON from the director here as the decoding throws a syntax error.

We should have a look at the JSON result the director provides for the particular directory you are
trying to list in webui by using bvfs API commands (https://docs.bareos.org/DeveloperGuide/api.html#bvfs-api) in bconsole.


In bconsole please do the following:


1. Get the jobid of the job that causes the described issue. Replace the jobid from the example below with your specific jobid, e.g. the jobid of the full backup you mentioned.

*.bvfs_lsdirs path= jobid=142
38 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A /


2. Navigate to the folder which causes problems by using pathid, pathids will differ at yours.

*.bvfs_lsdirs pathid=37 jobid=142
37 0 0 A A A A A A A A A A A A A A .
38 0 0 A A A A A A A A A A A A A A ..
57 0 0 A A A A A A A A A A A A A A ceph/
*

*.bvfs_lsdirs pathid=57 jobid=142
57 0 0 A A A A A A A A A A A A A A .
37 0 0 A A A A A A A A A A A A A A ..
56 0 0 A A A A A A A A A A A A A A groups/
*

*.bvfs_lsdirs pathid=56 jobid=142
56 0 0 A A A A A A A A A A A A A A .
57 0 0 A A A A A A A A A A A A A A ..
51 11817 142 P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C group_aa/

Let's pretend group_aa (pathid 51) is the folder we can not list properly in webui.


3. Switch to API mode 2 (JSON) now and list the content of folder group_aa (pathid 51) to get the JSON result.

*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}*.bvfs_lsdirs pathid=51 jobid=142
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 51,
        "fileid": 11817,
        "jobid": 142,
        "lstat": "P0A V9T EHt CcR A A A 8AA BAA L4 BhLhQA BhLhP/ BhLhP/ A A C",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 64768,
          "ino": 89939,
          "mode": 16877,
          "nlink": 10001,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 245760,
          "atime": 1630409728,
          "mtime": 1630409727,
          "ctime": 1630409727
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 56,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 52,
        "fileid": 1813,
        "jobid": 142,
        "lstat": "P0A BAGIj EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d1/",
        "fullpath": "/ceph/groups/group_aa/d1/",
        "stat": {
          "dev": 64768,
          "ino": 16802339,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 54,
        "fileid": 1814,
        "jobid": 142,
        "lstat": "P0A CCEkI EHt C A A A G BAA A BhLgvm Bg/+Bp Bg/+Bp A A C",
        "name": "d2/",
        "fullpath": "/ceph/groups/group_aa/d2/",
        "stat": {
          "dev": 64768,
          "ino": 34097416,
          "mode": 16877,
          "nlink": 2,
          "uid": 0,
          "gid": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 6,
          "atime": 1630407654,
          "mtime": 1627381865,
          "ctime": 1627381865
        },
        "linkfileindex": 0
      }
    ]
  }
}*


Do you get valid JSON at this point as you can see in the example above?
Please provide the output you get in your case if possible.



Note:

You can substitute step 3 with something like the following if the output is too big:

[root@centos7]# cat script
.api 2
.bvfs_lsdirs pathid=51 jobid=142
quit

[root@centos7]# cat script | bconsole > out.txt

Remove everything except the JSON you received from the .bvfs_lsdirs command from the out.txt file.

Validate the JSON output with a tool like https://stedolan.github.io/jq/ or https://jsonlint.com/ for example.
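
For example, a small hedged sketch of that last validation step, assuming out.txt has been trimmed down to just the JSON:

# exit status 0 means the file parses as JSON; any syntax error is reported
jq empty out.txt && echo "out.txt is valid JSON" || echo "out.txt is NOT valid JSON"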
(0004243)
jens   
2021-09-01 10:26   
Hi Frank,

thanks for your feedback.
Please note, I receive the JSON error on top level already.
Meaning I am not able to select a folder at all, yet.

I will try to follow your instructions and see how far I can get.
Will keep you posted.

Thank you once again for your support.
Much appreciated.
(0004267)
jens   
2021-09-17 12:55   
Hi Frank,

I found some time to go over your instruction and did some intense testing.


First I queried the top 2 folder levels for the jobid in question.
--------------------------------------------------------------------------------------------
*.bvfs_lsdirs path= jobid=67
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
32 0 0 A A A A A A A A A A A A A A .
31 0 0 A A A A A A A A A A A A A A /

*.bvfs_lsdirs pathid=31 jobid=67
31 0 0 A A A A A A A A A A A A A A .
32 0 0 A A A A A A A A A A A A A A ..
3037697 0 0 A A A A A A A A A A A A A A imgrep/
3037699 0 0 A A A A A A A A A A A A A A storage/


Since the issue in the webui already occurs at this level, I switched to API level 2 right away
but couldn't find anything obviously malformed in the output.
----------------------------------------------------------------------------------------------------------------------------
*.api 2
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "api": 2
  }
}
*.bvfs_lsdirs pathid=31 jobid=67
{
  "jsonrpc": "2.0",
  "id": null,
  "result": {
    "directories": [
      {
        "type": "D",
        "pathid": 31,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": ".",
        "fullpath": ".",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 32,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "..",
        "fullpath": "..",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 3037697,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "imgrep/",
        "fullpath": "/imgrep/",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      },
      {
        "type": "D",
        "pathid": 3037699,
        "fileid": 0,
        "jobid": 0,
        "lstat": "A A A A A A A A A A A A A A",
        "name": "storage/",
        "fullpath": "/storage/",
        "stat": {
          "dev": 0,
          "ino": 0,
          "mode": 0,
          "nlink": 0,
          "user": "root",
          "group": "root",
          "rdev": 0,
          "size": 0,
          "atime": 0,
          "mtime": 0,
          "ctime": 0
        },
        "linkfileindex": 0
      }
    ]
  }
}*

So I went one level deeper into the imgrep folder, but everything still seems to work fine and look valid.
-------------------------------------------------------------------------------------------------------------------------------------------------
-> see attachment ( please note I've shortened the output to a few thousand lines and anonymized all the folder names )


What is interesting to me is that this is the only machine we are having trouble with.
Maybe it is something with the filesystem layout there.
Therefore, and to give you a better picture, this is what the backup client looks like:

OS:
Distributor ID: Debian
Description: Debian GNU/Linux 8.7 (jessie)
Release: 8.7
Codename: jessie

Storage mounts:
-------------------------
/dev/vdc1 on /storage/bucket-00 type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
/dev/vdb1 on /imgrep type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)

Disk Free status:
------------------------
/dev/vdc1 5.0T 4.8T 293G 95% /storage/bucket-00
/dev/vdb1 23T 23T 348G 99% /imgrep


Client Fileset:
-------------------
FileSet {
  Name = "xxxxxxxxxxxxxxxxx"
  Include {
    Options {
      Signature = SHA1
      One FS = No
      Checkfilechanges = yes
    }
    File = /imgrep/images
    File = /storage/bucket-00/images
  }
}



Please let me know if you need any additional information or contribution.

Best, Jens

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1371 [bareos-core] configuration gui minor always 2021-07-20 09:27 2021-09-09 09:11
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backup GUI awfully slow when used over TCP/IP remotely
Description: Hi,


We have created a new setup where we'll use one GUI for multiple directors. Because of this, the GUI runs on a different system than the director itself. This is not a local LAN but over the internet (Gigabit, though).

We noticed a lot of actions are extremely slow, like loading the main dashboard, but also browsing "Jobs > Run", etc.


Maybe some socket options have to be set in order to have better performance?

Example: https://www.techrepublic.com/article/take-advantage-of-tcp-ip-options-to-optimize-data-transmission/

See the conclusions there, but there are many other possible settings, like SO_KEEPALIVE to make sure connections stay open, and maybe others?

Maybe some API calls/requests should be grouped more as well?


Has anyone else experienced this before? Or is everyone using the GUI on the same server as the director?
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files:
Notes
(0004252)
bruno-at-bareos   
2021-09-07 09:34   
Thanks for your report.
Would you mind sharing some measured real numbers (e.g. using the dev console)? "Slow" expresses a feeling more than something tangible ;-)
Also some metrics (iperf, and so on) between the console host and the different directors.
Is NAT in use?
Did you also try to activate and use HTTP/2 for serving the web page? Did it make any difference for you?
(0004253)
hostedpower   
2021-09-07 09:43   
Hi, we use HTTP/2 by default.

I think certain requests require a lot of "small" API calls. For example, going to Jobs > Run is exceptionally slow compared to some other pages.
(0004255)
bruno-at-bareos   
2021-09-08 09:36   
Can't you share some numbers?
As you certainly see, loading the dashboard preloads all combo lists; of course this can vary depending on the number of items the API has to load.
Having numbers could help us reproduce, or at least understand in which situations the symptoms you describe are seen.

Open the developer console in your browser (normally F12), go to the Network tab and pick the numbers from there; reporting them here will help both you and us.
(0004257)
hostedpower   
2021-09-09 09:11   
There is not much to see in the console; it's the call to https://xxx/job/run/ which takes over 4 seconds, for example. It must be the underlying API calls it makes; it's not the resources :)
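
One way to put a number on it (a hedged sketch; the URL is the reporter's placeholder and an authenticated webui session cookie would be needed):

curl -s -o /dev/null \
     -H "Cookie: <webui session cookie>" \
     -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
     https://xxx/job/run/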

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1015 [bareos-core] webui minor always 2018-10-01 12:05 2021-08-30 11:52
Reporter: Gordon Klimm Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 9  
Status: resolved Product Version: 17.2.4  
Product Build: Resolution: fixed  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: pool name "1Test" throws 404 Error
Description: When using a pool named "1Test", the webui throws a 404: "the requested URL could not be matched by routing."

Pool names starting with a digit throw this error.

bconsole behaves fine.
Tags:
Steps To Reproduce: 1) create a pool according to this:

   Pool { Name=1ThisWillFail }

2) reload config

3) try to access pool "1ThisWillFail" using webui
Additional Information:
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1378 [bareos-core] director major always 2021-08-11 00:59 2021-08-23 14:55
Reporter: progserega Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 10  
Status: new Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: director shows different status for job
Description: The catalog shows job 76069 as failed, but status dir shows the job as running:

*list jobs client=rsk40srv018-fd
+--------+-----------------------------------+----------------+---------------------+------+-------+-----------+-------------------+-----------+
| jobid | name | client | starttime | type | level | jobfiles | jobbytes | jobstatus |
+--------+-----------------------------------+----------------+---------------------+------+-------+-----------+-------------------+-----------+
| 76,090 | backup-rsk40srv018-fsDocs-obmen | rsk40srv018-fd | 2021-08-10 21:00:01 | B | I | 0 | 0 | f |
| 76,069 | backup-rsk40srv018-fsDocs-upr | rsk40srv018-fd | 2021-08-11 05:01:30 | B | I | 0 | 0 | f |
+--------+-----------------------------------+----------------+---------------------+------+-------+-----------+-------------------+-----------+

* status dir

Running Jobs:
Console connected at 11-aug-2021 08:33
 JobId Level Name Status
======================================================================
 76052 Increme backup-rsk40srv035-1cAttachments.2021-08-10_21.00.00_43 is running
 76063 Increme backup-rsk40srv035-1cLogsObr.2021-08-10_21.00.00_54 is waiting on max Storage jobs
 76069 Increme backup-rsk40srv018-fsDocs-upr.2021-08-10_21.00.01_00 is running
 76081 Increme backup-rsk40srv018-fsDocs-pues.2021-08-10_21.00.01_13 is waiting on max Storage jobs

Tags:
Steps To Reproduce:
Additional Information: *list joblog jobid=76069

 2021-08-11 05:01:29 bareos-dir JobId 76069: shell command: run BeforeJob "ssh backup@bareos-fs-sd.prim.corp.com /scripts/backup/bacula_sd_free_space_check /dev/sda"
 2021-08-11 05:01:29 bareos-dir JobId 76069: BeforeJob: on device /dev/sda (mount point: /mnt/msa2040/backup) free: 22635 Gb.
 2021-08-11 05:01:29 bareos-dir JobId 76069: Start Backup JobId 76069, Job=backup-rsk40srv018-fsDocs-upr.2021-08-10_21.00.01_00
 2021-08-11 05:01:29 bareos-dir JobId 76069: Connected Storage daemon at bareos-fs-sd.prim.corp.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-08-11 05:01:29 bareos-dir JobId 76069: Using Device "ObmenStorage" to write.
 2021-08-11 05:01:29 bareos-dir JobId 76069: Connected Client: rsk40srv018-fd at rsk40srv018.corp.com:9104, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-08-11 05:01:29 bareos-dir JobId 76069: Handshake: Immediate TLS
 2021-08-11 05:01:29 bareos-dir JobId 76069: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-08-11 05:01:30 bareos-sd JobId 76069: Volume "rsk40srv018-fsPool-upr-2020.12.19-28" previously written, moving to end of data.
 2021-08-11 05:01:30 bareos-sd JobId 76069: Ready to append to end of Volume "rsk40srv018-fsPool-upr-2020.12.19-28" size=4418893262
 2021-08-11 05:01:29 rsk40srv018-fd JobId 76069: Created 18 wildcard excludes from FilesNotToBackup Registry key
 2021-08-11 05:01:30 rsk40srv018-fd JobId 76069: Connected Storage daemon at bareos-fs-sd.prim.corp.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
 2021-08-11 05:01:32 rsk40srv018-fd JobId 76069: Generate VSS snapshots. Driver="Win64 VSS", Drive(s)="J"
 2021-08-11 05:01:32 rsk40srv018-fd JobId 76069: VolumeMountpoints are not processed as onefs = yes.
 2021-08-11 07:01:01 bareos-sd JobId 76069: User defined maximum volume capacity 4,650,000,000 exceeded on device "ObmenStorage" (/mnt/msa2040/backup/bareos//ObmenStorage).
 2021-08-11 07:01:01 bareos-sd JobId 76069: End of medium on Volume "rsk40srv018-fsPool-upr-2020.12.19-28" Bytes=4,649,975,246 Blocks=72,080 at 11-aug-2021 07:01.
 2021-08-11 07:01:02 bareos-sd JobId 76069: Recycled volume "rsk40srv018-fsPool-upr-2020.12.19-29" on device "ObmenStorage" (/mnt/msa2040/backup/bareos//ObmenStorage), all previous data lost.
 2021-08-11 07:01:02 bareos-sd JobId 76069: New volume "rsk40srv018-fsPool-upr-2020.12.19-29" mounted on device "ObmenStorage" (/mnt/msa2040/backup/bareos//ObmenStorage) at 11-aug-2021 07:01.
*
System Description
Attached Files:
Notes
(0004210)
progserega   
2021-08-11 01:00   
*version
bareos-dir Version: 20.0.1 (02 March 2021) Debian GNU/Linux 10 (buster) debian Debian GNU/Linux 10 (buster)
*

root@bareos:/var/log/bareos# dpkg-query -l|grep bareos
ii bareos-bconsole 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-devel 20.0.0-1 amd64 Backup Archiving Recovery Open Sourced - development files
ii bareos-director 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-fuse 0.1.1498568444.fb2539c-13.10 all Backup Archiving Recovery Open Sourced - FUSE
ii bareos-webui 20.0.1-3 all Backup Archiving Recovery Open Sourced - webui
ii python-bareos 20.0.0-1 all Backup Archiving REcovery Open Sourced - python module (Python 2)
root@bareos:/var/log/bareos#
(0004216)
progserega   
2021-08-23 14:55   
For example:
Job 77560 is waiting on storage, but in list jobs and in the DB it has status failed:

*status dir
Running Jobs:
Console connected at 23-ав-2021 22:47
 JobId Level Name Status
======================================================================
 77558 Increme backup-im.rs.int-LinuxRoot.2021-08-23_21.00.02_22 is running
 77560 Increme backup-vs07.rs.int-LinuxRoot.2021-08-23_21.00.02_24 is waiting on max Storage jobs

*list jobs client=vs07.rs.int-fd
+--------+------------------------------+----------------+---------------------+------+-------+----------+---------------+-----------+
| jobid | name | client | starttime | type | level | jobfiles | jobbytes | jobstatus |
+--------+------------------------------+----------------+---------------------+------+-------+----------+---------------+-----------+
| 77,277 | backup-vs07.rs.int-LinuxRoot | vs07.rs.int-fd | 2021-08-20 22:42:11 | B | I | 276 | 40,821,418 | T |
| 77,385 | backup-vs07.rs.int-LinuxRoot | vs07.rs.int-fd | 2021-08-22 00:53:57 | B | D | 585 | 47,029,348 | T |
| 77,560 | backup-vs07.rs.int-LinuxRoot | vs07.rs.int-fd | 2021-08-23 21:00:02 | B | I | 0 | 0 | f |
+--------+------------------------------+----------------+---------------------+------+-------+----------+---------------+-----------+

In db:
bareos=# select * from job where jobid=77560;
-[ RECORD 1 ]---+----------------------------------------------------
jobid | 77560
job | backup-vs07.rs.int-LinuxRoot.2021-08-23_21.00.02_24
name | backup-vs07.rs.int-LinuxRoot
type | B
level | I
clientid | 63
jobstatus | f
schedtime | 2021-08-23 21:00:02
starttime | 2021-08-23 21:00:02
endtime | 2021-08-23 21:00:02
realendtime |
jobtdate | 1629716402
volsessionid | 0
volsessiontime | 0
jobfiles | 0
jobbytes | 0
readbytes | 0
joberrors | 0
jobmissingfiles | 0
poolid | 0
filesetid | 0
priorjobid | 0
purgedfiles | 0
hasbase | 0
hascache | 0
reviewed | 0
comment |
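As a cross-check, a catalog query along these lines (a sketch only; the column names are the ones visible in the record above) lists jobs that the catalog marks as failed even though they never received a real end time, which matches the pattern shown here:

select jobid, name, jobstatus, starttime, endtime, realendtime
  from job
 where jobstatus = 'f'
   and realendtime is null
 order by jobid desc;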

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1231 [bareos-core] storage daemon crash random 2020-04-20 16:52 2021-08-18 14:40
Reporter: mayakov Platform: Linux  
Assigned To: arogge OS: CentOS  
Priority: high OS Version: 7  
Status: feedback Product Version: 18.2.5  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Jobs randomly freeze in "terminated" status when using S3 buckets
Description: We recently switched to using S3 buckets and ran into the problem of backup jobs freezing.
Example:
Running Jobs:
Console connected at 20-Apr-20 12:03
Console connected at 20-Apr-20 12:03
 JobId Level Name Status
======================================================================
  2680 Full backupcrm.action-crm.local-backup.2020-04-19_10.05.00_33 is running
  2682 Full buhsoft-backup.2020-04-19_21.00.00_36 is running
  2725 Increme schl-sql-backup.2020-04-20_02.00.00_19 has terminated
  2734 Full ceph-armseller-backup.2020-04-20_04.05.00_28 is running
  2741 Full pg-service-1cont-backup.2020-04-20_12.00.00_28 is running

jobid 2725 stays in this status for more than 10 hours
Tags: s3;droplet;aws;storage
Steps To Reproduce: Jobs are started by the scheduler. Randomly, one or more of them may freeze in the "terminated" status.
Additional Information: Logs for the problem task:

20-Apr 02:00 bareos-dir JobId 2725: Start Backup JobId 2725, Job=schl-sql-backup.2020-04-20_02.00.00_19
20-Apr 02:00 bareos-dir JobId 2725: Connected Storage daemon at bareos01.backup.infra.msk3.sl.amedia.tech:9103, encryption: PSK-AES256-CBC-SHA
20-Apr 02:00 bareos-dir JobId 2725: Created new Volume "schl-sql-backupVol-1061" in catalog.
20-Apr 02:00 bareos-dir JobId 2725: Using Device "schl-sql-backupDevice" to write.
20-Apr 02:00 bareos-dir JobId 2725: Connected Client: backup.hv.amedia.tech at backup.hv.amedia.tech:9102, encryption: None
20-Apr 02:00 bareos-dir JobId 2725: Handshake: Cleartext
20-Apr 02:00 bareos-dir JobId 2725: Encryption: None
20-Apr 02:01 bareos-sd JobId 2725: Labeled new Volume "schl-sql-backupVol-1061" on device "schl-sql-backupDevice" (S3).
20-Apr 02:01 bareos-sd JobId 2725: Wrote label to prelabeled Volume "schl-sql-backupVol-1061" on device "schl-sql-backupDevice" (S3)
20-Apr 02:01 bareos-dir JobId 2725: Max Volume jobs=1 exceeded. Marking Volume "schl-sql-backupVol-1061" as Used.
20-Apr 02:51 bareos-sd JobId 2725: Releasing device "schl-sql-backupDevice" (S3).

then I try to cancel the task:
cancel JobId=2725

In the director status, the job is marked "Cancel":

Running Jobs:
Console connected at 20-Apr-20 17:12
 JobId Level Name Status
======================================================================
  2682 Full buhsoft-backup.2020-04-19_21.00.00_36 is running
  2734 Full ceph-armseller-backup.2020-04-20_04.05.00_28 is running
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
====================================================================
  2739 Full 18 137.9 G OK 20-Apr-20 07:32 ss1-npd-backup
  2737 Full 2 24.09 G OK 20-Apr-20 07:56 mysql-az02-goszakaz-backup
  2736 Full 2 29.73 G OK 20-Apr-20 08:15 mysql-ap02-goszakaz-backup
  2740 Full 39 61.08 G OK 20-Apr-20 11:14 ss2-srv17-backup
  2725 Incr 835 167.0 G Cancel 20-Apr-20 12:07 schl-sql-backup


Next, check the status storage:

*status storage=schl-sql-backupStorage
Connecting to Storage daemon schl-sql-backupStorage at bareos01.backup.infra....:9103

bareos-sd Version: 18.2.5 (30 January 2019) Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
Daemon started 18-Apr-20 20:09. Jobs: run=137, running=2, bareos.org build binary
 Heap: heap=331,776 smbytes=420,855,257 max_bytes=2,834,647,307 bufs=2,073 max_bufs=2,520
 Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0 bwlimit=0kB/s

Running Jobs:
Writing: Full Backup job buhsoft-backup JobId=2682 Volume="buhsoft-backupVol-0326"
    pool="buhsoft-backupPool" device="buhsoft-backupDevice" (S3)
    spooling=0 despooling=0 despool_wait=0
    Files=280,608 Bytes=6,232,850,155,858 AveBytes/sec=114,582,893 LastBytes/sec=114,205,268
    FDReadSeqNo=97,439,276 in_msg=96631753 out_msg=5 fd=125
Writing: Incremental Backup job schl-sql-backup JobId=2725 Volume="schl-sql-backupVol-1061"
    pool="schl-sql-backupPool" device="schl-sql-backupDevice" (S3)
    spooling=0 despooling=0 despool_wait=0
    Files=835 Bytes=167,042,516,995 AveBytes/sec=0 LastBytes/sec=0
    FDReadSeqNo=2,555,971 in_msg=2553486 out_msg=9 fd=156
Writing: Full Backup job ceph-armseller-backup JobId=2734 Volume="ceph-armseller-backupVol-1069"
    pool="ceph-armseller-backupPool" device="ceph-armseller-backupDevice" (S3)
    spooling=0 despooling=0 despool_wait=0
    Files=1,570,915 Bytes=562,719,840,061 AveBytes/sec=12,607,520 LastBytes/sec=11,383,272
    FDReadSeqNo=18,690,064 in_msg=15189896 out_msg=5 fd=128
====

Jobs waiting to reserve a drive:
====

Terminated Jobs:
 JobId Level Files Bytes Status Finished Name
===================================================================
  2738 Full 1 2.325 G OK 20-Apr-20 07:01 id2-redismsk7-backup
  2739 Full 18 137.9 G OK 20-Apr-20 07:32 ss1-npd-backup
  2737 Full 2 24.09 G OK 20-Apr-20 07:56 mysql-az02-goszakaz-backup
  2736 Full 2 29.73 G OK 20-Apr-20 08:15 mysql-ap02-goszakaz-backup
  2740 Full 39 61.08 G OK 20-Apr-20 11:14 ss2-srv17-backup
  2742 Incr 0 0 Cancel 20-Apr-20 12:23 schl-sql-backup
  2680 Full 176 14.10 T OK 20-Apr-20 14:25 backupcrm.action-crm.local-backup
  2741 Full 4 35.23 G OK 20-Apr-20 14:53 pg-service-1cont-backup
  2743 Incr 0 0 Cancel 20-Apr-20 15:40 schl-sql-backup
  2744 Incr 0 0 Cancel 20-Apr-20 17:04 schl-sql-backup
====

Device status:

Device "schl-sql-backupDevice" (S3) is mounted with:
    Volume: schl-sql-backupVol-1061
    Pool: schl-sql-backupPool
    Media type: S3_Object1
Backend connection is not working.
Inflight chunks: 0
No pending IO flush requests.
Configured device capabilities:
  EOF BSR BSF FSR FSF EOM !REM RACCESS AUTOMOUNT LABEL !ANONVOLS !ALWAYSOPEN
Device state:
  OPENED !TAPE LABEL !MALLOC APPEND !READ !EOT !WEOT !EOF !NEXTVOL !SHORT MOUNTED
  num_writers=1 reserves=0 block=0
Attached Jobs: 2725
Device parameters:
  Archive name: S3 Device name: schl-sql-backupDevice
  File=38 block=3957652387
  Min block=64512 Max block=64512
    Total Bytes=167,166,409,636 Blocks=2,591,246 Bytes/block=64,511
    Positioned at File=38 Block=3,957,652,387
==
====

Used Volume status:
schl-sql-backupVol-1061 on device "schl-sql-backupDevice" (S3)
    Reader=0 writers=1 reserves=0 volinuse=1
====

====

an attempt to cancel the job yields nothing:

*cancel storage=schl-sql-backupStorage jobid=2725
3000 JobId=2725 Job="schl-sql-backup.2020-04-20_02.00.00_19" marked to be canceled.

When you try to restart the backup or run a restore job, the job freezes in a waiting status:
  2745 Increme schl-sql-backup.2020-04-20_17.20.21_52 is waiting on Storage "schl-sql-backupStorage"


The configuration of all backups is the same. We use a separate storage, pool, device and bucket for each job:

Device {
  Name = schl-sql-backupDevice
  Media Type = S3_Object1
  Archive Device = S3 Object Storage
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/droplet.profile,bucket=schl-sql-backup,chunksize=100M"
  Device Type = droplet
  LabelMedia = yes;
  Random Access = yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Description = "S3 device"
  Maximum Concurrent Jobs = 60
}

Pool {
  Name = schl-sql-backupPool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 2 week
  Job Retention = 30 days
  File Retention = 30 days
  Maximum Volume Jobs = 1
  Maximum Volume Bytes = 1000G
  Maximum Volumes = 20
  Label Format = "schl-sql-backupVol-"
}

Storage {
  Name = schl-sql-backupStorage
  Address = bareos01.****
  Password = "****"
  Device = schl-sql-backupDevice
  Media Type = S3_Object1
  Maximum Concurrent Jobs = 60
}



Client {
  Name = backup.****
  Address = backup.*****
  Password = "*****"
  Maximum Concurrent Jobs = 60
}
FileSet {
  Name = "schl-sql-fileset"
  Include {
    Options {
      signature = MD5
    }
    File = "/mnt/storage/schl/SQL"
  }
}
Job {
  Name = "schl-sql-backup"
  JobDefs = "DefaultJob"
  Write Bootstrap = "/var/lib/bareos/schlsql.bsr"
  Client = backup.hv.amedia.tech
  FileSet = "schl-sql-fileset"
  Storage = "schl-sql-backupStorage"
  Pool = "schl-sql-backupPool"
  Schedule = "schl-sql-schedule"
}
Schedule {
  Name = "schl-sql-schedule"
  Run = Incremental mon-sat at 2:00
  Run = Full sun at 8:00
}

The problem happens randomly; all jobs (about 80) may complete without problems. In parallel with the problem job, many other jobs usually run that complete fine and get past the "Releasing device" step.

The only way to release a problem device is to run:
systemctl restart bareos-sd.service
which kills all running jobs.


Sorry, I used google translate.
System Description
Attached Files:
Notes
(0003990)
andrei693   
2020-05-19 12:17   
We have a similar issue, for which so far we have the same bareos-sd restart resolution. What happens for me is that the director can't cancel the job on the storage daemon. This is from the joblog:
19-May 06:47 bareos-sd JobId 3693: Releasing device "AWS_S3_1-01" (AWS S3 Storage).
19-May 06:47 bareos-dir-alxp JobId 3693: Fatal error: Director's comm line to SD dropped.

Tried to cancel the job on the storage daemon itself but that does not work either:
*cancel storage=S3_Object_aws jobid=3693
3000 JobId=3693 Job="client1.2020-05-19_06.34.03_30" marked to be canceled.

The job stays in the same state on the SD.
(0004009)
mayakov   
2020-06-15 19:44   
@andrei693 version - 18.2.5 ?
(0004010)
andrei693   
2020-06-15 22:33   
@mayakov: yes, 18.2.5.
(0004212)
arogge   
2021-08-18 12:26   
Could you check whether the volume that has just been written contains a chunk that is one byte short?
It seems there is a corner case when writing volumes that may lead to a missing byte, which eventually leads to the SD waiting indefinitely until all chunks are completely written (which will never happen, as one byte is missing).
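For anyone trying to verify this: with chunksize=100M as in the device options above, every chunk of a volume except the last one should be exactly 104857600 bytes. A rough sketch, assuming the bucket is reachable with the standard AWS CLI (add --endpoint-url for a non-AWS S3 service); the bucket and volume names are the ones from this report:

# list chunk objects of the volume and print any whose size is not a full 100 MiB;
# the final chunk is normally smaller, so look for an earlier chunk that is one byte short
aws s3api list-objects --bucket schl-sql-backup \
  --prefix schl-sql-backupVol-1061 \
  --query 'Contents[].[Key,Size]' --output text | awk '$2 != 104857600'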
(0004213)
andrei693   
2021-08-18 14:40   
Unfortunately we had to move on from that setup and can no longer test/reproduce.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1308 [bareos-core] General feature always 2021-01-19 02:06 2021-08-18 10:55
Reporter: Ruth Ivimey-Cook Platform: amd64  
Assigned To: OS: Linux  
Priority: normal OS Version: Ubuntu 20.04  
Status: new Product Version: 19.2.9  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: If a job fails, but has written files to tape, don't leave tape full of undescribed data.
Description: If a job fails to complete, currently the job is terminated and AFAICT no records are added to the database describing it. Nevertheless, the job exists on tape and is consuming tape space - potentially several tapes.

I would like the behaviour to be modified to either:

1. Cleanup, that is, return the tapes and catalogue to a similar state to before the failed job started:
  - the tape drive is rewound to the end of the last good job and tape-eod written there.
  - If the job spanned multiple complete tapes, those tapes are marked for recycling in the appropriate way in the database.
  - If the job started on a not-mounted tape then the operator is prompted (email/console) to insert it for an 'eod' to be written, with the option of not doing so and leaving that tape Full.


2. Describe what was written in the DB: Let the job be described in the catalogue completely, up to the point of failure.
  - This may involve rewinding the tape to find the end of the good data, or even doing a full scan of the tape (or even the job, though pref not!).
  - I'm not sure whether bareos retains enough info to properly continue from this state, but ideally a subsequent Incr or Diff backup following this would be able to back up the 'right' things, bearing in mind the failed job.


3. Work harder at not failing at all: If a tape error happens, call the operator and provide ways to carry on.
  For example,
  - if there is an error when re-reading the end of tape block, rewind further and rescan the written data to determine what is there, and write an eof, then carry on with a new tape in the usual way.
  - If the tape drive itself fails in some way, enable the operator to direct the job onto another drive and carry on.
Tags:
Steps To Reproduce:
Additional Information:
Attached Files:
Notes
(0004211)
Int   
2021-08-18 10:55   
I would like to see that feature too.
For rewinding tapes and reusing the space, a mechanism to handle WORM tapes would have to be implemented, since these cannot be reused.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1380 [bareos-core] file daemon major always 2021-08-13 21:13 2021-08-13 21:13
Reporter: matus22 Platform: Linux  
Assigned To: OS: CentOS  
Priority: normal OS Version: 8  
Status: new Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Backup failures on concurrent backups using ovirt-plugin
Description: We are running multiple backups from oVirt concurrently to utilize bandwidth and speed up the backup process.

When running concurrent backups started at the same time, only a single backup will succeed.
The second and subsequent backups fail right after a successful image download.

Looking at the logs, it seems bareos-fd is unable to send an event to oVirt because of a duplicate event_id.

I think the event_id generation mechanism is insufficient.
When running multiple backups at the same time, time.time() will generate the same number, which ovirt-engine will reject:
https://github.com/bareos/bareos/blob/de698088e4de495eb8bef0e1cb267e9526be31db/core/src/plugins/filed/python/ovirt/BareosFdPluginOvirt.py#L830

I was able to partially mitigate this issue by subtracting a small random number from the event_id variable.
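For illustration, a minimal sketch of that mitigation (this is not the plugin's actual code; the function name is made up, and whether ovirt-engine tolerates the resulting id values would need to be verified):

import random
import time

def unique_event_id():
    # Start from the current timestamp and subtract a small random offset,
    # as described above, so two jobs started within the same second are
    # unlikely to produce the same custom_event_id. Collisions are still
    # possible, just much rarer; a truly unique source (e.g. a sequence)
    # would be needed to rule them out entirely.
    return int(time.time()) - random.randint(0, 500)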

PS: I am sorry for the duplicate of this issue ( https://bugs.bareos.org/view.php?id=1302 ), but it was in the wrong category and was missing many details.
Tags:
Steps To Reproduce: - configure a host for backups of multiple VMs utilizing the ovirt-plugin
- schedule backups of multiple VMs in the same Schedule
- concurrent backups need to be started within the same second.
Additional Information: Bareos is using Python 3 and ovirtsdk4.

Bareos Exception:
Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 38, in handle_plugin_event
    return bareos_fd_plugin_object.handle_plugin_event(event)
  File "/usr/lib64/bareos/plugins/BareosFdPluginOvirt.py", line 561, in handle_plugin_event
    return self.start_backup_job()
  File "/usr/lib64/bareos/plugins/BareosFdPluginOvirt.py", line 160, in start_backup_job
    return self.ovirt.prepare_vm_backup()
  File "/usr/lib64/bareos/plugins/BareosFdPluginOvirt.py", line 893, in prepare_vm_backup
    self.create_vm_snapshot()
  File "/usr/lib64/bareos/plugins/BareosFdPluginOvirt.py", line 943, in create_vm_snapshot
    "starting." % (self.vm.name, snap_description)
  File "/usr/local/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 9014, in add
    return self._internal_add(event, headers, query, wait)
  File "/usr/local/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/usr/local/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/local/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/usr/local/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/usr/local/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Internal Engine Error]". HTTP response code is 400.


Ovirt-engine error:
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "audit_log_origin_custom_event_id_idx"
  Detail: Key (origin, custom_event_id)=(Bareos oVirt plugin, 1628599716) already exists.
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1379 [bareos-core] director major sometimes 2021-08-11 01:09 2021-08-11 01:09
Reporter: progserega Platform: Linux  
Assigned To: OS: Debian  
Priority: high OS Version: 10  
Status: new Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: some failed jobs have no logs
Description: *list jobs last
+--------+------------------------------------------------+----------------------------------+---------------------+------+-------+----------+-----------------+-----------+
| jobid | name | client | starttime | type | level | jobfiles | jobbytes | jobstatus |
+--------+------------------------------------------------+----------------------------------+---------------------+------+-------+----------+-----------------+-----------+
| 74,537 | backup-rsk40srv018-fsDocs-upr | rsk40srv018-fd | 2021-07-30 04:16:16 | B | I | 0 | 0 | E |
| 76,132 | backup-fb-albatros.rs.int-FirebirdMonth | fb-albatros.rs.int-fd | 2021-08-11 01:29:42 | B | F | 3 | 2,133,475 | T |
| 76,138 | backup-rsk40srv003-skud | rsk40srv003-fd | 2021-08-11 01:30:00 | B | F | 0 | 0 | f |
| 76,069 | backup-rsk40srv018-fsDocs-upr | rsk40srv018-fd | 2021-08-11 05:01:30 | B | I | 0 | 0 | f |
| 76,146 | AdminJobZabbixStatus | bareos.rs.int-fd | 2021-08-11 09:00:01 | D | | 0 | 0 | T |
+--------+------------------------------------------------+----------------------------------+---------------------+------+-------+----------+-----------------+-----------+

*list joblog jobid=76138
No results to list.
*
*list joblog jobid=74537
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
No results to list.
You have messages.
*
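As a cross-check, the job messages can also be queried directly from the catalog (a sketch; it assumes the default schema, where joblog output is stored in the log table):

select time, logtext from log where jobid = 76138 order by time;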
Tags:
Steps To Reproduce:
Additional Information: *version
bareos-dir Version: 20.0.1 (02 March 2021) Debian GNU/Linux 10 (buster) debian Debian GNU/Linux 10 (buster)
*
root@bareos:~# dpkg-query -l|grep bareos
ii bareos-bconsole 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - text console
ii bareos-client 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - client metapackage
ii bareos-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common files
ii bareos-database-common 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - common catalog files
ii bareos-database-postgresql 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - PostgreSQL backend
ii bareos-database-tools 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - database tools
ii bareos-devel 20.0.0-1 amd64 Backup Archiving Recovery Open Sourced - development files
ii bareos-director 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - director daemon
ii bareos-filedaemon 20.0.1-3 amd64 Backup Archiving Recovery Open Sourced - file daemon
ii bareos-fuse 0.1.1498568444.fb2539c-13.10 all Backup Archiving Recovery Open Sourced - FUSE
ii bareos-webui 20.0.1-3 all Backup Archiving Recovery Open Sourced - webui
ii python-bareos 20.0.0-1 all Backup Archiving REcovery Open Sourced - python module (Python 2)
System Description
Attached Files:
There are no notes attached to this issue.

View Issue Details
ID: Category: Severity: Reproducibility: Date Submitted: Last Update:
1377 [bareos-core] director feature always 2021-08-10 23:30 2021-08-10 23:30
Reporter: hostedpower Platform: Linux  
Assigned To: OS: Debian  
Priority: normal OS Version: 10  
Status: new Product Version: 20.0.2  
Product Build: Resolution: open  
Projection: none      
ETA: none Fixed in Version:  
    Target Version:  
bareos-master: impact:
bareos-master: action:
bareos-19.2: impact:
bareos-19.2: action:
bareos-18.2: impact:
bareos-18.2: action:
bareos-17.2: impact:
bareos-17.2: action:
bareos-16.2: impact:
bareos-16.2: action:
bareos-15.2: impact:
bareos-15.2: action:
bareos-14.2: impact:
bareos-14.2: action:
bareos-13.2: impact:
bareos-13.2: action:
bareos-12.4: impact:
bareos-12.4: action:
Summary: Cannot set max number of jobs for consolidations
Description: Hi,


We set the server to back up as many as 20 clients at the same time in order to get maximum network speed.

However, this also means that 20 concurrent consolidations will be run later on. Consolidation is a very heavy process, not comparable to simply syncing data to the backup server as normal backups do.

We really need a separate option to limit the maximum number of consolidations that can run at the same time :)

Setting the job maximum to, for example, 5 doesn't seem to work, probably because there is only "1" consolidate job.
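For what it's worth, one possible workaround sketch (not verified; it only throttles on the storage side rather than limiting the consolidation itself) would be to point the consolidation/virtual full jobs at a dedicated storage resource with a lower Maximum Concurrent Jobs, so that excess consolidations queue instead of all running at once. All names below are hypothetical:

Storage {
  Name = ConsolidateStorage          # used only by the consolidation jobs
  Address = bareos-sd.example.com
  Password = "secret"
  Device = ConsolidateDevice
  Media Type = ConsolidateFile
  Maximum Concurrent Jobs = 5        # consolidations beyond this wait in the queue
}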
Tags:
Steps To Reproduce:
Additional Information:
System Description
Attached Files: