View Issue Details

ID:              0000350
Project:         bareos-core
Category:        file daemon
View Status:     public
Last Update:     2014-11-11 10:19
Reporter:        elonen
Assigned To:
Priority:        normal
Severity:        feature
Reproducibility: always
Status:          closed
Resolution:      no change required
Platform:        Linux
OS:              any
OS Version:      3
Product Version: 14.2.1
Summary:         0000350: Allow multiple Client resources in bareos-fd.conf
Description

Running multiple Clients on a single file daemon host (on different ports and/or different IP addresses) is necessary when backing up a Linux HA cluster: one for the "physical" host and one or more for the cluster resources (on an IP address that can dynamically hop from one cluster host to another).

Currently, trying to define multiple Client{} sections in bareos-fd.conf results in this error:

  "Only one Client resource permitted in /etc/bareos/bareos-fd.conf"

It would be useful if this restriction could be lifted. At the moment the only way to make this work is to start multiple bareos-fd instances, which requires inelegant customizations to the /etc/init.d/ script, since it does not easily support a second instance either.
Tags: No tags attached.

Activities

mvwieringen (developer)   2014-10-19 16:18   ~0001013

First of all, letting the daemon listen on multiple addresses is already
possible, as the addresses config is almost a programming language of its own.
There is currently no support for listening on multiple ports, however, as
bnet_tcp_server only allows binding to one port.
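
For reference, a hedged sketch of what multiple listen addresses might look
like in bareos-fd.conf, using the FDAddresses directive (the addresses
themselves are made up):

   FileDaemon {
     Name = myhost-fd
     # both the physical and the floating address, all on the
     # same single port (multiple ports are not supported)
     FDAddresses = {
       ip = { addr = 192.168.0.10; port = 9102 }
       ip = { addr = 192.168.0.100; port = 9102 }
     }
   }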

Furthermore, I think it is bad design to have your cluster resources started
from the normal init scripts. As the IP is floating, it would be much better
to add failover of the bareos-fd itself to the cluster resources, so that it
fails over together with the IP. You can simply create a specific config for
the cluster resource and use that to fail the daemon over. You also don't want
to bind the backup of a specific cluster resource to the non-floating address,
so I think adding it to the cluster framework is a much cleaner solution.
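
A hedged sketch of such a dedicated config for the floating resource (the
file name, addresses, and paths here are all assumptions):

   # /etc/bareos/bareos-fd_cluster.conf -- hypothetical example
   FileDaemon {
     Name = cluster-fd
     FDAddress = 192.168.0.100    # the floating cluster IP
     FDPort = 9103                # distinct from the physical instance
     WorkingDirectory = /var/lib/bareos
     Pid Directory = /var/run/bareos
   }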

Last but not least, most settings in the Client resource are global variables,
like the plugin dir, pid dir, etc.; allowing multiple instances of that
resource makes little sense.

elonen (reporter)   2014-10-19 17:55   ~0001016

Ah, yes, you are probably right in that it's easier to have the cluster start/stop the FD if it's a separate process with a separate config file.

The extra init.d script is, in fact, a working (and quite common) way to handle the floating IP with Pacemaker. It's simply not tied to any runlevel, but rather called by Pacemaker when the cluster IP goes up or down.

For the record, in addition to changing the port and the config file (DAEMON_ARGS="-c /etc/bareos/bareos-fd_cluster.conf"), I had to comment out an alternative process-killing method that didn't respect the pidfile...

   # start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 $DAEMON_USERGROUP --exec $DAEMON

...and to add "-p $PIDFILE" to at least one status-checking line in both the original and the modified init.d script to get the two instances to coexist peacefully. Otherwise the two instances would confuse each other on start/stop/status.
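
A hedged sketch of the kind of changes involved, following Debian init-script
conventions (the file name, paths, and variable values are assumptions):

   # hypothetical excerpt from a copied /etc/init.d/bareos-fd-cluster
   DAEMON=/usr/sbin/bareos-fd
   PIDFILE=/var/run/bareos/bareos-fd_cluster.pid
   DAEMON_ARGS="-c /etc/bareos/bareos-fd_cluster.conf"

   # status check made instance-aware by passing the pidfile explicitly
   status_of_proc -p "$PIDFILE" "$DAEMON" "bareos-fd (cluster)"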

mvwieringen (developer)   2014-10-19 19:44   ~0001019

I must be a stupid cluster user, but the clusters I once administered never had
their system and cluster init scripts mixed. I would expect a professional
cluster setup to put its init scripts somewhere other than the default location.
One day some sysadmin will run a script that is supposed to be run only by the
cluster, or someone will link it into a runlevel.

The default scripts indeed do not work entirely correctly when you have
multiple instances. Feel free to suggest fixes in a new bug report.

elonen (reporter)   2014-10-19 20:23   ~0001020

Well, stupidity and unprofessionalism aside: while Pacemaker mostly uses OCF resource agents, it also supports LSB init.d-style scripts for services that don't yet have their own OCF agents:
http://www.linux-ha.org/wiki/LSB_Resource_Agents

It works well, too, as long as the scripts are LSB compliant and, as you noted, you don't link them into runlevels.
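
A hedged sketch of how such an LSB script could be tied to the floating IP in
Pacemaker's crm shell, per mvwieringen's suggestion (the resource IDs, script
name, and address are hypothetical):

   primitive cluster_ip ocf:heartbeat:IPaddr2 params ip=192.168.0.100 cidr_netmask=24
   primitive cluster_fd lsb:bareos-fd-cluster
   # grouping keeps them on the same node and starts the fd after the IP
   group backup_grp cluster_ip cluster_fd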

Issue History

Date Modified Username Field Change
2014-10-17 19:25 elonen New Issue
2014-10-19 16:18 mvwieringen Note Added: 0001013
2014-10-19 16:18 mvwieringen Assigned To => mvwieringen
2014-10-19 16:18 mvwieringen Status new => feedback
2014-10-19 17:55 elonen Note Added: 0001016
2014-10-19 17:55 elonen Status feedback => assigned
2014-10-19 19:44 mvwieringen Note Added: 0001019
2014-10-19 19:44 mvwieringen Assigned To mvwieringen =>
2014-10-19 19:44 mvwieringen Status assigned => feedback
2014-10-19 20:23 elonen Note Added: 0001020
2014-10-19 20:23 elonen Status feedback => new
2014-11-11 10:19 mvwieringen Status new => closed
2014-11-11 10:19 mvwieringen Resolution open => no change required