View Issue Details

ID:              0000992
Project:         bareos-core
Category:        file daemon
View Status:     public
Last Update:     2023-07-31 15:05
Reporter:        Jack
Assigned To:     bruno-at-bareos
Priority:        normal
Severity:        crash
Reproducibility: always
Status:          closed
Resolution:      won't fix
Platform:        Linux
OS:              Debian
OS Version:      9
Product Version: 16.2.4
Summary:         0000992: Plugin cephfs-fd crash against a Luminous Ceph cluster
Description:
Hi,

I am trying to use the cephfs file daemon plugin.
Sadly, it crashes with the following output (debug level 1000):
backup1-fd (300): backup.c:82-50 filed: opened data connection 5 to stored
backup1-fd (450): find.c:86-50 Enter set_find_options()
backup1-fd (450): find.c:89-50 Leave set_find_options()
backup1-fd (50): find.c:162-50 Verify=<V> Accurate=<Cmcs> BaseJob=<Jspug5> flags=<1543511808>
backup1-fd (450): find.c:184-50 PluginCommand: cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi
backup1-fd (150): fd_plugins.c:498-50 plugin cmd=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi
backup1-fd (150): fd_plugins.c:670-50 plugin=rados-fd.so plen=5 cmd=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6
backup1-fd (150): fd_plugins.c:199-50 name=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6 plugin=rados-fd.so plen=5
backup1-fd (150): fd_plugins.c:670-50 plugin=bpipe-fd.so plen=5 cmd=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6
backup1-fd (150): fd_plugins.c:199-50 name=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6 plugin=bpipe-fd.so plen=5
backup1-fd (150): fd_plugins.c:670-50 plugin=python-fd.so plen=6 cmd=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6
backup1-fd (150): fd_plugins.c:199-50 name=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6 plugin=python-fd.so plen=6
backup1-fd (150): fd_plugins.c:670-50 plugin=cephfs-fd.so plen=6 cmd=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6
backup1-fd (150): fd_plugins.c:199-50 name=cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi len=6 plugin=cephfs-fd.so plen=6
backup1-fd (150): fd_plugins.c:693-50 Command plugin = cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi
bareos-fd: ./msg/msg_types.h:301: void entity_addr_t::set_port(int): assertion "0" failed.
backup1-fd (900): signal.c:136-50 sig=6 IOT trap
BAREOS interrupted by signal 6: IOT trap
Kaboom! bareos-fd, backup1-fd got signal 6 - IOT trap. Attempting traceback.
Kaboom! exepath=/root
backup1-fd (300): signal.c:206-50 Working=/var/lib/bareos
backup1-fd (300): signal.c:207-50 btpath=/root/btraceback
backup1-fd (300): signal.c:208-50 exepath=/root/bareos-fd
backup1-fd (500): signal.c:239-50 Doing waitpid
Calling: /root/btraceback /root/bareos-fd 1265029 /var/lib/bareos
execv: /root/btraceback failed: ERR=No such file or directory
backup1-fd (500): signal.c:241-50 Done waitpid
The btraceback call returned 255
Dumping: /var/lib/bareos/backup1-fd.1265029.bactrace
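
For reference, the FileSet that produces the PluginCommand shown in the trace looks roughly like the following. This is only a sketch: the resource name is a placeholder, and only the Plugin line is taken verbatim from the log above.

FileSet {
  Name = "cephfs-smartapi"   # placeholder name
  Include {
    Options {
      Signature = MD5
    }
    # plugin string as it appears in the PluginCommand above
    Plugin = "cephfs:conffile=/etc/bareos/ceph/ceph.conf:basedir=/smartapi"
  }
}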


I presume this is because my cephfs lives in a Luminous cluster, while the plugin is linked against libcephfs1; Luminous ships libcephfs2.

I checked the latest binary packages from bareos: they are linked against libcephfs1 too.
I successfully compiled bareos 17.2.7 against libcephfs2, and it no longer crashes (I still cannot complete a successful backup, but that may be a separate issue).
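
To verify which libcephfs a given build of the plugin is linked against, ldd can be used (the plugin path below is an assumption for a default Debian install; adjust it for your layout):

# check the shared-library dependencies of the plugin (assumed install path)
ldd /usr/lib/bareos/plugins/cephfs-fd.so | grep libcephfs

Per the above, the stock packages should resolve to libcephfs.so.1, while a build against Luminous should resolve to libcephfs.so.2.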

Thanks
Steps To Reproduce:
Run cephfs-fd.so from the binary packages against a Luminous or Mimic Ceph cluster.

Tags: No tags attached.

Activities

bruno-at-bareos (manager)   2023-07-31 15:05   ~0005284

This plugin has been deprecated and has since been removed.
It is recommended to use a traditional backup of a mounted cephfs instead.
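
A minimal sketch of that approach (the monitor address, mount point, and keyring path are placeholders): mount CephFS with the kernel client, then back up the mount point as an ordinary directory.

# mount CephFS via the kernel client (placeholder monitor and credentials)
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

FileSet {
  Name = "cephfs-mounted"   # placeholder name
  Include {
    Options {
      Signature = MD5
    }
    File = /mnt/cephfs
  }
}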

Issue History

Date Modified      Username         Field        Change
2018-07-28 15:56   Jack             New Issue
2023-07-31 15:05   bruno-at-bareos  Assigned To  => bruno-at-bareos
2023-07-31 15:05   bruno-at-bareos  Status       new => closed
2023-07-31 15:05   bruno-at-bareos  Resolution   open => won't fix
2023-07-31 15:05   bruno-at-bareos  Note Added   0005284