Bareos Bug Tracker - bareos-core
View Issue Details
ID:              0000805
Project:         bareos-core
Category:        [All Projects] webui
View Status:     public
Date Submitted:  2017-04-03 10:14
Last Update:     2018-12-06 09:32
Reporter:        ivveh
Assigned To:     stephand
Priority:        normal
Severity:        major
Reproducibility: always
Status:          resolved
Resolution:      fixed
Platform:        4.4.0-66-generic x86_64
OS:              Ubuntu
OS Version:      16.04
Product Version: 16.2.4
0000805: can't restore vmware-plugin assisted backups via webui
Backups taken via bconsole work (full and incremental).
Backups taken via webui work (full and incremental).
Restores done via bconsole work; I tried both the full backup and parts of the increments, as well as python:localvmdk=yes. All OK!
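For reference, the bconsole restore flow that works here looks roughly like this (a sketch, not a verbatim session; jobid and paths are taken from the log below, and the exact prompts vary by version):

```
* restore jobid=100 client=bareos-fd
$ cd /VMS/Datacenter/Backup
$ mark *
$ done
# Optionally modify the job before confirming and set
# Plugin Options = "python:localvmdk=yes" to restore to a local VMDK file.
```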

Restores done via webui fail with the following message:

30-Mar 13:56 bareos-dir JobId 100: Start Restore Job RestoreVM.2017-03-30_13.46.38_42
30-Mar 13:56 bareos-dir JobId 100: Using Device "FileStorage" to read.
30-Mar 13:56 bareos-sd JobId 100: Ready to read from volume "Full-0001" on device "FileStorage" (/var/lib/bareos/storage).
30-Mar 13:56 bareos-sd JobId 100: Forward spacing Volume "Full-0001" to file:block 4:4022143629.
30-Mar 13:56 bareos-fd JobId 100: Fatal error: python-fd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 66, in create_file
    return bareos_fd_plugin_object.create_file(context, restorepkt)
  File "/usr/lib/bareos/plugins/vmware_plugin/BareosFdPluginVMware.py", line 288, in create_file
    cbt_data = self.vadp.restore_objects_by_objectname[objectname]['data']
KeyError: ('/VMS/Datacenter/Backup/[datastore2] Backup/Backup.vmdk',)

30-Mar 13:56 bareos-sd JobId 100: Error: bsock_tcp.c:405 Write error sending 31003 bytes to client:127.0.0.1:9103: ERR=Connection reset by peer
30-Mar 13:56 bareos-sd JobId 100: Fatal error: read.c:154 Error sending to File daemon. ERR=Connection reset by peer
30-Mar 13:56 bareos-sd JobId 100: Error: bsock_tcp.c:434 Socket has errors=1 on call to client:127.0.0.1:9103
30-Mar 13:56 bareos-dir JobId 100: Error: Bareos bareos-dir 16.2.4 (01Jul16):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 100
  Job: RestoreVM.2017-03-30_13.46.38_42
  Restore Client: bareos-fd
  Start time: 30-Mar-2017 13:56:02
  End time: 30-Mar-2017 13:56:03
  Elapsed time: 1 sec
  Files Expected: 2
  Files Restored: 0
  Bytes Restored: 0
  Rate: 0.0 KB/s
  FD Errors: 1
  FD termination status: Error
  SD termination status: Fatal Error
  Termination: *** Restore Error ***

It does not matter whether I uncheck "merge all client filesets" (which would of course break a VMware restore if there are other backups on this client).
I have also tried every possible combination of settings in the UI; it always fails with this error.
1) Set up VMWare environment with a virtual guest.
2) Install the vmware-plugin with the dependencies described in Additional Information.
3) Create filesets, schedules, jobdefs, etc. for a vmware guest as per Bareos manual.
4) Enable CBT on the guest as per Bareos manual.
5) Take backups of the guest.
6) Try to restore via the webui.
bareos-dir Version: 16.2.4 (01 July 2016) x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS xUbuntu_16.04 x86_64
bareos-vadp-dumper_16.2.4_amd64.deb
bareos-vmware-plugin_16.2.4_amd64.deb
bareos-vmware-vix-disklib5_5.5.4-2454786_amd64.deb

I would also suggest adding the possibility to enter plugin variables in the webui restore mechanism.
No tags attached.
Issue History
2017-04-03 10:14  ivveh     New Issue
2017-04-03 10:22  ivveh     Note Added: 0002620
2017-04-03 16:25  stephand  Assigned To => stephand
2017-04-03 16:25  stephand  Status: new => assigned
2017-04-03 16:32  stephand  Note Added: 0002622
2017-04-03 16:32  stephand  Status: assigned => acknowledged
2018-10-05 13:46  maik      Note Added: 0003130
2018-11-15 09:26  pstorz    Changeset attached => bareos master b840892b
2018-11-15 09:26  pstorz    Note Added: 0003151
2018-11-15 09:26  pstorz    Status: acknowledged => resolved
2018-11-15 09:26  pstorz    Resolution: open => fixed
2018-12-06 09:32  franku    Changeset attached => bareos bareos-18.2 2171b21d
2018-12-06 09:32  franku    Note Added: 0003155

Notes
(0002620) ivveh, 2017-04-03 10:22
Job {
  Name = "Backup-virtual-machine"
  JobDefs = "DefaultJob"
  FileSet = "Backup"
}

...

FileSet {
  Name = "Backup"

  Include {
    Options {
      Signature = md5
      Compression = gzip9
    }
    Plugin = "python:module_path=/usr/lib/bareos/plugins/vmware_plugin:module_name=bareos-fd-vmware:dc=Datacenter:folder=/:vmname=Backup:vcserver=x.x.x.x:vcuser=administrator@vsphere.local:vcpass=xxx"
  }
}

...

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Client = bareos-fd
  FileSet = "SelfTest" # selftest fileset (0000013)
  Schedule = "Nightly"
  Storage = File
  Messages = Standard
  Pool = Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Full Backup Pool = Full # write Full Backups into "Full" Pool (0000005)
  Differential Backup Pool = Differential # write Diff Backups into "Differential" Pool (0000008)
  Incremental Backup Pool = Incremental # write Incr Backups into "Incremental" Pool (0000011)
}

...

Schedule {
  Name = "Nightly"
  Run = daily at 23:00
}
(0002622) stephand, 2017-04-03 16:32
I was able to reproduce this. Essentially, the problem is that restore objects are not processed when restoring a vmware-plugin job via the webui. We'll have to investigate further whether this is best fixed in Bareos core or in the webui.
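A minimal sketch of the failure mode, using hypothetical names that mirror the traceback (the real code lives in BareosFdPluginVMware.py): the plugin fills a lookup table from restore objects the director delivers, and create_file raises a KeyError when that table was never populated.

```python
# Simplified model of the plugin's restore-object handling (an illustration,
# not the actual Bareos plugin code): restore objects sent by the director
# fill a table keyed by object name; create_file() reads CBT data from it.

restore_objects_by_objectname = {}

def handle_restore_object(objectname, cbt_data):
    # Invoked once per restore object the director delivers (bconsole path).
    restore_objects_by_objectname[objectname] = {"data": cbt_data}

def create_file(objectname):
    # Raises KeyError when no restore objects were processed, which is
    # what the webui-triggered restore runs into in the job log above.
    return restore_objects_by_objectname[objectname]["data"]

# bconsole-triggered restore: restore objects arrive, the lookup succeeds.
handle_restore_object("/VMS/Datacenter/Backup/Backup.vmdk", '{"DiskParams": {}}')
print(create_file("/VMS/Datacenter/Backup/Backup.vmdk"))

# webui-triggered restore: no restore objects were delivered.
restore_objects_by_objectname.clear()
try:
    create_file("/VMS/Datacenter/Backup/Backup.vmdk")
except KeyError as err:
    print("KeyError:", err)
```

Per the later notes in this thread, the fix ultimately landed in Bareos core rather than in the webui.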
(0003130) maik, 2018-10-05 13:46
The cause is probably that restore objects are not restored if the restore job is triggered via the WebUI.
The same effect has been observed with the Percona xtrabackup plugin:

2018-09-19 17:55:05 | bareos-fd JobId 1896: Fatal error: python-fd: Restore with xbstream needs empty directoy: /tmp/bareos-restores//_percona/1896/
2018-09-19 17:55:05 | bareos-fd JobId 1896: Error: python-fd: No lsn information found in restore object for file /tmp/bareos-restores//_percona/xbstream.0000000004 from job 4

Potentially fixed by https://github.com/bareos/bareos/pull/119
(0003151) pstorz, 2018-11-15 09:26
Fix committed to bareos master branch with changesetid 10353.
(0003155) franku, 2018-12-06 09:32
Fix committed to bareos bareos-18.2 branch with changesetid 10629.