View Issue Details

ID: 0001008
Project: bareos-core
Category: vmware plugin
View Status: public
Last Update: 2023-07-31 16:00
Reporter: fpierfed
Assigned To: bruno-at-bareos
Priority: high
Severity: crash
Reproducibility: always
Status: closed
Resolution: unable to reproduce
Platform: Linux
OS: Ubuntu
OS Version: 16.04
Product Version: 17.2.7
Summary: 0001008: VMWare Plugin crashes on incremental backup for some VMs
Description: We have some VMs (two at the moment) for which an incremental backup fails. The full backup worked. The traceback we get is this:
18-Sep 10:30 ubuntu-dir JobId 217: Start Backup JobId 217, Job=vm-vmware.mrt.iram.es-DNS_DHCP_TFTP_Debian_9.2018-09-18_10.30.57_08
18-Sep 10:30 ubuntu-dir JobId 217: Using Device "FileStorage" to write.
18-Sep 10:30 ubuntu-sd JobId 217: Volume "Incremental-0042" previously written, moving to end of data.
18-Sep 10:30 ubuntu-sd JobId 217: Ready to append to end of Volume "Incremental-0042" size=157040042
18-Sep 10:31 ubuntu-fd JobId 217: Fatal error: python-fd: Traceback (most recent call last):
  File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 42, in start_backup_file
    return bareos_fd_plugin_object.start_backup_file(context, savepkt)
  File "/usr/lib/bareos/plugins/BareosFdPluginVMware.py", line 202, in start_backup_file
    if not self.vadp.get_vm_disk_cbt(context):
  File "/usr/lib/bareos/plugins/BareosFdPluginVMware.py", line 1003, in get_vm_disk_cbt
    changeId=cbt_changeId)
  File "/usr/lib/python2.7/dist-packages/pyVmomi/VmomiSupport.py", line 580, in <lambda>
    self.f(*(self.args + (obj,) + args), **kwargs)
  File "/usr/lib/python2.7/dist-packages/pyVmomi/VmomiSupport.py", line 386, in _InvokeMethod
    return self._stub.InvokeMethod(self, info, args)
  File "/usr/lib/python2.7/dist-packages/pyVmomi/SoapAdapter.py", line 1366, in InvokeMethod
    raise obj # pylint: disable-msg=E0702
vim.fault.FileFault: (vim.fault.FileFault) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = 'Error caused by file /vmfs/volumes/5aeb277e-cbbff164-6fdc-90e2bae7a3a8/DNS DHCP TFTP Debian 9/DNS DHCP TFTP Debian 9-000001.vmdk',
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   file = '/vmfs/volumes/5aeb277e-cbbff164-6fdc-90e2bae7a3a8/DNS DHCP TFTP Debian 9/DNS DHCP TFTP Debian 9-000001.vmdk'
}

18-Sep 10:31 ubuntu-fd JobId 217: Fatal error: fd_plugins.c:2519 Command plugin: no fname in bareosCheckChanges packet.
18-Sep 10:31 ubuntu-sd JobId 217: Elapsed time=00:00:06, Transfer rate=0 Bytes/second
18-Sep 10:31 ubuntu-dir JobId 217: Error: Director's comm line to SD dropped.
18-Sep 10:31 ubuntu-dir JobId 217: Error: Bareos ubuntu-dir 17.2.4 (21Sep17):
  Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
  JobId: 217
  Job: vm-vmware.mrt.iram.es-DNS_DHCP_TFTP_Debian_9.2018-09-18_10.30.57_08
  Backup Level: Incremental, since=2018-09-15 21:04:28
  Client: "ubuntu-fd" 17.2.4 (21Sep17) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
  FileSet: "vm-vmware.mrt.iram.es-DNS DHCP TFTP Debian 9_fileset" 2018-09-13 21:00:03
  Pool: "Incremental" (From Job IncPool override)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "File" (From Job resource)
  Scheduled time: 18-Sep-2018 10:30:52
  Start time: 18-Sep-2018 10:30:59
  End time: 18-Sep-2018 10:31:06
  Elapsed time: 7 secs
  Priority: 10
  FD Files Written: 0
  SD Files Written: 0
  FD Bytes Written: 0 (0 B)
  SD Bytes Written: 0 (0 B)
  Rate: 0.0 KB/s
  Software Compression: None
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 156
  Volume Session Time: 1536833661
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 2
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Termination: *** Backup Error ***
Steps To Reproduce: Simply run the backup.
Additional Information: I do not know if it is relevant, but the VM in question has more than one disk. I can see the snapshot being created and then immediately deleted.
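For context, the failing call in the traceback is the plugin's CBT query, which in pyVmomi is `vm.QueryChangedDiskAreas(...)`. A minimal, hypothetical sketch of that call with the fault handled (the helper name and the fallback behaviour are illustrative, not the plugin's actual code):

```python
def query_changed_areas(vm, snapshot, device_key, change_id, start_offset=0):
    """Ask vSphere which blocks of one virtual disk changed since change_id.

    `vm` is a pyVmomi VirtualMachine object; the real call raises
    vim.fault.FileFault when the CBT state for the disk is inconsistent,
    which is what the traceback above shows.
    """
    try:
        return vm.QueryChangedDiskAreas(
            snapshot=snapshot,
            deviceKey=device_key,
            startOffset=start_offset,
            changeId=change_id,
        )
    except Exception as fault:  # vim.fault.FileFault in the report above
        # A backup tool could fall back to a full read of the disk here.
        print("CBT query failed for disk %s: %s" % (device_key, fault))
        return None
```

Passing `changeId='*'` asks for all allocated areas (the full-backup case); an incremental passes the changeId recorded by the previous backup, and a stale or invalid stored changeId is one known trigger for exactly this kind of FileFault.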
Tags: No tags attached.

Activities

fullmetalucard

2018-12-06 17:21

reporter   ~0003157

Hi there, same problem here.
OS: CentOS 7
Bareos version: 17.2.4-6

We can in fact back up a VM and restore it with the bareos-vmware-plugin.
But once a VM has been restored with Bareos, the next backup fails with this error:


06-Dec 17:12 bareos-sd JobId 179: Ready to append to end of Volume "VMS_StorageWeekend0005" size=37785999243

06-Dec 17:12 bareos-fd JobId 179: Fatal error: python-fd: Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 42, in start_backup_file
    return bareos_fd_plugin_object.start_backup_file(context, savepkt)
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 202, in start_backup_file
    if not self.vadp.get_vm_disk_cbt(context):
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 1003, in get_vm_disk_cbt
    changeId=cbt_changeId)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 580, in <lambda>
    self.f(*(self.args + (obj,) + args), **kwargs)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 386, in _InvokeMethod
    return self._stub.InvokeMethod(self, info, args)
  File "/usr/lib/python2.7/site-packages/pyVmomi/SoapAdapter.py", line 1374, in InvokeMethod
    raise obj # pylint: disable-msg=E0702
vim.fault.FileFault: (vim.fault.FileFault) {
   dynamicType = <unset>,
   dynamicProperty = (vmodl.DynamicProperty) [],
   msg = 'Error caused by file /vmfs/volumes/vsan:529e79064efc1e89-e36758101e043745/c142095c-a822-9d22-3a7e-0cc47a9389aa/test-restore3.vmdk',
   faultCause = <unset>,
   faultMessage = (vmodl.LocalizableMessage) [],
   file = '/vmfs/volumes/vsan:529e79064efc1e89-e36758101e043745/c142095c-a822-9d22-3a7e-0cc47a9389aa/test-restore3.vmdk'
}

06-Dec 17:12 bareos-fd JobId 179: Fatal error: fd_plugins.c:2519 Command plugin: no fname in bareosCheckChanges packet.


My VM has only one VMDK, and the error occurs only in this precise scenario.
We tried several things, such as disabling CBT before restoring the VM, and removing the VM from VMware's inventory and re-adding it, without success.

Any idea about that?
stephand

2019-01-08 16:07

developer   ~0003167

Could you please check whether the problem is fixed by removing any possibly existing snapshots and then running

vmware_cbt_tool.py ... --resetcbt

I have seen this kind of error message before, and resetting CBT did help.
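Under the hood, a CBT reset is essentially a disable/enable cycle of `changeTrackingEnabled` via a VM reconfigure task. A hedged sketch, assuming a pyVmomi-style VM object (`cbt_spec` below is a plain stand-in for `vim.vm.ConfigSpec`, so the snippet runs without a vCenter):

```python
def cbt_spec(enable):
    # With pyVmomi this would be vim.vm.ConfigSpec(changeTrackingEnabled=enable);
    # a plain dict stands in here so the sketch is self-contained.
    return {"changeTrackingEnabled": enable}

def reset_cbt(vm, make_spec=cbt_spec):
    """Disable, then re-enable Changed Block Tracking on a VM.

    After a reset the next backup has no valid previous changeId, so it
    must be a full; subsequent incrementals then work from fresh CBT data
    instead of the stale state that triggers the FileFault above.
    """
    vm.ReconfigVM_Task(spec=make_spec(False))  # drop the old CBT state
    vm.ReconfigVM_Task(spec=make_spec(True))   # start tracking afresh
```

Note that vSphere only applies a CBT change at the next stun/unstun cycle, so creating and deleting a throwaway snapshot (or power-cycling the VM) after the reconfigure is usually needed before the reset actually takes effect.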
alex.kanaykin

2019-02-01 13:57

reporter   ~0003239

Hi stephand!

vmware_cbt_tool.py ... --resetcbt
does not help. I have also tried --disablecbt and then --enablecbt.
The error is the same. CentOS 7, bareos-dir version 18.2.5 (30 January 2019).
bruno-at-bareos

2023-07-17 16:34

manager   ~0005198

Is this still reproducible with Bareos 22.1.0 and recent OS?
bruno-at-bareos

2023-07-31 16:00

manager   ~0005290

Closing after no answer; no reports with recent code.

Issue History

Date Modified Username Field Change
2018-09-18 10:53 fpierfed New Issue
2018-12-06 17:21 fullmetalucard Note Added: 0003157
2019-01-08 16:07 stephand Note Added: 0003167
2019-02-01 13:57 alex.kanaykin Note Added: 0003239
2023-07-17 16:34 bruno-at-bareos Assigned To => bruno-at-bareos
2023-07-17 16:34 bruno-at-bareos Status new => feedback
2023-07-17 16:34 bruno-at-bareos Note Added: 0005198
2023-07-31 16:00 bruno-at-bareos Status feedback => closed
2023-07-31 16:00 bruno-at-bareos Resolution open => unable to reproduce
2023-07-31 16:00 bruno-at-bareos Note Added: 0005290