View Issue Details

ID: 0000492
Project: bareos-core
Category: [All Projects] vmware plugin
View Status: public
Last Update: 2015-07-17 20:02
Reporter: olivers
Assigned To: stephand
Priority: normal
Severity: feature
Reproducibility: always
Status: resolved
Resolution: fixed
Platform: Linux
OS: RHEL
OS Version: 7
Product Version: 14.2.5
Summary: 0000492: VMware plugin tries to back up an independent disk, resulting in backup job failure

Description: The VADP plugin works great, but it does not skip independent disks (which cannot be backed up via VADP). As a result, the backup job fails.

Dumping lnxv-7346.vmdk works fine, but the disk lnxv-7346_1.vmdk, which is marked independent, fails the whole backup job.

lnxv-9075-dir JobId 91: Error: Bareos lnxv-9075-dir 15.2.0 (25apr15):
Build OS: x86_64-redhat-linux-gnu redhat Red Hat Enterprise Linux Server release 6.5 (Santiago)
JobId: 91
Job: lnxv-7346.2015-07-03_09.52.39_14
Backup Level: Full (upgraded from Incremental)
Client: "lnxp-3762-fd" 15.2.0 (25apr15) x86_64-redhat-linux-gnu,redhat,Red Hat Enterprise Linux Server release 7.0 (Maipo)
FileSet: "lnxv-7346" 2015-07-03 09:52:39
Pool: "lnxv-7346" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "lnxv-7346" (From Pool resource)
Scheduled time: 03-Jul-2015 09:52:39
Start time: 03-Jul-2015 09:52:40
End time: 03-Jul-2015 10:12:24
Elapsed time: 19 mins 44 secs
Priority: 10
FD Files Written: 2
SD Files Written: 0
FD Bytes Written: 13,166,805,884 (13.16 GB)
SD Bytes Written: 1,860 (1.860 KB)
Rate: 11120.6 KB/s
Software Compression: 45.3 % (lz4)
VSS: no
Encryption: no
Accurate: no
Volume name(s): lnxv-7346.2015-07-03_09.52.39_14.91
Volume Session Id: 4
Volume Session Time: 1435909499
Last Volume Bytes: 13,178,568,795 (13.17 GB)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status: Error
SD termination status: Error
Termination: *** Backup Error ***

...

lnxp-3762-fd JobId 91: Fatal error: Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 42, in start_backup_file
    return bareos_fd_plugin_object.start_backup_file(context, savepkt)
  File "/usr/lib64/bareos/plugins/vmware_plugin/BareosFdPluginVMware.py", line 150, in start_backup_file
    if not self.vadp.get_vm_disk_cbt(context):
  File "/usr/lib64/bareos/plugins/vmware_plugin/BareosFdPluginVMware.py", line 842, in get_vm_disk_cbt
    changeId=cbt_changeId)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 566, in <lambda>
    self.f(*(self.args + (obj,) + args), **kwargs)
  File "/usr/lib/python2.7/site-packages/pyVmomi/VmomiSupport.py", line 375, in _InvokeMethod
    return self._stub.InvokeMethod(self, info, args)
  File "/usr/lib/python2.7/site-packages/pyVmomi/SoapAdapter.py", line 1301, in InvokeMethod
    raise obj # pylint: disable-msg=E0702
vim.fault.FileFault: (vim.fault.FileFault) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'Error caused by file /vmfs/volumes/50c06139-9c38bec2-1ac2-e83935b34d22/lnxv-7346/lnxv-7346_1.vmdk',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
file = '/vmfs/volumes/50c06139-9c38bec2-1ac2-e83935b34d22/lnxv-7346/lnxv-7346_1.vmdk'
}

...

Job {
  Name = lnxv-7346
  Client = lnxp-3762-fd
  Type = Backup
  Schedule = WeeklyCycle
  FileSet = lnxv-7346
  Max Full Interval = 6 days
  Pool = lnxv-7346
  Messages = Standard
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
}

Pool {
  Name = lnxv-7346
  Storage = lnxv-7346
  Pool Type = Backup
  Recycle = yes
  Recycle Oldest Volume = yes
  AutoPrune = yes
  Volume Retention = 14 days
  Maximum Volume Jobs = 1
  Maximum Volumes = 16
  Label Format = "${JobName}.${JobId}"
}

Storage {
  Name = lnxv-7346
  Address = 10.0.0.22
  Password = "xxxxxxx"
  Device = lnxv-7346
  Media Type = lnxv-7346
}

FileSet {
  Name = lnxv-7346
  Include {
    Options {
      signature = MD5
      compression = LZ4
    }
    Plugin = "python:module_path=/usr/lib64/bareos/plugins/vmware_plugin:module_name=bareos-fd-vmware:dc=datacenter:folder=/:vmname=lnxv-7346:vcserver=vcenter:vcuser=bareos:vcpass=xxxxxx"
  }
}
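For reference, the Plugin string in the FileSet above is a colon-separated list of key=value options following the `python:` prefix. The sketch below shows how such a string can be split into an option dict; it is only an illustration, not the plugin's actual parsing code.

```python
# Hypothetical illustration: split a Bareos python-plugin option string
# of the form "python:key1=value1:key2=value2:..." into a dict.
# This is NOT the plugin's actual parser, just a sketch of the format.

def parse_plugin_options(plugin_string):
    """Return the option dict from a 'python:k=v:k=v' plugin string."""
    parts = plugin_string.split(":")
    if parts[0] != "python":
        raise ValueError("not a python plugin string")
    options = {}
    for part in parts[1:]:
        # partition keeps everything after the first '=' as the value
        key, _, value = part.partition("=")
        options[key] = value
    return options

opts = parse_plugin_options(
    "python:module_path=/usr/lib64/bareos/plugins/vmware_plugin"
    ":module_name=bareos-fd-vmware:dc=datacenter:folder=/"
    ":vmname=lnxv-7346:vcserver=vcenter:vcuser=bareos:vcpass=xxxxxx"
)
print(opts["vmname"])  # lnxv-7346
print(opts["folder"])  # /
```

Note that a naive split like this would break if an option value itself contained a colon; it is only meant to show the shape of the string.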

Steps To Reproduce: "echo run job=lnxv-7346 yes | bconsole"
Tags: No tags attached.

Activities

stephand

2015-07-06 15:18

developer   ~0001780

Thanks for reporting this issue.
This case is indeed not yet covered by the plugin code. I'll take care of getting this fixed.

There are some discussions in forums for other such backup tools regarding independent disks; it seems that most users want their backup jobs to terminate with OK status even though independent disks are not backed up at all. Only a few users would like such jobs to end at least in a warning state.

As most users seem to be aware that independent disks are excluded from snapshots and don't get backed up, it's probably best to only produce an info-level message about skipped independent disks.
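The skip logic proposed here can be sketched as follows. In the vSphere API, an independent disk is identified by its backing's `diskMode` being `independent_persistent` or `independent_nonpersistent`. The classes below merely mock that object structure so the sketch is self-contained; this is an illustration of the idea, not the actual committed fix.

```python
# Sketch of the proposed behavior: independent disks are implicitly
# excluded from snapshots, so a CBT-based backup must skip them and
# emit only an info-level message. The two classes mock the shape of
# a pyVmomi VirtualDisk with its backing; they are NOT plugin code.

class DiskBacking:
    def __init__(self, file_name, disk_mode):
        self.fileName = file_name
        self.diskMode = disk_mode

class VirtualDisk:
    def __init__(self, file_name, disk_mode):
        self.backing = DiskBacking(file_name, disk_mode)

def backable_disks(disks, log=print):
    """Return disks eligible for CBT backup, logging skipped ones."""
    eligible = []
    for disk in disks:
        if disk.backing.diskMode.startswith("independent_"):
            # Independent disk: excluded from snapshots, cannot be
            # handled by CBT, so skip with an info message.
            log("Info: skipping independent disk %s" % disk.backing.fileName)
        else:
            eligible.append(disk)
    return eligible

disks = [
    VirtualDisk("lnxv-7346.vmdk", "persistent"),
    VirtualDisk("lnxv-7346_1.vmdk", "independent_persistent"),
]
for d in backable_disks(disks):
    print(d.backing.fileName)  # only lnxv-7346.vmdk
```

With this filtering in place, the job from the report would back up lnxv-7346.vmdk, log an info message for lnxv-7346_1.vmdk, and terminate OK instead of raising vim.fault.FileFault.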
mvwieringen adm

2015-07-17 20:02

administrator   ~0001788

Fix committed to bareos-vmware master branch with changesetid 5429.

Related Changesets

bareos-vmware: master 04ba4d7e

2015-07-17 19:37:50

stephand


Committer: mvwieringen adm

Ported: N/A

Details:
Skip independent disks, as that disk type is implicitly excluded
from snapshot so that they can not be handled by CBT based backup

Fixes 0000492: WMware plugin tries to backup independent disk resulting backup job failure
Affected Issues: 0000492
mod - vmware_plugin/BareosFdPluginVMware.py

Issue History

Date Modified Username Field Change
2015-07-03 14:27 olivers New Issue
2015-07-03 14:42 joergs Assigned To => stephand
2015-07-03 14:42 joergs Status new => assigned
2015-07-06 15:18 stephand Note Added: 0001780
2015-07-17 20:02 mvwieringen adm Changeset attached => bareos-vmware master 04ba4d7e
2015-07-17 20:02 mvwieringen adm Note Added: 0001788
2015-07-17 20:02 mvwieringen adm Status assigned => resolved
2015-07-17 20:02 mvwieringen adm Resolution open => fixed