View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0001533 | bareos-core | vmware plugin | public | 2023-04-29 13:41 | 2023-06-26 15:11 |
Reporter | CirocN | Assigned To | stephand | ||
Priority | urgent | Severity | major | Reproducibility | always |
Status | resolved | Resolution | fixed | ||
Platform | Linux | OS | RHEL (and clones) | OS Version | 8 |
Product Version | 22.0.3 | ||||
Summary | 0001533: Restoring vmware vms keeps failing, can't restore the data. | ||||
Description | First, I need to mention that I am new to Bareos: I have been working with it for the past couple of weeks to replace our old backup solution. I am trying to use the VMware plugin to back up our vmdks for disaster recovery, or so we can extract a specific file using guestfish. I have followed the official documentation for setting up the VMware plugin at https://docs.bareos.org/TasksAndConcepts/Plugins.html#general The backup of the VM on our vSphere server is successful, but when I try to restore it, the restore keeps failing with the following information:
bareos87.simlab.xyz-fd JobId 3: Fatal error: bareosfd: Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
    return bareos_fd_plugin_object.create_file(restorepkt)
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
    cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'
I have tried the same steps on Bareos 21 and 20, and also on Red Hat 9.1, and I kept getting the exact same error. |
Steps To Reproduce | After setting up the VMware plugin according to the official documents, I ran the backup using: run job=vm-websrv1 level=Full
The Web-GUI shows the job instantly, and after about 10 minutes the job's status shows success. Right after the backup is done, when I try to restore it using the Web-GUI or the console, I keep getting the same error:
19 2023-04-29 07:34:05 bareos-dir JobId 4: Error: Bareos bareos-dir 22.0.4~pre63.807bc5689 (17Apr23):
  Build OS:              Red Hat Enterprise Linux release 8.7 (Ootpa)
  JobId:                 4
  Job:                   RestoreFiles.2023-04-29_07.33.59_43
  Restore Client:        bareos-fd
  Start time:            29-Apr-2023 07:34:01
  End time:              29-Apr-2023 07:34:05
  Elapsed time:          4 secs
  Files Expected:        1
  Files Restored:        0
  Bytes Restored:        0
  Rate:                  0.0 KB/s
  FD Errors:             1
  FD termination status: Fatal Error
  SD termination status: Fatal Error
  Bareos binary info:    Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
  Job triggered by:      User
  Termination:           *** Restore Error ***
18 2023-04-29 07:34:05 bareos-dir JobId 4: Warning: File count mismatch: expected=1 , restored=0
17 2023-04-29 07:34:05 bareos-sd JobId 4: Releasing device "FileStorage" (/var/lib/bareos/storage).
16 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:454 Socket has errors=1 on call to client:192.168.111.136:9103
15 2023-04-29 07:34:05 bareos-sd JobId 4: Fatal error: stored/read.cc:146 Error sending to File daemon. ERR=Connection reset by peer
14 2023-04-29 07:34:05 bareos-sd JobId 4: Error: lib/bsock_tcp.cc:414 Wrote 65536 bytes to client:192.168.111.136:9103, but only 16384 accepted.
13 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Fatal error: bareosfd: Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 85, in create_file
    return bareos_fd_plugin_object.create_file(restorepkt)
  File "/usr/lib64/bareos/plugins/BareosFdPluginVMware.py", line 504, in create_file
    cbt_data = self.vadp.restore_objects_by_objectname[objectname]["data"]
KeyError: '/VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk'
12 2023-04-29 07:34:04 bareos-sd JobId 4: Forward spacing Volume "Full-0001" to file:block 0:627.
11 2023-04-29 07:34:04 bareos-sd JobId 4: Ready to read from volume "Full-0001" on device "FileStorage" (/var/lib/bareos/storage).
10 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
9 2023-04-29 07:34:04 bareos87.simlab.xyz-fd JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
8 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
7 2023-04-29 07:34:02 bareos-dir JobId 4: Handshake: Immediate TLS
6 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Client: bareos-fd at localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
5 2023-04-29 07:34:02 bareos-dir JobId 4: Probing client protocol... (result will be saved until config reload)
4 2023-04-29 07:34:02 bareos-dir JobId 4: Using Device "FileStorage" to read.
3 2023-04-29 07:34:02 bareos-dir JobId 4: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
2 2023-04-29 07:34:02 bareos-dir JobId 4: Connected Storage daemon at bareos87.simlab.xyz:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
1 2023-04-29 07:34:01 bareos-dir JobId 4: Start Restore Job RestoreFiles.2023-04-29_07.33.59_43 |
Tags | No tags attached. | ||||
|
|
As all continuous tests are working perfectly, you have most likely missed something in your configuration; without the configuration, nobody will be able to find the problem. Also, if you want to show a job result, screenshots are certainly the worst way. Please attach the text log instead, obtained with bconsole <<< "list joblog jobid=2" |
|
I have found out that this is the result of a mismatch between the backup path and the restore path. The backup is created with /VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk while the restore looks for /VMS/Datacenter/backup_client/[DatastoreVM] backup_client/backup_client.vmdk. I have noticed that you are trying to strip it off in BareosFdPluginVMware.py at line 366, but it is not working. CODE:

if "uuid" in self.options:
    self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
else:
    self.vadp.backup_path = "/VMS/%s/%s/%s" % (
        self.options["dc"],
        self.options["folder"].strip("/"),
        self.options["vmname"],
    )

My configuration for the folder is precisely what is suggested in the official document: folder=/ The workaround I have found for now is to edit the BareosFdPluginVMware.py Python script to the following code, but it needs to be properly fixed: CODE:

if "uuid" in self.options:
    self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
else:
    self.vadp.backup_path = "/VMS/%s%s/%s" % (
        self.options["dc"],
        self.options["folder"].strip("/"),
        self.options["vmname"],
    )
|
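For illustration, here is a minimal standalone sketch (using the option values from this report: dc=Datacenter, folder=/; "backup_client" stands in for the VM name) of how folder=/ produces the double slash at backup time and why the restore lookup then fails:

# Option values as configured in this report.
options = {"dc": "Datacenter", "folder": "/", "vmname": "backup_client"}

# Path built at backup time by the unpatched plugin code quoted above:
backup_path = "/VMS/%s/%s/%s" % (
    options["dc"],
    options["folder"].strip("/"),  # "/" strips to "", leaving an empty path component
    options["vmname"],
)
print(backup_path)  # -> /VMS/Datacenter//backup_client (double slash)

# Restore objects are keyed by the path built at backup time, so a lookup
# with the single-slash name used at restore time misses, which is the
# KeyError raised in create_file:
restore_objects_by_objectname = {backup_path: {"data": "..."}}
print("/VMS/Datacenter/backup_client" in restore_objects_by_objectname)  # -> False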
|
Thanks for reporting this; confirming that it doesn't work properly when using folder=/ for backups. The root cause is the double slash in the backup path, as in your example: /VMS/Datacenter//backup_client/[DatastoreVM] backup_client/backup_client.vmdk. I will provide a proper fix. |
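For illustration only (this is not the actual fix from PR 1484), one hypothetical way to avoid the double slash for folder values of "/", "" and "/subfolder/" alike is to join only the non-empty path components:

def build_backup_path(dc, folder, vmname):
    # Keep only non-empty components of the folder option, so "/", ""
    # and "/subfolder/" all produce single-slash paths.
    parts = ["VMS", dc] + [p for p in folder.split("/") if p] + [vmname]
    return "/" + "/".join(parts)

print(build_backup_path("Datacenter", "/", "backup_client"))
# /VMS/Datacenter/backup_client
print(build_backup_path("Datacenter", "/prod/web", "backup_client"))
# /VMS/Datacenter/prod/web/backup_client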
|
Hi, here's a more elegant solution that doesn't break the current design, in case you're using a mixed with- and without-folder VM hierarchy. Hope this helps others. BR Arnaud BareosFdPluginVMware.patch (612 bytes)
--- /usr/lib/bareos/plugins/BareosFdPluginVMware.py	2023-06-12 09:05:02.000000000 +0000
+++ BareosFdPluginVMware.py	2023-06-15 13:45:46.558101477 +0000
@@ -423,6 +423,11 @@
         if "uuid" in self.options:
             self.vadp.backup_path = "/VMS/%s" % (self.options["uuid"])
+        if self.options["folder"] == "/" or self.options["folder"] == "":
+            self.vadp.backup_path = "/VMS/%s/%s" % (
+                self.options["dc"],
+                self.options["vmname"],
+            )
         else:
             self.vadp.backup_path = "/VMS/%s/%s/%s" % (
                 self.options["dc"],
                 self.options["folder"].strip("/"),
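To illustrate the effect of this patch, a standalone rendition of the patched branch (the uuid case omitted), showing the resulting paths for the root folder and for a hypothetical subfolder:

def backup_path(options):
    # Patched logic: treat folder "/" (or empty) specially, avoiding the
    # empty component that caused the double slash.
    if options["folder"] == "/" or options["folder"] == "":
        return "/VMS/%s/%s" % (options["dc"], options["vmname"])
    return "/VMS/%s/%s/%s" % (
        options["dc"],
        options["folder"].strip("/"),
        options["vmname"],
    )

print(backup_path({"dc": "Datacenter", "folder": "/", "vmname": "backup_client"}))
# /VMS/Datacenter/backup_client  (single slash, matches the restore lookup)
print(backup_path({"dc": "Datacenter", "folder": "/prod", "vmname": "websrv1"}))
# /VMS/Datacenter/prod/websrv1  (with-folder hierarchy still works)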
|
Thanks for your proposed solution. PR 1484 already contains a fix for this issue, which will also work when using a mixed with- and without-folder VM hierarchy. See https://github.com/bareos/bareos/pull/1484 Regards, Stephan |
|
Fix committed to bareos master branch with changesetid 17739. | |
bareos: master 3d30592b 2023-06-12 16:43 Committer: pstorz Ported: N/A |
VMware Plugin: improve snapshot cleanup, bootOrder

If leftover snapshots exist from previous backups, they are now cleaned up before taking a new snapshot. Additionally, the snapshot cleanup after backup now works more reliably by reconnecting to the vCenter server if necessary. Code was refactored to consistently use methods from pyVim.task.

- Fix a bug when folder=/ is used.
  Fixes: 0001533: Restoring vmware vms keeps failing, can't restore the data

- Fix restore when bootOrder was set
  When bootOrder was set to a virtualDisk at backup time, ensure the restore works by setting the bootOrder only after the disks were added to the recreated VM.

- Fix restore when virtualCdrom path is invalid
  The restore fails when recreating a VM if the backing file of a virtual CDROM is inaccessible; this can occur when it was connected to an ISO file on an NFS datastore that is not accessible at restore time. This change creates a disconnected virtual CDROM instead, to allow the restore to finish successfully.

Co-authored-by: Philipp Storz <philipp.storz@bareos.com> |
Affected Issues 0001533 |
|
mod - core/src/plugins/filed/python/vmware/BareosFdPluginVMware.py |
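As an aside, the snapshot cleanup described in the commit message can be sketched with pyVim.task as follows. This is an illustration under assumptions (an established pyVmomi connection, a vim.VirtualMachine object vm, and a hypothetical snapshot name), not the actual plugin code:

from pyVim.task import WaitForTask

def remove_leftover_snapshots(vm, snapshot_name):
    # Walk the VM's snapshot tree and remove any leftover snapshot whose
    # name matches the one used for backups, before taking a new snapshot.
    if vm.snapshot is None:
        return
    stack = list(vm.snapshot.rootSnapshotList)
    while stack:
        tree = stack.pop()
        if tree.name == snapshot_name:
            # WaitForTask blocks until the vCenter task completes
            WaitForTask(tree.snapshot.RemoveSnapshot_Task(removeChildren=False))
        stack.extend(tree.childSnapshotList)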
Date Modified | Username | Field | Change |
---|---|---|---|
2023-04-29 13:41 | CirocN | New Issue | |
2023-04-29 13:41 | CirocN | File Added: Restore_Failure.png | |
2023-04-29 13:41 | CirocN | File Added: Backup_Success.png | |
2023-05-03 15:37 | bruno-at-bareos | Note Added: 0004995 | |
2023-05-03 23:32 | CirocN | Note Added: 0005003 | |
2023-05-03 23:32 | CirocN | File Added: BareosFdPluginVMware.png | |
2023-05-03 23:32 | CirocN | File Added: docs.bareos.org-instruction.png | |
2023-06-06 17:52 | stephand | Assigned To | => stephand |
2023-06-06 17:52 | stephand | Status | new => assigned |
2023-06-06 17:58 | stephand | Status | assigned => acknowledged |
2023-06-06 17:58 | stephand | Note Added: 0005065 | |
2023-06-15 16:21 | awillem | Note Added: 0005084 | |
2023-06-15 16:21 | awillem | File Added: BareosFdPluginVMware.patch | |
2023-06-16 11:20 | stephand | Note Added: 0005085 | |
2023-06-16 11:22 | stephand | Status | acknowledged => assigned |
2023-06-26 15:11 | pstorz | Changeset attached | => bareos master 3d30592b |
2023-06-26 15:11 | stephand | Note Added: 0005100 | |
2023-06-26 15:11 | stephand | Status | assigned => resolved |
2023-06-26 15:11 | stephand | Resolution | open => fixed |