Bug 1668000
| Summary: | [Backup and restore] Restore from backup fails over NFS. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Nikolai Sednev <nsednev> |
| Component: | ovirt-hosted-engine-setup | Assignee: | Sandro Bonazzola <sbonazzo> |
| Status: | CLOSED DUPLICATE | QA Contact: | meital avital <mavital> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.8 | CC: | lsurette, stirabos |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-01-21 16:51:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Integration | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | sosreport from first host puma18 (attachment 1522187) | | |
This is exactly https://bugzilla.redhat.com/show_bug.cgi?id=1644748. Please reopen that bug if you hit this with up-to-date releases.

*** This bug has been marked as a duplicate of bug 1644748 ***

The error

    tar: baf3c8f3-b711-4324-8aed-59b66f49c214.ovf: Not found in archive
    tar: Exiting with failure status due to previous errors
    xargs: sudo: terminated by signal 13

means that the VM OVF wasn't in the OVF_STORE as expected.
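The check that fails here can be rerun by hand to see what the OVF_STORE volume actually contains. A minimal sketch, built from the cmd string in the error output in the description below; the UUIDs are the ones from this report (substitute your own), and the final tar call lists every OVF in the store rather than testing for a single member:

```bash
#!/bin/bash
# UUIDs taken from this report's error output; replace with your own.
SPUUID=eeb0ba6e-1d66-11e9-9318-00163e7bb860   # storage pool
SDUUID=fc743c45-ba95-4a4e-87b8-81b00b677e41   # storage domain
IMGUUID=e3b53f5f-4db9-47d5-9a7e-43e87dbbc05a  # OVF_STORE image
VOLUUID=b4dd38e7-5a43-42c2-a094-ee0b7467368b  # OVF_STORE volume

# Activate the volume and extract its path, as the playbook task does.
path=$(vdsm-client Image prepare \
         storagepoolID="$SPUUID" storagedomainID="$SDUUID" \
         imageID="$IMGUUID" volumeID="$VOLUUID" \
       | grep path | awk '{ print $2 }')

# The OVF_STORE volume holds a tar archive with one <vm-id>.ovf per VM.
# Listing it without naming a member shows everything it contains; in
# this bug, the hosted-engine VM's OVF is missing from that listing.
sudo -u vdsm dd if="$path" | tar -tvf -
```

If the expected `<vm-id>.ovf` shows up in the listing, the restore failure lies elsewhere; if not, the VM OVF was never written to the OVF_STORE, which is the condition described above.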
Created attachment 1522187 [details]
sosreport from first host puma18

Description of problem:
During restore, the deployment playbook fails because the hosted-engine VM's OVF (baf3c8f3-b711-4324-8aed-59b66f49c214.ovf) is not found in either OVF_STORE volume:

    [ ERROR ] {u'_ansible_parsed': True,
      u'stderr_lines': [u'tar: baf3c8f3-b711-4324-8aed-59b66f49c214.ovf: Not found in archive',
                        u'tar: Exiting with failure status due to previous errors',
                        u'xargs: sudo: terminated by signal 13'],
      u'changed': True,
      u'end': u'2019-01-21 18:12:55.497834',
      u'_ansible_item_label': {u'image_id': u'b4dd38e7-5a43-42c2-a094-ee0b7467368b', u'name': u'OVF_STORE', u'id': u'e3b53f5f-4db9-47d5-9a7e-43e87dbbc05a'},
      u'stdout': u'',
      u'failed': True,
      u'_ansible_item_result': True,
      u'msg': u'non-zero return code',
      u'rc': 2,
      u'start': u'2019-01-21 18:12:54.358643',
      u'attempts': 12,
      u'cmd': u"vdsm-client Image prepare storagepoolID=eeb0ba6e-1d66-11e9-9318-00163e7bb860 storagedomainID=fc743c45-ba95-4a4e-87b8-81b00b677e41 imageID=e3b53f5f-4db9-47d5-9a7e-43e87dbbc05a volumeID=b4dd38e7-5a43-42c2-a094-ee0b7467368b | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - baf3c8f3-b711-4324-8aed-59b66f49c214.ovf",
      u'item': {u'image_id': u'b4dd38e7-5a43-42c2-a094-ee0b7467368b', u'name': u'OVF_STORE', u'id': u'e3b53f5f-4db9-47d5-9a7e-43e87dbbc05a'},
      u'delta': u'0:00:01.139191',
      u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"vdsm-client Image prepare storagepoolID=eeb0ba6e-1d66-11e9-9318-00163e7bb860 storagedomainID=fc743c45-ba95-4a4e-87b8-81b00b677e41 imageID=e3b53f5f-4db9-47d5-9a7e-43e87dbbc05a volumeID=b4dd38e7-5a43-42c2-a094-ee0b7467368b | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - baf3c8f3-b711-4324-8aed-59b66f49c214.ovf", u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}},
      u'stdout_lines': [],
      u'stderr': u'tar: baf3c8f3-b711-4324-8aed-59b66f49c214.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors\nxargs: sudo: terminated by signal 13',
      u'_ansible_no_log': False}

    [ ERROR ] {u'_ansible_parsed': True,
      u'stderr_lines': [u'tar: baf3c8f3-b711-4324-8aed-59b66f49c214.ovf: Not found in archive',
                        u'tar: Exiting with failure status due to previous errors',
                        u'xargs: sudo: terminated by signal 13'],
      u'changed': True,
      u'end': u'2019-01-21 18:15:14.559713',
      u'_ansible_item_label': {u'image_id': u'72196080-fa4c-41cd-9b4b-1b604defb077', u'name': u'OVF_STORE', u'id': u'756c6eb3-34ce-4e13-a39a-f8e9453dd116'},
      u'stdout': u'',
      u'failed': True,
      u'_ansible_item_result': True,
      u'msg': u'non-zero return code',
      u'rc': 2,
      u'start': u'2019-01-21 18:15:13.408285',
      u'attempts': 12,
      u'cmd': u"vdsm-client Image prepare storagepoolID=eeb0ba6e-1d66-11e9-9318-00163e7bb860 storagedomainID=fc743c45-ba95-4a4e-87b8-81b00b677e41 imageID=756c6eb3-34ce-4e13-a39a-f8e9453dd116 volumeID=72196080-fa4c-41cd-9b4b-1b604defb077 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - baf3c8f3-b711-4324-8aed-59b66f49c214.ovf",
      u'item': {u'image_id': u'72196080-fa4c-41cd-9b4b-1b604defb077', u'name': u'OVF_STORE', u'id': u'756c6eb3-34ce-4e13-a39a-f8e9453dd116'},
      u'delta': u'0:00:01.151428',
      u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"vdsm-client Image prepare storagepoolID=eeb0ba6e-1d66-11e9-9318-00163e7bb860 storagedomainID=fc743c45-ba95-4a4e-87b8-81b00b677e41 imageID=756c6eb3-34ce-4e13-a39a-f8e9453dd116 volumeID=72196080-fa4c-41cd-9b4b-1b604defb077 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - baf3c8f3-b711-4324-8aed-59b66f49c214.ovf", u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}},
      u'stdout_lines': [],
      u'stderr': u'tar: baf3c8f3-b711-4324-8aed-59b66f49c214.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors\nxargs: sudo: terminated by signal 13',
      u'_ansible_no_log': False}

    [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-ha-2.2.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.33-1.el7ev.noarch
rhvm-appliance-4.2-20190108.0.el7.noarch
Linux 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.6 (Maipo)

How reproducible:
100%

Steps to Reproduce:
1. Deploy an HE environment over FC on a pair of hosts and create some guest VMs.
2. Make sure the engine is running on the first host and the SPM on the second host, then place the environment in global maintenance.
3. Back up the engine.
4. Copy the backup to a safe place and reprovision the first host.
5. Copy the backup file back to the first host and restore it using "hosted-engine --deploy --restore-from-file=backupfile" (see the sketch after this section).

Actual results:
Restore fails.

Expected results:
Restore should succeed.

Additional info:
Sosreport from the host is attached.
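For reference, the backup/restore flow in the steps above maps to the following commands. A minimal sketch, assuming the standard engine-backup tool for step 3 and an illustrative backup file name; only the hosted-engine restore command is quoted verbatim in this report:

```bash
#!/bin/bash
# Step 2 (on a host): place the environment in global maintenance.
hosted-engine --set-maintenance --mode=global

# Step 3 (on the engine VM): create the backup. Assumed engine-backup
# invocation and file name; the exact command is not given in this report.
engine-backup --mode=backup \
              --file=/root/engine.backup \
              --log=/root/engine-backup.log

# Step 5 (on the reprovisioned first host, after copying the backup
# file back): redeploy hosted-engine, restoring from the backup file.
hosted-engine --deploy --restore-from-file=/root/engine.backup
```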