Bug 1927199
| Summary: | Unable to import VMs from SD2 based on a template located at SD1 | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | Miguel Martin <mmartinv> |
| Component: | Backup-Restore.VMs | Assignee: | Ahmad Khiet <akhiet> |
| Status: | CLOSED NOTABUG | QA Contact: | Avihai <aefrat> |
| Severity: | high | Docs Contact: | bugs <bugs> |
| Priority: | unspecified | | |
| Version: | 4.4.5.11 | CC: | akhiet, aoconnor, bugs, eshenitz |
| Target Milestone: | ovirt-4.4.6 | Flags: | aoconnor: blocker- |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-04-21 10:57:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Miguel Martin 2021-02-10 10:45:27 UTC

Hi, I tried to reproduce the bug following the provided steps and could not reproduce it. Can you please verify the reproducing steps, and whether it is still reproducible? I will try to reproduce it again and let you know.

I didn't mention it in the bug description, but both SD1 and SD2 were block-based SDs (iSCSI, to be precise); I am not sure whether this is important or not.

Hi, I did the exact steps from the bug with a thin and a preallocated disk on iSCSI, but can't reproduce it. Let's set up a meeting and check it together. Thanks.

Hi Miguel, after our meeting yesterday I tried several times on a clean environment. As I said in the meeting, there was something wrong in my iSCSI domains after testing another bug. Following yesterday's steps, the bug did not reproduce in my environment. Let me know if you would like another meeting to verify it together. Thanks.

Hi, to conclude our meeting: we found that the bug did not reproduce in my environment. Miguel will retest the reproducing steps, and maybe upgrade to a newer version, to verify that it no longer reproduces. Thank you.

It's strange; this issue was detected on 4.4.5.4 in my home lab (I needed to follow the steps in the description above to change the iSCSI portal IP of my Asustor NAS). I was able to reproduce it in a 4.4.5.11 nested environment with an iSCSI portal served by a FreeNAS appliance by following the mentioned steps. After I upgraded that nested environment to 4.4.6.4 I was not able to reproduce it anymore, so I installed a fresh 4.4.5.11 nested environment just to verify that it was a problem with the 4.4.5 versions and that the problem is already solved in later versions, but I am not able to reproduce it with 4.4.5.11 now either :(

Now it looks like the template is also exported/copied to SD2, so it is possible to import the VM. When the issue was detected and reproduced, the template was not present on SD2, so I guess it was not exported/copied to it for some reason.
I found more problems while trying to reproduce the issue. For example, detaching and removing the storage domains (with the "Format domain" checkbox selected) does not wipe all the data on the storage domain: trying to create a new storage domain on the same LUN always failed with the following error:

```
VDSM ovirt-hypervisor-445-2 command CreateStorageDomainVDS failed: Cannot create Logical Volume:
"vgname=cc96ed73-d06f-4c26-ad66-f3e22ed31fbe lvname=master err=['WARNING: ext3 signature detected on
/dev/cc96ed73-d06f-4c26-ad66-f3e22ed31fbe/master at offset 1080. Wipe it? [y/n]: [n]',
' Aborted wiping of ext3.', ' 1 existing signature left on the device.',
' Failed to wipe signatures on logical volume cc96ed73-d06f-4c26-ad66-f3e22ed31fbe/master.',
' Aborting. Failed to wipe start of new LV.']"
```

I repeated the steps several times, and I always had to manually 'dd' the LUN before I could create a new storage domain on it.

I also found that importing both SDs at the same time from the same iSCSI portal takes a long time, and although the SDs seemed to be imported correctly, none of the existing templates or VMs were visible and both SDs appeared empty:

```
2021-04-21 09:01:41.307+00 | Storage Domain sd1 was attached to Data Center Default by admin@internal-authz
2021-04-21 09:01:41.133+00 | Storage Domain sd2 was attached to Data Center Default by admin@internal-authz
2021-04-21 09:01:11.414+00 | Storage Pool Manager runs on Host ovirt-hypervisor-445-1.example.com (Address: ovirt-hypervisor-445-1.example.com), Data Center Default.
2021-04-21 09:01:09.82+00  | Data Center is being initialized, please wait for initialization to complete.
2021-04-21 09:00:46.661+00 | VDSM command AttachStorageDomainVDS failed: Message timeout which can be caused by communication issues
2021-04-21 08:57:41.06+00  | Storage Domain sd1 was added by admin@internal-authz
2021-04-21 08:57:40.496+00 | Disk Profile sd1 was successfully added (User: admin@internal-authz).
2021-04-21 08:57:39.911+00 | Storage Domain sd2 was added by admin@internal-authz
2021-04-21 08:57:39.325+00 | Disk Profile sd2 was successfully added (User: admin@internal-authz).
2021-04-21 08:56:51.388+00 | Storage Domain sd2 was removed by admin@internal-authz
2021-04-21 08:56:45.343+00 | Storage Domain sd1 was removed by admin@internal-authz
2021-04-21 08:56:42.067+00 | Storage Domain sd2 was detached from Data Center Default by admin@internal-authz
2021-04-21 08:56:37.206+00 | Storage Domain sd1 was detached from Data Center Default by admin@internal-authz
```

So, to follow the steps provided in the description, I had to attach the SDs one by one. It looks like both problems are reproducible only on the 4.4.5.11 nested environment, so I guess they were fixed in later versions, or at least in 4.4.6.4. Honestly, I am not sure whether I am missing some steps to reproduce it or did something different the first time; I will review the audit logs today and let you know if I find something.

Thank you for the update. Ahmad, please close the bug for now as NOTABUG. Miguel, re-open if you manage to reproduce it.

Closing as NOTABUG as mentioned in comment #10.
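The manual 'dd' wipe mentioned in the comments above can be sketched roughly as follows. This is only an illustration, not the exact commands used on the bug's environment: a real run would target the LUN's multipath device (the path would be hypothetical here), so a small temporary file stands in for the LUN to keep the commands harmless.

```shell
# Sketch of manually wiping the start of a LUN so that stale LVM/ext3
# signatures no longer make CreateStorageDomainVDS abort with
# "Failed to wipe signatures on logical volume ... Failed to wipe start of new LV."
# A 4 MiB temp file stands in for the LUN (a real run would use the
# multipath device under /dev/mapper/ -- path not shown, hypothetical).
LUN=$(mktemp)
dd if=/dev/urandom of="$LUN" bs=1M count=4 2>/dev/null   # simulate leftover data on the LUN

# Zero the start of the device, where LVM metadata and filesystem
# signatures (like the ext3 signature at offset 1080 in the error) live:
dd if=/dev/zero of="$LUN" bs=1M count=4 conv=notrunc 2>/dev/null

# Confirm the region is now all zeroes:
ZERO=$(mktemp)
dd if=/dev/zero of="$ZERO" bs=1M count=4 2>/dev/null
cmp -s "$LUN" "$ZERO" && echo "wiped"
```

On a real block device, `wipefs -a <device>` from util-linux is a more targeted alternative, since it removes only the known signatures instead of zeroing blindly; zeroing the first few MiB with `dd` is the blunt approach described in the comment.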