Red Hat Bugzilla – Bug 1313009
[scale] Failed Disk Migration may lead to disks/vm in unregistered state
Last modified: 2017-03-01 03:32:43 EST
Description of problem:
Attempted to migrate 6 disks; during this migration 4 completed and 2 failed. An error was reported that the disk deletion was not successful.
After the failed disk migration, the disks and the VM were no longer listed in the oVirt web UI but were still listed via the REST API:
- /ovirt-engine/api/storagedomains/9e60a7d8-f94f-4dfe-a012-3b8ccc216178/vms;unregistered ==> shows 1 VM
- /ovirt-engine/api/storagedomains/9e60a7d8-f94f-4dfe-a012-3b8ccc216178/disks;unregistered ==> shows 6 disks
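The two queries above can be sketched as a small helper. The engine host name is a placeholder; the `;unregistered` matrix parameter on the `vms` and `disks` sub-collections is the part of the oVirt REST API the report relies on.

```python
# Sketch: build the oVirt REST API URLs that list entities left in an
# unregistered state on a storage domain. The engine host is hypothetical.
BASE = "https://engine.example.com/ovirt-engine/api"

def unregistered_url(domain_id: str, collection: str) -> str:
    """Return the URL listing unregistered 'vms' or 'disks' on a domain."""
    return f"{BASE}/storagedomains/{domain_id}/{collection};unregistered"

domain = "9e60a7d8-f94f-4dfe-a012-3b8ccc216178"
print(unregistered_url(domain, "vms"))
print(unregistered_url(domain, "disks"))
```

Fetching these URLs (e.g. with curl and admin credentials) returns the unregistered entities that no longer appear in the web UI.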
50 iSCSI domains: 48 are 20 GB each and 2 are 1 TB each
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Migrate 6 disks between iSCSI domains
2. VDSM reports host unreachable during migration
3. Host gets marked unresponsive and then recovers; the migration partially fails, with 2 of the 6 attempted disks unsuccessful.
Actual results:
Migration fails; the disks belonging to the domain and the connected VM are left in an unregistered state.
Expected results:
When the migration fails, the disks are still shown, or a message lists the disks left in an unregistered state.
sos log info was collected and is accessible in a private comment.
Per dev request we opened this bug, but the exact reproduction is unclear; it is _assumed_ that the failed deletion during cold disk migration left the disks in an unregistered state.
Bug tickets must have version flags set prior to targeting them to a release. Please ask the maintainer to set the correct version flags, and only then set the target milestone.
The live migration flow has been rewritten since 3.6, and the issue no longer seems relevant. Closing; please re-open if reproduced.