Bug 1313009 - [scale] Failed Disk Migration may lead to disks/vm in unregistered state
Status: CLOSED WORKSFORME
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.2.0
Target Release: ---
Assigned To: Daniel Erez
QA Contact: eberman
Depends On:
Blocks:
Reported: 2016-02-29 12:24 EST by mlehrer
Modified: 2017-03-01 03:32 EST
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-01 03:32:43 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
tnisan: ovirt-4.2?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments: None
Description mlehrer 2016-02-29 12:24:33 EST
Description of problem:

Attempted a disk migration of 6 disks; during this migration, 4 completed and 2 failed. An error was reported that disk deletion was not successful.
After the failed disk migration, the disks and the VM were no longer listed in the oVirt web UI but are still listed via the REST API:

   - /ovirt-engine/api/storagedomains/9e60a7d8-f94f-4dfe-a012-3b8ccc216178/vms;unregistered   ==> shows 1 VM
   - /ovirt-engine/api/storagedomains/9e60a7d8-f94f-4dfe-a012-3b8ccc216178/disks;unregistered ==> shows 6 disks
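
For reference, the two listings above come from the `;unregistered` matrix parameter on the storage domain's sub-collections. A minimal sketch of how those URLs are formed; the storage-domain UUID is taken from this report, while the engine host name and credentials are assumptions:

```python
# Build the REST API URLs that list unregistered entities on a storage domain.
# The ";unregistered" matrix parameter is appended to the collection path.

def unregistered_url(base: str, sd_id: str, collection: str) -> str:
    """Return the URL listing unregistered VMs or disks on a storage domain."""
    return f"{base}/storagedomains/{sd_id}/{collection};unregistered"

BASE = "https://engine.example.com/ovirt-engine/api"  # hypothetical engine host
SD = "9e60a7d8-f94f-4dfe-a012-3b8ccc216178"           # storage domain from the report

vms_url = unregistered_url(BASE, SD, "vms")
disks_url = unregistered_url(BASE, SD, "disks")
print(vms_url)
print(disks_url)

# These would typically be fetched with HTTP basic auth, e.g.:
#   curl -k -u admin@internal:password "<url>"
```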


Env details:
2 hosts
50 iSCSI domains; all are 20 GB except 2 of 1 TB each


Version-Release number of selected component (if applicable):

vdsm-python-4.17.21-0.el7ev.noarch
vdsm-jsonrpc-4.17.21-0.el7ev.noarch
vdsm-4.17.21-0.el7ev.noarch
vdsm-yajsonrpc-4.17.21-0.el7ev.noarch
vdsm-hook-vmfex-dev-4.17.21-0.el7ev.noarch
vdsm-xmlrpc-4.17.21-0.el7ev.noarch
vdsm-cli-4.17.21-0.el7ev.noarch
vdsm-infra-4.17.21-0.el7ev.noarch



How reproducible:
Unclear; not reliably reproducible.

Steps to Reproduce:
1. Migrate 6 disks between iSCSI domains
2. VDSM reports the host unreachable during the migration
3. The host is marked unresponsive and then recovers; the migration partially fails, with 2 of the 6 attempted disks unsuccessful

Actual results:

The migration fails; the disks belonging to the domain and the connected VM are left in an unregistered state.

Expected results:

When the migration fails, the disks should still be shown, or a message should list the disks left in an unregistered state.
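
As a side note (an assumption based on later oVirt REST API versions, not something stated in this report), unregistered entities can usually be recovered by POSTing to a `register` action under the storage domain. A hedged sketch of the URL involved; the engine host, credentials, and VM id here are placeholders:

```python
# Hypothetical sketch: build the "register" action URL for an unregistered VM
# on a storage domain, as exposed by later oVirt REST API versions (assumption).

def register_vm_url(base: str, sd_id: str, vm_id: str) -> str:
    """Return the register-action URL for an unregistered VM on a storage domain."""
    return f"{base}/storagedomains/{sd_id}/vms/{vm_id}/register"

BASE = "https://engine.example.com/ovirt-engine/api"  # hypothetical engine host
SD = "9e60a7d8-f94f-4dfe-a012-3b8ccc216178"           # storage domain from the report
VM = "00000000-0000-0000-0000-000000000000"           # placeholder VM id

print(register_vm_url(BASE, SD, VM))

# The action would typically be invoked with an empty <action/> body, e.g.:
#   curl -k -u admin@internal:password -X POST -H "Content-Type: application/xml" \
#        -d "<action/>" "<url>"
```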

Additional info:

sos log info was collected and is accessible in a private comment

Per dev request we opened this bug, but the exact reproduction is unclear; it is _assumed_ that the failed deletion during cold disk migration left the disks in an unregistered state.
Comment 2 Red Hat Bugzilla Rules Engine 2016-03-02 06:21:31 EST
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.
Comment 3 Daniel Erez 2017-03-01 03:32:43 EST
The live migration flow has been rewritten since 3.6, and the issue no longer seems relevant. Closing; please re-open if reproduced.
