Bug 1313009 - [scale] Failed Disk Migration may lead to disks/vm in unregistered state
Summary: [scale] Failed Disk Migration may lead to disks/vm in unregistered state
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.2.0
Target Release: ---
Assignee: Daniel Erez
QA Contact: eberman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-29 17:24 UTC by mlehrer
Modified: 2017-03-01 08:32 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-01 08:32:43 UTC
oVirt Team: Storage
Embargoed:
tnisan: ovirt-4.2?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?



Description mlehrer 2016-02-29 17:24:33 UTC
Description of problem:

Attempted to migrate 6 disks; during this migration, 4 completed and 2 failed. An error was reported that the disk deletion was not successful.
After the failed disk migration, the disks and VM were no longer listed in the oVirt web UI, but they are still listed via the REST API:

   - /ovirt-engine/api/storagedomains/9e60a7d8-f94f-4dfe-a012-3b8ccc216178/vms;unregistered   ==> shows 1 VM
   - /ovirt-engine/api/storagedomains/9e60a7d8-f94f-4dfe-a012-3b8ccc216178/disks;unregistered ==> shows 6 disks listed
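
For reference, a minimal sketch of querying these two endpoints with Python and the requests library; the engine FQDN, credentials, and CA path below are placeholders, not values from this environment:

    import requests

    ENGINE = 'https://engine.example.com'           # placeholder engine FQDN
    SD_ID = '9e60a7d8-f94f-4dfe-a012-3b8ccc216178'  # storage domain from this report
    AUTH = ('admin@internal', 'password')           # placeholder credentials
    CA = '/etc/pki/ovirt-engine/ca.pem'             # placeholder CA bundle path

    for collection in ('vms', 'disks'):
        # The ';unregistered' matrix parameter asks the engine for entities that
        # exist on the storage domain but are no longer registered in the setup.
        url = '{0}/ovirt-engine/api/storagedomains/{1}/{2};unregistered'.format(
            ENGINE, SD_ID, collection)
        resp = requests.get(url, auth=AUTH, verify=CA,
                            headers={'Accept': 'application/xml'})
        resp.raise_for_status()
        print(collection, resp.text)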


Env details:
2 hosts
50 iSCSI storage domains, all 20 GB except for 2 that are 1 TB each


Version-Release number of selected component (if applicable):

vdsm-python-4.17.21-0.el7ev.noarch
vdsm-jsonrpc-4.17.21-0.el7ev.noarch
vdsm-4.17.21-0.el7ev.noarch
vdsm-yajsonrpc-4.17.21-0.el7ev.noarch
vdsm-hook-vmfex-dev-4.17.21-0.el7ev.noarch
vdsm-xmlrpc-4.17.21-0.el7ev.noarch
vdsm-cli-4.17.21-0.el7ev.noarch
vdsm-infra-4.17.21-0.el7ev.noarch



How reproducible:
unclear/not sure

Steps to Reproduce:
1. Migrate 6 disks between iSCSI storage domains (see the sketch after this list).
2. VDSM reports the host as unreachable during the migration.
3. The host is marked unresponsive and then recovers, so the migration partially fails: 2 of the 6 attempted disks do not complete.
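
A minimal sketch of step 1, moving a disk between storage domains. This uses the v4 Python SDK (ovirtsdk4), which postdates the 3.6 setup in this report; the engine URL, credentials, disk name, and target domain name are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='password',                                # placeholder
        ca_file='/etc/pki/ovirt-engine/ca.pem',
    )
    try:
        disks_service = connection.system_service().disks_service()
        # Look up the disk by name and move it (cold) to another iSCSI domain.
        disk = disks_service.list(search='name=vm1_Disk1')[0]  # placeholder disk name
        disks_service.disk_service(disk.id).move(
            storage_domain=types.StorageDomain(name='iscsi-target-sd'),  # placeholder
        )
    finally:
        connection.close()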

Actual results:

The migration fails; the disks belonging to the domain and the attached VM are left in an unregistered state.

Expected results:

When the migration fails, the disks should still be shown in the UI, or a message should list the disks left in an unregistered state.

Additional info:

sos log info was collected and is accessible in a private comment.

Per dev request we opened this bug, but the exact reproduction is unclear, as it is _assumed_ that the failed deletion during cold disk migration led to the disks being left in an unregistered state.
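
If the entities do end up unregistered, a sketch of how the VM could be re-imported from the storage domain using the v4 Python SDK; the register call and its parameters are from the v4 API and are not verified against 3.6, and all connection details and names are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='password',                                # placeholder
        ca_file='/etc/pki/ovirt-engine/ca.pem',
    )
    try:
        sd_service = connection.system_service().storage_domains_service() \
            .storage_domain_service('9e60a7d8-f94f-4dfe-a012-3b8ccc216178')
        vms_service = sd_service.vms_service()
        # List VMs that exist on the domain but are not registered in the engine.
        for unreg_vm in vms_service.list(unregistered=True):
            # Register (import) the VM into a cluster.
            vms_service.vm_service(unreg_vm.id).register(
                cluster=types.Cluster(name='Default'),  # placeholder cluster name
                allow_partial_import=True,
            )
    finally:
        connection.close()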

Comment 2 Red Hat Bugzilla Rules Engine 2016-03-02 11:21:31 UTC
Bug tickets must have version flags set prior to targeting them to a release. Please ask the maintainer to set the correct version flags, and only then set the target milestone.

Comment 3 Daniel Erez 2017-03-01 08:32:43 UTC
The live migration flow has been rewritten since 3.6. The issue doesn't seem relevant anymore. Closing; please re-open if reproduced.

