Bug 1502083 - Live storage migration completes but leaves volume un-opened.
Summary: Live storage migration completes but leaves volume un-opened.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.0.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.3.0
Target Release: 4.3.0
Assignee: Benny Zlotnik
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On:
Blocks: 1591667
 
Reported: 2017-10-14 01:35 UTC by Bimal Chollera
Modified: 2021-05-01 16:53 UTC
CC List: 8 users

Fixed In Version: v4.30.3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 1591667
Environment:
Last Closed: 2019-05-08 12:35:59 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:1077 0 None None None 2019-05-08 12:36:29 UTC
oVirt gerrit 90532 0 master MERGED vm: Do not remove replicate attribute if abort failed 2018-06-12 09:21:48 UTC
oVirt gerrit 92166 0 ovirt-4.2 MERGED vm: Do not remove replicate attribute if abort failed 2018-06-12 16:31:07 UTC
oVirt gerrit 92167 0 ovirt-4.2.4 MERGED vm: Do not remove replicate attribute if abort failed 2018-06-12 16:50:24 UTC

Description Bimal Chollera 2017-10-14 01:35:08 UTC
Description of problem:

Live storage migration (LSM) of a disk was performed from one storage domain (SD) to another; both SDs are block storage. On the engine side the disk move shows as completed, but on the host, vdsm shows that the images on the old SD failed to deactivate and the images on the new SD were never activated for the VM.

The createVolume, cloneImage and syncImage steps complete on the SPM. The diskReplicate starts and finishes on the host where the VM is running, but deactivating the images on the old SD fails and the images on the new SD are not activated. On the SPM, deleteImage then runs to remove the images. The end result is that none of the images on the new SD were open for the VM.
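
For orientation, the sequence above can be summarized in pseudocode. This is a minimal illustrative sketch, not vdsm's actual code: the `spm` and `host` client objects are hypothetical stand-ins for the engine's RPC calls to vdsm, and argument lists are abbreviated.

~~~
# Illustrative outline of the LSM flow described in this report.
def live_storage_migration(spm, host, vm_id, img_id, src_sd, dst_sd):
    # Disk descriptors identifying the same image on each storage domain
    src_disk = {"domainID": src_sd, "imageID": img_id}
    dst_disk = {"domainID": dst_sd, "imageID": img_id}

    # 1. The SPM prepares the destination: create the volumes, clone the
    #    image structure and sync the data to the new SD.
    spm.createVolume(dst_sd, img_id)
    spm.cloneImage(src_sd, img_id, dst_sd)
    spm.syncImage(src_sd, img_id, dst_sd)

    # 2. The host running the VM mirrors writes to both SDs, then
    #    pivots the VM onto the destination volume.
    host.diskReplicateStart(vm_id, src_disk, dst_disk)
    host.diskReplicateFinish(vm_id, src_disk, dst_disk)

    # 3. Cleanup: deactivate the old LVs on the host, then have the SPM
    #    delete the stale image. In this bug the deactivation failed, yet
    #    the flow continued, so the new SD's LVs were left un-opened for
    #    the VM.
    spm.deleteImage(src_sd, img_id)
~~~

The deactivation failure surfaces in the vdsm log as: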

~~~
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume ...
~~~
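
For context, on block storage this deactivation boils down to an `lvchange -an` on the backing logical volumes. A minimal sketch of that step, assuming a plain subprocess call; vdsm's real implementation goes through its own LVM command wrapper and raises CannotDeactivateLogicalVolume on failure:

~~~
import subprocess

def deactivate_lvs(vg_name, lv_names):
    # Deactivate each LV of the image; a busy or still-open LV makes
    # lvchange return non-zero, which is what the exception above wraps.
    lvs = ["%s/%s" % (vg_name, lv) for lv in lv_names]
    proc = subprocess.run(["lvchange", "-an"] + lvs,
                          capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError("Cannot deactivate Logical Volume: rc=%d err=%r"
                           % (proc.returncode, proc.stderr))
~~~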

The removal of the live merge snapshot fails with the following:

~~~
libvirtError: internal error: qemu block name '/rhev/data-center/90546923-401b-440e-bdfc-84ab4cc08695/e8b55375-a2c3-4342-b003-de38ef1361e7/images/afe94069-cb21-4fa2-a873-162803b244e9/22778eb5-d441-4374-bafe-80ce8c5ea460' doesn't match expected '/rhev/data-center/90546923-401b-440e-bdfc-84ab4cc08695/f73c7530-cf20-49dd-83d2-58052595c09a/images/afe94069-cb21-4fa2-a873-162803b244e9/22778eb5-d441-4374-bafe-80ce8c5ea460'
~~~
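
The linked fix ("vm: Do not remove replicate attribute if abort failed", oVirt gerrit 90532 with its 4.2 backports, see the Links table above) targets this cleanup path: when aborting the replication block job fails, vdsm must keep the drive's diskReplicate metadata so the replication can be retried or cleaned up later, instead of dropping it and ending up with the path mismatch shown above. A simplified sketch of the idea; the attribute and method names follow vdsm's Vm class, but the function itself is illustrative:

~~~
import libvirt

def abort_disk_replication(vm, drive):
    # Before the fix, diskReplicate was dropped even when blockJobAbort
    # failed, leaving the engine and vdsm with inconsistent views of the
    # volume chain.
    try:
        vm._dom.blockJobAbort(drive.name, 0)
    except libvirt.libvirtError:
        vm.log.exception("Failed to abort replication job for drive %s",
                         drive.name)
        raise  # keep drive.diskReplicate intact for retry/cleanup
    vm._delDiskReplica(drive)  # only now is it safe to drop the metadata
~~~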

Version-Release number of selected component (if applicable):

ovirt-engine-4.0.7.5-0.1.el7ev.noarch
vdsm-4.19.28-1.el7ev.x86_64
libvirt-3.2.0-14.el7_4.3.x86_64


How reproducible:



Steps to Reproduce:
1.
2.
3.

Actual results:



Expected results:


Additional info:

Comment 10 Benny Zlotnik 2018-03-28 08:33:45 UTC
Created attachment 1414077 [details]
relevant_vdsm_logs

Comment 14 Elad 2018-08-21 12:35:17 UTC
Verify according to https://bugzilla.redhat.com/show_bug.cgi?id=1591667#c17

Comment 15 Kevin Alon Goldblatt 2018-10-18 11:47:07 UTC
Verified with the following code:
--------------------------------------
ovirt-engine-4.3.0-0.0.master.20181012165724.gitd25f971.el7.noarch
vdsm-4.30.0-640.git6fd8327.el7.x86_64


Verified with the following scenario:
--------------------------------------
1. Ran an LSM of an iSCSI disk to another iSCSI domain; it failed due to error injection
2. Ran the LSM again, and this time it completed successfully


Moving to VERIFIED!

Comment 17 errata-xmlrpc 2019-05-08 12:35:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1077

