Bug 1591667

Summary: [downstream clone - 4.2.4] Live storage migration completes but leaves volume un-opened.
Product: Red Hat Enterprise Virtualization Manager
Reporter: RHV bug bot <rhv-bugzilla-bot>
Component: vdsm
Assignee: Benny Zlotnik <bzlotnik>
Status: CLOSED ERRATA
QA Contact: Kevin Alon Goldblatt <kgoldbla>
Severity: high
Docs Contact:
Priority: high
Version: 4.0.7
CC: ahino, bzlotnik, ebenahar, eblake, lsurette, lsvaty, srevivo, tnisan, ycui, ykaul, ylavi
Target Milestone: ovirt-4.2.4
Keywords: ZStream
Target Release: 4.2.4
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: v4.20.31
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1502083
Environment:
Last Closed: 2018-06-27 10:02:46 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1502083
Bug Blocks:

Description RHV bug bot 2018-06-15 09:04:18 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1502083 +++
======================================================================

Description of problem:

Live storage migration (LSM) of a disk was performed from one storage domain (SD) to another; both SDs are block storage. On the engine side, the disk move is reported as completed, but on the host, VDSM shows that the images on the old SD failed to deactivate and the images on the new SD were never activated for the VM.

The CreateVolume, CloneImage, and syncImage steps complete on the SPM. diskReplicate starts and finishes on the host where the VM is running, but it fails to deactivate the images on the old SD and fails to activate the images on the new SD. On the SPM, deleteImage then runs to remove the images. The result is that none of the images on the new SD were open for the VM.

~~~
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume ...
~~~
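For context, on a block storage domain the VG name is the SD UUID and the LV name is the volume UUID, so the deactivation step that fails here boils down to an lvchange call against the volume's LV. A minimal sketch of that step (not VDSM's actual code; the UUIDs are taken from the log below purely for illustration):

~~~
import subprocess

def deactivate_lv(vg_name, lv_name):
    """Deactivate a logical volume, as LSM cleanup must do on the old SD.

    Raises CalledProcessError when LVM refuses, which is roughly the
    condition VDSM surfaces as CannotDeactivateLogicalVolume.
    """
    subprocess.run(
        ["lvchange", "-an", f"{vg_name}/{lv_name}"],
        check=True,
        capture_output=True,
    )

# VG = old SD UUID, LV = volume UUID (illustrative values from this bug).
deactivate_lv("e8b55375-a2c3-4342-b003-de38ef1361e7",
              "22778eb5-d441-4374-bafe-80ce8c5ea460")
~~~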

Removal of the live merge snapshot then fails with the following error:

~~~
libvirtError: internal error: qemu block name '/rhev/data-center/90546923-401b-440e-bdfc-84ab4cc08695/e8b55375-a2c3-4342-b003-de38ef1361e7/images/afe94069-cb21-4fa2-a873-162803b244e9/22778eb5-d441-4374-bafe-80ce8c5ea460' doesn't match expected '/rhev/data-center/90546923-401b-440e-bdfc-84ab4cc08695/f73c7530-cf20-49dd-83d2-58052595c09a/images/afe94069-cb21-4fa2-a873-162803b244e9/22778eb5-d441-4374-bafe-80ce8c5ea460'
~~~
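The mismatch is in the storage domain UUID embedded in the volume path: qemu still names the volume under the old SD (e8b55375-...) while the expected path points at the new SD (f73c7530-...), i.e. the block name was never switched to the new domain. A small sketch of the difference, assuming the standard /rhev/data-center/<pool>/<sd>/images/<image>/<volume> layout:

~~~
def sd_uuid(volume_path):
    """Extract the storage domain UUID from a volume path."""
    # ['', 'rhev', 'data-center', pool, sd, 'images', image, volume]
    return volume_path.split("/")[4]

actual = ("/rhev/data-center/90546923-401b-440e-bdfc-84ab4cc08695/"
          "e8b55375-a2c3-4342-b003-de38ef1361e7/images/"
          "afe94069-cb21-4fa2-a873-162803b244e9/"
          "22778eb5-d441-4374-bafe-80ce8c5ea460")
expected = actual.replace("e8b55375-a2c3-4342-b003-de38ef1361e7",
                          "f73c7530-cf20-49dd-83d2-58052595c09a")

# Old SD UUID vs. new SD UUID: qemu's block name still lives under the
# old domain, so libvirt rejects the live merge.
print(sd_uuid(actual), "!=", sd_uuid(expected))
~~~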

Version-Release number of selected component (if applicable):

ovirt-engine-4.0.7.5-0.1.el7ev.noarch
vdsm-4.19.28-1.el7ev.x86_64
libvirt-3.2.0-14.el7_4.3.x86_64


How reproducible:



Steps to Reproduce:
1.
2.
3.

Actual results:



Expected results:


Additional info:

(Originally by Bimal Chollera)

Comment 11 RHV bug bot 2018-06-15 09:05:05 UTC
Created attachment 1414077 [details]
relevant_vdsm_logs

(Originally by Benny Zlotnik)

Comment 17 Kevin Alon Goldblatt 2018-06-19 14:46:46 UTC
Verified with the following code:
--------------------------------------
ovirt-engine-4.2.4.4-0.1.el7_3.noarch
vdsm-4.20.31-1.el7ev.x86_64


Verified with the following scenario:
--------------------------------------
1. Ran an LSM of an iSCSI disk to another iSCSI domain, which failed due to error injection
2. Ran the LSM again; this time it completed successfully (a host-side check of the expected LV state is sketched below)
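One way to confirm the fixed behavior from the host side is to check that, after the LSM, the volume's LV is active on the new domain's VG and absent from the old one. A rough sketch using plain lvs output (a hypothetical helper, not part of the actual verification tooling; the UUIDs are placeholders):

~~~
import subprocess

def lv_is_active(vg_uuid, lv_uuid):
    """Return True if the LV exists and is active (5th lv_attr char is 'a')."""
    res = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_attr", f"{vg_uuid}/{lv_uuid}"],
        capture_output=True, text=True,
    )
    if res.returncode != 0:   # LV not found in this VG
        return False
    return res.stdout.strip()[4] == "a"

# After a successful LSM the volume should be active on the new SD's VG
# and gone from the old SD's VG.
assert lv_is_active("new-sd-uuid", "volume-uuid")
assert not lv_is_active("old-sd-uuid", "volume-uuid")
~~~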


Moving to VERIFIED!

Comment 19 errata-xmlrpc 2018-06-27 10:02:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2072
