Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1364339

Summary: RHEL 7.3: Failed to do live storage migration between iscsi backends in RHEVM (due to LVM error code change in RHEL 7.3)
Product: [oVirt] vdsm
Reporter: Han Han <hhan>
Component: General
Assignee: Nir Soffer <nsoffer>
Status: CLOSED CURRENTRELEASE
QA Contact: Kevin Alon Goldblatt <kgoldbla>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.18.10
CC: amureini, bugs, dyuan, gveitmic, hhan, nsoffer, pzhang, xuzhang, yanyang, ykaul
Target Milestone: ovirt-4.0.4
Keywords: Reopened
Target Release: 4.17.12
Flags: amureini: ovirt-4.0.z?
       rule-engine: planning_ack?
       amureini: devel_ack+
       rule-engine: testing_ack+
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-26 12:34:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1354396, 1365186    
Bug Blocks:    
Attachments: VDSM log (flags: none)

Description Han Han 2016-08-05 06:47:23 UTC
Created attachment 1187759 [details]
VDSM log

Description of problem:
As in the summary: live storage migration of a disk between iSCSI backends fails in RHEVM on RHEL 7.3.

Version-Release number of selected component (if applicable):
vdsm-4.18.10-1.el7ev.x86_64
libvirt-2.0.0-4.el7.x86_64
qemu-kvm-rhev-2.6.0-18.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. In RHEVM, create a datacenter and a cluster, add a host to the cluster, and add two iSCSI storage domains.
2. Create a VM with its OS disk on one iSCSI storage domain, then move the disk to the other iSCSI domain.
The following error messages appear in RHEVM:
Aug 5, 2016 2:42:23 AM	User admin@internal-authz have failed to move disk I_vnc_Disk1 to domain IA.
Aug 5, 2016 2:42:12 AM  VDSM B command failed: Logical Volume extend failed 

And the disk is not moved.

Actual results:
As in step 2: the move fails with the error messages above.

Expected results:
No error messages, and the disk is moved to the other iSCSI storage domain.

Additional info:

Comment 1 Yaniv Kaul 2016-08-07 06:00:06 UTC
Is this on RHEL 7.3? If so, this looks like an LVM regression where the return code changed (from 3 to 5) when lvextend is asked to set an LV to its current size, as can be seen here:
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,924::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/1libvirt_test-hhan-1|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 5120m 933a15d5-3b60-4037-a992-02b4329018d2/6abb2eed-ec44-432b-93b4-9671c78c4750 (cwd None)
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,955::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: Not using lvmetad because config setting use_lvmetad=0.\n  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n  New size (40 extents) matches existing size (40 extents)\n'; <rc> = 5
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,957::lvm::298::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/1libvirt_test-hhan-1|/dev/mapper/1libvirt_test-pzhang-1|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 5120m 933a15d5-3b60-4037-a992-02b4329018d2/6abb2eed-ec44-432b-93b4-9671c78c4750 (cwd None)
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,992::lvm::298::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: Not using lvmetad because config setting use_lvmetad=0.\n  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n  New size (40 extents) matches existing size (40 extents)\n'; <rc> = 5
f2592211-8f54-4d29-85c9-ef958a9a6829::ERROR::2016-08-05 14:41:06,992::image::437::Storage.Image::(_createTargetImage) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 426, in _createTargetImage
    dstVol.extend((volParams['apparentsize'] + 511) / 512)
  File "/usr/share/vdsm/storage/blockVolume.py", line 582, in extend
    lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
  File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
    _resizeLV("lvextend", vgName, lvName, size)
  File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
    raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
LogicalVolumeExtendError: Logical Volume extend failed: 'vgname=933a15d5-3b60-4037-a992-02b4329018d2 lvname=6abb2eed-ec44-432b-93b4-9671c78c4750 newsize=5120M'
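
In other words, the flow calls lvextend with a size equal to the LV's current size; on RHEL 7.3 that now exits with rc=5 instead of 3 and vdsm treats an effective no-op as a failure. A minimal sketch of the tolerant behaviour (hypothetical helper, not the actual vdsm patch) could look like this:

# Minimal sketch (hypothetical helper, not the actual vdsm patch): treat an
# lvextend that reports "new size matches existing size" as a no-op success,
# whether LVM returns rc=3 (pre-RHEL 7.3) or rc=5 (RHEL 7.3).
import subprocess

def extend_lv(vg_name, lv_name, size_mb):
    cmd = ["lvextend", "--autobackup", "n",
           "--size", "%dm" % size_mb, "%s/%s" % (vg_name, lv_name)]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode == 0:
        return  # extended successfully
    if "matches existing size" in err.decode("utf-8", "replace"):
        return  # LV is already at the requested size: nothing to do
    raise RuntimeError("Logical Volume extend failed: vg=%s lv=%s newsize=%dM rc=%d"
                       % (vg_name, lv_name, size_mb, proc.returncode))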

Comment 2 Han Han 2016-08-08 01:47:58 UTC
Yes, it is on RHEL 7.3.
My LVM version is lvm2-2.02.162-1.el7.x86_64.
Could you extract the lvm commands from the vdsm log? I will try to reproduce the issue with lvm directly.
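
For example, the lvextend invocations can be pulled straight out of the attached vdsm log with something like this (illustrative snippet; the log path is an assumption):

# Print the raw lvm command lines logged by Storage.Misc.excCmd so they can be
# replayed manually with lvm (log path assumed to be the default vdsm location).
with open("/var/log/vdsm/vdsm.log") as log:
    for line in log:
        if "Storage.Misc.excCmd" in line and "/usr/sbin/lvm" in line:
            print(line.split("(cmd) ", 1)[-1].strip())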

Comment 3 Yaniv Kaul 2016-08-08 06:49:30 UTC
It's a dup of a known bug which I can't find right now.

Comment 4 Allon Mureinik 2016-08-08 08:11:21 UTC

*** This bug has been marked as a duplicate of bug 1363734 ***

Comment 5 Allon Mureinik 2016-08-11 12:20:38 UTC
Seems like the underlying bug 1365186 will not be handled by the LVM team. This may need some work from our side; reopening for visibility.

Comment 6 Nir Soffer 2016-08-15 21:54:23 UTC
Can be closed as duplicate of bug 1363734.

Comment 7 Allon Mureinik 2016-08-16 08:19:11 UTC
(In reply to Nir Soffer from comment #6)
> Can be closed as duplicate of bug 1363734.

The user-facing flow is different, even if the same patch solves this. Let's leave them open so QE can validate both flows.

Comment 8 Kevin Alon Goldblatt 2016-09-06 16:40:12 UTC
Tested with the following code:
----------------------------------------
rhevm-4.0.4-0.1.el7ev.noarch
vdsm-4.18.12-1.el7ev.x86_64

Tested with the following scenario:

Steps to Reproduce:
1. In RHEVM, create a datacenter and a cluster, add a host to the cluster, and add two iSCSI storage domains.
2. Create a VM with its OS disk on one iSCSI storage domain, then move the disk to the other iSCSI domain - THE DISK IS MOVED SUCCESSFULLY and no error messages are reported.

Actual results:
The move operation is successful and no errors are reported




Moving to VERIFIED!