Bug 1364339 - RHEL 7.3: Failed to do live storage migration between iscsi backends in RHEVM (due to LVM error code change in RHEL 7.3)
Status: CLOSED CURRENTRELEASE
Product: vdsm
Classification: oVirt
Component: General
Version: 4.18.10
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.0.4
Target Release: 4.17.12
Assignee: Nir Soffer
QA Contact: Kevin Alon Goldblatt
Depends On: 1354396 1365186
 
Reported: 2016-08-05 06:47 UTC by Han Han
Modified: 2017-02-17 02:46 UTC
10 users

Last Closed: 2016-09-26 12:34:43 UTC
oVirt Team: Storage
amureini: ovirt-4.0.z?
rule-engine: planning_ack?
amureini: devel_ack+
rule-engine: testing_ack+


Attachments
VDSM log (1.82 MB, text/plain)
2016-08-05 06:47 UTC, Han Han


Links
- Red Hat Knowledge Base (Solution) 2933491 - last updated 2017-02-17 02:46:20 UTC
- oVirt gerrit 62338 (master, MERGED): lvm: Fix error handling when resizing lvs - last updated 2020-10-14 10:15:08 UTC
- oVirt gerrit 62734 (ovirt-3.6, MERGED): lvm: Fix error handling when resizing lvs - last updated 2020-10-14 10:15:08 UTC
- oVirt gerrit 62735 (ovirt-3.6, MERGED): lvm: Separate lv reduce and extend - last updated 2020-10-14 10:15:08 UTC
- oVirt gerrit 62739 (ovirt-4.0, MERGED): lvm: Fix error handling when resizing lvs - last updated 2020-10-14 10:15:08 UTC
- oVirt gerrit 62740 (ovirt-4.0, MERGED): lvm: Separate lv reduce and extend - last updated 2020-10-14 10:15:08 UTC

Description Han Han 2016-08-05 06:47:23 UTC
Created attachment 1187759
VDSM log

Description of problem:
Live storage migration between two iSCSI storage domains fails in RHEVM, as described in the subject.

Version-Release number of selected component (if applicable):
vdsm-4.18.10-1.el7ev.x86_64
libvirt-2.0.0-4.el7.x86_64
qemu-kvm-rhev-2.6.0-18.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. In RHEVM, create a data center and a cluster, add a host to the cluster, and add two iSCSI storage domains.
2. Create a VM with its OS disk on one iSCSI storage domain, then move the disk to the other iSCSI domain.
You will get the following error messages in RHEVM:
Aug 5, 2016 2:42:23 AM	User admin@internal-authz have failed to move disk I_vnc_Disk1 to domain IA.
Aug 5, 2016 2:42:12 AM  VDSM B command failed: Logical Volume extend failed 

And the disk is not moved.

Actual results:
The migration fails with the error messages shown in step 2.

Expected results:
No error messages; the disk is moved to the other iSCSI storage domain.

Additional info:

Comment 1 Yaniv Kaul 2016-08-07 06:00:06 UTC
Is this on RHEL 7.3? If so, this looks like an LVM regression where the lvextend return code changed (from 3 to 5), as can be seen in:
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,924::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/1libvirt_test-hhan-1|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 5120m 933a15d5-3b60-4037-a992-02b4329018d2/6abb2eed-ec44-432b-93b4-9671c78c4750 (cwd None)
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,955::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: Not using lvmetad because config setting use_lvmetad=0.\n  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n  New size (40 extents) matches existing size (40 extents)\n'; <rc> = 5
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,957::lvm::298::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/1libvirt_test-hhan-1|/dev/mapper/1libvirt_test-pzhang-1|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 5120m 933a15d5-3b60-4037-a992-02b4329018d2/6abb2eed-ec44-432b-93b4-9671c78c4750 (cwd None)
f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,992::lvm::298::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: Not using lvmetad because config setting use_lvmetad=0.\n  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n  New size (40 extents) matches existing size (40 extents)\n'; <rc> = 5
f2592211-8f54-4d29-85c9-ef958a9a6829::ERROR::2016-08-05 14:41:06,992::image::437::Storage.Image::(_createTargetImage) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 426, in _createTargetImage
    dstVol.extend((volParams['apparentsize'] + 511) / 512)
  File "/usr/share/vdsm/storage/blockVolume.py", line 582, in extend
    lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
  File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
    _resizeLV("lvextend", vgName, lvName, size)
  File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
    raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
LogicalVolumeExtendError: Logical Volume extend failed: 'vgname=933a15d5-3b60-4037-a992-02b4329018d2 lvname=6abb2eed-ec44-432b-93b4-9671c78c4750 newsize=5120M'
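The merged fix ("lvm: Fix error handling when resizing lvs", gerrit 62338) stops relying on the exact return code for the no-op case. A minimal sketch of that idea (hypothetical code, not vdsm's actual implementation; the function name and signature are illustrative):

```python
# Hypothetical sketch of the fix idea: do not trust the lvextend return
# code alone, since LVM on RHEL 7.3 returns rc=5 (instead of the earlier
# rc=3) when the new size already matches the existing size. A no-op
# resize is not an error.

ALREADY_AT_SIZE = "matches existing size"

def resize_succeeded(rc, err):
    """Decide whether an lvextend run should be treated as success.

    rc  -- process return code (0 means LVM reported success)
    err -- captured stderr text from the lvm command
    """
    if rc == 0:
        return True
    # The LV is already at the requested size: treat the no-op as
    # success, regardless of which non-zero code this LVM version
    # returns for it.
    return ALREADY_AT_SIZE in err
```

Matching on the stderr message rather than the code keeps the check working on both RHEL 7.2 (rc=3) and RHEL 7.3 (rc=5).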

Comment 2 Han Han 2016-08-08 01:47:58 UTC
Yes, it is on RHEL 7.3.
My lvm is lvm2-2.02.162-1.el7.x86_64.
Could you extract the lvm commands from the vdsm log? I will try to reproduce it with lvm directly.
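To extract the lvm command lines from a vdsm log, a small helper along these lines could work (a hypothetical script, not part of vdsm; it keys off the Storage.Misc.excCmd log format visible in comment 1):

```python
def extract_lvm_commands(log_text, subcommand="lvextend"):
    """Pull /usr/sbin/lvm <subcommand> command lines out of vdsm log text.

    Relies on the Storage.Misc.excCmd format, where the command appears
    after '::(cmd) ' and ends with a ' (cwd ...)' suffix.
    """
    needle = "/usr/sbin/lvm " + subcommand
    commands = []
    for line in log_text.splitlines():
        if needle in line and "::(cmd) " in line:
            cmd = line.split("::(cmd) ", 1)[1]
            # Drop the trailing "(cwd None)" marker if present.
            commands.append(cmd.rsplit(" (cwd", 1)[0])
    return commands
```

The extracted command lines can then be rerun by hand (with care, on a test VG) to compare the return codes between LVM versions.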

Comment 3 Yaniv Kaul 2016-08-08 06:49:30 UTC
It's a dup of a known bug which I can't find right now.

Comment 4 Allon Mureinik 2016-08-08 08:11:21 UTC

*** This bug has been marked as a duplicate of bug 1363734 ***

Comment 5 Allon Mureinik 2016-08-11 12:20:38 UTC
Seems like the underlying bug 1365186 will not be handled by the LVM team. This may need some work from our side; reopening for visibility.

Comment 6 Nir Soffer 2016-08-15 21:54:23 UTC
Can be closed as duplicate of bug 1363734.

Comment 7 Allon Mureinik 2016-08-16 08:19:11 UTC
(In reply to Nir Soffer from comment #6)
> Can be closed as duplicate of bug 1363734.

The user facing flow is different, even if the same patch solves this. Let's leave them so QE can validate both flows.

Comment 8 Kevin Alon Goldblatt 2016-09-06 16:40:12 UTC
Tested with the following code:
----------------------------------------
rhevm-4.0.4-0.1.el7ev.noarch
vdsm-4.18.12-1.el7ev.x86_64

Tested with the following scenario:

Steps to Reproduce:
1. In RHEVM, create a data center and a cluster, add a host to the cluster, and add two iSCSI storage domains.
2. Create a VM with its OS disk on one iSCSI storage domain, then move the disk to the other iSCSI domain - THE DISK IS MOVED SUCCESSFULLY and no error messages are reported

Actual results:
The move operation is successful and no errors are reported




Moving to VERIFIED!

