Created attachment 1187759 [details]
VDSM log

Description of problem:
As subject.

Version-Release number of selected component (if applicable):
vdsm-4.18.10-1.el7ev.x86_64
libvirt-2.0.0-4.el7.x86_64
qemu-kvm-rhev-2.6.0-18.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. In RHEVM, create a datacenter, a cluster, add a host to the cluster and add two iSCSI storage domains.
2. Create a VM whose OS disk is based on iSCSI storage, then move the disk to the other iSCSI domain. The following error messages appear in RHEVM:

Aug 5, 2016 2:42:23 AM  User admin@internal-authz have failed to move disk I_vnc_Disk1 to domain IA.
Aug 5, 2016 2:42:12 AM  VDSM B command failed: Logical Volume extend failed

The disk is not moved.

Actual results:
As in step 2: the move fails with "Logical Volume extend failed" and the disk stays on the source domain.

Expected results:
No error message; the disk is moved to the other iSCSI storage domain.

Additional info:
Is this on RHEL 7.3? If so, this looks like an LVM regression where the return code was changed (from 3 to 5), as can be seen here:

f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,924::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/1libvirt_test-hhan-1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --size 5120m 933a15d5-3b60-4037-a992-02b4329018d2/6abb2eed-ec44-432b-93b4-9671c78c4750 (cwd None)

f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,955::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: Not using lvmetad because config setting use_lvmetad=0.\n WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n New size (40 extents) matches existing size (40 extents)\n'; <rc> = 5

f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,957::lvm::298::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/1libvirt_test-hhan-1|/dev/mapper/1libvirt_test-pzhang-1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --size 5120m 933a15d5-3b60-4037-a992-02b4329018d2/6abb2eed-ec44-432b-93b4-9671c78c4750 (cwd None)

f2592211-8f54-4d29-85c9-ef958a9a6829::DEBUG::2016-08-05 14:41:06,992::lvm::298::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: Not using lvmetad because config setting use_lvmetad=0.\n WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n New size (40 extents) matches existing size (40 extents)\n'; <rc> = 5

f2592211-8f54-4d29-85c9-ef958a9a6829::ERROR::2016-08-05 14:41:06,992::image::437::Storage.Image::(_createTargetImage) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 426, in _createTargetImage
    dstVol.extend((volParams['apparentsize'] + 511) / 512)
  File "/usr/share/vdsm/storage/blockVolume.py", line 582, in extend
    lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
  File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
    _resizeLV("lvextend", vgName, lvName, size)
  File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
    raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
LogicalVolumeExtendError: Logical Volume extend failed: 'vgname=933a15d5-3b60-4037-a992-02b4329018d2 lvname=6abb2eed-ec44-432b-93b4-9671c78c4750 newsize=5120M'
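For context, a minimal sketch of the kind of change this would need on the VDSM side. run_lvm() and the exception class below are hypothetical stand-ins for VDSM's internal LVM command runner and se.LogicalVolumeExtendError in storage/lvm.py (seen in the traceback above); this is not the actual patch, only an illustration of keying off the message instead of the exit code:

class LogicalVolumeExtendError(Exception):
    # Stand-in for vdsm's se.LogicalVolumeExtendError.
    pass

def _resizeLV(op, vgName, lvName, size_mb):
    # op is "lvextend" or "lvreduce"; size_mb is the new size in MiB.
    # run_lvm() is assumed to return (rc, out, err) like vdsm's runner.
    rc, out, err = run_lvm([op, "--autobackup", "n",
                            "--size", "%sm" % size_mb,
                            "%s/%s" % (vgName, lvName)])
    if rc == 0:
        return
    # lvm2 before 2.02.162 exited with rc=3 for "New size (N extents)
    # matches existing size (N extents)"; 2.02.162 exits with rc=5.
    # Checking the message instead of the code covers both versions.
    if "matches existing size" in err:
        return
    raise LogicalVolumeExtendError(vgName, lvName, "%sM" % size_mb)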
Yes, it is on RHEL 7.3. My LVM version is lvm2-2.02.162-1.el7.x86_64. Could you extract the LVM commands from the VDSM log? I will try to reproduce the issue with LVM directly.
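For example, I expect something along these lines to show the changed exit code without going through VDSM (a sketch with placeholder VG/LV names; it assumes an existing LV that is already at the requested size, and root privileges):

import subprocess

# Placeholders - substitute a real VG/LV that is already 5120m.
vg, lv, size_mb = "testvg", "testlv", 5120

p = subprocess.Popen(
    ["lvextend", "--autobackup", "n", "--size", "%sm" % size_mb,
     "%s/%s" % (vg, lv)],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
# lvm2 < 2.02.162 exits with 3 here, lvm2-2.02.162 exits with 5, both
# printing "New size (N extents) matches existing size (N extents)".
print("rc=%d" % p.returncode)
print(err.decode())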
It's a dup of a known bug which I can't find right now.
*** This bug has been marked as a duplicate of bug 1363734 ***
Seems like the underlying bug 1365186 will not be handled by the LVM team. This may need some work from our side; reopening for visibility.
Can be closed as duplicate of bug 1363734.
(In reply to Nir Soffer from comment #6)
> Can be closed as duplicate of bug 1363734.

The user-facing flow is different, even if the same patch solves this. Let's leave both bugs open so QE can validate both flows.
Tested with the following code:
----------------------------------------
rhevm-4.0.4-0.1.el7ev.noarch
vdsm-4.18.12-1.el7ev.x86_64

Tested with the following scenario:

Steps to Reproduce:
1. In RHEVM, create a datacenter, a cluster, add a host to the cluster and add two iSCSI storage domains.
2. Create a VM whose OS disk is based on iSCSI storage, then move the disk to the other iSCSI domain - THE DISK IS MOVED SUCCESSFULLY and no error messages are reported.

Actual results:
The move operation is successful and no errors are reported.

Moving to VERIFIED!