Bug 1559929

Summary: Second move disk operation fails with CannotCreateLogicalVolume: Logical Volume "" already exists in volume group
Product: [oVirt] vdsm
Reporter: Elad <ebenahar>
Component: Core
Assignee: Benny Zlotnik <bzlotnik>
Status: CLOSED DUPLICATE
QA Contact: Raz Tamir <ratamir>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.20.19
CC: amureini, bugs, bzlotnik, tnisan
Target Milestone: ovirt-4.2.3
Keywords: Automation, Regression
Target Release: ---
Flags: rule-engine: ovirt-4.2+, rule-engine: blocker+
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-11 11:07:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: logs (flags: none)

Description Elad 2018-03-23 14:44:38 UTC
Created attachment 1412153 [details]
logs

Description of problem:
An attempt to move a disk (cold or live) to a block-based domain fails in vdsm with a CannotCreateLogicalVolume error after the disk was previously moved to a file-based domain. The VM the disk is attached to was created as a thin copy from a template and has additional disks attached.

I'm trying to narrow down the scenario, so far without luck: I tried with 1 disk attached, with 2 disks, and with moving the disk to a same-type storage domain, and all of these succeeded.
Therefore, setting severity to medium for now.
The bug occurs every time with an automation job we execute.
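
Since the vdsm error (see Actual results below) reports that the target LV already exists in the destination VG, one quick check is to list the LVs in that VG on the SPM host. A minimal diagnostic sketch in Python, not part of vdsm; the VG name is copied from the attached logs and would differ per environment:

# Hypothetical diagnostic, not part of vdsm: list LV names and tags in the
# destination VG to see whether a stale LV for the moved volume was left
# behind before the second move is attempted.
import subprocess

VG = "ab29bd07-7797-49dd-9321-d5850f66f64e"  # destination storage domain VG (from the logs)

out = subprocess.check_output(
    ["lvs", "--noheadings", "-o", "lv_name,lv_tags", VG], text=True
)
for line in out.splitlines():
    print(line.strip())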


Version-Release number of selected component (if applicable):

RHEL7.5
lvm2-libs-2.02.177-4.el7.x86_64
qemu-img-rhev-2.10.0-21.el7_5.1.x86_64
libvirt-client-3.9.0-14.el7.x86_64
libvirt-daemon-driver-network-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-logical-3.9.0-14.el7.x86_64
libselinux-utils-2.5-12.el7.x86_64
vdsm-hook-ethtool-options-4.20.22-1.el7ev.noarch
vdsm-network-4.20.22-1.el7ev.x86_64
libvirt-daemon-config-nwfilter-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.1.x86_64
libvirt-daemon-config-network-3.9.0-14.el7.x86_64
sanlock-python-3.6.0-1.el7.x86_64
vdsm-common-4.20.22-1.el7ev.noarch
vdsm-jsonrpc-4.20.22-1.el7ev.noarch
libvirt-daemon-3.9.0-14.el7.x86_64
libvirt-daemon-driver-nwfilter-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-scsi-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-mpath-3.9.0-14.el7.x86_64
libvirt-daemon-driver-secret-3.9.0-14.el7.x86_64
selinux-policy-targeted-3.13.1-192.el7.noarch
vdsm-4.20.22-1.el7ev.x86_64
vdsm-hook-openstacknet-4.20.22-1.el7ev.noarch
udisks2-lvm2-2.7.3-6.el7.x86_64
libvirt-daemon-driver-lxc-3.9.0-14.el7.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
libvirt-python-3.9.0-1.el7.x86_64
vdsm-http-4.20.22-1.el7ev.noarch
qemu-kvm-common-rhev-2.10.0-21.el7_5.1.x86_64
vdsm-yajsonrpc-4.20.22-1.el7ev.noarch
libvirt-daemon-driver-qemu-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-rbd-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-disk-3.9.0-14.el7.x86_64
libvirt-lock-sanlock-3.9.0-14.el7.x86_64
libvirt-daemon-driver-interface-3.9.0-14.el7.x86_64
qemu-guest-agent-2.8.0-2.el7.x86_64
vdsm-hook-vmfex-dev-4.20.22-1.el7ev.noarch
vdsm-client-4.20.22-1.el7ev.noarch
libblockdev-lvm-2.12-3.el7.x86_64
vdsm-hook-vfio-mdev-4.20.22-1.el7ev.noarch
vdsm-hook-vhostmd-4.20.22-1.el7ev.noarch
libvirt-3.9.0-14.el7.x86_64
sanlock-3.6.0-1.el7.x86_64
libselinux-2.5-12.el7.x86_64
vdsm-python-4.20.22-1.el7ev.noarch
selinux-policy-3.13.1-192.el7.noarch
libvirt-daemon-driver-storage-3.9.0-14.el7.x86_64
libvirt-daemon-kvm-3.9.0-14.el7.x86_64
libselinux-python-2.5-12.el7.x86_64
sanlock-lib-3.6.0-1.el7.x86_64
vdsm-api-4.20.22-1.el7ev.noarch
lvm2-2.02.177-4.el7.x86_64
libvirt-libs-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-core-3.9.0-14.el7.x86_64
libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7.x86_64
libvirt-daemon-driver-nodedev-3.9.0-14.el7.x86_64
vdsm-hook-fcoe-4.20.22-1.el7ev.noarch


How reproducible:
100%

Steps to Reproduce:
1. Create a template from a VM with 1 disk on an iSCSI domain
2. Create a VM from the template as a thin copy
3. Create and attach 4 disks residing on iSCSI to the VM
4. Move one of the disks (cold; also reproduces as LSM) to the gluster domain
5. Move the disk back to the second iSCSI domain (steps 4-5 are sketched in SDK form below)
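
For reference, a minimal Python sketch of steps 4-5 using ovirt-engine-sdk4; the URL, credentials, disk name and storage-domain names are hypothetical, and it assumes the generic DiskService move action of the 4.2-era SDK:

# Hypothetical reproduction sketch with ovirt-engine-sdk4; all names and
# credentials are placeholders, and error handling is omitted.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    insecure=True,
)

disks_service = connection.system_service().disks_service()
disk = disks_service.list(search="name=vm_disk_2")[0]  # one of the 4 attached disks
disk_service = disks_service.disk_service(disk.id)

def wait_until_ok(service):
    # Poll until the disk leaves the LOCKED state after a move.
    while service.get().status != types.DiskStatus.OK:
        time.sleep(5)

# Step 4: move the disk to the gluster domain (cold move shown here).
disk_service.move(storage_domain=types.StorageDomain(name="gluster_sd"))
wait_until_ok(disk_service)

# Step 5: move it back to the second iSCSI domain; this is the move
# that fails with CannotCreateLogicalVolume.
disk_service.move(storage_domain=types.StorageDomain(name="iscsi_sd_2"))

connection.close()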

Actual results:
The disk move fails in vdsm.


2018-03-23 17:02:32,743+0300 ERROR (tasks/1) [storage.Volume] Failed to create volume /rhev/data-center/mnt/blockSD/ab29bd07-7797-49dd-9321-d5850f66f64e/images/b513bf0a-c257-4f3b-b91e-6730c4b64562/574e599a-07f6-4937-a017-0b61a1d5ef12: Cannot create Logical Volume: 'vgname=ab29bd07-7797-49dd-9321-d5850f66f64e lvname=574e599a-07f6-4937-a017-0b61a1d5ef12 err=[\'  Logical Volume "574e599a-07f6-4937-a017-0b61a1d5ef12" already exists in volume group "ab29bd07-7797-49dd-9321-d5850f66f64e"\']' (volume:1209)
2018-03-23 17:02:32,744+0300 ERROR (tasks/1) [storage.Volume] Unexpected error (volume:1246)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1206, in create
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 505, in _create
    initialTags=(sc.TAG_VOL_UNINIT,))
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1153, in createLV
    raise se.CannotCreateLogicalVolume(vgName, lvName, err)
CannotCreateLogicalVolume: Cannot create Logical Volume: 'vgname=ab29bd07-7797-49dd-9321-d5850f66f64e lvname=574e599a-07f6-4937-a017-0b61a1d5ef12 err=[\'  Logical Volume "574e599a-07f6-4937-a017-0b61a1d5ef12" already exists in volume group "ab29bd07-7797-49dd-9321-d5850f66f64e"\']'
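
The inner err= text is the verbatim lvcreate failure, so the stale-LV state can be confirmed independently of vdsm by retrying the create by hand. A minimal sketch, with the UUIDs copied from the log above and an arbitrary size (the size is irrelevant, since lvcreate refuses a duplicate name before allocating):

# Hypothetical manual check: re-running lvcreate for the same LV name should
# fail with the exact "already exists in volume group" message seen above.
import subprocess

VG = "ab29bd07-7797-49dd-9321-d5850f66f64e"
LV = "574e599a-07f6-4937-a017-0b61a1d5ef12"

result = subprocess.run(
    ["lvcreate", "--name", LV, "--size", "1g", VG],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print(result.stderr)  # expected: Logical Volume "..." already exists ...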



engine.log:

2018-03-23 17:02:33,930+03 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [] BaseAsyncTask::logEndTaskFailure: Task '91457fb3-48ce-4f44-8845-85323306cb5f' (Parent Command 'CreateVolumeContainer', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume: 'vgname=ab29bd07-7797-49dd-9321-d5850f66f64e lvname=574e599a-07f6-4937-a017-0b61a1d5ef12 err=[\'  Logical Volume "574e599a-07f6-4937-a017-0b61a1d5ef12" already exists in volume group "ab29bd07-7797-49dd-9321-d5850f66f64e"\']', code = 550',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume: 'vgname=ab29bd07-7797-49dd-9321-d5850f66f64e lvname=574e599a-07f6-4937-a017-0b61a1d5ef12 err=[\'  Logical Volume "574e599a-07f6-4937-a017-0b61a1d5ef12" already exists in volume group "ab29bd07-7797-49dd-9321-d5850f66f64e"\']', code = 550'



Expected results:
The disk move should succeed.

Additional info:
logs

Comment 1 Elad 2018-03-23 18:08:04 UTC
Actually, this fails in every permutation of test case 5995:
- Live storage migration between same-type storage domains
- Cold storage migration between same-type storage domains
- Live storage migration between mixed-type storage domains
- Cold storage migration between mixed-type storage domains

Hence, raising severity to high.

Comment 2 Elad 2018-03-23 20:24:00 UTC
I also found that this case passed in previous 4.2 builds:

https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-tier2/45/testReport/rhevmtests.storage.storage_migration.test_live_storage_migration_same_type/TestCase5995/



rhv-4.2.0-12
ovirt-engine-4.2.0.2-0.1.el7
vdsm-4.20.9.3-1.el7ev.x86_64
qemu-img-rhev-2.9.0-16.el7_4.11.x86_64


Marking as a regression

Comment 3 Red Hat Bugzilla Rules Engine 2018-03-25 06:21:56 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 6 Benny Zlotnik 2018-03-27 08:49:03 UTC
I'll make sure it's the same, but it looks like a duplicate.

Comment 7 Yaniv Kaul 2018-04-11 09:24:48 UTC
(In reply to Benny Zlotnik from comment #6)
> I'll make sure it's the same, but it looks like a duplicate.

Any updates?

Comment 8 Benny Zlotnik 2018-04-11 11:07:38 UTC
I couldn't find any evidence that it's a different bug; closing as a duplicate.

*** This bug has been marked as a duplicate of bug 1497931 ***