Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 903274

Summary: [RHEVM] [Live Storage Migration] [FC] Failed in SnapshotVDS method in concurrently live migrate several disks of the same VM scenario.
Product: Red Hat Enterprise Virtualization Manager
Reporter: vvyazmin <vvyazmin>
Component: vdsm
Assignee: Federico Simoncelli <fsimonce>
Status: CLOSED WORKSFORME
QA Contact: vvyazmin <vvyazmin>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.2.0
CC: abaron, acathrow, amureini, bazulay, dyasny, hateya, iheim, lpeer, Rhev-m-bugs, yeylon, ykaul
Target Milestone: ---
Target Release: 3.2.0
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-01-27 10:50:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  ## Logs vdsm, rhevm (Flags: none)
  ## Logs vdsm, rhevm on FC environment (Flags: none)

Description vvyazmin@redhat.com 2013-01-23 15:57:48 UTC
Created attachment 686059 [details]
## Logs vdsm, rhevm

Description of problem:
The SnapshotVDS method fails when several disks of the same VM are live-migrated concurrently.

Version-Release number of selected component (if applicable):
RHEVM 3.2 - SF03 environment 

RHEVM: rhevm-3.2.0-4.el6ev.noarch
VDSM: vdsm-4.10.2-3.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-13.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

How reproducible:
100%

Builds on the scenario from BZ879227.

Steps to Reproduce:
1. Create a DC environment (in my case an FC DC)
2. Create multiple Storage Domains
3. Create a new VM with multiple disks & power it on
4. Select multiple disks & move them to different SDs in your DC
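The concurrent part of step 4 can be sketched as below. This is only an illustration of issuing several disk moves in parallel; `move_disk` is a hypothetical stand-in for whatever engine REST/SDK call triggers the live storage migration (the real API invocation is not shown in this report):

```python
from concurrent.futures import ThreadPoolExecutor

def move_disk(disk_id, target_sd):
    # Hypothetical placeholder: the real engine call that live-migrates
    # one disk to a new storage domain would go here.
    return (disk_id, target_sd)

# Several disks of the same running VM, each headed to a different SD,
# submitted at the same time -- the scenario this bug exercises.
moves = [("disk-1", "sd-a"), ("disk-2", "sd-b"), ("disk-3", "sd-c")]

with ThreadPoolExecutor(max_workers=len(moves)) as pool:
    futures = [pool.submit(move_disk, d, sd) for d, sd in moves]
    results = [f.result() for f in futures]
```

Each move triggers its own live snapshot on the host, which is why the failure below appears per-VM while multiple migrations are in flight.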
  
Actual results:
The action succeeds, but errors appear in engine.log & vdsm.log

Expected results:
No exceptions should be logged

Additional info:

/var/log/ovirt-engine/engine.log

2013-01-23 18:28:31,277 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-3-thread-41) START, SnapshotVDSCommand(HostName = green-vdsb, HostId = 8b73ac4c-d681-43dc-b348-b1fc20da0d5b, vmId=50fe6550-7b9f-4b26-ae19-df17b8e4b53a), log id: 6a43d399
2013-01-23 18:28:49,955 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-41) Failed in SnapshotVDS method
2013-01-23 18:28:49,956 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-41) Error code SNAPSHOT_FAILED and error message VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed
2013-01-23 18:28:49,959 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-41) Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value 
 Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus                       Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode                         48
mMessage                      Snapshot failed


2013-01-23 18:28:49,959 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-41) HostName = green-vdsb
2013-01-23 18:28:49,959 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-41) Command SnapshotVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed
2013-01-23 18:28:49,960 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-3-thread-41) FINISH, SnapshotVDSCommand, log id: 6a43d399
2013-01-23 18:28:49,960 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-3-thread-41) Wasnt able to live snpashot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed. VM will still be configured to the new created snapshot
2013-01-23 18:28:49,961 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-3-thread-41) Try to add duplicate values with same name. Type: USER_CREATE_SNAPSHOT_FINISHED_FAILURE. Value: snapshotname
2013-01-23 18:28:49,964 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-3-thread-41) Try to add duplicate values with same name. Type: USER_CREATE_SNAPSHOT_FINISHED_FAILURE. Value: vmname

/var/log/vdsm/vdsm.log

Thread-1727::DEBUG::2013-01-23 18:28:49,753::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e4d32d17-3117-4e76-a65f-905cac969b45', Clearing records.
Thread-1727::ERROR::2013-01-23 18:28:49,753::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status': {'message': 'Cannot deactivate Logical Volume: General Storage Exception: "5 [] [
  '  device-mapper: remove ioctl on  failed: Device or resource busy'  [repeated dozens of times, interspersed with:],
  '  Unable to deactivate e4d32d17--3117--4e76--a65f--905cac969b45-2f8821b3--cb44--4691--ad32--b5a38c070e97 (253:45)',
  '  Unable to deactivate e4d32d17--3117--4e76--a65f--905cac969b45-34da1d71--99ec--41bf--ad8a--e0befed4d616 (253:46)',
  '  Unable to deactivate e4d32d17--3117--4e76--a65f--905cac969b45-9edc353f--b1f0--455a--919d--39e831bfe641 (253:38)'
  ]\ne4d32d17-3117-4e76-a65f-905cac969b45/['2f8821b3-cb44-4691-ad32-b5a38c070e97', '34da1d71-99ec-41bf-ad8a-e0befed4d616', '9edc353f-b1f0-455a-919d-39e831bfe641', 'e52066c0-328c-4eb2-8403-4c75563b9d12']", 'code': 552}}
Thread-1727::ERROR::2013-01-23 18:28:49,753::libvirtvm::2015::vm.Vm::(snapshot) vmId=`50fe6550-7b9f-4b26-ae19-df17b8e4b53a`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'a324e963-3819-487a-8835-0d42e71b1d74', 'volumeID': '4aaa9c87-0125-45e7-aeb9-9afc4a912f9b', 'imageID': '2795ea4a-78f4-46ef-9745-0600fa792773'}
Thread-1727::DEBUG::2013-01-23 18:28:49,754::BindingXMLRPC::915::vds::(wrapper) return vmSnapshot with {'status': {'message': 'Snapshot failed', 'code': 48}}
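For triage, the actionable entries in the wall of device-mapper noise above are the "Unable to deactivate" lines, which name the exact LVs left busy. A small sketch like the following pulls them out of such a message (the regex and the sample string are my own, not part of vdsm):

```python
import re

# Abbreviated sample in the shape of the vdsm error message above.
LOG_SNIPPET = (
    "'  device-mapper: remove ioctl on  failed: Device or resource busy', "
    "'  Unable to deactivate e4d32d17--3117--4e76--a65f--905cac969b45-2f8821b3--cb44--4691--ad32--b5a38c070e97 (253:45)', "
    "'  Unable to deactivate e4d32d17--3117--4e76--a65f--905cac969b45-34da1d71--99ec--41bf--ad8a--e0befed4d616 (253:46)'"
)

def failed_deactivations(message):
    """Return (dm_name, major:minor) pairs for every LV that could not be deactivated."""
    return re.findall(r"Unable to deactivate (\S+) \((\d+:\d+)\)", message)

print(failed_deactivations(LOG_SNIPPET))
```

The `253:NN` pair is the device-mapper major:minor, which can then be matched against the running qemu process's open devices to see what is holding the LV busy.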

Comment 1 Allon Mureinik 2013-01-23 17:12:09 UTC
Daniel - shouldn't this have been solved already?
Vladimir - can you try and reproduce with sf4?

Comment 2 vvyazmin@redhat.com 2013-01-24 08:31:14 UTC
(In reply to comment #1)
> Daniel - shouldn't this have been solved already?
> Vladimir - can you try and reproduce with sf4?

RHEVM 3.2 - SF04, not released yet

Comment 3 vvyazmin@redhat.com 2013-01-24 09:27:04 UTC
Verified with R&D: Daniel Erez

The problem occurs only in FC DC environments (iSCSI works correctly).

The same issue occurs with Live Snapshot, likewise only in FC DC environments.

Comment 4 vvyazmin@redhat.com 2013-01-24 09:29:23 UTC
Created attachment 686596 [details]
## Logs vdsm, rhevm on FC environment

Comment 5 vvyazmin@redhat.com 2013-01-27 10:50:32 UTC
No issues found.

Verified on RHEVM 3.2 - SF04 environment (iSCSI and FC):

RHEVM: rhevm-3.2.0-5.el6ev.noarch
VDSM: vdsm-4.10.2-4.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-16.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64