Bug 1251956 - Live storage migration is broken
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assigned To: Daniel Erez
QA Contact: Kevin Alon Goldblatt
Keywords: Regression
Duplicates: 1256786
Depends On:
Blocks: 1058757
 
Reported: 2015-08-10 08:02 EDT by Arik
Modified: 2016-03-10 07:01 EST
CC List: 12 users

See Also:
Fixed In Version: 3.6.0-10
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
engine log (6.78 MB, text/plain)
2015-08-10 08:04 EDT, Arik


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 44647 master MERGED core: avoid disks lock check on create snapshot while LSM Never
oVirt gerrit 44741 ovirt-engine-3.6 MERGED core: avoid disks lock check on create snapshot while LSM Never

Description Arik 2015-08-10 08:02:26 EDT
Description of problem:
Unable to migrate a disk between different storage domains while the VM is up.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Run a VM
2. Try to move one of its disks to a different storage domain
3.

Actual results:
The operation fails and the disk remains locked in the DB.

Expected results:
The disk should move and then the lock should be released

Additional info:
It seems that the problem is that CreateAllSnapshotsFromVmCommand tries to lock the disk while it is already locked (exclusively) by LiveMigrateVmDisksCommand; a simplified sketch of the conflict follows.
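
To make the conflict concrete, here is a minimal, self-contained sketch of the described double-lock scenario, assuming a simplified lock manager. The class and method names are illustrative only and are not the actual ovirt-engine LockManager API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified illustration of the exclusive-lock conflict between two commands
// that target the same disk ID. All names here are hypothetical.
public class LockConflictSketch {

    // Maps a disk ID to the command that currently holds it exclusively.
    static final Map<String, String> lockedDisks = new ConcurrentHashMap<>();

    // Succeeds only if no other command already holds the disk exclusively.
    static boolean tryExclusiveLock(String diskId, String owner) {
        return lockedDisks.putIfAbsent(diskId, owner) == null;
    }

    static void release(String diskId, String owner) {
        lockedDisks.remove(diskId, owner);
    }

    public static void main(String[] args) {
        String diskId = "63213433-b3e9-4fab-809a-60972897baea";

        // LiveMigrateVmDisksCommand acquires the disk exclusively first ...
        System.out.println(tryExclusiveLock(diskId, "LiveMigrateVmDisksCommand"));       // true

        // ... so the snapshot step's own lock attempt on the same disk fails,
        // mirroring the ACTION_TYPE_FAILED_DISKS_LOCKED CanDoAction error in the log.
        System.out.println(tryExclusiveLock(diskId, "CreateAllSnapshotsFromVmCommand")); // false

        // On the failure, the migration command frees its lock and the operation aborts.
        release(diskId, "LiveMigrateVmDisksCommand");
    }
}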
Comment 1 Arik 2015-08-10 08:04:40 EDT
Created attachment 1061017 [details]
engine log

The relevant part is:
Call Stack: null, Custom Event ID: -1, Message: VM windows7 started on Host bamba
2015-08-10 14:45:11,441 INFO  [org.ovirt.engine.core.bll.MoveDisksCommand] (default task-27) [37ac4b8c] Running command: MoveDisksCommand internal: false. Entities affected :  ID: 63213433-b3e9-4fab-809a-60972897baea Type: DiskAction group CONFIGURE_DISK_STORAGE with role type USER
2015-08-10 14:45:11,660 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (default task-27) [37ac4b8c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[63213433-b3e9-4fab-809a-60972897baea=<DISK, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName windows7>]', sharedLocks='[564dffd1-06ca-ccae-a533-97aac010ea3d=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]'}'
2015-08-10 14:45:11,784 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (org.ovirt.thread.pool-8-thread-7) [37ac4b8c] Running command: LiveMigrateVmDisksCommand Task handler: LiveSnapshotTaskHandler internal: false. Entities affected :  ID: 63213433-b3e9-4fab-809a-60972897baea Type: DiskAction group DISK_LIVE_STORAGE_MIGRATION with role type USER
2015-08-10 14:45:12,080 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-8-thread-7) [42da1270] CanDoAction of action 'CreateAllSnapshotsFromVm' failed for user admin@internal. Reasons: VAR__ACTION__CREATE,VAR__TYPE__SNAPSHOT,ACTION_TYPE_FAILED_DISKS_LOCKED,$diskAliases windows7
2015-08-10 14:45:12,151 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (org.ovirt.thread.pool-8-thread-7) [42da1270] Lock freed to object 'EngineLock:{exclusiveLocks='[63213433-b3e9-4fab-809a-60972897baea=<DISK, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName windows7>]', sharedLocks='[564dffd1-06ca-ccae-a533-97aac010ea3d=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]'}
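
The merged gerrit patches listed under External Trackers ("core: avoid disks lock check on create snapshot while LSM") point at the fix direction: skip the disks-locked validation when the snapshot is created as an internal step of live storage migration. The sketch below only illustrates that idea under stated assumptions; the names and validation structure are hypothetical, not the actual engine code.

// Hedged sketch: skip the disks-locked check when the snapshot is a child
// step of live storage migration, since the parent already owns the lock.
public class SnapshotValidationSketch {

    // Hypothetical marker for the command that triggered the snapshot.
    enum ParentCommand { NONE, LIVE_MIGRATE_VM_DISKS }

    // Placeholder for a lock-manager/DB query; in the failing flow the LSM
    // command has already locked the disk exclusively, so this returns true.
    static boolean isDiskLocked(String diskId) {
        return true;
    }

    // Returns true when validation passes and the snapshot may proceed.
    static boolean validateDisksNotLocked(String diskId, ParentCommand parent) {
        if (parent == ParentCommand.LIVE_MIGRATE_VM_DISKS) {
            // The parent LSM command already owns the disk lock; re-checking it
            // here would always fail, so the check is skipped for this flow.
            return true;
        }
        return !isDiskLocked(diskId);
    }

    public static void main(String[] args) {
        String diskId = "63213433-b3e9-4fab-809a-60972897baea";
        System.out.println(validateDisksNotLocked(diskId, ParentCommand.NONE));                  // false
        System.out.println(validateDisksNotLocked(diskId, ParentCommand.LIVE_MIGRATE_VM_DISKS)); // true
    }
}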
Comment 2 Carlos Mestre González 2015-08-10 10:35:10 EDT
Can confirm it happens with the latest build, ovirt-engine-3.6.0-0.0.master.20150804111407.git122a3a0.el6.noarch; I tried to move a disk between iSCSI domains on RHEL 7.1.
Comment 3 Allon Mureinik 2015-09-02 09:10:47 EDT
*** Bug 1256786 has been marked as a duplicate of this bug. ***
Comment 4 Carlos Mestre González 2015-09-03 04:21:06 EDT
I encountered another issue with the latest build, 3.6.0-10: the operation succeeds, but the job "Migrating ..." is never marked as FINISHED. Should I open a new BZ or add the details here?
Comment 5 Daniel Erez 2015-09-03 04:28:44 EDT
(In reply to Carlos Mestre González from comment #4)
> I encountered another issue with the latest build, 3.6.0-10: the operation
> succeeds, but the job "Migrating ..." is never marked as FINISHED. Should I
> open a new BZ or add the details here?

It's an issue in the task monitoring, so please open a new BZ.
Can you please also attach the relevant logs and screenshots?
Comment 6 Kevin Alon Goldblatt 2015-09-16 05:14:03 EDT
Verified this with the following version:
----------------------------------------------------
rhevm-3.6.0-0.12.master.el6.noarch
vdsm-4.17.3-1.el7ev.noarch

Verified using the following scenario:
---------------------------------------------------
Steps to Reproduce:
1. Run a VM
2. Try to move one of its disks to a different storage domain >>>>> OPERATION WORKS FINE

Moving to VERIFIED!
Comment 7 Allon Mureinik 2016-03-10 05:39:19 EST
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE
Comment 8 Allon Mureinik 2016-03-10 05:39:24 EST
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE
Comment 9 Allon Mureinik 2016-03-10 05:45:10 EST
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE
Comment 10 Allon Mureinik 2016-03-10 07:01:48 EST
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE
