Bug 1248493 - After failed MIXED Live Storage Migration - correct problem - second attempt fails
Summary: After failed MIXED Live Storage Migration - correct problem - second attempt fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-core
Version: 3.6
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: m1
Target Release: 3.6.0
Assignee: Daniel Erez
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2015-07-30 12:08 UTC by Kevin Alon Goldblatt
Modified: 2016-03-10 06:12 UTC (History)
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-04 13:46:09 UTC
oVirt Team: Storage
Embargoed:


Attachments
engine, vdsm, server logs (1.75 MB, application/x-gzip)
2015-07-30 12:09 UTC, Kevin Alon Goldblatt

Description Kevin Alon Goldblatt 2015-07-30 12:08:46 UTC
Description of problem:
Failed an LSM operation by killing the PID of the VM during the session. Started the VM again and ran the LSM again - it failed again.

Version-Release number of selected component (if applicable):
vdsm-4.17.0-1054.git562e711.el7.noarch
ovirt-engine-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch

How reproducible:
Happened at least once

Steps to Reproduce:
1. Created VM with block, nfs and gluster disks
2. Started the VM
3. Ran LSM FROM gluster to block domains - just after the automatic snapshot is created, killed the VM PID on the vdsm host >>>>> as expected the LSM operation fails
4. Started the VM again and ran the LSM again >>>>> the operation fails again

Actual results:
After deliberately failing an LSM operation by killing the VM PID, running another LSM operation fails

Expected results:
After an LSM operation fails due to killing the VM PID, restarting the VM and running the operation again should succeed
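
For reference, below is a minimal sketch of how step 3 (killing the VM's qemu process on the vdsm host) can be performed. The helper name, the qemu command-line match, and the VM name are illustrative assumptions, not vdsm or engine code.

# Illustrative sketch only: kill the qemu process of a running VM on the
# vdsm host to make the in-flight LSM fail (step 3 above). Assumes the VM
# name appears on the qemu command line, as it does for libvirt-managed
# guests, and that the script runs as root on the host.
import os
import signal
import subprocess

def kill_vm_process(vm_name):
    # pgrep -f matches against the full command line of each process;
    # it raises CalledProcessError if no matching process is found.
    pids = subprocess.check_output(['pgrep', '-f', 'qemu.*' + vm_name])
    for pid in pids.split():
        os.kill(int(pid), signal.SIGKILL)

# Example (VM name assumed from the disk names in the logs):
# kill_vm_process('vm14')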

Additional info:
Engine log
2015-07-29 15:28:28,651 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (default task-8) [461c39da] Lock Acquired to object 'EngineLock:{exclusiveLocks='[d4ecc827-89e0-4be6-94c6-a389705540a1=<DISK, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName vm14_Disk3>, d7204318-2c03-42b3-ac34-77445245fb0a=<DISK, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName vm14_Disk2>]', sharedLocks='[6a2413eb-7df4-4dde-b8e1-6e7380527d49=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]'}'

.
.
.
.
2015-07-29 15:29:10,859 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[6a2413eb-7df4-4dde-b8e1-6e7380527d49=<VM, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName vm14_Disk3>]'}'
2015-07-29 15:29:10,887 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] Running command: LiveMigrateDiskCommand Task handler: CreateImagePlaceholderTaskHandler internal: true. Entities affected :  ID: d4ecc827-89e0-4be6-94c6-a389705540a1 Type: DiskAction group CONFIGURE_DISK_STORAGE with role type USER,  ID: bcfe8b41-869f-4897-af49-6f9bda07f0b5 Type: StorageAction group CREATE_DISK with role type USER
2015-07-29 15:29:10,896 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CloneImageGroupStructureVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] START, CloneImageGroupStructureVDSCommand( TargetDomainImageGroupVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000157', ignoreFailoverLimit='false', storageDomainId='8a98e03c-93d2-430a-aef8-d4e410939eaa', imageGroupId='d4ecc827-89e0-4be6-94c6-a389705540a1', dstDomainId='bcfe8b41-869f-4897-af49-6f9bda07f0b5'}), log id: 4032b454
2015-07-29 15:29:10,978 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CloneImageGroupStructureVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] FINISH, CloneImageGroupStructureVDSCommand, log id: 4032b454
2015-07-29 15:29:10,996 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '93e2c214-d13e-4889-bf2b-2c8b4d8889b0'
2015-07-29 15:29:10,997 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (org.ovirt.thread.pool-8-thread-17) [160fd18b] CommandMultiAsyncTasks::AttachTask: Attaching task '479c87a3-2f86-4b50-9f0d-333333883d79' to command '93e2c214-d13e-4889-bf2b-2c8b4d8889b0'.
2015-07-29 15:29:11,008 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-8-thread-17) [160fd18b] Adding task '479c87a3-2f86-4b50-9f0d-333333883d79' (Parent Command 'LiveMigrateDisk', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2015-07-29 15:29:11,029 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-17) [160fd18b] Correlation ID: 160fd18b, Job ID: 01ed0301-56cd-4138-bf5e-637a478752c9, Call Stack: null, Custom Event ID: -1, Message: User admin@internal moving disk vm14_Disk3 to domain block4.
2015-07-29 15:29:11,029 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] BaseAsyncTask::startPollingTask: Starting to poll task '479c87a3-2f86-4b50-9f0d-333333883d79'.
2015-07-29 15:29:11,042 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] BaseAsyncTask::startPollingTask: Starting to poll task '9b4ec936-f599-45cf-b1a6-8c771fc07892'.
2015-07-29 15:29:11,042 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] BaseAsyncTask::startPollingTask: Starting to poll task '479c87a3-2f86-4b50-9f0d-333333883d79'.
2015-07-29 15:29:11,045 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] Lock freed to object 'EngineLock:{exclusiveLocks='[d4ecc827-89e0-4be6-94c6-a389705540a1=<DISK, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName vm14_Disk3>, d7204
ILED_OBJECT_LOCKED>]'}'
.
.
.
2015-07-29 15:29:11,046 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateVmDisks' completed, handling the result.
2015-07-29 15:29:11,046 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateVmDisks' succeeded, clearing tasks.
2015-07-29 15:29:11,053 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '66b9f329-aeba-45d8-bc45-9d442ea05880'
2015-07-29 15:29:11,055 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000157', ignoreFailoverLimit='false', taskId='66b9f329-aeba-45d8-bc45-9d442ea05880'}), log id: 12961282
2015-07-29 15:29:11,055 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] START, HSMClearTaskVDSCommand(HostName = fake_host_1, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='be6c31b0-8b62-44bc-bcfb-1cc377cc0280', taskId='66b9f329-aeba-45d8-bc45-9d442ea05880'}), log id: 3a991e56
2015-07-29 15:29:11,124 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] FINISH, HSMClearTaskVDSCommand, log id: 3a991e56
2015-07-29 15:29:11,124 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] FINISH, SPMClearTaskVDSCommand, log id: 12961282
2015-07-29 15:29:11,140 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] BaseAsyncTask::removeTaskFromDB: Removed task '66b9f329-aeba-45d8-bc45-9d442ea05880' from DataBase
2015-07-29 15:29:11,140 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '8aec7c0c-9730-42c8-be82-a063d181fd8e'
2015-07-29 15:29:11,142 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000157', ignoreFailoverLimit='false', taskId='8aec7c0c-9730-42c8-be82-a063d181fd8e'}), log id: 6a61e712
2015-07-29 15:29:11,142 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] START, HSMClearTaskVDSCommand(HostName = fake_host_1, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='be6c31b0-8b62-44bc-bcfb-1cc377cc0280', taskId='8aec7c0c-9730-42c8-be82-a063d181fd8e'}), log id: 4189854e
2015-07-29 15:29:11,167 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] FINISH, HSMClearTaskVDSCommand, log id: 4189854e
2015-07-29 15:29:11,167 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [160fd18b] FINISH, SPMClearTaskVDSCommand, log id: 6a61e712
2015-07-29 15:29:11,178 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] BaseAsyncTask::removeTaskFromDB: Removed task '8aec7c0c-9730-42c8-be82-a063d181fd8e' from DataBase
2015-07-29 15:29:11,178 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-17) [160fd18b] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '51eef5fe-0585-4b6a-ae0c-9a2489cd8bca'
2015-07-29 15:29:18,060 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] Polling and updating Async Tasks: 4 tasks, 2 tasks to poll now
2015-07-29 15:29:18,070 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] Failed in 'HSMGetAllTasksStatusesVDS' method
2015-07-29 15:29:18,078 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM 'fake_host_1' command failed: Cannot create Logical Volume
2015-07-29 15:29:18,078 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] SPMAsyncTask::PollTask: Polling task '9b4ec936-f599-45cf-b1a6-8c771fc07892' (Parent Command 'LiveMigrateDisk', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2015-07-29 15:29:18,092 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] BaseAsyncTask::onTaskEndSuccess: Task '9b4ec936-f599-45cf-b1a6-8c771fc07892' (Parent Command 'LiveMigrateDisk', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2015-07-29 15:29:18,096 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] CommandAsyncTask::endActionIfNecessary: All tasks of command 'c4f51321-bbce-401f-8e28-416d9611c539' has ended -> executing 'endAction'
2015-07-29 15:29:18,096 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'c4f51321-bbce-401f-8e28-416d9611c539'): calling endAction '.
2015-07-29 15:29:18,096 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-19) [2ee9c7de] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'LiveMigrateDisk', executionIndex: '0'
2015-07-29 15:29:18,101 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] SPMAsyncTask::PollTask: Polling task '479c87a3-2f86-4b50-9f0d-333333883d79' (Parent Command 'LiveMigrateDisk', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'.
2015-07-29 15:29:18,128 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] BaseAsyncTask::logEndTaskFailure: Task '479c87a3-2f86-4b50-9f0d-333333883d79' (Parent Command 'LiveMigrateDisk', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume, code = 550',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Cannot create Logical Volume, code = 550'
2015-07-29 15:29:18,131 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] CommandAsyncTask::endActionIfNecessary: All tasks of command '93e2c214-d13e-4889-bf2b-2c8b4d8889b0' has ended -> executing 'endAction'
2015-07-29 15:29:18,132 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-77) [2ee9c7de] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: '93e2c214-d13e-4889-bf2b-2c8b4d8889b0'): calling endAction '.
2015-07-29 15:29:18,136 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-32) [2ee9c7de] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'LiveMigrateDisk', executionIndex: '0'
2015-07-29 15:29:18,402 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-8-thread-19) [1441d068] Ending command 'org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand' successfully.
2015-07-29 15:29:18,406 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-8-thread-19) [1441d068] Running command: LiveMigrateDiskCommand Task handler: VmReplicateDiskStartTaskHandler internal: false. Entities affected :  ID: d7204318-2c03-42b3-ac34-77445245fb0a Type: DiskAction group CONFIGURE_DISK_STORAGE with role type USER,  ID: 8a98e03c-93d2-430a-aef8-d4e410939eaa Type: StorageAction group CREATE_DISK with role type USER
2015-07-29 15:29:18,407 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskStartVDSCommand] (org.ovirt.thread.pool-8-thread-19) [1441d068] START, VmReplicateDiskStartVDSCommand(HostName = fake_host_1, VmReplicateDiskParameters:{runAsync='true', hostId='be6c31b0-8b62-44bc-bcfb-1cc377cc0280', vmId='6a2413eb-7df4-4dde-b8e1-6e7380527d49'}), log id: 4d2d4b8b
2015-07-29 15:29:18,425 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-8-thread-32) [160fd18b] Ending command 'org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand' with failure.
2015-07-29 15:29:18,429 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-8-thread-32) [160fd18b] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[6a2413eb-7df4-4dde-b8e1-6e7380527d49=<VM, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName vm14_Disk3>]'}'
2015-07-29 15:29:18,451 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-32) [160fd18b] Correlation ID: 160fd18b, Job ID: 01ed0301-56cd-4138-bf5e-637a478752c9, Call Stack: null, Custom Event ID: -1, Message: User admin@internal have failed to move disk vm14_Disk3 to domain block4.


VDSM.log
-------------------------------------

libvirtEventLoop::DEBUG::2015-07-29 18:33:45,256::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,256::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
libvirtEventLoop::ERROR::2015-07-29 18:33:45,257::task::866::Storage.TaskManager.Task::(_setError) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3254, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1014, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1193, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 827, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\', \'  Logical volume b534bd52-01a3-47d1-a32c-830d02b382ef/383a503c-7a5d-4ea8-b6de-0c2950c26b58 in use.\']\\nb534bd52-01a3-47d1-a32c-830d02b382ef/[\'ca67229e-e245-4fa5-a57f-47568dc9bf7c\', \'383a503c-7a5d-4ea8-b6de-0c2950c26b58\']",)',)
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,257::task::885::Storage.TaskManager.Task::(_run) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::Task._run: 814c5013-b6b2-4a3a-929c-1f495a2298fb (u'b534bd52-01a3-47d1-a32c-830d02b382ef', u'00000001-0001-0001-0001-000000000157', u'd7204318-2c03-42b3-ac34-77445245fb0a') {} failed - stopping task
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,257::task::1246::Storage.TaskManager.Task::(stop) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::stopping in state preparing (force False)
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,258::task::993::Storage.TaskManager.Task::(_decref) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::ref 1 aborting True
libvirtEventLoop::INFO::2015-07-29 18:33:45,258::task::1171::Storage.TaskManager.Task::(prepare) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::aborting: Task is aborted: 'Cannot deactivate Logical Volume' - code 552
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,258::task::1176::Storage.TaskManager.Task::(prepare) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::Prepare: aborted: Cannot deactivate Logical Volume
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,258::task::993::Storage.TaskManager.Task::(_decref) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::ref 0 aborting True
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,258::task::928::Storage.TaskManager.Task::(_doAbort) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::Task._doAbort: force False
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,259::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,259::task::595::Storage.TaskManager.Task::(_updateState) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::moving from state preparing -> state aborting
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,259::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::_aborting: recover policy none
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,259::task::595::Storage.TaskManager.Task::(_updateState) Task=`814c5013-b6b2-4a3a-929c-1f495a2298fb`::moving from state aborting -> state failed
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,259::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {u'Storage.b534bd52-01a3-47d1-a32c-830d02b382ef': < ResourceRef 'Storage.b534bd52-01a3-47d1-a32c-830d02b382ef', isValid: 'True' obj: 'None'>}
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,259::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,260::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b534bd52-01a3-47d1-a32c-830d02b382ef'
libvirtEventLoop::DEBUG::2015-07-29 18:33:45,260::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b534bd52-01a3-47d1-a32c-830d02b382ef' (1 active users)
libvirtEventLoop::ERROR::2015-07-29 18:33:45,260::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Cannot deactivate Logical Volume: (\'General Storage Exception: ("5 [] [\\\'  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\\\', \\\'  Logical volume b534bd52-01a3-47d1-a32c-830d02b382ef/383a503c-7a5d-4ea8-b6de-0c2950c26b58 in use.\\\']\\\\nb534bd52-01a3-47d1-a32c-830d02b382ef/[\\\'ca67229e-e245-4fa5-a57f-47568dc9bf7c\\\', \\\'383a503c-7a5d-4ea8-b6de-0c2950c26b58\\\']",)\',)', 'code': 552}}
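
The error above shows vdsm failing to deactivate a logical volume in VG b534bd52-01a3-47d1-a32c-830d02b382ef because it is still in use on the host. Below is a small diagnostic sketch (illustrative only, not vdsm code) that lists which LVs in that VG are still marked open:

# Diagnostic sketch only: list LVs in the VG named in the error above whose
# LVM attribute string has the device-open flag set, i.e. the volumes that
# deactivateImage could not deactivate (error code 552).
import subprocess

VG_UUID = 'b534bd52-01a3-47d1-a32c-830d02b382ef'  # VG from the log above

out = subprocess.check_output(
    ['lvs', '--noheadings', '-o', 'lv_name,lv_attr', VG_UUID]).decode()
for line in out.splitlines():
    if not line.strip():
        continue
    name, attr = line.split()
    if attr[5] == 'o':  # 6th character of lv_attr marks an open device
        print('still open:', name)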

.
.
.
479c87a3-2f86-4b50-9f0d-333333883d79::DEBUG::2015-07-29 18:41:49,981::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  Logical volume "f55fc630-ef20-
486b-8485-42151d265326" already exists in volume group "bcfe8b41-869f-4897-af49-6f9bda07f0b5"\n'; <rc> = 5
479c87a3-2f86-4b50-9f0d-333333883d79::DEBUG::2015-07-29 18:41:49,987::lvm::301::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_
cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/360060160f4a03000798dd6205b34e511|/dev/mapper/360060160f4a030007b8dd6205b34e511|/dev/mapper/360060160f4a030007d8dd6205b34e511|/dev/mapp
er/360060160f4a03000979e521debece411|/dev/mapper/360060160f4a03000999e521debece411|/dev/mapper/360060160f4a030009b9e521debece411|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0
 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 2048m --addtag OVIRT_VOL_INITIALIZING --name f55fc630-ef20-486b-8485-42151d265326 bcfe8b41-869f-4897-af49-6f9bda07f0b5 (cwd None)
9b4ec936-f599-45cf-b1a6-8c771fc07892::INFO::2015-07-29 18:41:50,000::volume::415::Storage.Volume::(create) Creating volume 383a503c-7a5d-4ea8-b6de-0c2950c26b58
9b4ec936-f599-45cf-b1a6-8c771fc07892::DEBUG::2015-07-29 18:41:50,002::task::752::Storage.TaskManager.Task::(_save) Task=`9b4ec936-f599-45cf-b1a6-8c771fc07892`::_save: orig /rhev/data-center/00000001-0001-0001-0001-000000000157/mastersd/m
aster/tasks/9b4ec936-f599-45cf-b1a6-8c771fc07892 temp /rhev/data-center/00000001-0001-0001-0001-000000000157/mastersd/master/tasks/9b4ec936-f599-45cf-b1a6-8c771fc07892.temp
479c87a3-2f86-4b50-9f0d-333333883d79::DEBUG::2015-07-29 18:41:50,298::lvm::301::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  Logical volume "f55fc630-ef20-
486b-8485-42151d265326" already exists in volume group "bcfe8b41-869f-4897-af49-6f9bda07f0b5"\n'; <rc> = 5
479c87a3-2f86-4b50-9f0d-333333883d79::ERROR::2015-07-29 18:41:50,298::volume::437::Storage.Volume::(create) Failed to create volume /rhev/data-center/00000001-0001-0001-0001-000000000157/bcfe8b41-869f-4897-af49-6f9bda07f0b5/images/d4ecc827-89e0-4be6-94c6-a389705540a1/f55fc630-ef20-486b-8485-42151d265326: Cannot create Logical Volume: ('bcfe8b41-869f-4897-af49-6f9bda07f0b5', 'f55fc630-ef20-486b-8485-42151d265326')
479c87a3-2f86-4b50-9f0d-333333883d79::ERROR::2015-07-29 18:41:50,298::volume::473::Storage.Volume::(create) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/volume.py", line 435, in create
    srcVolUUID, volPath)
  File "/usr/share/vdsm/storage/blockVolume.py", line 133, in _create
    initialTag=TAG_VOL_UNINIT)
  File "/usr/share/vdsm/storage/lvm.py", line 1090, in createLV
    raise se.CannotCreateLogicalVolume(vgName, lvName)
CannotCreateLogicalVolume: Cannot create Logical Volume: ('bcfe8b41-869f-4897-af49-6f9bda07f0b5', 'f55fc630-ef20-486b-8485-42151d265326')
479c87a3-2f86-4b50-9f0d-333333883d79::ERROR::2015-07-29 18:41:50,299::image::402::Storage.Image::(_createTargetImage) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 385, in _createTargetImage
    srcVolUUID=volParams['parent'])
  File "/usr/share/vdsm/storage/sd.py", line 430, in createVolume
    preallocate, diskType, volUUID, desc, srcImgUUID, srcVolUUID)
  File "/usr/share/vdsm/storage/volume.py", line 435, in create
    srcVolUUID, volPath)
  File "/usr/share/vdsm/storage/blockVolume.py", line 133, in _create
    initialTag=TAG_VOL_UNINIT)
  File "/usr/share/vdsm/storage/lvm.py", line 1090, in createLV
    raise se.CannotCreateLogicalVolume(vgName, lvName)
CannotCreateLogicalVolume: Cannot create Logical Volume: ('bcfe8b41-869f-4897-af49-6f9bda07f0b5', 'f55fc630-ef20-486b-8485-42151d265326')
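
The "already exists in volume group" failure above suggests that the destination LV created by the first, aborted LSM attempt was never cleaned up, so the retry's lvcreate collides with it. Below is a minimal sketch for confirming the leftover LV on the SPM host, written under that assumption (it is not the engine's or vdsm's cleanup path):

# Sketch only: check whether the destination LV from the failed first LSM
# attempt is still present in the target VG. The VG and LV names are taken
# from the lvcreate error above; requires root on the SPM host.
import subprocess

DST_VG = 'bcfe8b41-869f-4897-af49-6f9bda07f0b5'
LV_NAME = 'f55fc630-ef20-486b-8485-42151d265326'

def leftover_lv_exists(vg, lv):
    # 'lvs VG/LV' exits non-zero when the LV does not exist
    result = subprocess.run(['lvs', '--noheadings', vg + '/' + lv],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return result.returncode == 0

print('leftover destination LV present:', leftover_lv_exists(DST_VG, LV_NAME))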


Comment 1 Kevin Alon Goldblatt 2015-07-30 12:09:48 UTC
Created attachment 1057640 [details]
engine, vdsm, server logs

Adding logs

Comment 2 Allon Mureinik 2015-07-30 12:47:54 UTC
Daniel, can you take a look please?
Seems we aren't clearing up something we should.

Comment 3 Nir Soffer 2015-07-30 12:57:49 UTC
(In reply to Kevin Alon Goldblatt from comment #0)
> Description of problem:
> Failed a LSM operation by killing the PID of the VM during the session.
> Started the VM again and ran the LSM again - It failed again

Is this related to lsm between different domain types?

Please check migration from block to block or file to file, and update
the title of the bug if this issue is not related to different storage
domains type.

Also, is this a new problem in 3.6 or it also happens in 3.5 engine?

Comment 4 Kevin Alon Goldblatt 2015-08-19 11:17:28 UTC
(In reply to Nir Soffer from comment #3)
> (In reply to Kevin Alon Goldblatt from comment #0)
> > Description of problem:
> > Failed a LSM operation by killing the PID of the VM during the session.
> > Started the VM again and ran the LSM again - It failed again
> 
> Is this related to lsm between different domain types?
> 
> Please check migration from block to block or file to file, and update
> the title of the bug if this issue is not related to different storage
> domains type.
> 
> Also, is this a new problem in 3.6 or it also happens in 3.5 engine?

I tested this scenario with Mixed LSM:
Steps to Reproduce:
1. Created VM with block, nfs and gluster disks
2. Started the VM
3. Ran LSM FROM (gluster) to (block) domains - just after the automatic snapshot is created, killed the VM PID on the vdsm host >>>>> as expected the LSM operation fails
4. Started the VM again and ran the LSM again >>>>> the operation fails again

I will test this for block-to-block on v3.6 to see whether this happens there too.
Testing this on v3.5 will have to wait as our systems are all 3.6 now.

Comment 5 Kevin Alon Goldblatt 2015-08-25 15:08:05 UTC
(In reply to Nir Soffer from comment #3)
> (In reply to Kevin Alon Goldblatt from comment #0)
> > Description of problem:
> > Failed a LSM operation by killing the PID of the VM during the session.
> > Started the VM again and ran the LSM again - It failed again
> 
> Is this related to lsm between different domain types?
> 
> Please check migration from block to block or file to file, and update
> the title of the bug if this issue is not related to different storage
> domains type.
> 
> Also, is this a new problem in 3.6 or it also happens in 3.5 engine?

I tested this again as requested. I used v3.6.0.9 on a RHEL 6.7 engine and a RHEL 7.2 host.
Tried to reproduce this on a block-to-block LSM scenario. The LSM failed with a CDA (CanDoAction) error during the Create Snapshot operation. Submitted bz:
https://bugzilla.redhat.com/show_bug.cgi?id=1256786

Comment 6 Yaniv Lavi 2015-09-06 14:40:43 UTC
Can this bug be closed? I see the fix is ON_QA for the RHEL 7.2 issue.

Comment 7 Kevin Alon Goldblatt 2015-09-08 19:20:56 UTC
I retested this scenario with the new version:
-------------------------------------------------------------
rhevm-3.6.0-0.12.master.el6.noarch V3.6.10
vdsm-4.17.3-1.el7ev.noarch


The Mixed LSM works fine now with the following scenario:
------------------------------------------------------------
Steps to Reproduce:
1. Created VM with block, nfs and gluster disks
2. Started the VM
3. Ran LSM FROM gluster to block domains - just after the automatic snapshot is created, killed the VM PID on the vdsm host >>>>> as expected the LSM operation fails
4. Started the VM again and ran the LSM again >>>>> NOW THE OPERATION IS SUCCESSFUL


This BZ can now be moved to VERIFIED / CLOSED

Comment 8 Allon Mureinik 2015-09-08 19:30:21 UTC
(In reply to Kevin Alon Goldblatt from comment #7)
> This BZ can now be moved to VERIFIED / CLOSED
Moving to VERIFIED based on that comment. We'll move it to CLOSED when 3.6.0 is released.

Comment 9 Sandro Bonazzola 2015-11-04 13:46:09 UTC
oVirt 3.6.0 has been released on November 4th, 2015 and should fix this issue.
If problems still persist, please open a new BZ and reference this one.

