Bug 1753168 - [downstream clone - 4.3.6] teardownImage attempts to deactivate in-use LVs, leaving the VM disk image/volumes in a locked state.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.3.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.3.6
Target Release: 4.3.6
Assignee: Eyal Shenitzky
QA Contact: Shir Fishbain
URL:
Whiteboard:
Depends On: 1749944
Blocks:
 
Reported: 2019-09-18 10:03 UTC by RHV bug bot
Modified: 2023-03-24 15:27 UTC
CC List: 20 users

Fixed In Version: ovirt-engine-4.3.6.6
Doc Type: Bug Fix
Doc Text:
Previously, a snapshot disk that was downloaded while attached to a backup virtual machine became locked because the disk teardown failed. The current release fixes this error by skipping the disk teardown when the disk is a snapshot disk.
Clone Of: 1749944
Environment:
Last Closed: 2019-10-10 15:37:20 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments
Logs (2.83 MB, application/zip), 2019-09-25 13:59 UTC, Shir Fishbain


Links
Red Hat Knowledge Base (Solution) 4420841, last updated 2019-09-18 10:04:38 UTC
Red Hat Product Errata RHEA-2019:3010, last updated 2019-10-10 15:37:22 UTC
oVirt gerrit 103408, MERGED: core: prevent teardown of a snapshot disk that is used by running VM, last updated 2021-01-06 17:17:44 UTC

Description RHV bug bot 2019-09-18 10:03:52 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1749944 +++
======================================================================

Description of problem:

When using the Backup API, the following sequence takes place:

1) Snapshot Disk X of VM-A
2) Attach Disk X (transient) of the snapshot to the agent-VM
3) Agent-VM does backup of disk
4) Download the disk image
5) Teardown Disk X
6) Detach the Disk from the agent-VM
7) Remove snapshot of Disk X

The teardownImage call fails because the LVs of Disk X are still in use by VM-A.
VM-A and the agent VM can be on the same host or on different hosts.
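
For orientation, a minimal sketch of this sequence with the Python SDK (ovirtsdk4), in the spirit of the upstream vm_backup.py example; the connection details and UUIDs are placeholders, and the actual backup/download step is omitted:

~~~
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details and UUIDs, for illustration only.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='...',
                            ca_file='ca.pem')
vms_service = connection.system_service().vms_service()
vm_a_service = vms_service.vm_service('<vm-a-id>')
agent_vm_service = vms_service.vm_service('<agent-vm-id>')

# 1) Snapshot Disk X of VM-A.
snapshots_service = vm_a_service.snapshots_service()
snapshot = snapshots_service.add(
    types.Snapshot(description='backup', persist_memorystate=False))

# 2) Attach Disk X of the snapshot (transient) to the agent VM.
attachments_service = agent_vm_service.disk_attachments_service()
attachments_service.add(types.DiskAttachment(
    disk=types.Disk(id='<disk-x-id>', snapshot=types.Snapshot(id=snapshot.id)),
    interface=types.DiskInterface.VIRTIO,
    active=True))

# 3) + 4) The agent VM backs up the disk and/or the image is downloaded
#         through the image transfer API (omitted here).

# 5) + 6) Teardown of Disk X is driven by the engine (e.g. when the image
#         transfer is finalized); then detach the disk from the agent VM.
attachments_service.attachment_service('<disk-x-id>').remove()

# 7) Remove the snapshot of Disk X.
snapshots_service.snapshot_service(snapshot.id).remove()
~~~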

-	RHV-M audit log info:

~~~
           log_time            |            correlation_id            |     message                                                                                                                                                                                                                                                                      
-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------
 2019-09-06 10:55:20.463-04    |                                      | Backup of virtual machine 'AAAAA7.6' using snapshot 'AAAAA7.6-backup-cc1cd3c3-7c0b-476b-9d2b-daee8cb119b9' is starting.
 2019-09-06 10:55:24.27-04     | 1aee9d40-457e-4806-83c1-149dfec83925 | Snapshot 'AAAAA7.6-backup-cc1cd3c3-7c0b-476b-9d2b-daee8cb119b9' creation for VM 'AAAAA7.6' was initiated by admin@internal-authz.
 2019-09-06 10:55:43.825-04    | 1aee9d40-457e-4806-83c1-149dfec83925 | Snapshot 'AAAAA7.6-backup-cc1cd3c3-7c0b-476b-9d2b-daee8cb119b9' creation for VM 'AAAAA7.6' has been completed.
 2019-09-06 10:55:48.838-04    | 8b2bc54a-0073-4481-957e-4d00be3e739a | Disk rhel76_Disk1 was successfully attached to VM rhel76 by admin@internal-authz.
 2019-09-06 10:55:54.137-04    | 972fea1a-b008-4e38-bc8c-cf8d359e2195 | Image Download with disk rhel76_Disk1 was initiated by admin@internal-authz.
 2019-09-06 11:07:58.95-04     |                                      | VDSM rhevh-10.example.com command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e
-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
 2019-09-06 11:08:00.435-04    | 972fea1a-b008-4e38-bc8c-cf8d359e2195 | Image Download with disk rhel76_Disk1 succeeded.
~~~


-	From steps 5 and 6.
	
	VDSM logs:
~~~
2019-09-06 11:07:38,773-0400 INFO  (jsonrpc/3) [vdsm.api] START teardownImage(sdUUID=u'4f5ab93e-697b-4542-a7df-1b132fe3655c', spUUID=u'5a85dd39-0046-03e0-0383-000000000324', imgUUID=u'daa39bb3-38da-4325-a73c-c34abf58063d', volUUID=None) from=::ffff:10.10.178.31,37630, flow_id=972fea1a-b008-4e38-bc8c-cf8d359e2195, task_id=a44708ab-5ee2-4163-971d-35160d564f40 (api:48)

2019-09-06 11:07:58,848-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) from=::ffff:10.10.178.31,37630, flow_id=972fea1a-b008-4e38-bc8c-cf8d359e2195, task_id=a44708ab-5ee2-4163-971d-35160d564f40 (api:52)
2019-09-06 11:07:58,849-0400 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='a44708ab-5ee2-4163-971d-35160d564f40') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3250, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1390, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1423, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 926, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:58,849-0400 INFO  (jsonrpc/3) [storage.TaskManager.Task] (Task='a44708ab-5ee2-4163-971d-35160d564f40') aborting: Task is aborted: 'Cannot deactivate Logical Volume: (\'General Storage Exception: ("5 [] [\\\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\\\', \\\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\\\']\\\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\\\'f4d7e644-631a-47b9-85cc-84edf0fd4057\\\', \\\'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\\\']",)\',)' - code 552 (task:1181)
2019-09-06 11:07:58,850-0400 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) (dispatcher:83)
2019-09-06 11:07:58,850-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Image.teardown failed (error 552) in 20.07 seconds (__init__:312)

2019-09-06 11:09:03,197-0400 INFO  (jsonrpc/7) [api.virt] START hotunplugDisk(params={u'xml': u'<?xml version="1.0" encoding="UTF-8" standalone="yes"?><hotunplug><devices><disk><alias name="ua-daa39bb3-38da-4325-a73c-c34abf58063d"/></disk></devices></hotunplug>', u'vmId': u'dcbd4752-b844-4d05-a34d-fdd471931b45'}) from=::ffff:10.10.178.31,37630, flow_id=e232a75f-7d29-4023-a7f9-c808d06dff42, vmId=dcbd4752-b844-4d05-a34d-fdd471931b45 (api:48)
2019-09-06 11:09:23,438-0400 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) (dispatcher:83)
2019-09-06 11:09:23,463-0400 INFO  (jsonrpc/7) [api.virt] FINISH hotunplugDisk return=
2019-09-06 11:09:23,467-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.hotunplugDisk succeeded in 20.27 seconds (__init__:312)
~~~

-	Engine logs a Null Pointer Exception.

~~~
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM rhevh-10.example.com command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Command 'TeardownImageVDSCommand(HostName = rhevh-10.example.com, ImageActionsVDSCommandParameters:{hostId='878c17db-b9b3-4bb7-8569-f6249159e74d'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:59,273-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] FINISH, TeardownImageVDSCommand, return: , log id: 6f3fad40
2019-09-06 11:07:59,273-04 INFO  [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Exception in invoking callback of command TransferDiskImage (9674c7ed-ce8a-47fe-aff0-015eff1f4e01): VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Error invoking callback method 'doPolling' for 'ACTIVE' command '9674c7ed-ce8a-47fe-aff0-015eff1f4e01'
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) (Failed with error CannotDeactivateLogicalVolume and code 552)
        at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118) [bll.jar:]
        at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2075) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.tearDownImage(TransferDiskImageCommand.java:258) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleFinalizingSuccess(TransferDiskImageCommand.java:805) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeStateHandler(TransferDiskImageCommand.java:487) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.proceedCommandExecution(TransferDiskImageCommand.java:453) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferImageCommandCallback.doPolling(TransferImageCommandCallback.java:21) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_222]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_222]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_222]
        at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_222]
        at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
        at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:183) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:20) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) [vdsbroker.jar:]
        at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:398) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
        at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source) [:1.8.0_222]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
        at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:51) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:78) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
        at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source) [:1.8.0_222]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
        at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source) [vdsbroker.jar:]
        ... 17 more

2019-09-06 11:08:00,412-04 INFO  [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Updating image transfer 9674c7ed-ce8a-47fe-aff0-015eff1f4e01 phase to Finished Failure
2019-09-06 11:08:00,414-04 ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Failed to transfer disk 'f4d7e644-631a-47b9-85cc-84edf0fd4057' (command id '9674c7ed-ce8a-47fe-aff0-015eff1f4e01')
2019-09-06 11:08:00,415-04 INFO  [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand' successfully.

2019-09-06 11:09:02,858-04 INFO  [org.ovirt.engine.core.bll.storage.disk.DetachDiskFromVmCommand] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] Running command: DetachDiskFromVmCommand internal: false. Entities affected :  ID: dcbd4752-b844-4d05-a34d-fdd471931b45 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2019-09-06 11:09:02,860-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] START, HotUnPlugDiskVDSCommand(HostName = rhevh-10.example.com, HotPlugDiskVDSParameters:{hostId='878c17db-b9b3-4bb7-8569-f6249159e74d', vmId='dcbd4752-b844-4d05-a34d-fdd471931b45', diskId='daa39bb3-38da-4325-a73c-c34abf58063d', addressMap='[bus=0x00, domain=0x0000, function=0x0, slot=0x08, type=pci]'}), log id: 54d4b853

2019-09-06 11:09:23,472-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] FINISH, HotUnPlugDiskVDSCommand, return: , log id: 54d4b853

2019-09-06 11:09:24,014-04 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] EVENT_ID: USER_DETACH_DISK_FROM_VM(2,018), Disk rhel76_Disk1 was successfully detached from VM rhel76 by admin@internal-authz.

2019-09-06 11:09:25,233-04 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-561) [] Operation Failed: [Cannot remove Snapshot: The following disks are locked: rhel76_Disk1. Please try again in a few minutes.]
~~~


-	This also leaves the disk image in a LOCKED state (imagestatus 2).

~~~
image_guid            | f4d7e644-631a-47b9-85cc-84edf0fd4057
creation_date         | 2019-09-05 20:59:37-04
size                  | 21474836480
it_guid               | 446a8769-d7b4-42ae-a909-7101d03fa8b6
parentid              | 446a8769-d7b4-42ae-a909-7101d03fa8b6
imagestatus           | 2 <<=========================================
lastmodified          | 2019-09-06 10:55:23.111-04
vm_snapshot_id        | be74c80a-69a3-45e1-9ebe-f05cfce29383
volume_type           | 2  
volume_format         | 4
image_group_id        | daa39bb3-38da-4325-a73c-c34abf58063d
_create_date          | 2019-09-05 20:59:37.184831-04
_update_date          | 2019-09-06 10:55:23.111584-04
active                | f
volume_classification | 1
qcow_compat           | 2
~~~

Version-Release number of selected component (if applicable):

ovirt-engine-4.3.5.4-0.1.el7.noarch
vdsm-4.30.13-1.el7ev.x86_64


How reproducible:

100%

Steps to Reproduce:

Using block storage, use the following scripts:

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/vm_backup.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py

Modify vm_backup.py around line 224 to call download_disk_snapshots.py.
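
For orientation, a minimal sketch (same assumptions as the sketch in the description) of the snapshot-disk download that triggers the failing teardown; the transfer is finalized while the transient disk is still attached to the running agent VM:

~~~
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details and UUIDs, for illustration only.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='...',
                            ca_file='ca.pem')
transfers_service = connection.system_service().image_transfers_service()

# Start a download transfer for the snapshot disk that is still attached
# to the running agent VM.
transfer = transfers_service.add(types.ImageTransfer(
    snapshot=types.DiskSnapshot(id='<disk-snapshot-id>'),
    direction=types.ImageTransferDirection.DOWNLOAD))
transfer_service = transfers_service.image_transfer_service(transfer.id)

# ... read the image data from transfer.transfer_url / transfer.proxy_url ...

# Finalizing the transfer drives TransferDiskImageCommand.handleFinalizingSuccess(),
# which issues TeardownImageVDS on the host -- the call that fails here because
# the LVs are still held open by the running VM.
transfer_service.finalize()
~~~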

(Originally by Bimal Chollera)

Comment 5 RHV bug bot 2019-09-18 10:04:04 UTC
Sounds similar to previously reported BZ1644142, which was closed.

(Originally by Germano Veit Michel)

Comment 6 RHV bug bot 2019-09-18 10:04:06 UTC
Raising severity to urgent, as customers are no longer able to back up their VMs using vProtect.

(Originally by Marian Jankular)

Comment 7 RHV bug bot 2019-09-18 10:04:08 UTC
Eyal, you closed bug 1644142 which is very similar, can you please have a look?

(Originally by Tal Nisan)

Comment 8 RHV bug bot 2019-09-18 10:04:09 UTC
The difference between this bug and bug 1644142 is that here the teardown error is raised during the attempt to finalize the image transfer.
I believe the solution will be to prevent the teardown if the image is a snapshot disk that is attached to more than one VM (sketched below).
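
As a rough illustration only (the actual fix is Java code in ovirt-engine, merged via gerrit change 103408; the names below are hypothetical), the guard would look roughly like:

~~~
# Hypothetical Python sketch of the proposed guard; the real fix is implemented
# in ovirt-engine (gerrit 103408), not in this form.

def should_teardown_image(disk, engine):
    """Decide whether finalizing the transfer may tear down the image."""
    # A snapshot (transient) disk that is still attached to a running VM keeps
    # its LVs open on the host, so deactivating them is bound to fail.
    if disk.is_snapshot and engine.is_attached_to_running_vm(disk.image_group_id):
        return False
    return True


def finalize_transfer(transfer, engine):
    disk = engine.get_disk(transfer.disk_id)
    if should_teardown_image(disk, engine):
        engine.teardown_image(disk.storage_domain_id, disk.image_group_id)
    # Otherwise skip the teardown and leave the volumes active for the running VM.
~~~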

Is this a regression?

(Originally by Eyal Shenitzky)

Comment 10 RHV bug bot 2019-09-18 10:04:13 UTC
*** Bug 1668366 has been marked as a duplicate of this bug. ***

(Originally by Eyal Shenitzky)

Comment 19 RHV bug bot 2019-09-25 08:46:45 UTC
INFO: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Tag 'ovirt-engine-4.3.5.6' doesn't contain patch 'https://gerrit.ovirt.org/103408']
gitweb: https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=shortlog;h=refs/tags/ovirt-engine-4.3.5.6

For more info please contact: rhv-devops

Comment 20 Shir Fishbain 2019-09-25 13:47:23 UTC
From the functionality side, the bug is verified. All the steps passed.

Moreover, when detaching the disk from the source VM, the following error appears:

from vdsm.log:
2019-09-25 16:20:14,750+0300 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='0af1c3a1-b7d3-4097-85cd-7710de207fa7') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3269, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1406, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1451, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 954, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume db849598-6d71-43b0-aa22-fece4eb139ae/1f177b3b-e150-458e-887e-471b29cfc360 in use.\', \'  Logical volume db849598-6d71-43b0-aa22-fece4eb139ae/b4e55d8e-7f5d-44ea-8440-96b6fb7bd914 in use.\']\\ndb849598-6d71-43b0-aa22-fece4eb139ae/[\'1f177b3b-e150-458e-887e-471b29cfc360\', \'b4e55d8e-7f5d-44ea-8440-96b6fb7bd914\']",)',)

from engine.log:
2019-09-25 16:19:39,251+03 INFO  [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (default task-17) [98ce2330-1b69-4516-ad55-a9337e01078c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f55df48a-1cc6-490f-a850-97833005eae6=DISK]', sharedLocks='[1cff871e-5e59-4108-a621-c631bbf6897e=VM]'}'
2019-09-25 16:19:39,440+03 INFO  [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (EE-ManagedThreadFactory-engine-Thread-621) [98ce2330-1b69-4516-ad55-a9337e01078c] Running command: HotUnPlugDiskFromVmCommand internal: false. Entities affected :  ID: 1cff871e-5e59-4108-a621-c631bbf6897e Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2019-09-25 16:19:39,460+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-621) [98ce2330-1b69-4516-ad55-a9337e01078c] START, HotUnPlugDiskVDSCommand(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='2627e621-24b2-40e6-ab11-029d55e3e6c8', vmId='1cff871e-5e59-4108-a621-c631bbf6897e', diskId='f55df48a-1cc6-490f-a850-97833005eae6', addressMap='[bus=0x00, domain=0x0000, function=0x0, slot=0x0a, type=pci]'}), log id: fd6fe75
2019-09-25 16:19:39,509+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-621) [98ce2330-1b69-4516-ad55-a9337e01078c] Disk hot-unplug: <?xml version="1.0" encoding="UTF-8"?><hotunplug>

ovirt-engine-4.3.6.6-0.1.el7.noarch
vdsm-4.30.30-1.el7ev.x86_64

Logs attached

Comment 21 Shir Fishbain 2019-09-25 13:59:56 UTC
Created attachment 1619041 [details]
Logs

Comment 24 Avihai 2019-09-26 05:05:04 UTC
Hi Eyal,
We need your help in order to proceed with this bug.

As you can see, functionality looks good, BUT we still get the teardown error in VDSM, this time in the detach-disk phase.
The disk does get detached and removed in spite of the VDSM error.

Please advise on how to proceed.
To be clear, we do not see any functional impact other than the VDSM error when detaching the disk.

I think we can verify this issue and open a new issue for the VDSM error seen in the detach-disk phase.
WDYT?


(In reply to Shir Fishbain from comment #20)
> From the functionality side, the bug verified. All the steps passed.
> 
> Moreover when detach the disk from source VM , the following error appears :
> 
> from vdsm.log:
> 2019-09-25 16:20:14,750+0300 ERROR (jsonrpc/3) [storage.TaskManager.Task]
> (Task='0af1c3a1-b7d3-4097-85cd-7710de207fa7') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in
> _run
>     return fn(*args, **kargs)
>   File "<string>", line 2, in teardownImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
>     ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3269, in
> teardownImage
>     dom.deactivateImage(imgUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line
> 1406, in deactivateImage
>     lvm.deactivateLVs(self.sdUUID, volUUIDs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1451, in
> deactivateLVs
>     _setLVAvailability(vgName, toDeactivate, "n")
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 954, in
> _setLVAvailability
>     raise error(str(e))
> CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General
> Storage Exception: ("5 [] [\'  Logical volume
> db849598-6d71-43b0-aa22-fece4eb139ae/1f177b3b-e150-458e-887e-471b29cfc360 in
> use.\', \'  Logical volume
> db849598-6d71-43b0-aa22-fece4eb139ae/b4e55d8e-7f5d-44ea-8440-96b6fb7bd914 in
> use.\']\\ndb849598-6d71-43b0-aa22-fece4eb139ae/[\'1f177b3b-e150-458e-887e-
> 471b29cfc360\', \'b4e55d8e-7f5d-44ea-8440-96b6fb7bd914\']",)',)
> 
> from engine.log:
> 2019-09-25 16:19:39,251+03 INFO 
> [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (default
> task-17) [98ce2330-1b69-4516-ad55-a9337e01078c] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[f55df48a-1cc6-490f-a850-97833005eae6=DISK]',
> sharedLocks='[1cff871e-5e59-4108-a621-c631bbf6897e=VM]'}'
> 2019-09-25 16:19:39,440+03 INFO 
> [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-621)
> [98ce2330-1b69-4516-ad55-a9337e01078c] Running command:
> HotUnPlugDiskFromVmCommand internal: false. Entities affected :  ID:
> 1cff871e-5e59-4108-a621-c631bbf6897e Type: VMAction group
> CONFIGURE_VM_STORAGE with role type USER
> 2019-09-25 16:19:39,460+03 INFO 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-621)
> [98ce2330-1b69-4516-ad55-a9337e01078c] START,
> HotUnPlugDiskVDSCommand(HostName = host_mixed_1,
> HotPlugDiskVDSParameters:{hostId='2627e621-24b2-40e6-ab11-029d55e3e6c8',
> vmId='1cff871e-5e59-4108-a621-c631bbf6897e',
> diskId='f55df48a-1cc6-490f-a850-97833005eae6', addressMap='[bus=0x00,
> domain=0x0000, function=0x0, slot=0x0a, type=pci]'}), log id: fd6fe75
> 2019-09-25 16:19:39,509+03 INFO 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-621)
> [98ce2330-1b69-4516-ad55-a9337e01078c] Disk hot-unplug: <?xml version="1.0"
> encoding="UTF-8"?><hotunplug>
> 
> ovirt-engine-4.3.6.6-0.1.el7.noarch
> vdsm-4.30.30-1.el7ev.x86_64
> 
> Logs attached

Comment 25 Eyal Shenitzky 2019-09-26 05:26:08 UTC
(In reply to Avihai from comment #24)
> Hi Eyal,
> We need your help in order to proceed with this bug.
> 
> As you can see functionality looks good BUT we still get the TearDown Error
> in VDSM this time in detach disk phase.
> The disk does get detached and removed in spite of the VDSM error.
> 
> Please advise on how to proceed. 
> To be clear we do not see any functionality impact other than the VDSM error
> when detaching the disk.
> 
> I think we can verify this issue and open a new issue for the VDSM error
> seen in detach disk phase.
> WDYT?

According to the logs, it seems that this is a known issue that was reported in bug 1644142.

The hot-unplug of the volume succeeded on the engine side.
On VDSM we do see a failure to tear down the volume, since it is still used
by the 'source_vm', which is in the 'up' state when the hot-unplug occurs.

Already discussed in bug 1644142.

Comment 26 Shir Fishbain 2019-09-26 07:54:54 UTC
Verified

ovirt-engine-4.3.6.6-0.1.el7.noarch
vdsm-4.30.30-1.el7ev.x86_64

Comment 28 RHEL Program Management 2019-09-26 20:16:31 UTC
The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 30 errata-xmlrpc 2019-10-10 15:37:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3010

