Bug 1749944 - teardownImage attempts to deactivate in-use LVs, rendering the VM disk image/volumes in a locked state.
Summary: teardownImage attempts to deactivate in-use LVs, rendering the VM disk image/volumes in a locked state.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.3.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Eyal Shenitzky
QA Contact: Ilan Zuckerman
URL:
Whiteboard:
Duplicates: 1668366
Depends On:
Blocks: 1547768 1753168
 
Reported: 2019-09-06 20:21 UTC by Bimal Chollera
Modified: 2022-03-13 17:06 UTC
14 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1753168
Environment:
Last Closed: 2020-08-04 13:20:31 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
izuckerm: testing_plan_complete+


Attachments


Links
- Red Hat Knowledge Base (Solution) 4420841, last updated 2019-09-17 18:00:04 UTC
- Red Hat Product Errata RHSA-2020:3247, last updated 2020-08-04 13:21:03 UTC
- oVirt gerrit 103255 (MERGED): core: prevent teardown of a snapshot disk that is used by running VM, last updated 2021-01-19 13:43:56 UTC

Description Bimal Chollera 2019-09-06 20:21:17 UTC
Description of problem:

When using the Backup API, the following sequence takes place:

1) Snapshot Disk X of VM-A
2) Attach Disk X of the snapshot to the agent-VM (transient attachment)
3) Agent-VM backs up the disk
4) Download the disk image
5) Teardown Disk X
6) Detach the Disk from the agent-VM
7) Remove snapshot of Disk X

The teardownImage call fails because the LVs of Disk X are still in use by VM-A.
VM-A and the agent-VM can be on the same host or on different hosts.
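
For orientation, a minimal sketch of steps 1 and 2 with the Python SDK (ovirtsdk4), in the spirit of the vm_backup.py example linked in the reproduction steps below; the connection details and IDs are placeholders:

~~~
# Minimal sketch of steps 1-2 (ovirtsdk4); URL, credentials and IDs are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

vm_a_id = 'VM_A_UUID'          # placeholder
agent_vm_id = 'AGENT_VM_UUID'  # placeholder
disk_x_id = 'DISK_X_UUID'      # placeholder

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system = connection.system_service()

# 1) Snapshot Disk X of VM-A.
snapshots = system.vms_service().vm_service(vm_a_id).snapshots_service()
snapshot = snapshots.add(types.Snapshot(description='backup', persist_memorystate=False))

# 2) Attach the snapshot's Disk X to the agent-VM (transient attachment).
attachments = system.vms_service().vm_service(agent_vm_id).disk_attachments_service()
attachments.add(types.DiskAttachment(
    disk=types.Disk(id=disk_x_id, snapshot=types.Snapshot(id=snapshot.id)),
    interface=types.DiskInterface.VIRTIO,
    active=True,
))
~~~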

-	RHV-M audit log info:

~~~
           log_time            |            correlation_id            |     message                                                                                                                                                                                                                                                                      
-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------
 2019-09-06 10:55:20.463-04    |                                      | Backup of virtual machine 'AAAAA7.6' using snapshot 'AAAAA7.6-backup-cc1cd3c3-7c0b-476b-9d2b-daee8cb119b9' is starting.
 2019-09-06 10:55:24.27-04     | 1aee9d40-457e-4806-83c1-149dfec83925 | Snapshot 'AAAAA7.6-backup-cc1cd3c3-7c0b-476b-9d2b-daee8cb119b9' creation for VM 'AAAAA7.6' was initiated by admin@internal-authz.
 2019-09-06 10:55:43.825-04    | 1aee9d40-457e-4806-83c1-149dfec83925 | Snapshot 'AAAAA7.6-backup-cc1cd3c3-7c0b-476b-9d2b-daee8cb119b9' creation for VM 'AAAAA7.6' has been completed.
 2019-09-06 10:55:48.838-04    | 8b2bc54a-0073-4481-957e-4d00be3e739a | Disk rhel76_Disk1 was successfully attached to VM rhel76 by admin@internal-authz.
 2019-09-06 10:55:54.137-04    | 972fea1a-b008-4e38-bc8c-cf8d359e2195 | Image Download with disk rhel76_Disk1 was initiated by admin@internal-authz.
 2019-09-06 11:07:58.95-04     |                                      | VDSM rhevh-10.example.com command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
 2019-09-06 11:08:00.435-04    | 972fea1a-b008-4e38-bc8c-cf8d359e2195 | Image Download with disk rhel76_Disk1 succeeded.
~~~


-	From steps 5 and 6, the VDSM logs show:
~~~
2019-09-06 11:07:38,773-0400 INFO  (jsonrpc/3) [vdsm.api] START teardownImage(sdUUID=u'4f5ab93e-697b-4542-a7df-1b132fe3655c', spUUID=u'5a85dd39-0046-03e0-0383-000000000324', imgUUID=u'daa39bb3-38da-4325-a73c-c34abf58063d', volUUID=None) from=::ffff:10.10.178.31,37630, flow_id=972fea1a-b008-4e38-bc8c-cf8d359e2195, task_id=a44708ab-5ee2-4163-971d-35160d564f40 (api:48)

2019-09-06 11:07:58,848-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) from=::ffff:10.10.178.31,37630, flow_id=972fea1a-b008-4e38-bc8c-cf8d359e2195, task_id=a44708ab-5ee2-4163-971d-35160d564f40 (api:52)
2019-09-06 11:07:58,849-0400 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='a44708ab-5ee2-4163-971d-35160d564f40') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3250, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1390, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1423, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 926, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:58,849-0400 INFO  (jsonrpc/3) [storage.TaskManager.Task] (Task='a44708ab-5ee2-4163-971d-35160d564f40') aborting: Task is aborted: 'Cannot deactivate Logical Volume: (\'General Storage Exception: ("5 [] [\\\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\\\', \\\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\\\']\\\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\\\'f4d7e644-631a-47b9-85cc-84edf0fd4057\\\', \\\'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\\\']",)\',)' - code 552 (task:1181)
2019-09-06 11:07:58,850-0400 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) (dispatcher:83)
2019-09-06 11:07:58,850-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Image.teardown failed (error 552) in 20.07 seconds (__init__:312)

2019-09-06 11:09:03,197-0400 INFO  (jsonrpc/7) [api.virt] START hotunplugDisk(params={u'xml': u'<?xml version="1.0" encoding="UTF-8" standalone="yes"?><hotunplug><devices><disk><alias name="ua-daa39bb3-38da-4325-a73c-c34abf58063d"/></disk></devices></hotunplug>', u'vmId': u'dcbd4752-b844-4d05-a34d-fdd471931b45'}) from=::ffff:10.10.178.31,37630, flow_id=e232a75f-7d29-4023-a7f9-c808d06dff42, vmId=dcbd4752-b844-4d05-a34d-fdd471931b45 (api:48)
2019-09-06 11:09:23,438-0400 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) (dispatcher:83)
2019-09-06 11:09:23,463-0400 INFO  (jsonrpc/7) [api.virt] FINISH hotunplugDisk return=
2019-09-06 11:09:23,467-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.hotunplugDisk succeeded in 20.27 seconds (__init__:312)
~~~
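
For context, the deactivation that fails here ultimately shells out to LVM; below is a standalone sketch (not VDSM code) of the equivalent call, showing why an LV still held open by QEMU cannot be deactivated:

~~~
# Standalone sketch (not VDSM code) of the failing operation: LVM refuses to
# deactivate a logical volume that a process (here, QEMU for VM-A) holds open,
# and the command exits non-zero with "Logical volume <vg>/<lv> in use."
import subprocess

def deactivate_lv(vg_name, lv_name):
    cmd = ['lvchange', '--available', 'n', '%s/%s' % (vg_name, lv_name)]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        # VDSM wraps this condition in CannotDeactivateLogicalVolume (code 552).
        raise RuntimeError('lvchange failed: rc=%d, err=%r' % (proc.returncode, err))
~~~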

-	The engine log records the corresponding exception:

~~~
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM rhevh-10.example.com command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Command 'TeardownImageVDSCommand(HostName = rhevh-10.example.com, ImageActionsVDSCommandParameters:{hostId='878c17db-b9b3-4bb7-8569-f6249159e74d'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:59,273-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] FINISH, TeardownImageVDSCommand, return: , log id: 6f3fad40
2019-09-06 11:07:59,273-04 INFO  [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Exception in invoking callback of command TransferDiskImage (9674c7ed-ce8a-47fe-aff0-015eff1f4e01): VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Error invoking callback method 'doPolling' for 'ACTIVE' command '9674c7ed-ce8a-47fe-aff0-015eff1f4e01'
2019-09-06 11:07:59,273-04 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',) (Failed with error CannotDeactivateLogicalVolume and code 552)
        at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118) [bll.jar:]
        at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2075) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.tearDownImage(TransferDiskImageCommand.java:258) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleFinalizingSuccess(TransferDiskImageCommand.java:805) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeStateHandler(TransferDiskImageCommand.java:487) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.proceedCommandExecution(TransferDiskImageCommand.java:453) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferImageCommandCallback.doPolling(TransferImageCommandCallback.java:21) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_222]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_222]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_222]
        at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_222]
        at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/f4d7e644-631a-47b9-85cc-84edf0fd4057 in use.\', \'  Logical volume 4f5ab93e-697b-4542-a7df-1b132fe3655c/96a91fbd-7c60-4e8e-a2ef-ecab164c87d7 in use.\']\\n4f5ab93e-697b-4542-a7df-1b132fe3655c/[\'f4d7e644-631a-47b9-85cc-84edf0fd4057\', \'96a91fbd-7c60-4e8e-a2ef-ecab164c87d7\']",)',)
        at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:183) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:20) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) [vdsbroker.jar:]
        at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:398) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
        at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source) [:1.8.0_222]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
        at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:51) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:78) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
        at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source) [:1.8.0_222]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
        at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source) [vdsbroker.jar:]
        ... 17 more

2019-09-06 11:08:00,412-04 INFO  [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Updating image transfer 9674c7ed-ce8a-47fe-aff0-015eff1f4e01 phase to Finished Failure
2019-09-06 11:08:00,414-04 ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Failed to transfer disk 'f4d7e644-631a-47b9-85cc-84edf0fd4057' (command id '9674c7ed-ce8a-47fe-aff0-015eff1f4e01')
2019-09-06 11:08:00,415-04 INFO  [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [972fea1a-b008-4e38-bc8c-cf8d359e2195] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand' successfully.

2019-09-06 11:09:02,858-04 INFO  [org.ovirt.engine.core.bll.storage.disk.DetachDiskFromVmCommand] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] Running command: DetachDiskFromVmCommand internal: false. Entities affected :  ID: dcbd4752-b844-4d05-a34d-fdd471931b45 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2019-09-06 11:09:02,860-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] START, HotUnPlugDiskVDSCommand(HostName = rhevh-10.example.com, HotPlugDiskVDSParameters:{hostId='878c17db-b9b3-4bb7-8569-f6249159e74d', vmId='dcbd4752-b844-4d05-a34d-fdd471931b45', diskId='daa39bb3-38da-4325-a73c-c34abf58063d', addressMap='[bus=0x00, domain=0x0000, function=0x0, slot=0x08, type=pci]'}), log id: 54d4b853

2019-09-06 11:09:23,472-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] FINISH, HotUnPlugDiskVDSCommand, return: , log id: 54d4b853

2019-09-06 11:09:24,014-04 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-561) [e232a75f-7d29-4023-a7f9-c808d06dff42] EVENT_ID: USER_DETACH_DISK_FROM_VM(2,018), Disk rhel76_Disk1 was successfully detached from VM rhel76 by admin@internal-authz.

2019-09-06 11:09:25,233-04 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-561) [] Operation Failed: [Cannot remove Snapshot: The following disks are locked: rhel76_Disk1. Please try again in a few minutes.]
~~~


-	This also leaves the disk image in a LOCKED state (imagestatus 2):

~~~
image_guid            | f4d7e644-631a-47b9-85cc-84edf0fd4057
creation_date         | 2019-09-05 20:59:37-04
size                  | 21474836480
it_guid               | 446a8769-d7b4-42ae-a909-7101d03fa8b6
parentid              | 446a8769-d7b4-42ae-a909-7101d03fa8b6
imagestatus           | 2 <<=========================================
lastmodified          | 2019-09-06 10:55:23.111-04
vm_snapshot_id        | be74c80a-69a3-45e1-9ebe-f05cfce29383
volume_type           | 2  
volume_format         | 4
image_group_id        | daa39bb3-38da-4325-a73c-c34abf58063d
_create_date          | 2019-09-05 20:59:37.184831-04
_update_date          | 2019-09-06 10:55:23.111584-04
active                | f
volume_classification | 1
qcow_compat           | 2
~~~
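
The record above is from the engine database's images table; disks stuck this way are typically released with ovirt-engine's unlock_entity.sh dbutils script once the volume is confirmed unused. A hedged helper for spotting such rows (database name and credentials are assumptions):

~~~
# Hypothetical check (assumes local psql access to the engine database):
# list images left in LOCKED state (imagestatus = 2) after the failed teardown.
import subprocess

query = "SELECT image_guid, image_group_id, imagestatus FROM images WHERE imagestatus = 2;"
subprocess.call(['psql', '-U', 'engine', '-d', 'engine', '-c', query])
~~~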

Version-Release number of selected component (if applicable):

ovirt-engine-4.3.5.4-0.1.el7.noarch
vdsm-4.30.13-1.el7ev.x86_64


How reproducible:

100%

Steps to Reproduce:

Using block storage, use the following scripts:

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/vm_backup.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py

Modify vm_backup.py around line 224 to call download_disk_snapshots.py.
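
The exact splice point depends on the script revision; a hypothetical illustration of the intent (module and function names assumed):

~~~
# Hypothetical illustration only: after vm_backup.py has created the snapshot
# and attached its disk to the agent VM, hand off to the snapshot-download
# logic so that the transfer's finalization triggers teardownImage.
import download_disk_snapshots  # the second example script, used as a module

def backup_with_download(connection, snapshot, disk):
    # ... snapshot creation and transient attach done by vm_backup.py ...
    # The entry-point name is assumed; adapt to the script's actual structure.
    download_disk_snapshots.download_disk_snapshot(connection, snapshot, disk)
~~~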

Comment 5 Germano Veit Michel 2019-09-09 03:39:32 UTC
Sounds similar to previously reported BZ1644142, which was closed.

Comment 6 Marian Jankular 2019-09-09 14:28:41 UTC
Raising severity to urgent, as customers are no longer able to back up their VMs using vProtect.

Comment 7 Tal Nisan 2019-09-09 15:01:10 UTC
Eyal, you closed bug 1644142, which is very similar. Can you please have a look?

Comment 8 Eyal Shenitzky 2019-09-10 08:00:36 UTC
The difference between this bug and bug 1644142 is that here the teardown error is raised during the attempt to finalize the image transfer.
I believe the solution will be to prevent the teardown if the image is a snapshot disk attached to more than one VM.
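
A minimal sketch of that guard (names assumed; the eventual engine-side fix is the gerrit change linked above):

~~~
# Sketch of the proposed check, with assumed names: do not tear down a
# snapshot disk while any VM using it is still running, since its LVs are open.
def may_teardown(disk, attached_vms):
    """attached_vms: VMs the disk is attached to, each with an is_running flag."""
    if disk.is_snapshot_disk and any(vm.is_running for vm in attached_vms):
        return False  # leave the LVs active; a running VM still uses them
    return True
~~~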

Is this a regression?

Comment 10 Eyal Shenitzky 2019-09-12 06:32:07 UTC
*** Bug 1668366 has been marked as a duplicate of this bug. ***

Comment 16 Shir Fishbain 2019-09-18 12:53:06 UTC
This bug has been reproduced with:
ovirt-engine-4.3.6.5-0.1.el7.noarch
vdsm-4.30.30-1.el7ev.x86_64

Steps to reproduce:

1) Create two VMs from a template (with a disk): a source VM and a backup VM
2) Start both VMs
3) Create a snapshot of the source VM (with memory)
4) Attach the newly created disk snapshot to the backup VM (via the REST API)
5) Download the disk image by running https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py (a sketch of the transfer call follows this list)
6) Check the status of the backup VM disk.
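
For reference, the download in step 5 works by starting an image transfer for the disk snapshot; a minimal sketch with the Python SDK (ovirtsdk4), reusing a connection as in the scripts and with a placeholder ID:

~~~
# Sketch of step 5 (ovirtsdk4): start a DOWNLOAD transfer for the disk
# snapshot, as download_disk_snapshots.py does; finalizing this transfer
# is what issues the failing teardownImage. The ID is a placeholder.
import ovirtsdk4.types as types

transfers = connection.system_service().image_transfers_service()
transfer = transfers.add(types.ImageTransfer(
    snapshot=types.DiskSnapshot(id='DISK_SNAPSHOT_UUID'),
    direction=types.ImageTransferDirection.DOWNLOAD,
))
# ... poll until transfer.phase is TRANSFERRING, download from
# transfer.transfer_url, then call
# transfers.image_transfer_service(transfer.id).finalize() ...
~~~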

Actual results (same errors):
The status of the disk is locked.

vdsm.log:

2019-09-18 13:45:39,369+0300 INFO  (jsonrpc/7) [vdsm.api] START teardownImage(sdUUID=u'43cefde7-c44a-4c17-a7b1-f68d2b5716ae', spUUID=u'b51b71f1-bcea-41d3-bff3-18f964659c96', imgUUID=u'80c37d22-6a96-4904-8494-394d47b5f6a4', volUUID=None) from=::ffff:10.46.16.255,33370, flow_id=0ff8b66a-d2b1-4228-889d-2913da29038a, task_id=aaa17693-f529-4826-9fad-c5139373c20e (api:48)

2019-09-18 13:46:01,947+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',) from=::ffff:10.46.16.255,33370, flow_id=0ff8b66a-d2b1-4228-889d-2913da29038a, task_id=aaa17693-f529-4826-9fad-c5139373c20e (api:52)

2019-09-18 13:46:01,949+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',) (dispatcher:83)

2019-09-18 13:46:01,948+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='aaa17693-f529-4826-9fad-c5139373c20e') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3269, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1406, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1451, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 954, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',)



engine.log:

2019-09-18 13:46:01,962+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [0ff8b66a-d2b1-4228-889d-2913da29038a] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',)

2019-09-18 13:46:01,968+03 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [0ff8b66a-d2b1-4228-889d-2913da29038a] Error invoking callback method 'doPolling' for 'ACTIVE' command '0788193b-e5f9-43dd-8c6e-b821ab045149'

2019-09-18 13:46:01,968+03 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [0ff8b66a-d2b1-4228-889d-2913da29038a] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',) (Failed with error CannotDeactivateLogicalVolume and code 552)
        at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118) [bll.jar:]
        at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2112) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.tearDownImage(TransferDiskImageCommand.java:258) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleFinalizingSuccess(TransferDiskImageCommand.java:815) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeStateHandler(TransferDiskImageCommand.java:490) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.proceedCommandExecution(TransferDiskImageCommand.java:456) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.TransferImageCommandCallback.doPolling(TransferImageCommandCallback.java:21) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_222]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_222]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_222]
        at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_222]
        at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \'  Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',)
        at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:183) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:20) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) [vdsbroker.jar:]
        at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:398) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
        at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source) [:1.8.0_222]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
        at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:51) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:78) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
        at sun.reflect.GeneratedMethodAccessor59.invoke(Unknown Source) [:1.8.0_222]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
        at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
        at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source) [vdsbroker.jar:]

2019-09-18 13:46:02,988+03 ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-19) [0ff8b66a-d2b1-4228-889d-2913da29038a] Failed to transfer disk '5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31' (command id '0788193b-e5f9-43dd-8c6e-b821ab045149')

Comment 18 RHV bug bot 2019-12-13 13:15:30 UTC
WARN: Bug status (ON_QA) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 19 RHV bug bot 2019-12-20 17:45:09 UTC
WARN: Bug status (ON_QA) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 20 RHV bug bot 2020-01-08 14:49:16 UTC
WARN: Bug status (ON_QA) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 21 RHV bug bot 2020-01-08 15:16:39 UTC
WARN: Bug status (ON_QA) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 22 RHV bug bot 2020-01-24 19:51:04 UTC
WARN: Bug status (ON_QA) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 23 Ilan Zuckerman 2020-03-04 13:21:50 UTC
Following the steps from comment #16, I can verify that the issue is now resolved.
The disk was downloaded successfully with the script, no errors were seen in the engine log, and the disk status is OK after the download.

Comment 27 errata-xmlrpc 2020-08-04 13:20:31 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

