Bug 1749944
| Summary: | teardownImage attempts to deactivate in-use LVs, rendering the VM disk image/volumes in a locked state | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Bimal Chollera <bcholler> |
| Component: | ovirt-engine | Assignee: | Eyal Shenitzky <eshenitz> |
| Status: | CLOSED ERRATA | QA Contact: | Ilan Zuckerman <izuckerm> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 4.3.5 | CC: | aefrat, aperotti, dfediuck, eshenitz, fgarciad, frolland, gveitmic, kshukla, mjankula, mkalinin, mtessun, pelauter, sfishbai, tnisan |
| Target Milestone: | ovirt-4.4.0 | Keywords: | ZStream |
| Target Release: | --- | Flags: | izuckerm: testing_plan_complete+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1753168 (view as bug list) | Environment: | |
| Last Closed: | 2020-08-04 13:20:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1547768, 1753168 | | |
Description
Bimal Chollera
2019-09-06 20:21:17 UTC
Sounds similar to previously reported BZ1644142, which was closed.

Raising severity to urgent, as customers are no longer able to back up their VMs using vProtect.

Eyal, you closed bug 1644142, which is very similar; can you please have a look?

The difference between this bug and bug 1644142 is that the teardown error is raised during the attempt to finalize the image transfer. I believe the solution will be to prevent the teardown if the image is a snapshot disk attached to more than one VM.

Is this a regression?

*** Bug 1668366 has been marked as a duplicate of this bug. ***

This bug has reproduced with:
ovirt-engine-4.3.6.5-0.1.el7.noarch
vdsm-4.30.30-1.el7ev.x86_64

Steps to reproduce:
1) Create two VMs from a template (with a disk): a source VM and a backup VM
2) Start both VMs
3) Create a snapshot of the source VM (with memory)
4) Attach the newly created disk snapshot to the backup VM (via the REST API)
5) Download the disk image (run the script from https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py); a sketch of this attach-and-download flow appears below
6) Check the status of the backup VM disk

Actual results (same errors): the status of the disk is Locked.
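For reference, a minimal sketch of steps 4-5 using the Python ovirt-engine-sdk (ovirtsdk4), in the spirit of the linked download_disk_snapshots.py example. The engine URL, credentials, and UUIDs are placeholders, and the polling/HTTP download logic of the real script is omitted:

```python
# Illustrative sketch only (placeholder URL, credentials, and UUIDs);
# see download_disk_snapshots.py in ovirt-engine-sdk for the full flow.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()

backup_vm_id = 'BACKUP-VM-UUID'
disk_id = 'SOURCE-DISK-UUID'
disk_snapshot_id = 'DISK-SNAPSHOT-UUID'

# Step 4: attach the source VM's disk snapshot to the running backup VM.
attachments_service = (
    system_service.vms_service().vm_service(backup_vm_id).disk_attachments_service()
)
attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(
            id=disk_id,
            snapshot=types.Snapshot(id=disk_snapshot_id),
        ),
        interface=types.DiskInterface.VIRTIO,
        active=True,
        bootable=False,
    ),
)

# Step 5: download the disk snapshot through an image transfer.
transfers_service = system_service.image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        snapshot=types.DiskSnapshot(id=disk_snapshot_id),
        direction=types.ImageTransferDirection.DOWNLOAD,
    ),
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
# ... wait for the TRANSFERRING phase and read the data from
# transfer.transfer_url over HTTPS, then finalize the transfer.
transfer_service.finalize()  # finalizing is what triggers TeardownImageVDS

connection.close()
```

The teardown error reported below is raised while handling the finalize step of this transfer.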
vdsm.log:

2019-09-18 13:45:39,369+0300 INFO (jsonrpc/7) [vdsm.api] START teardownImage(sdUUID=u'43cefde7-c44a-4c17-a7b1-f68d2b5716ae', spUUID=u'b51b71f1-bcea-41d3-bff3-18f964659c96', imgUUID=u'80c37d22-6a96-4904-8494-394d47b5f6a4', volUUID=None) from=::ffff:10.46.16.255,33370, flow_id=0ff8b66a-d2b1-4228-889d-2913da29038a, task_id=aaa17693-f529-4826-9fad-c5139373c20e (api:48)

2019-09-18 13:46:01,947+0300 INFO (jsonrpc/7) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',) from=::ffff:10.46.16.255,33370, flow_id=0ff8b66a-d2b1-4228-889d-2913da29038a, task_id=aaa17693-f529-4826-9fad-c5139373c20e (api:52)

2019-09-18 13:46:01,948+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='aaa17693-f529-4826-9fad-c5139373c20e') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3269, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1406, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1451, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 954, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',)

2019-09-18 13:46:01,949+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',) (dispatcher:83)
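The traceback shows deactivateLVs() failing because the image's LVs are still open, i.e. the running backup VM (qemu) holds them while teardownImage tries to deactivate them. As a rough illustration of that condition (this is not vdsm code; the VG/LV names are taken from the log above, and the lvs invocation is the standard LVM command), the "device open" bit in lv_attr is what makes the deactivation fail with "Logical volume ... in use":

```python
# Illustrative check only -- not vdsm code. Reads the lv_attr field with the
# standard `lvs` command; the 6th attribute character is 'o' while something
# (e.g. a running qemu process) holds the LV open, which is exactly when
# `lvchange -an` fails with "Logical volume ... in use".
import subprocess

VG = "43cefde7-c44a-4c17-a7b1-f68d2b5716ae"
LVS = ["5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31", "d8dc3ee7-cd76-4d9f-9cc2-e624f3466198"]

def lv_is_open(vg, lv):
    attr = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr", "%s/%s" % (vg, lv)]
    ).decode().strip()
    return len(attr) >= 6 and attr[5] == "o"  # 6th attr char: device open

for lv in LVS:
    if lv_is_open(VG, lv):
        print("%s/%s is in use; deactivating it would fail" % (VG, lv))
```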
engine.log:

2019-09-18 13:46:01,962+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [0ff8b66a-d2b1-4228-889d-2913da29038a] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',)

2019-09-18 13:46:01,968+03 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [0ff8b66a-d2b1-4228-889d-2913da29038a] Error invoking callback method 'doPolling' for 'ACTIVE' command '0788193b-e5f9-43dd-8c6e-b821ab045149'

2019-09-18 13:46:01,968+03 ERROR [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [0ff8b66a-d2b1-4228-889d-2913da29038a] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',) (Failed with error CannotDeactivateLogicalVolume and code 552)
  at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118) [bll.jar:]
  at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
  at org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2112) [bll.jar:]
  at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.tearDownImage(TransferDiskImageCommand.java:258) [bll.jar:]
  at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleFinalizingSuccess(TransferDiskImageCommand.java:815) [bll.jar:]
  at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeStateHandler(TransferDiskImageCommand.java:490) [bll.jar:]
  at org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.proceedCommandExecution(TransferDiskImageCommand.java:456) [bll.jar:]
  at org.ovirt.engine.core.bll.storage.disk.image.TransferImageCommandCallback.doPolling(TransferImageCommandCallback.java:21) [bll.jar:]
  at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
  at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_222]
  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_222]
  at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
  at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_222]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_222]
  at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_222]
  at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31 in use.\', \' Logical volume 43cefde7-c44a-4c17-a7b1-f68d2b5716ae/d8dc3ee7-cd76-4d9f-9cc2-e624f3466198 in use.\']\\n43cefde7-c44a-4c17-a7b1-f68d2b5716ae/[\'d8dc3ee7-cd76-4d9f-9cc2-e624f3466198\', \'5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31\']",)',)
  at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:183) [vdsbroker.jar:]
  at org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:20) [vdsbroker.jar:]
  at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123) [vdsbroker.jar:]
  at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
  at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) [vdsbroker.jar:]
  at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
  at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
  at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:398) [vdsbroker.jar:]
  at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
  at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source) [:1.8.0_222]
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
  at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
  at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:51) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:78) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
  at sun.reflect.GeneratedMethodAccessor59.invoke(Unknown Source) [:1.8.0_222]
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_222]
  at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_222]
  at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68) [weld-core-impl.jar:3.0.6.Final-redhat-00001]
  at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source) [vdsbroker.jar:]

2019-09-18 13:46:02,988+03 ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-19) [0ff8b66a-d2b1-4228-889d-2913da29038a] Failed to transfer disk '5941ca7b-2ee4-4cf4-9bfe-bc0685c78e31' (command id '0788193b-e5f9-43dd-8c6e-b821ab045149')
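For context on the fix direction proposed earlier in this bug (prevent the teardown when the transferred image is a snapshot disk attached to more than one VM), here is a rough Python illustration of that condition. This is not the engine's actual Java implementation; the data shapes and the helper name are hypothetical:

```python
# Hypothetical illustration of the proposed guard -- not the engine code.
# An "attachment" here is just a (vm_id, is_active) pair; in the reproduction
# above the snapshot disk is attached to both the source VM and the running
# backup VM.
def should_tear_down(image_is_snapshot_disk, attachments):
    """Return False when deactivating the image's LVs is known to fail."""
    if image_is_snapshot_disk and len(attachments) > 1:
        # Another VM still holds the volumes open, so TeardownImageVDS would
        # raise CannotDeactivateLogicalVolume and leave the disk Locked.
        return False
    return True

# Example matching this bug: snapshot disk attached to source VM and backup VM.
print(should_tear_down(True, [("source-vm", True), ("backup-vm", True)]))  # False
```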
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed: [Found non-acked flags: '{}', ] For more info please contact: rhv-devops

Following the steps from comment #16, I can verify that the issue is now resolved. The disk was downloaded successfully with the script, no errors were seen in the engine log, and the disk status is OK after the download.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247