Bug 1753168
| Summary: | [downstream clone - 4.3.6] teardownImage attempts to deactivate in-use LVs, rendering the VM disk image/volumes in a locked state. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | RHV bug bot <rhv-bugzilla-bot> |
| Component: | ovirt-engine | Assignee: | Eyal Shenitzky <eshenitz> |
| Status: | CLOSED ERRATA | QA Contact: | Shir Fishbain <sfishbai> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 4.3.5 | CC: | aefrat, aoconnor, aperotti, bcholler, dfediuck, dhuertas, emarcus, eshenitz, fgarciad, frolland, gveitmic, kshukla, mjankula, mkalinin, mtessun, pelauter, pkovar, Rhev-m-bugs, sfishbai, tnisan |
| Target Milestone: | ovirt-4.3.6 | Keywords: | ZStream |
| Target Release: | 4.3.6 | Flags: | lsvaty: testing_plan_complete- |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-engine-4.3.6.6 | Doc Type: | Bug Fix |
| Doc Text: | Previously, a snapshot disk that was downloaded while attached to a backup virtual machine became locked due to a failure to tear down the disk. The current release fixes this error by skipping the disk teardown for snapshot disks. | | |
| Story Points: | --- | | |
| Clone Of: | 1749944 | Environment: | |
| Last Closed: | 2019-10-10 15:37:20 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1749944 | | |
| Bug Blocks: | | | |
| Attachments: | | | |
Description (RHV bug bot, 2019-09-18 10:03:52 UTC)
Sounds similar to the previously reported BZ 1644142, which was closed. (Originally by Germano Veit Michel)

Raising severity to urgent, as customers are not able to back up their VMs using vProtect anymore. (Originally by Marian Jankular)

Eyal, you closed bug 1644142, which is very similar; can you please have a look? (Originally by Tal Nisan)

The difference between this bug and bug 1644142 is that the teardown error is raised during the attempt to finalize the image transfer. I believe the solution will be to prevent the teardown if the image is a snapshot disk attached to more than one VM. Is this a regression? (Originally by Eyal Shenitzky)

*** Bug 1668366 has been marked as a duplicate of this bug. *** (Originally by Eyal Shenitzky)

INFO: Bug status (ON_QA) wasn't changed, but the following should be fixed: [Tag 'ovirt-engine-4.3.5.6' doesn't contain patch 'https://gerrit.ovirt.org/103408']
gitweb: https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=shortlog;h=refs/tags/ovirt-engine-4.3.5.6
For more info please contact: rhv-devops

From the functionality side, the bug is verified. All the steps passed.

Moreover, when detaching the disk from the source VM, the following error appears:

from vdsm.log:

```
2019-09-25 16:20:14,750+0300 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='0af1c3a1-b7d3-4097-85cd-7710de207fa7') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3269, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1406, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1451, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 954, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\' Logical volume db849598-6d71-43b0-aa22-fece4eb139ae/1f177b3b-e150-458e-887e-471b29cfc360 in use.\', \' Logical volume db849598-6d71-43b0-aa22-fece4eb139ae/b4e55d8e-7f5d-44ea-8440-96b6fb7bd914 in use.\']\\ndb849598-6d71-43b0-aa22-fece4eb139ae/[\'1f177b3b-e150-458e-887e-471b29cfc360\', \'b4e55d8e-7f5d-44ea-8440-96b6fb7bd914\']",)',)
```

from engine.log:

```
2019-09-25 16:19:39,251+03 INFO [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (default task-17) [98ce2330-1b69-4516-ad55-a9337e01078c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f55df48a-1cc6-490f-a850-97833005eae6=DISK]', sharedLocks='[1cff871e-5e59-4108-a621-c631bbf6897e=VM]'}'
2019-09-25 16:19:39,440+03 INFO [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (EE-ManagedThreadFactory-engine-Thread-621) [98ce2330-1b69-4516-ad55-a9337e01078c] Running command: HotUnPlugDiskFromVmCommand internal: false. Entities affected : ID: 1cff871e-5e59-4108-a621-c631bbf6897e Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2019-09-25 16:19:39,460+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-621) [98ce2330-1b69-4516-ad55-a9337e01078c] START, HotUnPlugDiskVDSCommand(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='2627e621-24b2-40e6-ab11-029d55e3e6c8', vmId='1cff871e-5e59-4108-a621-c631bbf6897e', diskId='f55df48a-1cc6-490f-a850-97833005eae6', addressMap='[bus=0x00, domain=0x0000, function=0x0, slot=0x0a, type=pci]'}), log id: fd6fe75
2019-09-25 16:19:39,509+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-621) [98ce2330-1b69-4516-ad55-a9337e01078c] Disk hot-unplug: <?xml version="1.0" encoding="UTF-8"?><hotunplug>
```

ovirt-engine-4.3.6.6-0.1.el7.noarch
vdsm-4.30.30-1.el7ev.x86_64

Logs attached.

Created attachment 1619041 [details]
Logs
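The Doc Text and Eyal's earlier comment describe the fix direction: skip the image teardown when the transferred disk is a snapshot disk (which may still be attached to its own VM and to a backup VM). The following is a minimal sketch of that decision in Python, assuming hypothetical names (`TransferredDisk`, `finalize_transfer`, `teardown_image`); it is not the actual ovirt-engine code, which is written in Java.

```python
# Illustrative sketch only -- hypothetical names, not the ovirt-engine implementation.
from dataclasses import dataclass


@dataclass
class TransferredDisk:
    sd_uuid: str            # storage domain UUID
    img_uuid: str           # image group UUID
    is_snapshot: bool       # the download source is a snapshot disk
    attached_vm_count: int  # e.g. the source VM plus a backup VM


def finalize_transfer(disk, teardown_image):
    """Finish an image transfer; return True if teardownImage was requested."""
    if disk.is_snapshot and disk.attached_vm_count > 1:
        # The snapshot's volume chain is still open by a running VM, so asking
        # VDSM to deactivate its LVs would fail (CannotDeactivateLogicalVolume)
        # and leave the disk image locked. Skip the teardown instead.
        return False
    teardown_image(disk.sd_uuid, disk.img_uuid)
    return True
```

Whether the shipped fix also checks the attachment count, or simply skips teardown for any snapshot disk, is not visible from this bug; the Doc Text only states that the teardown is skipped for snapshot disks.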
Hi Eyal,
We need your help in order to proceed with this bug.

As you can see, functionality looks good, BUT we still get the teardown error in VDSM, this time in the detach-disk phase. The disk does get detached and removed in spite of the VDSM error.

Please advise on how to proceed. To be clear, we do not see any functional impact other than the VDSM error when detaching the disk.

I think we can verify this issue and open a new issue for the VDSM error seen in the detach-disk phase. WDYT?

(In reply to Shir Fishbain from comment #20)
> From the functionality side, the bug is verified. All the steps passed.

(In reply to Avihai from comment #24)
> I think we can verify this issue and open a new issue for the VDSM error
> seen in the detach-disk phase. WDYT?

According to the logs, it seems that this is a known issue that was reported in bug 1644142. The hot-unplug of the volume succeeded on the engine side. On VDSM we do see a failure to tear down the volume, since it is used by the 'source_vm', which is in the 'up' state when the hot-unplug occurs. Already discussed in bug 1644142.

Verified:
ovirt-engine-4.3.6.6-0.1.el7.noarch
vdsm-4.30.30-1.el7ev.x86_64

The documentation text flag should only be set after the 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3010
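Eyal's reply above explains the remaining VDSM error seen at detach time: the LVs backing the volume are still open by the running source VM, so deactivating them fails with "Logical volume ... in use". Below is a small diagnostic sketch, not part of vdsm, that checks LVM's "device open" flag for the LVs named in comment #20; the `lv_is_open` helper is hypothetical.

```python
# Diagnostic sketch, not vdsm code: report whether an LV is still open,
# which is the condition that makes deactivation fail in the traceback above.
import subprocess


def lv_is_open(vg_name, lv_name):
    """Return True if LVM marks the LV's device as open (6th lv_attr char is 'o')."""
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr", "{}/{}".format(vg_name, lv_name)],
        universal_newlines=True,
    )
    attrs = out.strip()
    return len(attrs) >= 6 and attrs[5] == "o"


if __name__ == "__main__":
    # VG and LV names taken from the traceback in comment #20.
    vg = "db849598-6d71-43b0-aa22-fece4eb139ae"
    for lv in ("1f177b3b-e150-458e-887e-471b29cfc360",
               "b4e55d8e-7f5d-44ea-8440-96b6fb7bd914"):
        print(lv, "open" if lv_is_open(vg, lv) else "not open")
```

As long as a running VM holds the device open, deactivation will keep failing with the same error; the check above only makes that state visible without attempting the deactivation itself.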