Bug 1725915 - Vdsm tries to tear down in-use volume of ISO in block storage domain
Summary: Vdsm tries to tear down in-use volume of ISO in block storage domain
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: 4.30.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ovirt-4.4.7
Target Release: ---
Assignee: Roman Bednář
QA Contact: Avihai
URL:
Whiteboard:
Duplicates: 1870110
Depends On:
Blocks: 1721455
 
Reported: 2019-07-01 17:46 UTC by Tomáš Golembiovský
Modified: 2021-11-04 19:28 UTC (History)
10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-14 13:08:57 UTC
oVirt Team: Storage
Embargoed:
pm-rhel: ovirt-4.4+


Attachments
vdsm.log of failed teardownImage (564.69 KB, text/plain), attached 2019-07-01 17:46 UTC by Tomáš Golembiovský


Links
oVirt gerrit 101657: storage: support multiple prepare image calls (ABANDONED, last updated 2021-01-26 10:26:53 UTC)
oVirt gerrit 114257 (master): lvm: avoid exception for in use volume deactivation (MERGED, last updated 2021-06-02 10:24:21 UTC)

Description Tomáš Golembiovský 2019-07-01 17:46:36 UTC
Created attachment 1586390 [details]
vdsm.log of failed teardownImage

When I start two VMs with 'Run Once' with the same ISO from a block storage domain attached, the engine tries to detach the volume after the first VM terminates. I did not see any error in the web admin UI, but there is a failure in vdsm.log:

2019-07-01 18:29:42,270+0200 INFO  (libvirt/events) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume ed35fce7-6296-4d7e-931f-e1a403649206/d5eb955c-4437-4011-8e4d-0fa5fa15f317 in use.\']\\ned35fce7-6296-4d7e-931f-e1a403649206/[\'d5eb955c-4437-4011-8e4d-0fa5fa15f317\']",)',) from=internal, task_id=1f3846d8-fd50-4aaf-9445-843e434bd700 (api:52)
2019-07-01 18:29:42,271+0200 ERROR (libvirt/events) [storage.TaskManager.Task] (Task='1f3846d8-fd50-4aaf-9445-843e434bd700') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3264, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1377, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1452, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 955, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume ed35fce7-6296-4d7e-931f-e1a403649206/d5eb955c-4437-4011-8e4d-0fa5fa15f317 in use.\']\\ned35fce7-6296-4d7e-931f-e1a403649206/[\'d5eb955c-4437-4011-8e4d-0fa5fa15f317\']",)',)


The engine should not issue the teardownImage API call while the disk is still in use by another VM.
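
For context, gerrit 114257 ("lvm: avoid exception for in use volume deactivation", listed under Links above) was later merged on the vdsm side. Below is a minimal sketch of that idea, assuming lvchange reports the failure with an "in use" message as in the log above; the function name and error handling are illustrative, not the actual vdsm code:

import logging
import subprocess

log = logging.getLogger("sketch.lvm")

def deactivate_lv_best_effort(vg_name, lv_name):
    """Try to deactivate vg_name/lv_name; skip quietly if it is still open."""
    cmd = ["lvchange", "-an", "{}/{}".format(vg_name, lv_name)]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    _, err = proc.communicate()
    if proc.returncode == 0:
        return True
    if b"in use" in err:
        # Another VM still has the ISO open; leave the LV active and
        # let the last teardown deactivate it.
        log.info("LV %s/%s is in use, not deactivating", vg_name, lv_name)
        return False
    raise RuntimeError("lvchange failed: {}".format(err.decode(errors="replace")))

With this approach the "in use" condition would be logged as informational instead of surfacing as an unexpected error in the task.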

Steps to reproduce:

1) select the first VM and open the Run Once dialog
2) in Boot Options, select 'Attach CD'
3) in the ISO list, pick an ISO located on a block storage domain
4) run the VM
5) select the second VM and open the Run Once dialog
6) in Boot Options, select the same ISO as in step 3
7) run the VM
8) once both VMs are up, shut down one of them while keeping the other running
9) check vdsm.log (a sketch for verifying the LV state on the host follows below)
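
On the host, the in-use state can be confirmed after step 8 with a small helper. This is a hypothetical verification script, not part of vdsm; it only relies on the standard lvs lv_attr reporting field, where the sixth character is 'o' when the device is open:

import subprocess
import sys

def lv_open_state(vg_name, lv_name):
    # lv_attr is a fixed-width attribute string; character 6 is 'o' when
    # the device is open (still used by a process such as qemu).
    attr = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr",
         "{}/{}".format(vg_name, lv_name)]).decode().strip()
    return "open (in use)" if len(attr) >= 6 and attr[5] == "o" else "not open"

if __name__ == "__main__":
    vg, lv = sys.argv[1], sys.argv[2]  # storage domain UUID, volume UUID
    print("{}/{}: {}".format(vg, lv, lv_open_state(vg, lv)))

While the second VM is still running, the ISO's LV should report as open, which matches the failure shown in the log above.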

Comment 1 Michal Skrivanek 2019-07-02 06:57:33 UTC
Tal, it might be better if you take this one?

Comment 3 Vojtech Juranek 2020-08-20 09:09:23 UTC
*** Bug 1870110 has been marked as a duplicate of this bug. ***

Comment 4 Nir Soffer 2020-08-20 09:20:31 UTC
Moving to vdsm since this cannot be fixed in the engine. Vdsm needs to support shared block devices properly.
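
A hedged sketch of what supporting shared block devices could mean, in the spirit of the abandoned gerrit change 101657 ("storage: support multiple prepare image calls"): reference-count prepare/teardown calls per volume and deactivate the LV only when the last user is gone. The class and method names below are hypothetical, not the vdsm API:

import threading

class VolumeRefCounter(object):
    """Track how many consumers prepared each volume."""

    def __init__(self, activate, deactivate):
        self._lock = threading.Lock()
        self._refs = {}
        self._activate = activate      # e.g. a wrapper around lvchange -ay
        self._deactivate = deactivate  # e.g. a wrapper around lvchange -an

    def prepare(self, vol_id):
        with self._lock:
            self._refs[vol_id] = self._refs.get(vol_id, 0) + 1
            if self._refs[vol_id] == 1:
                # First consumer: activate the LV.
                self._activate(vol_id)

    def teardown(self, vol_id):
        with self._lock:
            count = self._refs.get(vol_id, 0)
            if count == 0:
                return  # already torn down, nothing to do
            count -= 1
            if count == 0:
                del self._refs[vol_id]
                # Last consumer gone: now it is safe to deactivate.
                self._deactivate(vol_id)
            else:
                self._refs[vol_id] = count

In this scheme the first VM's shutdown would only decrement the counter, and the LV would be deactivated when the last VM using the ISO shuts down.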

Comment 6 Sandro Bonazzola 2021-06-04 05:34:12 UTC
All referenced patches have been merged; can you please check the status of this bug?

Comment 7 Roman Bednář 2021-06-09 08:52:31 UTC
Indeed, the state of this BZ should have been flipped; doing it now.

Comment 8 Lukas Svaty 2021-07-14 13:08:57 UTC
This bug has low overall severity and passed an automated regression suite, so it is not going to be further verified by QE. If you believe special care is required, feel free to re-open it and move it to ON_QA status.

