Bug 1725915

Summary: Vdsm tries to tear down in-use volume of ISO in block storage domain
Product: [oVirt] vdsm
Component: Core
Version: 4.30.0
Hardware: Unspecified
OS: Unspecified
Severity: low
Priority: unspecified
Status: CLOSED CURRENTRELEASE
Reporter: Tomáš Golembiovský <tgolembi>
Assignee: Roman Bednář <rbednar>
QA Contact: Avihai <aefrat>
CC: bugs, dfodor, eshenitz, lleistne, michal.skrivanek, nsoffer, rbednar, sfishbai, tnisan, vjuranek
Target Milestone: ovirt-4.4.7
Flags: pm-rhel: ovirt-4.4+
Type: Bug
oVirt Team: Storage
Bug Blocks: 1721455
Last Closed: 2021-07-14 13:08:57 UTC

Attachments:
  vdsm.log of failed teardownImage

Description Tomáš Golembiovský 2019-07-01 17:46:36 UTC
Created attachment 1586390 [details]
vdsm.log of failed teardownImage

When I start two VMs with 'Run Once', both with the same ISO from a block storage domain attached, the engine tries to detach the volume after the first VM terminates. I did not see any error in the web admin UI, but there is a failure in vdsm.log:

2019-07-01 18:29:42,270+0200 INFO  (libvirt/events) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume ed35fce7-6296-4d7e-931f-e1a403649206/d5eb955c-4437-4011-8e4d-0fa5fa15f317 in use.\']\\ned35fce7-6296-4d7e-931f-e1a403649206/[\'d5eb955c-4437-4011-8e4d-0fa5fa15f317\']",)',) from=internal, task_id=1f3846d8-fd50-4aaf-9445-843e434bd700 (api:52)
2019-07-01 18:29:42,271+0200 ERROR (libvirt/events) [storage.TaskManager.Task] (Task='1f3846d8-fd50-4aaf-9445-843e434bd700') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3264, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1377, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1452, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 955, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume ed35fce7-6296-4d7e-931f-e1a403649206/d5eb955c-4437-4011-8e4d-0fa5fa15f317 in use.\']\\ned35fce7-6296-4d7e-931f-e1a403649206/[\'d5eb955c-4437-4011-8e4d-0fa5fa15f317\']",)',)


The engine should not make the teardownImage API call while the disk is still in use by the other VM.
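
A note on the failure mode: teardownImage ends up running "lvchange -a n" on the ISO's LV while the other VM's qemu process still holds it open, so LVM refuses with "Logical volume ... in use". As a minimal illustration (not vdsm's actual code; the helper names are made up for this report), the open state that blocks deactivation is visible in the sixth character of LVM's lv_attr field:

# Minimal sketch, not vdsm code: skip LVs that LVM reports as open.
import subprocess

def lv_is_open(vg_name, lv_name):
    """Return True if the LV is currently held open (e.g. by a qemu process)."""
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr", "%s/%s" % (vg_name, lv_name)])
    attr = out.decode().strip()
    # lv_attr is a 10-character string; character 6 is the device open
    # state: "o" means open, "-" means not open.
    return len(attr) >= 6 and attr[5] == "o"

def safe_deactivate(vg_name, lv_names):
    """Deactivate only LVs that are not open; return the LVs actually deactivated."""
    idle = [lv for lv in lv_names if not lv_is_open(vg_name, lv)]
    for lv in idle:
        subprocess.check_call(["lvchange", "-a", "n", "%s/%s" % (vg_name, lv)])
    return idle

A check-then-act guard like this is inherently racy, which is why the later comments point towards tracking shared use explicitly rather than only probing LVM at teardown time.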

Steps to reproduce:

1) select the first VM and open the Run Once dialog
2) in Boot Options, select 'Attach CD'
3) in the ISO list, pick an ISO located on a block storage domain
4) run the VM
5) select the second VM and open the Run Once dialog
6) in Boot Options, select the same ISO as in step 3
7) run the VM
8) once both VMs are up, shut down one of them while keeping the other running
9) check vdsm.log
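
For step 9, a quick way to spot the failure signature in the log (a small helper written for this report, not part of vdsm; it assumes the usual log path /var/log/vdsm/vdsm.log):

# Hypothetical helper for step 9: print the teardownImage failures from vdsm.log.
import re

FAILURE = re.compile(
    r"FINISH teardownImage error=Cannot deactivate Logical Volume"
    r"|CannotDeactivateLogicalVolume")

def find_teardown_failures(log_path="/var/log/vdsm/vdsm.log"):
    """Yield vdsm.log lines matching the in-use LV teardown failure."""
    with open(log_path) as f:
        for line in f:
            if FAILURE.search(line):
                yield line.rstrip()

if __name__ == "__main__":
    for hit in find_teardown_failures():
        print(hit)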

Comment 1 Michal Skrivanek 2019-07-02 06:57:33 UTC
Tal, might be better if you take it?

Comment 3 Vojtech Juranek 2020-08-20 09:09:23 UTC
*** Bug 1870110 has been marked as a duplicate of this bug. ***

Comment 4 Nir Soffer 2020-08-20 09:20:31 UTC
Moving to vdsm since this cannot be fixed in the engine. Vdsm needs to support shared block devices properly.
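
To illustrate what supporting shared block devices could look like (a rough sketch only; the class and method names are invented here and this is not the merged vdsm change), teardown would deactivate the LV only once the last consumer of the volume is gone:

# Illustrative only: per-volume user counting so the LV is deactivated
# only when the last VM using it tears the image down.
import threading
from collections import defaultdict

class SharedVolumeTracker:
    def __init__(self):
        self._lock = threading.Lock()
        self._users = defaultdict(int)   # (sd_uuid, vol_uuid) -> active users

    def prepare(self, sd_uuid, vol_uuid):
        """Register a user; return True if the caller should activate the LV."""
        with self._lock:
            self._users[(sd_uuid, vol_uuid)] += 1
            return self._users[(sd_uuid, vol_uuid)] == 1

    def teardown(self, sd_uuid, vol_uuid):
        """Drop a user; return True only when the LV may safely be deactivated."""
        with self._lock:
            key = (sd_uuid, vol_uuid)
            if self._users[key] > 0:
                self._users[key] -= 1
            if self._users[key] == 0:
                del self._users[key]
                return True    # last user gone, lvchange -a n is safe now
            return False       # still shared by another VM, keep the LV active

With tracking like this, the shutdown of the second VM, rather than the first, would be what triggers the actual lvchange -a n.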

Comment 6 Sandro Bonazzola 2021-06-04 05:34:12 UTC
All referenced patches have been merged; can you please check this bug's status?

Comment 7 Roman Bednář 2021-06-09 08:52:31 UTC
Indeed, the state of this BZ should have been flipped; doing it now.

Comment 8 Lukas Svaty 2021-07-14 13:08:57 UTC
This bug has low overall severity and passed an automated regression 
suite, and is not going to be further verified by QE. If you believe 
special care is required, feel free to re-open to ON_QA status.