Created attachment 1586390 [details]
vdsm.log of failed teardownImage
When I start two VMs with 'Run Once', both with the same ISO from a block storage domain attached, the engine tries to tear down the volume after the first VM terminates. I did not see any error in the web admin, but there is a failure in vdsm.log:
2019-07-01 18:29:42,270+0200 INFO (libvirt/events) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5  [\' Logical volume ed35fce7-6296-4d7e-931f-e1a403649206/d5eb955c-4437-4011-8e4d-0fa5fa15f317 in use.\']\\ned35fce7-6296-4d7e-931f-e1a403649206/[\'d5eb955c-4437-4011-8e4d-0fa5fa15f317\']",)',) from=internal, task_id=1f3846d8-fd50-4aaf-9445-843e434bd700 (api:52)
2019-07-01 18:29:42,271+0200 ERROR (libvirt/events) [storage.TaskManager.Task] (Task='1f3846d8-fd50-4aaf-9445-843e434bd700') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in teardownImage
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3264, in teardownImage
File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1377, in deactivateImage
File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1452, in deactivateLVs
_setLVAvailability(vgName, toDeactivate, "n")
File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 955, in _setLVAvailability
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5  [\' Logical volume ed35fce7-6296-4d7e-931f-e1a403649206/d5eb955c-4437-4011-8e4d-0fa5fa15f317 in use.\']\\ned35fce7-6296-4d7e-931f-e1a403649206/[\'d5eb955c-4437-4011-8e4d-0fa5fa15f317\']",)',)
The engine should not issue the teardownImage API call while the disk is still in use.
Steps to reproduce:
1) select first VM and open Run Once dialog
2) in Boot options select 'Attach CD'
3) in ISO list pick ISO located on block domain
4) run the VM
5) select second VM and open Run Once dialog
6) in Boot options select the same ISO as in step 3)
7) run the VM
8) once the VMs are up, shut down one of them while keeping the other running
9) check vdsm.log
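For step 9, a quick way to confirm the failure is to grep the log for the teardown error. This is only a sketch: /var/log/vdsm/vdsm.log is vdsm's usual default log location, and the VDSM_LOG override variable is my own convention, not something vdsm provides.

```shell
# Count failed teardownImage attempts in vdsm.log.
# Path is an assumption (vdsm's common default); override with $VDSM_LOG.
grep -cE 'FINISH teardownImage error|CannotDeactivateLogicalVolume' \
    "${VDSM_LOG:-/var/log/vdsm/vdsm.log}"
```

A non-zero count while the second VM is still running reproduces the problem described above.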
Tal, it might be better if you take it?
*** Bug 1870110 has been marked as a duplicate of this bug. ***
Moving to vdsm, since this cannot be fixed in the engine. Vdsm needs to support shared block devices properly.
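Supporting shared block devices properly amounts to reference-counting image setup/teardown, so the LV is deactivated only when the last VM using it goes away. A minimal, hypothetical illustration of that idea (the class and method names are mine, not vdsm's actual API):

```python
from collections import defaultdict
import threading


class ImageRefCounter:
    """Sketch only: deactivate a shared volume when the last user
    tears it down. Not vdsm's real implementation."""

    def __init__(self, deactivate):
        # deactivate: callback doing the real work, e.g. running lvchange -an
        self._deactivate = deactivate
        self._refs = defaultdict(int)
        self._lock = threading.Lock()

    def setup(self, vol_id):
        """Record one more user of vol_id; return the new refcount."""
        with self._lock:
            self._refs[vol_id] += 1
            return self._refs[vol_id]

    def teardown(self, vol_id):
        """Drop one user of vol_id; deactivate only when nobody is left.

        Returns True if the volume was actually deactivated.
        """
        with self._lock:
            if self._refs[vol_id] == 0:
                raise ValueError("teardown without matching setup: %s" % vol_id)
            self._refs[vol_id] -= 1
            if self._refs[vol_id] == 0:
                del self._refs[vol_id]
                self._deactivate(vol_id)  # safe: no remaining users
                return True
            return False  # still in use; skip deactivation
```

In the scenario above, both VMs would call setup() on the shared ISO's LV; the first shutdown's teardown() returns False and leaves the LV active, and only the second shutdown actually deactivates it, avoiding the "Logical volume ... in use" failure.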
All referenced patches have been merged, can you please check this bug status?
Indeed, the state of this BZ should have been flipped; doing it now.
This bug has low overall severity and passed an automated regression
suite, and is not going to be further verified by QE. If you believe
special care is required, feel free to re-open to ON_QA status.