Bug 1870110 - Shutting down VM with CD attached throws an error if CD is attached by another VM at the same time
Keywords:
Status: CLOSED DUPLICATE of bug 1725915
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: 4.40.25
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ovirt-4.4.3
Assignee: Vojtech Juranek
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-19 11:31 UTC by Vojtech Juranek
Modified: 2020-08-20 09:09 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-20 09:09:23 UTC
oVirt Team: Storage
Embargoed:


Attachments

Description Vojtech Juranek 2020-08-19 11:31:04 UTC
Description of problem:
When two (or more) VMs have the same ISO attached as a CD-ROM and one of the VMs shuts down, vdsm logs an error. The error is actually ignored and the shutdown succeeds, but it pollutes the log and makes the shutdown take longer due to LVM command retries. We should have a way to detect that the ISO is used by another VM and skip deactivation of the volume, or at least ignore the error completely. A minimal sketch of such a check is shown below.
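
For illustration only (not from this report and not vdsm code): a minimal sketch of the "skip deactivation when the ISO is still in use" idea, using plain LVM commands. The function names, the reliance on the lv_attr "open" bit, and the error handling are assumptions.

import subprocess

def lv_is_open(vg_name, lv_name):
    # "lvs -o lv_attr" prints a 10-character attribute string; the sixth
    # character is "o" when the device is currently open (in use).
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_attr",
         "{}/{}".format(vg_name, lv_name)],
        universal_newlines=True,
    )
    attr = out.strip()
    return len(attr) >= 6 and attr[5] == "o"

def teardown_cd(vg_name, lv_name):
    # Skip deactivation if another VM still has the ISO open.
    if lv_is_open(vg_name, lv_name):
        return
    subprocess.check_call(
        ["lvchange", "--activate", "n",
         "{}/{}".format(vg_name, lv_name)])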


How reproducible:
always

Steps to Reproduce:
1. Start 2 VMs
2. Attach the same ISO as a CD to both VMs
3. Shut down one of the VMs

Actual results:
An error is logged in the vdsm log

Expected results:
No error in the log

Additional info:
vdsm exception:

2020-08-17 09:35:26,099-0400 INFO  (libvirt/events) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume aee38bf8-9f69-4a56-b30a-6e1cf6e1355f/51067df7-c19d-4d4f-945e-d26cc1e95db6 in use.\']\\naee38bf8-9f69-4a56-b30a-6e1cf6e1355f/[\'51067df7-c19d-4d4f-945e-d26cc1e95db6\']",)',) from=internal, task_id=840d7cfc-32bb-4f7f-b072-d25b396f3fcb (api:52)
2020-08-17 09:35:26,099-0400 ERROR (libvirt/events) [storage.TaskManager.Task] (Task='840d7cfc-32bb-4f7f-b072-d25b396f3fcb') Unexpected error (task:880)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1189, in _setLVAvailability
    changelv(vg, lvs, ("--available", available))
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1184, in changelv
    raise se.StorageException("%d %s %s\n%s/%s" % (rc, out, err, vg, lvs))
vdsm.storage.exception.StorageException: General Storage Exception: ("5 [] ['  Logical volume aee38bf8-9f69-4a56-b30a-6e1cf6e1355f/51067df7-c19d-4d4f-945e-d26cc1e95db6 in use.']\naee38bf8-9f69-4a56-b30a-6e1cf6e1355f/['51067df7-c19d-4d4f-945e-d26cc1e95db6']",)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887, in _run
    return fn(*args, **kargs)
  File "<decorator-gen-169>", line 2, in teardownImage
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 3295, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/blockSD.py", line 1313, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1705, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1194, in _setLVAvailability
    raise error(str(e))
vdsm.storage.exception.CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume aee38bf8-9f69-4a56-b30a-6e1cf6e1355f/51067df7-c19d-4d4f-945e-d26cc1e95db6 in use.\']\\naee38bf8-9f69-4a56-b30a-6e1cf6e1355f/[\'51067df7-c19d-4d4f-945e-d26cc1e95db6\']",)',)

Comment 1 Nir Soffer 2020-08-19 11:42:34 UTC
Yes, this is a known issue: vdsm does not support shared disks. If multiple
VMs or storage operations use the same block storage volume, they will
try to deactivate it while another operation or VM is still using the volume
and will fail.

This can actually fail the other operation if it has not opened the logical
volume yet.

To solve this, vdsm needs to add a reference counting mechanism, so tearing
down a volume does nothing unless the caller is the last user of the
volume.
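
(Not part of the comment: a minimal sketch of the reference-counting idea described above. The class name, key format, and deactivate callback are assumptions for illustration, not vdsm internals.)

import threading
from collections import defaultdict

class VolumeRefCount:
    """Deactivate a volume only when its last user releases it."""

    def __init__(self, deactivate):
        self._deactivate = deactivate  # callable(vg_name, lv_name), assumed
        self._lock = threading.Lock()
        self._refs = defaultdict(int)

    def acquire(self, vg_name, lv_name):
        with self._lock:
            self._refs[(vg_name, lv_name)] += 1

    def release(self, vg_name, lv_name):
        key = (vg_name, lv_name)
        with self._lock:
            self._refs[key] -= 1
            last_user = self._refs[key] <= 0
            if last_user:
                del self._refs[key]
        # Only the last user actually deactivates the LV; earlier releases
        # are no-ops, so the "in use" error never occurs.
        if last_user:
            self._deactivate(vg_name, lv_name)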

Comment 2 Michal Skrivanek 2020-08-20 08:52:44 UTC
It's just a duplicate of bug 1725915. And it's not properly prepared either (see bug 1589763). It's a shame this is still open after a year...

Either way, please use just a single bug to track this.

Comment 3 Vojtech Juranek 2020-08-20 09:09:23 UTC

*** This bug has been marked as a duplicate of bug 1725915 ***

