Bug 1507691 - SD into maintenance should fail when there's an open LV in it.
Status: CLOSED DUPLICATE of bug 1449968
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.0.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.1.8
Target Release: ---
Assignee: Daniel Erez
QA Contact: Elad
 
Reported: 2017-10-30 23:09 UTC by Javier Coscia
Modified: 2021-09-09 12:48 UTC
CC List: 10 users

Last Closed: 2017-11-01 10:54:39 UTC
oVirt Team: Storage




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-43461 0 None None None 2021-09-09 12:48:27 UTC
oVirt gerrit 83476 0 master ABANDONED vm: hotunplugDisk - catch StorageException 2017-11-01 10:39:19 UTC

Description Javier Coscia 2017-10-30 23:09:53 UTC
Description of problem:

The SD was moved to maintenance mode even though a previous HotUnplug operation had failed because the volume was busy/open.

Version-Release number of selected component (if applicable):

rhevm-4.0.7.5-0.1.el7ev.noarch
vdsm-4.19.31-1.el7ev.x86_64

How reproducible:

Happened once in a customer's environment after an upgrade from 3.6 to 4.0

Steps to Reproduce:
1. Have a vDisk associated with a VM
2. Deactivate the vDisk. In this case, the underlying LV was still open and in use, so the HotUnplug operation failed
3. Move the SD to maintenance mode
4. Detach the SD from DC

Actual results:

The SD switched to maintenance mode and was later detached from the DC

Expected results:

Moving the SD to maintenance mode should fail or be blocked, with a message indicating that it cannot be switched until all volumes in the SD are deactivated (not open)
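On the host side, one way such a check could be expressed (a hedged sketch, not vdsm's or the engine's actual code) is to inspect the lv_attr field reported by lvs for the domain's VG: the sixth attribute character is 'o' when the device is open. The helper below only parses that field; the lvs invocation is shown as a comment because it needs a live LVM setup, and the VG name is a placeholder.

```shell
#!/bin/sh
# Sketch: detect open LVs in a storage domain's VG before allowing
# maintenance. Hypothetical helper, not part of vdsm/ovirt-engine.

# is_open ATTR - succeeds if the 6th lv_attr character is 'o' (device open)
is_open() {
    [ "$(printf '%s' "$1" | cut -c6)" = "o" ]
}

# Example usage against a real VG (requires LVM; VG name is a placeholder):
#   lvs --noheadings -o lv_name,lv_attr <sd_vg_uuid> | while read name attr; do
#       is_open "$attr" && echo "LV $name is still open - block maintenance"
#   done

# Demonstration with sample lv_attr strings:
is_open "-wi-ao----" && echo "-wi-ao----: open"
is_open "-wi-a-----" || echo "-wi-a-----: not open"
```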

Additional info:

This was discovered during a graceful DR exercise: the application inside the VM was stopped, the FS was unmounted, the VG inside the VM was deactivated, and the vDisk was then deactivated from the UI (not removed/detached)
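The in-guest part of that sequence corresponds roughly to the commands below (a hedged sketch; the application name, mount point, and VG name are placeholders, since the real ones are not in the report).

```shell
#!/bin/sh
# Dry-run sketch of the in-guest quiesce sequence described above.
# myapp, /mnt/appdata and appvg are placeholders, not the customer's
# actual names. DRYRUN=echo prints the commands instead of running them;
# clear it (DRYRUN=) to execute for real inside the guest.
DRYRUN=echo

$DRYRUN systemctl stop myapp    # 1. stop the application in the guest
$DRYRUN umount /mnt/appdata     # 2. unmount the FS on the virtual disk
$DRYRUN vgchange -an appvg      # 3. deactivate the in-guest VG
# 4. finally, deactivate the vDisk from the RHV UI (hot-unplug) -
#    the step that failed in this bug because the LV was still open
```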

Comment 4 Allon Mureinik 2017-10-31 17:23:04 UTC
If the hotunplugging failed, I'd expect this to be reflected back to the engine, and the disk should remain active.

In such a case, we should have an engine-side validation that deactivation can't be attempted while there are running VMs that use the domain.

Comment 5 Daniel Erez 2017-10-31 18:38:40 UTC
(In reply to Allon Mureinik from comment #4)
> If the hotunplugging failed, I'd expect this to be reflected back to the
> engine, and the disk should remain active.
> 
> In such a case, we should have an engine-side validation that deactivating
> can't be attempted since there are running VMs that use the domain.

We do have a validation for blocking deactivation when VMs are still running.
@Javier - what was the status of the VM and the disk prior to domain deactivation?

Comment 17 Elad 2018-08-02 08:11:49 UTC
Duplicate bug 1449968 has qe_test_coverage+, so setting this one to -

