
Bug 1507691

Summary: Moving an SD into maintenance should fail when there is an open LV in it.
Product: Red Hat Enterprise Virtualization Manager
Reporter: Javier Coscia <jcoscia>
Component: ovirt-engine
Assignee: Daniel Erez <derez>
Status: CLOSED DUPLICATE
QA Contact: Elad <ebenahar>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.0.7
CC: amureini, ebenahar, jcoscia, lsurette, mkalinin, nsoffer, rbalakri, Rhev-m-bugs, srevivo, tnisan
Target Milestone: ovirt-4.1.8
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-01 10:54:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: 
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Javier Coscia 2017-10-30 23:09:53 UTC
Description of problem:

The SD was moved to maintenance mode even though a previous HotUnPlug operation had failed because the volume was busy/open.

Version-Release number of selected component (if applicable):

rhevm-4.0.7.5-0.1.el7ev.noarch
vdsm-4.19.31-1.el7ev.x86_64

How reproducible:

Happened once in a customer's environment after an upgrade from 3.6 to 4.0.

Steps to Reproduce:
1. Have a vDisk attached to a VM
2. Deactivate the vDisk. In this case, the HotUnPlug operation failed because the LV was still open and in use
3. Move the SD to maintenance mode
4. Detach the SD from DC

Actual results:

The SD switched to maintenance mode and was later detached from the DC.

Expected results:

Moving the SD to maintenance mode should fail or be blocked, with a message indicating the domain cannot be switched until all volumes in the SD are deactivated (i.e. no LVs are open).
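For reference, a minimal host-side sketch of such a check, assuming the block domain's VG is named after the storage domain UUID (the UUID below is a placeholder) and relying on LVM's lv_attr field, whose sixth character is 'o' while the LV's device is open:

# Sketch only: list LVs in a storage domain's VG that are still open,
# using standard `lvs` reporting options. The VG name below is a
# hypothetical placeholder for the storage domain UUID.
import subprocess

SD_VG = "00000000-0000-0000-0000-000000000000"  # placeholder VG / SD UUID

def open_lvs(vg_name):
    """Return names of LVs in vg_name whose device is open ('o' in lv_attr)."""
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_name,lv_attr", vg_name],
        universal_newlines=True,
    )
    opened = []
    for line in out.splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        name, attr = fields[0], fields[1]
        if len(attr) >= 6 and attr[5] == "o":  # 6th attr character: device open
            opened.append(name)
    return opened

if __name__ == "__main__":
    busy = open_lvs(SD_VG)
    if busy:
        print("Open LVs remain, maintenance should be blocked:", ", ".join(busy))
    else:
        print("No open LVs; the domain can be moved to maintenance.")

Running a check like this against the domain's VG before allowing the maintenance flow would surface the still-open LV described above.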

Additional info:

This was discovered during a graceful DR exercise: the application inside the VM was stopped, the FS was unmounted, the VG inside the VM was deactivated, and the vDisk was then deactivated from the UI (not removed/detached).

Comment 4 Allon Mureinik 2017-10-31 17:23:04 UTC
If the hot unplug failed, I'd expect this to be reflected back to the engine, and the disk should remain active.

In such a case, we should have an engine-side validation so that deactivation cannot be attempted while there are running VMs that use the domain.
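
As a rough illustration of the suggested validation (a sketch only; the real engine is written in Java and all class and field names below are hypothetical), the check could amount to something like:

# Sketch only -- not ovirt-engine code; every class and field name here
# is hypothetical. It illustrates blocking domain deactivation while a
# running VM still has a plugged disk on that domain.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Disk:
    alias: str
    storage_domain_id: str
    plugged: bool

@dataclass
class Vm:
    name: str
    status: str                      # e.g. "Up", "Down"
    disks: List[Disk] = field(default_factory=list)

def validate_deactivate(domain_id: str, domain_name: str, vms: List[Vm]) -> Tuple[bool, str]:
    """Refuse deactivation if a running VM still uses the domain."""
    for vm in vms:
        if vm.status != "Up":
            continue
        for disk in vm.disks:
            if disk.plugged and disk.storage_domain_id == domain_id:
                return False, ("Cannot deactivate %s: disk %s is still plugged "
                               "into running VM %s" % (domain_name, disk.alias, vm.name))
    return True, ""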

Comment 5 Daniel Erez 2017-10-31 18:38:40 UTC
(In reply to Allon Mureinik from comment #4)
> If the hot unplug failed, I'd expect this to be reflected back to the
> engine, and the disk should remain active.
> 
> In such a case, we should have an engine-side validation so that
> deactivation cannot be attempted while there are running VMs that use the domain.

We do have a validation for blocking deactivation when VMs are still running.
@Javier - what was the status of the VM and the disk prior to domain deactivation?

Comment 17 Elad 2018-08-02 08:11:49 UTC
The duplicate, bug 1449968, has qe_test_coverage+, so this one is set to -.