Description of problem:
The storage domain (SD) was moved to maintenance mode even though a previous HotUnPlug operation had failed because the volume was still busy/open.
Version-Release number of selected component (if applicable):
rhevm-4.0.7.5-0.1.el7ev.noarch
vdsm-4.19.31-1.el7ev.x86_64
How reproducible:
Happened once, in a customer's environment, after an upgrade from 3.6 to 4.0
Steps to Reproduce:
1. Have a vDisk associated to a VM
2. Deactivate the vDisk. In this case the LV was still open and in use, so the HotUnPlug operation failed
3. Move the SD to maintenance mode
4. Detach the SD from the DC (a hypothetical SDK sketch of these steps follows the list)
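
The same flow can be driven through the REST API. Below is a minimal, hypothetical reproduction sketch using the oVirt 4 Python SDK (ovirtsdk4); the engine URL, credentials and the 'myvm'/'mydc'/'mysd' names are placeholders, and the sketch simply takes the VM's first disk attachment. The point of the bug is that steps 3 and 4 succeeded even though the HotUnPlug in step 2 had failed and the LV was still open.

    # Hypothetical reproduction sketch (placeholders throughout), ovirtsdk4.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )
    system = connection.system_service()

    # Steps 1-2: find the VM's disk attachment and deactivate it (HotUnPlug).
    vms_service = system.vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    attachments_service = vms_service.vm_service(vm.id).disk_attachments_service()
    attachment = attachments_service.list()[0]  # placeholder: first attachment
    attachments_service.attachment_service(attachment.id).update(
        types.DiskAttachment(active=False))

    # Step 3: move the storage domain to maintenance (deactivate it in its DC).
    dcs_service = system.data_centers_service()
    dc = dcs_service.list(search='name=mydc')[0]
    attached_sds_service = dcs_service.data_center_service(dc.id).storage_domains_service()
    sd = [s for s in attached_sds_service.list() if s.name == 'mysd'][0]
    attached_sd_service = attached_sds_service.storage_domain_service(sd.id)
    attached_sd_service.deactivate()

    # Step 4: detach the storage domain from the data center.
    attached_sd_service.remove()

    connection.close()
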
Actual results:
The SD switched to maintenance mode and was later detached from the DC, despite the failed HotUnPlug
Expected results:
Moving the SD to maintenance mode should fail or be blocked, with a message indicating that the SD cannot be switched until all volumes in it are deactivated (i.e., no longer open)
Additional info:
This was discovered during a graceful DR exercise: the application inside the VM was stopped, the filesystem was unmounted, the VG inside the VM was deactivated, and the vDisk was then deactivated from the UI (not removed/detached).
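
When diagnosing this kind of case it can help to check, before moving the SD to maintenance, whether the LV backing the volume is still open on the host. The snippet below is an illustrative sketch, not vdsm code; the VG/LV names are placeholders (on a block domain these correspond to the storage domain and volume UUIDs), and the open state is read from the 6th character of the lvs attribute string.

    # Illustrative check (placeholders, not vdsm code): is the LV still open?
    import subprocess

    def lv_is_open(vg_name, lv_name):
        out = subprocess.check_output(
            ['lvs', '--noheadings', '-o', 'lv_attr', '%s/%s' % (vg_name, lv_name)])
        attr = out.decode().strip()
        # The 6th character of the lvs attribute string is 'o' while the
        # device is open (e.g. still attached to a running qemu process).
        return len(attr) >= 6 and attr[5] == 'o'

    print(lv_is_open('SD_UUID', 'VOLUME_UUID'))
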
If the hot-unplugging failed, I'd expect this to be reflected back to the engine, and the disk should remain active.
In such a case, we should have an engine-side validation so that deactivation can't be attempted while there are running VMs that use the domain.
(In reply to Allon Mureinik from comment #4)
> If the hot-unplugging failed, I'd expect this to be reflected back to the
> engine, and the disk should remain active.
>
> In such a case, we should have an engine-side validation so that deactivation
> can't be attempted while there are running VMs that use the domain.
We do have a validation that blocks deactivation while VMs are still running.
@Javier - what was the status of the VM and the disk prior to domain deactivation?