Description of problem:
The VMI is not deleted when the VM is stopped and there is no hpp pod on the node.

Version-Release number of selected component (if applicable):
4.13

How reproducible:
100%

Steps to Reproduce:
1. Create a running hpp VM
2. Use a node selector to remove the hpp pod from the VM's node
3. Stop the VM

Actual results:
The VM status is Stopped, but the VMI stays in status Succeeded.

Expected results:
The VMI is deleted, or some sort of error is reported.

Additional info:
http://pastebin.test.redhat.com/1094947

oc describe vmi vm-5616-1678978679-0687053
...
Events:
  Type    Reason            Age                    From                       Message
  ----    ------            ----                   ----                       -------
  Normal  SuccessfulCreate  8m31s                  virtualmachine-controller  Created virtual machine pod virt-launcher-vm-5616-1678978679-0687053-dqlkk
  Normal  SuccessfulDelete  8m25s (x8 over 8m31s)  virtualmachine-controller  Deleted WaitForFirstConsumer temporary pod virt-launcher-vm-5616-1678978679-0687053-dqlkk
  Normal  SuccessfulCreate  8m10s                  virtualmachine-controller  Created virtual machine pod virt-launcher-vm-5616-1678978679-0687053-72wqg
  Normal  Created           8m4s                   virt-handler               VirtualMachineInstance defined.
  Normal  Started           8m4s                   virt-handler               VirtualMachineInstance started.
  Normal  SuccessfulDelete  4m55s                  virtualmachine-controller  Deleted virtual machine pod virt-launcher-vm-5616-1678978679-0687053-72wqg
  Normal  ShuttingDown      4m55s (x4 over 4m55s)  virt-handler               Signaled Graceful Shutdown
  Normal  Deleted           4m53s                  virt-handler               Signaled Deletion
  Normal  Stopped           4m53s                  virt-handler               The VirtualMachineInstance was shut down.
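A rough sketch of step 2, assuming the hpp provisioner runs as a DaemonSet named hostpath-provisioner-csi in the openshift-cnv namespace (both names are assumptions; check your deployment first):

# Find the actual DaemonSet name and namespace:
oc get ds -A | grep -i hostpath

# Label every node except the VM's node, then restrict the DaemonSet to that label:
oc label node <other-node> hpp=allowed
oc patch ds hostpath-provisioner-csi -n openshift-cnv --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"hpp":"allowed"}}}}}'

# Once the hpp pod has left the VM's node, stop the VM:
virtctl stop vm-5616-1678978679-0687053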
*** Bug 2179104 has been marked as a duplicate of this bug. ***
*** Bug 2179102 has been marked as a duplicate of this bug. ***
By forcing the hpp pod off the node, you essentially broke the storage: the PVC can no longer be properly unmounted from the container (that is the hpp pod's job), so the container, and thus the pod, cannot be properly cleaned up. If you look at the virt-launcher pod, I am sure there is an appropriate error in there. I am not entirely sure what you expect to happen here. If you restore the hpp pod, does it get cleaned up properly?
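A hedged sketch of how one might check the launcher pod and restore the hpp pod (DaemonSet name and namespace as assumed in the reproducer sketch above):

# Look for the unmount error on the lingering launcher pod:
oc describe pod virt-launcher-vm-5616-1678978679-0687053-72wqg
oc get events --field-selector involvedObject.name=virt-launcher-vm-5616-1678978679-0687053-72wqg

# Restore the hpp pod by reverting the nodeSelector added for the reproducer:
oc patch ds hostpath-provisioner-csi -n openshift-cnv --type json \
  -p '[{"op":"remove","path":"/spec/template/spec/nodeSelector/hpp"}]'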
Can we indicate that the VMI is in an error state in some way, instead of it remaining in Succeeded status?
HPP doesn't know anything about VMIs and thus cannot influence them. And from the VMI's perspective, all it knows is that the pod cannot be cleaned up (the pod has finished; it just can't be cleaned up). So the VMI status of Succeeded seems appropriate.
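To illustrate the split state described above (the kubevirt.io/domain label on launcher pods is an assumption; it may differ by KubeVirt version):

# The VMI reports Succeeded:
oc get vmi vm-5616-1678978679-0687053 -o jsonpath='{.status.phase}'

# ...while the launcher pod lingers because its volume cannot be unmounted:
oc get pod -l kubevirt.io/domain=vm-5616-1678978679-0687053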
Bumping to the next z-stream: the bug is hard to trigger, so it is not a blocker, and we are already in code freeze.
Since this issue is going to be resolved by documentation, I am closing this bug.