Bug 2179105 - HPP VMI is not deleted when the VM is stopped
Summary: HPP VMI is not deleted when the VM is stopped
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.13.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.13.3
Assignee: Alexander Wels
QA Contact: Natalie Gavrielov
URL:
Whiteboard:
Duplicates: 2179102 2179104
Depends On:
Blocks:
 
Reported: 2023-03-16 16:12 UTC by dalia
Modified: 2023-08-09 17:53 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-09 17:53:15 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CNV-26960 0 None None None 2023-03-16 16:13:43 UTC
Red Hat Issue Tracker CNV-28201 0 None None None 2023-08-09 17:53:14 UTC

Description dalia 2023-03-16 16:12:06 UTC
Description of problem:
The VMI is not deleted when the VM is stopped and there is no hpp pod on the node

Version-Release number of selected component (if applicable):
4.13

How reproducible:
100%

Steps to Reproduce:
1. Create a running HPP VM
2. Use a node selector to remove the hpp pod from the VM's node
3. Stop the VM (an illustrative command sketch follows below)
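
As a rough sketch of the steps above with oc/virtctl (the VM name is taken from this report; the HostPathProvisioner CR name, the workload nodeSelector field, and the label key are assumptions that may differ per deployment):

  # Assumed: constrain the hpp workload pods with a nodeSelector so the pod is
  # rescheduled away from the node running the VM (leave that node unlabeled).
  oc patch hostpathprovisioner hostpath-provisioner --type=merge \
    -p '{"spec":{"workload":{"nodeSelector":{"hpp-allowed":"true"}}}}'
  oc label node <some-other-node> hpp-allowed=true

  # Stop the VM; the VMI is expected to be torn down along with it.
  virtctl stop vm-5616-1678978679-0687053

  # Observe whether the VMI is deleted or lingers in Succeeded.
  oc get vmi vm-5616-1678978679-0687053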

Actual results:
The VM status is Stopped, but the VMI remains with status Succeeded

Expected results:
The VMI is deleted, or some sort of error is reported

Additional info:

http://pastebin.test.redhat.com/1094947


oc describe vmi vm-5616-1678978679-0687053
...

Events:
  Type    Reason            Age                    From                       Message
  ----    ------            ----                   ----                       -------
  Normal  SuccessfulCreate  8m31s                  virtualmachine-controller  Created virtual machine pod virt-launcher-vm-5616-1678978679-0687053-dqlkk
  Normal  SuccessfulDelete  8m25s (x8 over 8m31s)  virtualmachine-controller  Deleted WaitForFirstConsumer temporary pod virt-launcher-vm-5616-1678978679-0687053-dqlkk
  Normal  SuccessfulCreate  8m10s                  virtualmachine-controller  Created virtual machine pod virt-launcher-vm-5616-1678978679-0687053-72wqg
  Normal  Created           8m4s                   virt-handler               VirtualMachineInstance defined.
  Normal  Started           8m4s                   virt-handler               VirtualMachineInstance started.
  Normal  SuccessfulDelete  4m55s                  virtualmachine-controller  Deleted virtual machine pod virt-launcher-vm-5616-1678978679-0687053-72wqg
  Normal  ShuttingDown      4m55s (x4 over 4m55s)  virt-handler               Signaled Graceful Shutdown
  Normal  Deleted           4m53s                  virt-handler               Signaled Deletion
  Normal  Stopped           4m53s                  virt-handler               The VirtualMachineInstance was shut down.

Comment 1 dalia 2023-03-16 20:10:03 UTC
*** Bug 2179104 has been marked as a duplicate of this bug. ***

Comment 2 dalia 2023-03-19 10:14:38 UTC
*** Bug 2179102 has been marked as a duplicate of this bug. ***

Comment 3 Alexander Wels 2023-03-22 15:51:36 UTC
So by essentially breaking the storage (forcing the hpp pod off the node), you made it impossible for the PVC to get properly unmounted from the container; that is the hpp pod's job. So the container, and thus the pod, cannot be properly cleaned up. If you look at the virt-launcher pod, I am sure there is an appropriate error in there. I am not entirely sure what you expect to happen here. If you restore the hpp pod, does it get cleaned up properly?
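
For example (pod and VM names are taken from this report; the patch that undoes the nodeSelector is an assumption mirroring the reproduction sketch above):

  # The teardown/unmount error should surface on the stuck virt-launcher pod.
  oc describe pod virt-launcher-vm-5616-1678978679-0687053-72wqg
  oc get events --field-selector involvedObject.name=virt-launcher-vm-5616-1678978679-0687053-72wqg

  # Restore the hpp pod on the node by undoing the nodeSelector, then check
  # whether the pod and the VMI get cleaned up.
  oc patch hostpathprovisioner hostpath-provisioner --type=merge \
    -p '{"spec":{"workload":{"nodeSelector":null}}}'
  oc get pod,vmi | grep vm-5616-1678978679-0687053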

Comment 4 dalia 2023-03-26 12:07:47 UTC
Can we indicate that the VMI is in an error state in some way, instead of it remaining in Succeeded status?

Comment 5 Alexander Wels 2023-03-27 12:04:29 UTC
So HPP doesn't know anything about VMIs and thus cannot influence them. And from the VMI's perspective, all it knows is that the pod cannot be cleaned up (the pod is finished, it just can't be cleaned up). So the VMI status of Succeeded seems appropriate.
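
A quick way to see both sides of that (names from this report; output will vary):

  # The VMI phase reports the guest's outcome, not the storage cleanup state.
  oc get vmi vm-5616-1678978679-0687053 -o jsonpath='{.status.phase}'
  # The pod that cannot be cleaned up is where the storage error surfaces.
  oc get pod virt-launcher-vm-5616-1678978679-0687053-72wqg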

Comment 7 Maya Rashish 2023-06-28 13:50:17 UTC
Bumping to the next z-stream; the bug is hard to trigger, so it is not a blocker, and we are already in code freeze.

Comment 8 Adam Litke 2023-08-09 17:53:15 UTC
Since this issue is going to be resolved by documentation, I am closing this bug.

