Bug 1812775 - Cannot cancel vm migration when launch pod is pending
Summary: Cannot cancel vm migration when launch pod is pending
Keywords:
Status: CLOSED DUPLICATE of bug 1719190
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 2.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: sgott
QA Contact: Israel Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-12 06:31 UTC by Guohua Ouyang
Modified: 2020-03-17 01:15 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-16 17:21:35 UTC
Target Upstream Version:
Embargoed:



Description Guohua Ouyang 2020-03-12 06:31:57 UTC
Description of problem:
Create a VM with large memory/CPU requests and try to migrate it; the migration stays pending because no node is available. Then cancel the migration; the cancellation has no effect.

"$ oc describe pod virt-launcher-f31-8wgck
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/6 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 2 Insufficient memory, 3 Insufficient devices.kubevirt.io/kvm."
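
For reference, a migration like this is represented by a VirtualMachineInstanceMigration object. A minimal sketch of what it typically looks like in CNV 2.3 (the metadata name below is illustrative; the VMI name is inferred from the launcher pod names above):

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  name: f31-migration
spec:
  vmiName: f31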


Version-Release number of selected component (if applicable):
4.4.0-0.nightly-2020-03-06-170328

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with large memory/CPU requests
2. Migrate the VM; the migration stays pending because the target launcher pod cannot be scheduled
3. Cancel the migration (see the CLI sketch after this list)
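
A CLI sketch of the reproduction steps, assuming a VMI named f31 as in the logs and a manifest file like the one sketched in the description (the file name is illustrative):

$ oc create -f f31-migration.yaml   # create the VirtualMachineInstanceMigration
$ oc get vmim                       # the migration object exists
$ oc get pod                        # the target virt-launcher pod stays Pending (unschedulable)
$ oc delete vmim <migration-name>   # attempt to cancel the migration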

Actual results:
Cancelling the migration has no effect.

Expected results:
Cancelling the VM migration succeeds and cleans up the pending launcher pod.

Additional info:

Comment 1 Tomas Jelinek 2020-03-13 15:11:52 UTC
Guohua: how does it behave when you try to delete the vmim from the command line?

Comment 2 Guohua Ouyang 2020-03-16 04:15:28 UTC
Deleting the vmim never returns, but deleting the pending pod cancels the migration and the original VM goes back to Running.

$ oc get pod
NAME                      READY   STATUS    RESTARTS   AGE
virt-launcher-f31-kn6sr   0/2     Pending   0          9m25s
virt-launcher-f31-qb66w   2/2     Running   0          16m

[cloud-user@ocp-psi-executor gouyangocp44]$ oc get vmim
NAME                  AGE
f31-migration-6psmq   9m44s
[cloud-user@ocp-psi-executor gouyangocp44]$ oc delete vmim f31-migration-6psmq
virtualmachineinstancemigration.kubevirt.io "f31-migration-6psmq" deleted
^C
[cloud-user@ocp-psi-executor gouyangocp44]$ oc delete pod virt-launcher-f31-kn6sr
pod "virt-launcher-f31-kn6sr" deleted
[cloud-user@ocp-psi-executor gouyangocp44]$

Comment 3 Tomas Jelinek 2020-03-16 06:20:53 UTC
I believe the correct way to use the API is to delete only the vmim, which should do all the cleanup.
Moving to virt for further investigation.
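
To make the expected cleanup concrete, a sketch of what deleting the vmim alone should achieve (object names taken from comment 2):

$ oc delete vmim f31-migration-6psmq   # expected to return promptly
$ oc get pod                           # expected: the Pending target launcher pod is removed
$ oc get vmi                           # expected: the VMI is still Running on its original node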

Comment 4 sgott 2020-03-16 15:55:04 UTC
Guohua, is this bug distinct from https://bugzilla.redhat.com/show_bug.cgi?id=1719190 ?

Comment 5 sgott 2020-03-16 17:21:35 UTC
Upon further inspection, I'm confident this bz is a duplicate. If you feel this is in error, please re-open.

*** This bug has been marked as a duplicate of bug 1719190 ***

Comment 6 Guohua Ouyang 2020-03-17 01:15:39 UTC
yes, it's a duplicate of bug 1719190.

