Description of problem:
Create a VM with a large memory/CPU request and try to migrate it. The migration stays pending because no node is available. Cancelling the migration then has no effect.

$ oc describe pod virt-launcher-f31-8wgck
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/6 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 2 Insufficient memory, 3 Insufficient devices.kubevirt.io/kvm.

Version-Release number of selected component (if applicable):
4.4.0-0.nightly-2020-03-06-170328

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with large memory/cpu
2. Migrate the VM; the migration target pod goes into Pending
3. Cancel the migration

Actual results:
Cancelling the migration does not work

Expected results:
Cancelling the VM migration works

Additional info:
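For reference, a migration like the one in step 2 is typically started by creating a VirtualMachineInstanceMigration (vmim) object. A minimal sketch, assuming the VMI is named f31 (matching the launcher pod names in this report) and the kubevirt.io/v1alpha3 API in use around this release:

$ cat <<EOF | oc create -f -
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  # generateName produces a suffixed name like f31-migration-6psmq below
  generateName: f31-migration-
spec:
  vmiName: f31
EOF

Cancelling is then done by deleting that vmim object, which is exactly what fails here while the target pod is unschedulable.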
Guohua: how does it behave when you try to delete the vmim from the command line?
Deleting the vmim never returns, but deleting the pending target pod does cancel the migration, and the original VM goes back to Running.

$ oc get pod
NAME                      READY   STATUS    RESTARTS   AGE
virt-launcher-f31-kn6sr   0/2     Pending   0          9m25s
virt-launcher-f31-qb66w   2/2     Running   0          16m
[cloud-user@ocp-psi-executor gouyangocp44]$ oc get vmim
NAME                  AGE
f31-migration-6psmq   9m44s
[cloud-user@ocp-psi-executor gouyangocp44]$ oc delete vmim f31-migration-6psmq
virtualmachineinstancemigration.kubevirt.io "f31-migration-6psmq" deleted
^C
[cloud-user@ocp-psi-executor gouyangocp44]$ oc delete pod virt-launcher-f31-kn6sr
pod "virt-launcher-f31-kn6sr" deleted
[cloud-user@ocp-psi-executor gouyangocp44]$
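A delete that is acknowledged ("deleted") but never completes usually means a finalizer is still set on the object and the owning controller is not removing it while the migration is stuck. A possible diagnostic, not part of the original report, using the vmim name above:

$ oc get vmim f31-migration-6psmq -o jsonpath='{.metadata.finalizers}'

If any finalizer is printed, the delete will hang until the controller clears it, which would be consistent with the behavior seen here.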
I believe the correct way to use the API is to delete only the vmim, which should do all the cleanup. Moving to virt for further investigation.
Guohua, is this bug distinct from https://bugzilla.redhat.com/show_bug.cgi?id=1719190 ?
Upon further inspection, I'm confident this bz is a duplicate. If you feel this is in error, please re-open.

*** This bug has been marked as a duplicate of bug 1719190 ***
Yes, it's a duplicate of bug 1719190.