Bug 2010334

Summary: VM is not able to be migrated after failed migration
Product: Container Native Virtualization (CNV) Reporter: lpivarc
Component: Virtualization    Assignee: lpivarc
Status: CLOSED ERRATA QA Contact: Israel Pinto <ipinto>
Severity: high Docs Contact:
Priority: high    
Version: 2.6.7    CC: cnv-qe-bugs, fdeutsch, sgott, zpeng
Target Milestone: ---   
Target Release: 2.6.8   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: hco-bundle-registry-container-v2.6.8-18 virt-operator-container-v2.6.8-3 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2021-11-17 18:40:02 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description lpivarc 2021-10-04 13:33:36 UTC
Description of problem:
The VM cannot be migrated after a failed migration. The previous target pod still exists and never exits.


Version-Release number of selected component (if applicable):
2.6.z

How reproducible:
1. Migrate a VM and make sure the migration fails (e.g., by continuously killing virt-handler on the target node; see the sketch after these steps)
2. Migrate the VM one more time
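A rough sketch of one way to force the failure in step 1, assuming a typical CNV 2.6 deployment where virt-handler runs as a DaemonSet in the openshift-cnv namespace with the label kubevirt.io=virt-handler (namespace, label, and node name may differ in your environment):

TARGET_NODE=<target-node-name>
while true; do
  # Deleting the DaemonSet pod kills virt-handler on the target node; the
  # DaemonSet recreates it, so repeating the delete keeps disrupting the
  # migration target while the migration is in flight.
  oc delete pod -n openshift-cnv -l kubevirt.io=virt-handler \
    --field-selector spec.nodeName="$TARGET_NODE" --wait=false
  sleep 5
done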

Actual results:
The second migration stays pending.

Expected results:
The migration proceeds and succeeds.

Additional info:
The problem is known. The first defect is in how we recognize whether a domain was found: we rely only on libvirt's events, which is not enough; we need to query the domain and ensure it exists. The second defect is that the "signaled cleanup" isn't considered once the domain is found, so the target pod hangs around indefinitely.
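For illustration only, the stuck state can be observed from the CLI roughly as follows (this assumes the usual kubevirt.io=virt-launcher pod label and "compute" container name; the exact namespace and pod name are environment-specific):

# After the failed migration, the old target virt-launcher pod is expected to
# still show up as Running alongside the source pod.
$ oc get pods -n default -l kubevirt.io=virt-launcher -o wide

# Querying libvirt inside the lingering target pod shows whether a domain was
# ever defined there; this is a manual analogue of the direct domain query the
# comment above says virt-handler needs to perform instead of relying on
# libvirt events alone.
$ oc exec -n default <stale-target-virt-launcher-pod> -c compute -- virsh list --all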

Comment 2 zhe peng 2021-10-25 06:54:41 UTC
Verified with build CNV-v2.6.8-15
iib:122137

Steps:
1. Create a VM and start it.
2. Do a live migration.
3. Continuously kill virt-handler on the target node until the migration fails.
Check the migration status:
$ oc get virtualmachineinstancemigrations.kubevirt.io vm-rhel8-migration-w49sm -o yaml
....
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachineInstanceMigration
  metadata:
    annotations:
      kubevirt.io/latest-observed-api-version: v1alpha3
      kubevirt.io/storage-observed-api-version: v1alpha3
    creationTimestamp: "2021-10-25T06:41:29Z"
    generateName: vm-rhel8-migration-
    generation: 1
    labels:
      kubevirt.io/vmi-name: vm-rhel8
    name: vm-rhel8-migration-w49sm
    namespace: default
    resourceVersion: "4213467"
    selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstancemigrations/vm-rhel8-migration-w49sm
    uid: b3689a80-3989-4b1c-ad8c-8b965d3b79ed
  spec:
    vmiName: vm-rhel8
  status:
    phase: Failed
....

4. Do a live migration again.
5. The migration succeeds and the VM is running on the target node.
Moving to VERIFIED.

Comment 8 errata-xmlrpc 2021-11-17 18:40:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 2.6.8 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4725