Bug 2010334 - VM is not able to be migrated after failed migration
Summary: VM is not able to be migrated after failed migration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 2.6.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 2.6.8
Assignee: lpivarc
QA Contact: Israel Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-10-04 13:33 UTC by lpivarc
Modified: 2021-11-17 18:40 UTC
CC: 4 users

Fixed In Version: hco-bundle-registry-container-v2.6.8-18 virt-operator-container-v2.6.8-3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-17 18:40:02 UTC
Target Upstream Version:
Embargoed:




Links
GitHub kubevirt/kubevirt pull 6512 (open): Fix domain lookup to ensure target pod of migration is cleanup - 2021-10-04 15:02:33 UTC
Red Hat Product Errata RHSA-2021:4725 - 2021-11-17 18:40:41 UTC

Description lpivarc 2021-10-04 13:33:36 UTC
Description of problem:
The VM cannot be migrated after a failed migration. The previous target pod still exists and never exits.
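
For illustration, a quick way to see the leftover pod (namespace and label below are assumptions, not taken from this report):

# Sketch: list the virt-launcher pods for the VMI; after the failed migration
# the old source pod and the never-exiting target pod both show up.
$ oc get pods -n default -l kubevirt.io=virt-launcher -o wide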


Version-Release number of selected component (if applicable):
2.6.z

Steps to Reproduce:
1. Migrate a VM and make sure the migration fails (e.g., by continuously killing virt-handler on the target node; a sketch follows below).
2. Migrate the VM one more time.
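
A rough sketch of step 1 (migration name, node, and namespaces are placeholders; the loop just keeps deleting the virt-handler pod on the target node until the migration reports Failed):

# Sketch only, not an exact reproduction script.
$ TARGET_NODE=worker-1                      # node the target pod was scheduled to
$ MIG=vm-rhel8-migration-xxxxx              # placeholder migration name
$ until [ "$(oc get virtualmachineinstancemigrations.kubevirt.io ${MIG} -n default \
      -o jsonpath='{.status.phase}')" = "Failed" ]; do
    oc delete pod -n openshift-cnv -l kubevirt.io=virt-handler \
      --field-selector spec.nodeName=${TARGET_NODE} --wait=false
    sleep 5
  done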

Actual results:
The second migration stays pending.

Expected results:
The migration proceeds and succeeds.

Additional info:
The problem is known. The first defect is in how we recognize whether a domain was found: we rely only on libvirt's events, but that is not enough; we need to query the domain and ensure it exists. The second defect is that the "signaled cleanup" isn't considered once the domain is found, so the target pod hangs around indefinitely.
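
As an illustration of the first defect, the domain's presence can be checked directly against libvirt in the stuck target pod instead of being inferred from events (pod name is a placeholder; the libvirt-facing container in virt-launcher is assumed to be named "compute"):

# Sketch: ask libvirt inside the leftover target virt-launcher pod whether a
# domain is actually defined there.
$ oc exec -n default virt-launcher-vm-rhel8-xxxxx -c compute -- virsh list --all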

Comment 2 zhe peng 2021-10-25 06:54:41 UTC
Verified with build CNV-v2.6.8-15
iib:122137

Steps:
1. Create a VM and start it.
2. Do a live migration.
3. Continuously kill virt-handler on the target node until the migration fails.
Check the migration status:
$ oc get virtualmachineinstancemigrations.kubevirt.io vm-rhel8-migration-w49sm -o yaml
....
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachineInstanceMigration
  metadata:
    annotations:
      kubevirt.io/latest-observed-api-version: v1alpha3
      kubevirt.io/storage-observed-api-version: v1alpha3
    creationTimestamp: "2021-10-25T06:41:29Z"
    generateName: vm-rhel8-migration-
    generation: 1
    labels:
      kubevirt.io/vmi-name: vm-rhel8
    name: vm-rhel8-migration-w49sm
    namespace: default
    resourceVersion: "4213467"
    selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstancemigrations/vm-rhel8-migration-w49sm
    uid: b3689a80-3989-4b1c-ad8c-8b965d3b79ed
  spec:
    vmiName: vm-rhel8
  status:
    phase: Failed
....

4. Do a live migration again (a sketch follows below).
5. The migration succeeds; the VM is running on the target node.
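
For reference, step 4 can be done by creating a fresh VirtualMachineInstanceMigration for the same VMI (a sketch mirroring the object above; depending on the client version, virtctl migrate vm-rhel8 may do the same):

# Sketch: namespace and names mirror the YAML above.
$ cat <<EOF | oc create -f -
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  generateName: vm-rhel8-migration-
  namespace: default
spec:
  vmiName: vm-rhel8
EOF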
Moving to VERIFIED.

Comment 8 errata-xmlrpc 2021-11-17 18:40:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 2.6.8 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4725

