Bug 2017394 - After upgrade, live migration is Pending
Summary: After upgrade, live migration is Pending
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Installation
Version: 4.9.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.9.0
Assignee: Simone Tiraboschi
QA Contact: Debarati Basu-Nag
URL:
Whiteboard:
Depends On: 2017573
Blocks: 2017802 2021992
 
Reported: 2021-10-26 12:28 UTC by Ruth Netser
Modified: 2023-09-15 01:20 UTC
CC: 11 users

Fixed In Version: hco-bundle-registry-container-v4.9.0-262
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 2017802 2021992
Environment:
Last Closed: 2021-11-02 16:01:44 UTC
Target Upstream Version:
Embargoed:




Links:
  Github kubevirt/hyperconverged-cluster-operator pull 1577 (Merged): Disable default workloadUpdates strategies (last updated 2021-10-27 06:29:23 UTC)
  Github kubevirt/hyperconverged-cluster-operator pull 1578 (Merged): [release-1.5] Disable default workloadUpdates strategies (last updated 2021-10-27 06:29:23 UTC)
  Red Hat Issue Tracker CNV-14615 (last updated 2023-09-15 01:20:36 UTC)
  Red Hat Product Errata RHSA-2021:4104 (last updated 2021-11-02 16:01:48 UTC)

Description Ruth Netser 2021-10-26 12:28:21 UTC
Description of problem:
After upgrading from CNV 4.8.2 to CNV 4.9.0 (on OCP 4.9.1), VM live migrations sometimes end up in the Pending phase.


Version-Release number of selected component (if applicable):
OCP 4.9.1 + CNV 4.9.0

How reproducible:


Steps to Reproduce:
1. Upgrade to OCP 4.9.1 and OCS 4.9
2. Create VMs for the upgrade; the two VMs relevant to the failure both use runStrategy (rather than 'running'), have evictionStrategy: LiveMigrate, and set an explicit CPU model (so they can be migrated to any node in the cluster). A minimal sketch of these fields follows the steps.
3. Verify live migration works with OCP 4.9.1 and CNV 4.8.2
4. Upgrade to CNV 4.9.0
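
A minimal sketch of the VM spec fields mentioned in step 2 (names and values here mirror the VMI shown further down; this is not the exact manifest used in the run):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: always-run-strategy-vm        # hypothetical name, for illustration only
spec:
  runStrategy: Always                 # runStrategy is set instead of spec.running
  template:
    spec:
      evictionStrategy: LiveMigrate   # evict by live migration
      domain:
        cpu:
          model: Haswell-noTSX-IBRS   # explicit CPU model so the VMI can land on any node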

Actual results:
2 VMs cannot be migrated after the upgrade:

Expected results:
All VMs should migrate successfully (so that they use an updated virt-launcher image)

Additional info:
====
2nd run: all kubevirt-workload-update migrations succeeded; a manual migration of one VM (always-run-strategy-vm-1635246424-6423054-migration-qjvks) remains in Pending.



===============
admission webhook "migration-create-validator.kubevirt.io" denied the request: in-flight migration detected. Active migration job (18a55396-348a-43c0-a04c-092b460bad3f) is currently already in progress for VMI always-run-strategy-vm-1635237973-0614927.

Checking the virtualmachineinstancemigration objects, the ones relevant to those VMs are Pending.
VMI migration status is:
  Migration State:
    Completed:        true
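
(A quick way to list these migration objects and their phases; a sketch using the namespace from this report:)

$ oc get virtualmachineinstancemigrations -n upgrade-operators-product-upgrade-test-upgrade \
    -o custom-columns=NAME:.metadata.name,VMI:.spec.vmiName,PHASE:.status.phase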



$ oc describe -n upgrade-operators-product-upgrade-test-upgrade virtualmachineinstancemigration kubevirt-workload-update-77dsn
Name:         kubevirt-workload-update-77dsn
Namespace:    upgrade-operators-product-upgrade-test-upgrade
Labels:       kubevirt.io/vmi-name=always-run-strategy-vm-1635237973-0614927
Annotations:  kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
              kubevirt.io/workloadUpdateMigration:
API Version:  kubevirt.io/v1
Kind:         VirtualMachineInstanceMigration
Metadata:
  Creation Timestamp:  2021-10-26T09:22:57Z
  Finalizers:
    kubevirt.io/migrationJobFinalize
  Generate Name:  kubevirt-workload-update-
  Generation:     1
  Managed Fields:
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
          f:kubevirt.io/workloadUpdateMigration:
        f:generateName:
      f:spec:
        .:
        f:vmiName:
    Manager:      Go-http-client
    Operation:    Update
    Time:         2021-10-26T09:22:57Z
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:phase:
    Manager:         Go-http-client
    Operation:       Update
    Subresource:     status
    Time:            2021-10-26T09:22:57Z
  Resource Version:  1206670
  UID:               18a55396-348a-43c0-a04c-092b460bad3f
Spec:
  Vmi Name:  always-run-strategy-vm-1635237973-0614927
Status:
  Phase:  Pending
Events:   <none>

VMI:
$ oc describe vmi -n upgrade-operators-product-upgrade-test-upgrade always-run-strategy-vm-1635237973-0614927
Name:         always-run-strategy-vm-1635237973-0614927
Namespace:    upgrade-operators-product-upgrade-test-upgrade
Labels:       kubevirt.io/domain=always-run-strategy-vm-1635237973-0614927
              kubevirt.io/migrationTargetNodeName=ruty-482osbs-plzwb-worker-0-n2qzn
              kubevirt.io/nodeName=ruty-482osbs-plzwb-worker-0-n2qzn
              kubevirt.io/outdatedLauncherImage=
              kubevirt.io/size=tiny
              kubevirt.io/vm=always-run-strategy-vm-1635237973-0614927
Annotations:  kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
              vm.kubevirt.io/flavor: tiny
              vm.kubevirt.io/os: rhel8
              vm.kubevirt.io/workload: server
API Version:  kubevirt.io/v1
Kind:         VirtualMachineInstance
Metadata:
  Creation Timestamp:  2021-10-26T08:49:27Z
  Finalizers:
    foregroundDeleteVirtualMachine
  Generation:  34
  Managed Fields:
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kubevirt.io/nodeName:
      f:status:
        f:guestOSInfo:
          f:id:
          f:kernelRelease:
          f:kernelVersion:
          f:name:
          f:prettyName:
          f:version:
          f:versionId:
        f:migrationMethod:
        f:migrationState:
          f:completed:
          f:endTimestamp:
          f:mode:
          f:startTimestamp:
          f:targetDirectMigrationNodePorts:
            .:
            f:35785:
            f:36251:
            f:45505:
          f:targetNodeAddress:
          f:targetNodeDomainDetected:
        f:nodeName:
        f:phase:
    Manager:      virt-handler
    Operation:    Update
    Time:         2021-10-26T09:11:06Z
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
          f:vm.kubevirt.io/flavor:
          f:vm.kubevirt.io/os:
          f:vm.kubevirt.io/workload:
        f:labels:
          .:
          f:kubevirt.io/domain:
          f:kubevirt.io/migrationTargetNodeName:
          f:kubevirt.io/size:
          f:kubevirt.io/vm:
        f:ownerReferences:
          .:
          k:{"uid":"c82aeffb-b2b2-4337-97ca-5f64bcb9cca0"}:
      f:spec:
        .:
        f:domain:
          .:
          f:cpu:
            .:
            f:cores:
            f:model:
            f:sockets:
            f:threads:
          f:devices:
            .:
            f:disks:
            f:interfaces:
            f:networkInterfaceMultiqueue:
            f:rng:
          f:firmware:
            .:
            f:uuid:
          f:machine:
            .:
            f:type:
          f:resources:
            .:
            f:requests:
              .:
              f:memory:
        f:evictionStrategy:
        f:networks:
        f:terminationGracePeriodSeconds:
        f:volumes:
      f:status:
        .:
        f:activePods:
          .:
          f:3895c7d4-583b-48ae-ba44-9d33575b14b3:
        f:conditions:
        f:guestOSInfo:
        f:launcherContainerImageVersion:
        f:migrationState:
          .:
          f:migrationUid:
          f:sourceNode:
          f:targetNode:
          f:targetPod:
        f:qosClass:
    Manager:      virt-controller
    Operation:    Update
    Time:         2021-10-26T09:11:08Z
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kubevirt.io/outdatedLauncherImage:
      f:status:
        f:interfaces:
        f:volumeStatus:
    Manager:    Go-http-client
    Operation:  Update
    Time:       2021-10-26T10:03:36Z
  Owner References:
    API Version:           kubevirt.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  VirtualMachine
    Name:                  always-run-strategy-vm-1635237973-0614927
    UID:                   c82aeffb-b2b2-4337-97ca-5f64bcb9cca0
  Resource Version:        1249368
  UID:                     fac1dd72-1ccb-4d8d-885c-df97d9d2c4af
Spec:
  Domain:
    Cpu:
      Cores:    1
      Model:    Haswell-noTSX-IBRS
      Sockets:  1
      Threads:  1
    Devices:
      Disks:
        Disk:
          Bus:  virtio
        Name:   always-run-strategy-vm-1635237973-0614927
        Disk:
          Bus:  virtio
        Name:   cloudinitdisk
      Interfaces:
        Mac Address:  02:1c:31:00:00:12
        Masquerade:
        Name:  default
        Bridge:
        Mac Address:                 02:1c:31:00:00:13
        Name:                        br1upgrade
      Network Interface Multiqueue:  true
      Rng:
    Features:
      Acpi:
        Enabled:  true
    Firmware:
      Uuid:  9abfb9d1-4e68-5713-a0a6-792f719a523d
    Machine:
      Type:  pc-q35-rhel8.4.0
    Resources:
      Requests:
        Cpu:          100m
        Memory:       1536Mi
  Eviction Strategy:  LiveMigrate
  Networks:
    Name:  default
    Pod:
    Multus:
      Network Name:                  br1upgrade
    Name:                            br1upgrade
  Termination Grace Period Seconds:  180
  Volumes:
    Data Volume:
      Name:  always-run-strategy-vm-1635237973-0614927
    Name:    always-run-strategy-vm-1635237973-0614927
    Cloud Init No Cloud:
      User Data:  #cloud-config
user: cloud-user
password: password
chpasswd: { expire: False }
ssh_authorized_keys:
 [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCj47ubVnxR16JU7ZfDli3N5QVBAwJBRh2xMryyjk5dtfugo5JIPGB2cyXTqEDdzuRmI+Vkb/A5duJyBRlA+9RndGGmhhMnj8and3wu5/cEb7DkF6ZJ25QV4LQx3K/i57LStUHXRTvruHOZ2nCuVXWqi7wSvz5YcvEv7O8pNF5uGmqHlShBdxQxcjurXACZ1YY0YDJDr3AJai1KF9zehVJODuSbrnOYpThVWGjFuFAnNxbtuZ8EOSougN2aYTf2qr/KFGDHtewIkzZmP6cjzKO5bN3pVbXxmb2Gces/BYHntY4MXBTUqwsmsCRC5SAz14bEP/vsLtrNhjq9vCS+BjMT root]
runcmd: ["sudo sed -i '/^PubkeyAccepted/ s/$/,ssh-rsa/' /etc/crypto-policies/back-ends/opensshserver.config", "sudo sed -i 's/^#\\?PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config", 'sudo systemctl enable sshd', 'sudo systemctl restart sshd']
    Name:  cloudinitdisk
Status:
  Active Pods:
    3895c7d4-583b-48ae-ba44-9d33575b14b3:  ruty-482osbs-plzwb-worker-0-n2qzn
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Status:                True
    Type:                  LiveMigratable
    Last Probe Time:       2021-10-26T08:50:02Z
    Last Transition Time:  <nil>
    Status:                True
    Type:                  AgentConnected
    Last Probe Time:       <nil>
    Last Transition Time:  2021-10-26T09:10:42Z
    Status:                True
    Type:                  Ready
  Guest OS Info:
    Id:              rhel
    Kernel Release:  4.18.0-305.23.1.el8_4.x86_64
    Kernel Version:  #1 SMP Mon Oct 4 14:47:09 EDT 2021
    Name:            Red Hat Enterprise Linux
    Pretty Name:     Red Hat Enterprise Linux 8.4 (Ootpa)
    Version:         8.4
    Version Id:      8.4
  Interfaces:
    Interface Name:  eth0
    Ip Address:      10.131.0.147
    Ip Addresses:
      10.131.0.147
    Mac:                             02:1c:31:00:00:12
    Name:                            default
    Interface Name:                  eth1
    Mac:                             02:1c:31:00:00:13
    Name:                            br1upgrade
  Launcher Container Image Version:  registry.redhat.io/container-native-virtualization/virt-launcher@sha256:9d4c36ee0ecfac2ce3081a73c2c2c7dde07d35f34e147b7f25be077e1ef6e740
  Migration Method:                  BlockMigration
  Migration State:
    Completed:        true
    End Timestamp:    2021-10-26T09:11:06Z
    Migration UID:    3fc30391-45fe-48ab-9c0c-470d24bd7c82
    Mode:             PreCopy
    Source Node:      ruty-482osbs-plzwb-worker-0-n55zk
    Start Timestamp:  2021-10-26T09:11:00Z
    Target Direct Migration Node Ports:
      35785:                      49152
      36251:                      49153
      45505:                      0
    Target Node:                  ruty-482osbs-plzwb-worker-0-n2qzn
    Target Node Address:          10.131.0.3
    Target Node Domain Detected:  true
    Target Pod:                   virt-launcher-always-run-strategy-vm-1635237973-0614927-cc2rk
  Node Name:                      ruty-482osbs-plzwb-worker-0-n2qzn
  Phase:                          Running
  Qos Class:                      Burstable
  Volume Status:
    Name:    always-run-strategy-vm-1635237973-0614927
    Target:  vda
    Name:    cloudinitdisk
    Size:    1048576
    Target:  vdb
Events:
  Type    Reason            Age                   From                         Message
  ----    ------            ----                  ----                         -------
  Normal  SuccessfulCreate  79m                   disruptionbudget-controller  Created PodDisruptionBudget kubevirt-disruption-budget-jrnqg
  Normal  SuccessfulCreate  79m                   virtualmachine-controller    Created virtual machine pod virt-launcher-always-run-strategy-vm-1635237973-0614927-ks7j7
  Normal  Started           79m                   virt-handler                 VirtualMachineInstance started.
  Normal  Created           58m (x41 over 79m)    virt-handler                 VirtualMachineInstance defined.
  Normal  SuccessfulCreate  58m                   disruptionbudget-controller  Created Migration kubevirt-evacuation-t9sg7
  Normal  SuccessfulCreate  58m                   virtualmachine-controller    Created PodDisruptionBudget kubevirt-migration-pdb-kubevirt-evacuation-t9sg7
  Normal  PreparingTarget   57m (x2 over 57m)     virt-handler                 VirtualMachineInstance Migration Target Prepared.
  Normal  PreparingTarget   57m                   virt-handler                 Migration Target is listening at 10.131.0.3, on ports: 45505,35785,36251
  Normal  SuccessfulDelete  57m                   disruptionbudget-controller  Deleted PodDisruptionBudget kubevirt-migration-pdb-kubevirt-evacuation-t9sg7
  Normal  Created           55m (x15 over 57m)    virt-handler                 VirtualMachineInstance defined.
  Normal  Created           53m (x2 over 54m)     virt-handler                 VirtualMachineInstance defined.
  Normal  Created           50m (x8 over 53m)     virt-handler                 VirtualMachineInstance defined.
  Normal  Created           49m                   virt-handler                 VirtualMachineInstance defined.
  Normal  SuccessfulCreate  45m                   workload-update-controller   Created Migration kubevirt-workload-update-77dsn for automated workload update
  Normal  Created           3m17s (x14 over 49m)  virt-handler                 VirtualMachineInstance defined.

Comment 2 David Vossel 2021-10-26 15:02:41 UTC
------------------
What's happening?
-----------------

- CNV by default limits in-flight live migrations to 2 per source node
- the QE environment has 2 migrations stuck in "Pending" because anti-affinity rules prevent the target pods from scheduling
- all future migrations are blocked until these two migrations complete

Here's the warning indicating why the target pods can't schedule --- "Warning  FailedScheduling  3h52m  default-scheduler  0/6 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 Insufficient bridge.network.kubevirt.io/upg-br-mark, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
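
(One way to spot target pods stuck like this; a sketch, with the namespace taken from this report and the pod name left as a placeholder:)

$ oc get pods -n upgrade-operators-product-upgrade-test-upgrade --field-selector=status.phase=Pending
$ oc describe pod -n upgrade-operators-product-upgrade-test-upgrade <pending-virt-launcher-pod>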

--------------------------
Why is this happening now?
--------------------------

How did we get here when migrations in this environment worked in the past?

Automated workload updates will attempt to migrate any VMI that is capable of being migrated (the migratable condition on the VMI set to true). These two VMIs, which are stuck in Pending due to anti-affinity, were never migrated in the past because they did not have EvictionStrategy: LiveMigrate... But with automated workload updates these VMIs are technically capable of being migrated, so a migration is attempted.

What this means for people on previous CNV versions is that VMIs which never got migrated during past OCP updates (they were hard shut down instead) may now be migrated during automated workload updates. If those migrations get stuck in Pending, like in this QE environment, then all future migrations are blocked until the affected VMIs are either migrated or restarted, or the global in-flight migration limit is increased.
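
(If raising the global limit is the chosen workaround, the relevant knobs live under the HCO CR's liveMigrationConfig; the field names match the HCO spec pasted in comment 7, while the HCO name/namespace and the values below are only illustrative:)

$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type=merge \
    -p '{"spec":{"liveMigrationConfig":{"parallelMigrationsPerCluster":10,"parallelOutboundMigrationsPerNode":4}}}'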

---------------
Path forward.
---------------

The root cause of why migrations are being blocked is not new. It has always been possible for migration target pods to get stuck in Pending and block new migrations at the global level. What is new is that automated workload updates will attempt to migrate VMIs which may not have been migrated in the past, which can lead to this scenario.

Here's the recommendation for moving forward.

4.9 - maintain status quo (don't make the situation worse)
- Disable automatic opt-in of workload updates

4.9.1 - fix the blocked migration issue and enable workload updates.
- Add logic to cancel pending migrations after 10m (which will cause a new randomly selected VMI to migrate)
- Add pretty migration CLI output, better migration events, and better logging to indicate when migrations are blocked due to global limits (this was way too difficult to understand)
- Enable automatic workload updates now that pending migrations can timeout, allowing migrations to be unblocked.

If we take this recommendation, 4.9 will maintain the previous behavior (the issue exists as it always has, but isn't exacerbated by workload updates) and 4.9.1 will enable workload updates once we have logic to automatically unblock migrations that are stuck in Pending indefinitely.

Thoughts?

Comment 3 Simone Tiraboschi 2021-10-26 16:41:03 UTC
(In reply to David Vossel from comment #2)
> If we take this recommendation, 4.9 will maintain previous behavior (issue
> exists as it always has, but isn't exasperated by workload updates) and
> 4.9.1 will enable workload updates once we have logic to automatically
> unblock when migrations are stuck in pending indefinitely.
> 
> Thoughts?

This is not going to solve the issue, but it will at least make it far less impactful, and at least no worse than it was before.

I sent a couple of upstream PRs to change the HCO default for workloadUpdateMethods to [].
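
(For reference, the resulting HCO default would roughly look like this; a sketch assuming the field path spec.workloadUpdateStrategy.workloadUpdateMethods in the HyperConverged CR:)

spec:
  workloadUpdateStrategy:
    workloadUpdateMethods: []   # empty list: no automatic workload updates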

Comment 4 sgott 2021-10-26 16:42:50 UTC
David,

You suggest a number of reasonable actions in Comment #2. A timeout and re-enabling migrations are both immediately straightforward.

What do you feel the level of effort is for better CLI output?

Simone, since the immediate problem is being handled at the HCO level, is Virt the best component?

Comment 5 David Vossel 2021-10-26 17:24:25 UTC
> What do you feel the level of effort is for better CLI output?

Low. It is a matter of adding more fields to the migration CRD's printable columns. I'd like to see the target VM name and Phase fields added so I can easily pick out which migration belongs to which VM when I run `oc get migrations`.

Otherwise, we're stuck trying to match potentially hundreds of migration objects with VMIs by introspecting the migration YAML.
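
(A sketch of what such columns could look like on the VirtualMachineInstanceMigration CRD; the paths mirror the spec.vmiName and status.phase fields shown above, and this is not the merged implementation:)

additionalPrinterColumns:
- name: Phase
  type: string
  jsonPath: .status.phase
- name: VMI
  type: string
  jsonPath: .spec.vmiName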

Comment 6 Simone Tiraboschi 2021-10-26 19:05:32 UTC
We should file a doc bug to amend the 4.9.0 documentation and the release notes.

Comment 7 Debarati Basu-Nag 2021-10-31 14:59:01 UTC
@kbidarka already validated this:
===========================
Manual upgrade from 4.8.2 to 4.9.0 with 70 Cirros VMIs was successful.

Each VM had 256Mi of memory, and on a single node drain the workload would load each of the 2 remaining nodes up to 95%.
Cirros VMs can probably go even lower than 256Mi; maybe 128Mi would also do. Something to try next time.

Steps Performed:
1) Installed CNV 4.8.2 on OCP 4.8
2) Created 70 VMs with Cirros images and started them successfully.
3) Upgraded OCP 4.8 to 4.9.4
4) CNV 4.8.2 was running successfully on OCP 4.9.4
5) Live migrations were fine during the OCP upgrade to 4.9.4
6) Upgraded CNV from 4.8.2 to 4.9.0
7) All CNV components upgraded successfully to 4.9.0
8) Since "LiveMigrate" was dropped as the default behavior, the virt-launcher pods were still using the 4.8.2 version.
9) Patched the HCO CR with "LiveMigrate" to trigger a mass VMI migration (see the patch sketch after these steps).
After that, all the virt-launcher pods live migrated and were also upgraded successfully to virt-launcher/images/v4.9.0-60.
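
(Roughly, the patch from step 9 would look like the following; this assumes the default HCO name/namespace and the spec.workloadUpdateStrategy field path, not the exact command used in the run:)

$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type=merge \
    -p '{"spec":{"workloadUpdateStrategy":{"workloadUpdateMethods":["LiveMigrate"]}}}'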

http://pastebin.test.redhat.com/1004548
============================
We also have a successful upgrade run via a Jenkins job:
============================
https://main-jenkins-csb-cnvqe.apps.ocp-c1.prod.psi.redhat.com/job/cnv-tests-runner/1294/testReport/

Validated that automatic opt-in of workload updates is disabled:
============================
spec:
    certConfig:
      ca:
        duration: 48h0m0s
        renewBefore: 24h0m0s
      server:
        duration: 24h0m0s
        renewBefore: 12h0m0s
    featureGates:
      sriovLiveMigration: false
      withHostPassthroughCPU: false
    infra: {}
    liveMigrationConfig:
      completionTimeoutPerGiB: 800
      parallelMigrationsPerCluster: 5
      parallelOutboundMigrationsPerNode: 2
      progressTimeout: 150
    workloads: {}
  status:
===========================
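
(The HCO spec above can be fetched with something like the following; this assumes the default HCO name/namespace:)

$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o yaml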

Comment 10 errata-xmlrpc 2021-11-02 16:01:44 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.9.0 Images security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4104

Comment 11 Red Hat Bugzilla 2023-09-15 01:16:38 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

