Bug 2236344 - Unable to perform EUS to EUS upgrade between 4.12 and 4.14 with workloads
Summary: Unable to perform EUS to EUS upgrade between 4.12 and 4.14 with workloads
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.14.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.14.0
Assignee: lpivarc
QA Contact: Debarati Basu-Nag
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-31 01:48 UTC by Debarati Basu-Nag
Modified: 2023-11-08 14:06 UTC (History)
3 users (show)

Fixed In Version: v4.14.0.rhel9-1911
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 14:06:16 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
one virtlauncher pod log (70.76 KB, text/plain)
2023-08-31 01:48 UTC, Debarati Basu-Nag
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt kubevirt pull 10381 0 None open [release-1.0] SCC: Re-introduce CAP_SYS_PTRACE 2023-09-06 13:28:09 UTC
Red Hat Issue Tracker CNV-32690 0 None None None 2023-09-06 13:06:14 UTC
Red Hat Product Errata RHSA-2023:6817 0 None None None 2023-11-08 14:06:27 UTC

Description Debarati Basu-Nag 2023-08-31 01:48:07 UTC
Created attachment 1986222 [details]
one virtlauncher pod log

Description of problem: During the EUS->EUS upgrade between 4.12 and 4.14 (brew.registry.redhat.io/rh-osbs/iib:566591), after CNV is upgraded to 4.14 and the workload update strategy is set back to LiveMigrate, automatic workload updates fail for all live-migratable VMs.


Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. pause the worker mcp
2. turn off workload updates
3. upgrade to OCP 4.13
4. upgrade CNV to 4.13, up to the latest z-stream
5. upgrade to OCP 4.14
6. upgrade CNV to 4.14, up to the latest z-stream
7. turn the workload update strategy back on to LiveMigrate
8. unpause the worker mcp (example commands for steps 1, 2, 7, and 8 follow this list)
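A minimal sketch of those commands, assuming the default HyperConverged CR name (kubevirt-hyperconverged) and the stock worker MachineConfigPool; adjust names for the actual cluster:

# step 1: pause the worker MachineConfigPool so node updates/reboots are deferred
oc patch machineconfigpool/worker --type merge -p '{"spec":{"paused":true}}'

# step 2: turn off automatic workload updates on the HyperConverged CR
oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge \
  -p '{"spec":{"workloadUpdateStrategy":{"workloadUpdateMethods":[]}}}'

# step 7: re-enable automatic workload updates via live migration
oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge \
  -p '{"spec":{"workloadUpdateStrategy":{"workloadUpdateMethods":["LiveMigrate"]}}}'

# step 8: unpause the worker MachineConfigPool so the nodes can drain and update
oc patch machineconfigpool/worker --type merge -p '{"spec":{"paused":false}}'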

Actual results:
After performing the above steps, at step 7 all VMIMs fail:
================
test-upgrade-namespace    kubevirt-workload-update-462pk   Failed            always-run-strategy-vm-1693420081-0430818
test-upgrade-namespace    kubevirt-workload-update-gcx2q   Failed            always-run-strategy-vm-1693420081-0430818
test-upgrade-namespace    kubevirt-workload-update-qvbqv   Failed            always-run-strategy-vm-1693420081-0430818
test-upgrade-namespace    kubevirt-workload-update-st7kl   PreparingTarget   always-run-strategy-vm-1693420081-0430818
test-upgrade-namespace    kubevirt-workload-update-zpxkv   Failed            always-run-strategy-vm-1693420081-0430818
test-upgrade-namespace    kubevirt-workload-update-zrvsc   Failed            always-run-strategy-vm-1693420081-0430818
================

No successful vmim:
================
[cnv-qe-jenkins@cnv-qe-infra-01 eus]$ oc get vmim -A | grep -v Failed
NAMESPACE                 NAME                             PHASE             VMI
kmp-enabled-for-upgrade   kubevirt-workload-update-jv8wq   Scheduling        vm-upgrade-a-1693420859-1588397
kmp-enabled-for-upgrade   kubevirt-workload-update-p4djk   Pending           vm-upgrade-b-1693420866-669033
test-upgrade-namespace    kubevirt-evacuation-6gsch        PreparingTarget   vm-for-product-upgrade-nfs-1693419816-450729
test-upgrade-namespace    kubevirt-workload-update-48g84   Scheduling        vmb-macspoof-1693420728-3208427
test-upgrade-namespace    kubevirt-workload-update-zflrn   PreparingTarget   manual-run-strategy-vm-1693420080-612376
[cnv-qe-jenkins@cnv-qe-infra-01 eus]$ 
=================
Snippet from the virt-launcher pod log; the full log is attached.
================
{"component":"virt-launcher","level":"info","msg":"Thread 32 (rpc-virtqemud) finished job remoteDispatchConnectListAllDomains with ret=0","pos":"virThreadJobClear:118","subcomponent":"libvirt","thread":"32","timestamp":"2023-08-31T01:27:23.889000Z"}
panic: timed out waiting for domain to be defined
{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 12 with status 512","pos":"virt-launcher-monitor.go:125","timestamp":"2023-08-31T01:27:32.893277Z"}
{"component":"virt-launcher-monitor","level":"error","msg":"dirty virt-launcher shutdown: exit-code 2","pos":"virt-launcher-monitor.go:143","timestamp":"2023-08-31T01:27:32.893435Z"}
================
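(A sketch of how such a log can be pulled from one of the failed launcher pods listed below; the pod name is just an example from this cluster:)

# fetch the compute container log of a failed virt-launcher pod
oc logs -n test-upgrade-namespace virt-launcher-always-run-strategy-vm-1693420081-0430818-bpk4s -c compute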
I see many failed pods:
[cnv-qe-jenkins@cnv-qe-infra-01 eus]$ oc get pods -n test-upgrade-namespace | grep always
virt-launcher-always-run-strategy-vm-1693420081-0430818-bpk4s     0/1     Error       0          21m
virt-launcher-always-run-strategy-vm-1693420081-0430818-gjh69     0/1     Error       0          84m
virt-launcher-always-run-strategy-vm-1693420081-0430818-kmdpg     1/1     Running     0          5m32s
virt-launcher-always-run-strategy-vm-1693420081-0430818-kvh84     0/1     Error       0          89m
virt-launcher-always-run-strategy-vm-1693420081-0430818-qwjx7     0/1     Error       0          53m
virt-launcher-always-run-strategy-vm-1693420081-0430818-tnlkz     1/1     Running     0          7h13m
virt-launcher-always-run-strategy-vm-1693420081-0430818-vjppc     0/1     Error       0          94m
virt-launcher-always-run-strategy-vm-1693420081-0430818-wkbxx     0/1     Error       0          38m

================
Please note there are two running virt-launcher pods per VM.

The virt-controller log is flooded with these messages:
===========
{"component":"virt-controller","kind":"","level":"error","msg":"failed to sync dynamic pod labels during sync: pods \"virt-launcher-always-run-strategy-vm-1693420081-0430818-tnlkz\" is forbidden: unable to validate against any security context constraint: [provider \"anyuid\": Forbidden: not usable by user or serviceaccount, provider \"pipelines-scc\": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.seLinuxOptions.level: Invalid value: \"\": must be s0:c28,c12, provider restricted-v2: .spec.securityContext.seLinuxOptions.type: Invalid value: \"virt_launcher.process\": must be , provider restricted-v2: .containers[0].runAsUser: Invalid value: 107: must be in the ranges: [1000780000, 1000789999], provider restricted-v2: .containers[0].seLinuxOptions.level: Invalid value: \"\": must be s0:c28,c12, provider restricted-v2: .containers[0].seLinuxOptions.type: Invalid value: \"virt_launcher.process\": must be , provider restricted-v2: .containers[0].capabilities.add: Invalid value: \"SYS_PTRACE\": capability may not be added, provider \"restricted\": Forbidden: not usable by user or serviceaccount, provider \"containerized-data-importer\": Forbidden: not usable by user or serviceaccount, provider \"nonroot-v2\": Forbidden: not usable by user or serviceaccount, provider \"nonroot\": Forbidden: not usable by user or serviceaccount, provider \"hostmount-anyuid\": Forbidden: not usable by user or serviceaccount, provider kubevirt-controller: .containers[0].capabilities.add: Invalid value: \"SYS_PTRACE\": capability may not be added, provider \"machine-api-termination-handler\": Forbidden: not usable by user or serviceaccount, provider \"bridge-marker\": Forbidden: not usable by user or serviceaccount, provider \"hostnetwork-v2\": Forbidden: not usable by user or serviceaccount, provider \"hostnetwork\": Forbidden: not usable by user or serviceaccount, provider \"hostaccess\": Forbidden: not usable by user or serviceaccount, provider \"nfd-worker\": Forbidden: not usable by user or serviceaccount, provider \"hostpath-provisioner-csi\": Forbidden: not usable by user or serviceaccount, provider \"linux-bridge\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-gpu-feature-discovery\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-mig-manager\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-node-status-exporter\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-operator-validator\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-sandbox-validator\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-vgpu-manager\": Forbidden: not usable by user or serviceaccount, provider \"ovs-cni-marker\": Forbidden: not usable by user or serviceaccount, provider \"kubevirt-handler\": Forbidden: not usable by user or serviceaccount, provider \"rook-ceph\": Forbidden: not usable by user or serviceaccount, provider \"node-exporter\": Forbidden: not usable by user or serviceaccount, provider \"rook-ceph-csi\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-dcgm\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-dcgm-exporter\": Forbidden: not usable by user or serviceaccount, provider \"privileged\": Forbidden: not usable by user or 
serviceaccount]","name":"virt-launcher-always-run-strategy-vm-1693420081-0430818-tnlkz","namespace":"test-upgrade-namespace","pos":"vmi.go:458","timestamp":"2023-08-31T01:27:41.786479Z","uid":"c17c47ec-f4f6-47bc-a627-057aef26042c"}
{"component":"virt-controller","level":"info","msg":"reenqueuing VirtualMachineInstance test-upgrade-namespace/always-run-strategy-vm-1693420081-0430818","pos":"vmi.go:322","reason":"error syncing labels to pod: pods \"virt-launcher-always-run-strategy-vm-1693420081-0430818-tnlkz\" is forbidden: unable to validate against any security context constraint: [provider \"anyuid\": Forbidden: not usable by user or serviceaccount, provider \"pipelines-scc\": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.seLinuxOptions.level: Invalid value: \"\": must be s0:c28,c12, provider restricted-v2: .spec.securityContext.seLinuxOptions.type: Invalid value: \"virt_launcher.process\": must be , provider restricted-v2: .containers[0].runAsUser: Invalid value: 107: must be in the ranges: [1000780000, 1000789999], provider restricted-v2: .containers[0].seLinuxOptions.level: Invalid value: \"\": must be s0:c28,c12, provider restricted-v2: .containers[0].seLinuxOptions.type: Invalid value: \"virt_launcher.process\": must be , provider restricted-v2: .containers[0].capabilities.add: Invalid value: \"SYS_PTRACE\": capability may not be added, provider \"restricted\": Forbidden: not usable by user or serviceaccount, provider \"containerized-data-importer\": Forbidden: not usable by user or serviceaccount, provider \"nonroot-v2\": Forbidden: not usable by user or serviceaccount, provider \"nonroot\": Forbidden: not usable by user or serviceaccount, provider \"hostmount-anyuid\": Forbidden: not usable by user or serviceaccount, provider kubevirt-controller: .containers[0].capabilities.add: Invalid value: \"SYS_PTRACE\": capability may not be added, provider \"machine-api-termination-handler\": Forbidden: not usable by user or serviceaccount, provider \"bridge-marker\": Forbidden: not usable by user or serviceaccount, provider \"hostnetwork-v2\": Forbidden: not usable by user or serviceaccount, provider \"hostnetwork\": Forbidden: not usable by user or serviceaccount, provider \"hostaccess\": Forbidden: not usable by user or serviceaccount, provider \"nfd-worker\": Forbidden: not usable by user or serviceaccount, provider \"hostpath-provisioner-csi\": Forbidden: not usable by user or serviceaccount, provider \"linux-bridge\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-gpu-feature-discovery\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-mig-manager\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-node-status-exporter\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-operator-validator\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-sandbox-validator\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-vgpu-manager\": Forbidden: not usable by user or serviceaccount, provider \"ovs-cni-marker\": Forbidden: not usable by user or serviceaccount, provider \"kubevirt-handler\": Forbidden: not usable by user or serviceaccount, provider \"rook-ceph\": Forbidden: not usable by user or serviceaccount, provider \"node-exporter\": Forbidden: not usable by user or serviceaccount, provider \"rook-ceph-csi\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-dcgm\": Forbidden: not usable by user or serviceaccount, provider \"nvidia-dcgm-exporter\": Forbidden: not usable by user or serviceaccount, provider \"privileged\": Forbidden: not usable by user or serviceaccount]","timestamp":"2023-08-31T01:27:41.786552Z"}
{"component":"virt-controller","kind":"","level":"info","msg":"Marked Migration test-upgrade-namespace/kubevirt-workload-update-gcx2q failed on vmi due to target pod disappearing before migration kicked off.","name":"always-run-strategy-vm-1693420081-0430818","namespace":"test-upgrade-namespace","pos":"migration.go:827","timestamp":"2023-08-31T01:27:43.846173Z","uid":"ace4d971-9e13-4229-b322-d529e3216f95"}
==================
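The rejected capability (SYS_PTRACE) matches the linked kubevirt PR "[release-1.0] SCC: Re-introduce CAP_SYS_PTRACE". One way to check whether the kubevirt-controller SCC on a cluster still allows that capability (a sketch, assuming the default SCC name):

# list the capabilities the kubevirt-controller SCC allows pods to add
oc get scc kubevirt-controller -o jsonpath='{.allowedCapabilities}{"\n"}'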
After unpausing the worker MCP, it fails to evict these VMs from the node, so the worker nodes never finish updating. I see these error messages in the machine-config-controller log:
==================
I0831 01:45:11.260097       1 drain_controller.go:350] Previous node drain found. Drain has been going on for 1.539504235401111 hours
E0831 01:45:11.260106       1 drain_controller.go:352] node cnv-qe-infra-33.cnvqe2.lab.eng.rdu2.redhat.com: drain exceeded timeout: 1h0m0s. Will continue to retry.
I0831 01:45:11.260120       1 drain_controller.go:173] node cnv-qe-infra-33.cnvqe2.lab.eng.rdu2.redhat.com: initiating drain
E0831 01:45:14.445756       1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: cnv-tests-utilities/utility-8wtr5, nvidia-gpu-operator/nvidia-sandbox-validator-jjx7c, nvidia-gpu-operator/nvidia-vfio-manager-j4ll8, openshift-cluster-node-tuning-operator/tuned-8cb8s, openshift-cnv/bridge-marker-cmrpw, openshift-cnv/hostpath-provisioner-csi-r9lmh, openshift-cnv/kube-cni-linux-bridge-plugin-lfh2m, openshift-cnv/virt-handler-x4zd4, openshift-dns/dns-default-mhzfx, openshift-dns/node-resolver-d9ctn, openshift-image-registry/node-ca-dxj5d, openshift-ingress-canary/ingress-canary-gpg62, openshift-local-storage/diskmaker-manager-vszh7, openshift-machine-config-operator/machine-config-daemon-p4n8n, openshift-monitoring/node-exporter-mq2hg, openshift-multus/multus-89nzg, openshift-multus/multus-additional-cni-plugins-849q2, openshift-multus/network-metrics-daemon-zbgcj, openshift-network-diagnostics/network-check-target-vd9qt, openshift-nfd/nfd-worker-dqxqf, openshift-nmstate/nmstate-handler-tj5pr, openshift-operators/istio-cni-node-v2-3-kkqzr, openshift-ovn-kubernetes/ovnkube-node-lznhh, openshift-storage/csi-cephfsplugin-jnfmt, openshift-storage/csi-rbdplugin-mzml8
I0831 01:45:14.447178       1 drain_controller.go:144] evicting pod test-upgrade-namespace/virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv
E0831 01:45:14.478812       1 drain_controller.go:144] error when evicting pods/"virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv" -n "test-upgrade-namespace" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0831 01:45:19.478943       1 drain_controller.go:144] evicting pod test-upgrade-namespace/virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv
E0831 01:45:19.493153       1 drain_controller.go:144] error when evicting pods/"virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv" -n "test-upgrade-namespace" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0831 01:45:24.496967       1 drain_controller.go:144] evicting pod test-upgrade-namespace/virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv
E0831 01:45:24.523950       1 drain_controller.go:144] error when evicting pods/"virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv" -n "test-upgrade-namespace" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0831 01:45:29.524635       1 drain_controller.go:144] evicting pod test-upgrade-namespace/virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv
E0831 01:45:29.589530       1 drain_controller.go:144] error when evicting pods/"virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv" -n "test-upgrade-namespace" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0831 01:45:34.589574       1 drain_controller.go:144] evicting pod test-upgrade-namespace/virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv
E0831 01:45:34.616233       1 drain_controller.go:144] error when evicting pods/"virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv" -n "test-upgrade-namespace" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0831 01:45:39.616498       1 drain_controller.go:144] evicting pod test-upgrade-namespace/virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv
E0831 01:45:39.633715       1 drain_controller.go:144] error when evicting pods/"virt-launcher-vm-for-product-upgrade-nfs-1693419816-450729t5ppv" -n "test-upgrade-namespace" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
[cnv-qe-jenkins@cnv-qe-infra-01 eus]$ 
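(The evictions are blocked by the PodDisruptionBudgets KubeVirt maintains for live-migratable VMIs; a sketch of how to inspect them in the affected namespace:)

# show the PDBs protecting the VMIs and how many disruptions each currently allows
oc get pdb -n test-upgrade-namespace -o custom-columns=NAME:.metadata.name,MIN-AVAILABLE:.spec.minAvailable,ALLOWED-DISRUPTIONS:.status.disruptionsAllowed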


Expected results:
EUS upgrade completes successfully

Additional info:
Live cluster is available
Must gather can be found here: https://drive.google.com/drive/folders/1q4ipWMM2Z4jti9yJK_HCnFswDfEHkptV?usp=drive_link

Comment 1 Kedar Bidarkar 2023-08-31 09:31:23 UTC
During the EUS to EUS upgrade from 4.12.z to 4.14.0, as seen below:

CNV 4.14 virt-launcher pods are trying to run on OCP 4.12 nodes.

1) OCP 4.12 

[kbidarka@kbidarka-thinkpadt14sgen2i ocp-cnv-scripts]$ oc debug node/cnv-qe-infra-32.cnvqe2.lab.eng.rdu2.redhat.com
Temporary namespace openshift-debug-ss9m5 is created for debugging node...
Starting pod/cnv-qe-infra-32cnvqe2labengrdu2redhatcom-debug-9m7tx ...
To use host binaries, run `chroot /host`
Pod IP: 10.1.156.40
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# semodule -l | grep virt_launcher
virt_launcher

sh-4.4# cat /etc/redhat-release 
Red Hat Enterprise Linux CoreOS release 4.12

2) The failed virt-launcher pod version

./fetch_pod_version_info_icsp.sh virt-launcher-always-run-strategy-vm-1693420081-0430818-7lsfj test-upgrade-namespace
"url": "https://access.redhat.com/containers/#/registry.access.redhat.com/container-native-virtualization/virt-launcher-rhel9/images/v4.14.0-379"



Also, looking at the log in the description above, it appears we are hitting an SELinux issue.

---------------------------------------------------------------------------------------
To provide more explicit info from the cluster:

[kbidarka@kbidarka-thinkpadt14sgen2i auth]$ oc get nodes -o wide 
NAME                                             STATUS                     ROLES                  AGE     VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                                        KERNEL-VERSION                 CONTAINER-RUNTIME
cnv-qe-infra-29.cnvqe2.lab.eng.rdu2.redhat.com   Ready                      control-plane,master   2d14h   v1.27.4+d424288    x.x.x.x   <none>        Red Hat Enterprise Linux CoreOS 414.92.202308281054-0 (Plow)    5.14.0-284.28.1.el9_2.x86_64   cri-o://1.27.1-6.rhaos4.14.gitc2c9f36.el9
cnv-qe-infra-30.cnvqe2.lab.eng.rdu2.redhat.com   Ready                      control-plane,master   2d14h   v1.27.4+d424288    x.x.x.x   <none>        Red Hat Enterprise Linux CoreOS 414.92.202308281054-0 (Plow)    5.14.0-284.28.1.el9_2.x86_64   cri-o://1.27.1-6.rhaos4.14.gitc2c9f36.el9
cnv-qe-infra-31.cnvqe2.lab.eng.rdu2.redhat.com   Ready                      control-plane,master   2d14h   v1.27.4+d424288    x.x.x.x   <none>        Red Hat Enterprise Linux CoreOS 414.92.202308281054-0 (Plow)    5.14.0-284.28.1.el9_2.x86_64   cri-o://1.27.1-6.rhaos4.14.gitc2c9f36.el9
cnv-qe-infra-32.cnvqe2.lab.eng.rdu2.redhat.com   Ready                      worker                 2d13h   v1.25.12+26bab08   x.x.x.x  <none>        Red Hat Enterprise Linux CoreOS 412.86.202308260032-0 (Ootpa)   4.18.0-372.70.1.el8_6.x86_64   cri-o://1.25.4-4.rhaos4.12.gitb9319a2.el8
cnv-qe-infra-33.cnvqe2.lab.eng.rdu2.redhat.com   Ready,SchedulingDisabled   worker                 2d13h   v1.25.12+26bab08   x.x.x.x   <none>        Red Hat Enterprise Linux CoreOS 412.86.202308260032-0 (Ootpa)   4.18.0-372.70.1.el8_6.x86_64   cri-o://1.25.4-4.rhaos4.12.gitb9319a2.el8
cnv-qe-infra-34.cnvqe2.lab.eng.rdu2.redhat.com   Ready                      worker                 2d13h   v1.25.12+26bab08   x.x.x.x   <none>        Red Hat Enterprise Linux CoreOS 412.86.202308260032-0 (Ootpa)   4.18.0-372.70.1.el8_6.x86_64   cri-o://1.25.4-4.rhaos4.12.gitb9319a2.el8


[kbidarka@kbidarka-thinkpadt14sgen2i auth]$ oc get csv -n openshift-cnv 
NAME                                       DISPLAY                                          VERSION                   REPLACES                                   PHASE
kubevirt-hyperconverged-operator.v4.14.0   OpenShift Virtualization                         4.14.0                    kubevirt-hyperconverged-operator.v4.13.4   Succeeded



[kbidarka@kbidarka-thinkpadt14sgen2i auth]$ oc get ip -A
NAMESPACE                 NAME            CSV                                               APPROVAL    APPROVED
nvidia-gpu-operator       install-f57pp   gpu-operator-certified.v23.6.0                    Automatic   true
openshift-cnv             install-4wn76   kubevirt-hyperconverged-operator.v4.13.2          Manual      true
openshift-cnv             install-9bsl4   kubevirt-hyperconverged-operator.v4.13.4          Manual      true
openshift-cnv             install-srm28   kubevirt-hyperconverged-operator.v4.13.3          Manual      true
openshift-cnv             install-sz25x   kubevirt-hyperconverged-operator.v4.13.1          Manual      true
openshift-cnv             install-xrwpm   kubevirt-hyperconverged-operator.v4.14.0          Manual      true
openshift-local-storage   install-452wk   local-storage-operator.v4.12.0-202307182142       Automatic   true

Comment 2 Kedar Bidarkar 2023-08-31 11:34:07 UTC
On the 4.12.6 cluster, the VMs created by default are nonRoot VMs.

]$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o yaml 
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s
  featureGates:
    deployTektonTaskResources: false
    disableMDevConfiguration: false
    enableCommonBootImageImport: true
    nonRoot: true
    withHostPassthroughCPU: false

Comment 3 lpivarc 2023-08-31 14:27:50 UTC
I will need to collect logs for all our components. Cluster access would be preferred.

Comment 4 Debarati Basu-Nag 2023-09-18 17:29:41 UTC
Performed EUS->EUS upgrade from 4.12.6->4.14.0 successfully.

Comment 6 errata-xmlrpc 2023-11-08 14:06:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6817

