Bug 2177924
Summary: After upgrading OCP to 4.13.0-0.ci.test-2023-03-02-214629-ci-ln-xw171tb-latest, can't bring VMs up to Running state

Product: Container Native Virtualization (CNV)
Component: Virtualization
Version: 4.12.2
Target Release: 4.12.3
Status: CLOSED ERRATA
Type: Bug
Severity: urgent
Priority: urgent
Keywords: TestBlocker
Hardware: Unspecified
OS: Unspecified
Reporter: Debarati Basu-Nag <dbasunag>
Assignee: lpivarc
QA Contact: Kedar Bidarkar <kbidarka>
CC: dollierp, dshchedr, fdeutsch, lpivarc, lsm5, stirabos, ycui
Clones: 2179392
Bug Depends On: 2178990, 2179465
Bug Blocks: 2179392
Last Closed: 2023-05-23 22:31:22 UTC
Comment 1
Kedar Bidarkar
2023-03-14 08:10:38 UTC
From the virt-handler logs:

```
[kbidarka@localhost auth]$ oc logs -f virt-handler-7hn6d | grep must-gather-vm-2-1678748206-157084
Defaulted container "virt-handler" out of: virt-handler, virt-launcher (init)
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:19.718636Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of containerdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:19.719350Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinitdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:19.719389Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:19.782400Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of containerdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:19.782611Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinitdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:19.782648Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:20.680785Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of containerdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:20.681026Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinitdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:20.681048Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:20.719547Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of containerdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:20.719803Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinitdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:20.719836Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:21.720665Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of containerdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:21.720862Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinitdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:21.720896Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:22.721910Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of containerdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:22.722117Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinitdisk volume","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:2312","timestamp":"2023-03-14T07:20:22.722133Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Failed | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:22.739776Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"Performing final local cleanup for vmi with uid 3848351d-52d7-4887-9988-38c8bc2b6faf","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1911","timestamp":"2023-03-14T07:20:22.739891Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"No container disk mount entries found to unmount","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"mount.go:349","timestamp":"2023-03-14T07:20:22.739935Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"Cleaning up remaining hotplug volumes","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"mount.go:770","timestamp":"2023-03-14T07:20:22.739958Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"Domain","level":"info","msg":"Removing domain from cache during final cleanup","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1945","timestamp":"2023-03-14T07:20:22.740050Z","uid":""}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Failed | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:22.755424Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"Performing final local cleanup for vmi with uid 3848351d-52d7-4887-9988-38c8bc2b6faf","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1911","timestamp":"2023-03-14T07:20:22.755555Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"No container disk mount entries found to unmount","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"mount.go:349","timestamp":"2023-03-14T07:20:22.755630Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"Cleaning up remaining hotplug volumes","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"mount.go:770","timestamp":"2023-03-14T07:20:22.755647Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"Domain","level":"info","msg":"Removing domain from cache during final cleanup","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1945","timestamp":"2023-03-14T07:20:22.755696Z","uid":""}
{"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Failed | Domain does not exist","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1655","timestamp":"2023-03-14T07:20:22.793769Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"Performing final local cleanup for vmi with uid 3848351d-52d7-4887-9988-38c8bc2b6faf","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1911","timestamp":"2023-03-14T07:20:22.793918Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"No container disk mount entries found to unmount","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"mount.go:349","timestamp":"2023-03-14T07:20:22.793976Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"","level":"info","msg":"Cleaning up remaining hotplug volumes","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"mount.go:770","timestamp":"2023-03-14T07:20:22.794038Z","uid":"3848351d-52d7-4887-9988-38c8bc2b6faf"}
{"component":"virt-handler","kind":"Domain","level":"info","msg":"Removing domain from cache during final cleanup","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1945","timestamp":"2023-03-14T07:20:22.794090Z","uid":""}
{"component":"virt-handler","kind":"VirtualMachineInstance","level":"info","msg":"Performing final local cleanup for vmi with uid ","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1911","timestamp":"2023-03-14T07:20:22.841322Z","uid":""}
{"component":"virt-handler","kind":"Domain","level":"info","msg":"Removing domain from cache during final cleanup","name":"must-gather-vm-2-1678748206-157084","namespace":"node-gather-unprivileged","pos":"vm.go:1945","timestamp":"2023-03-14T07:20:22.841397Z","uid":""}
```
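Since virt-handler emits one JSON object per log line, the repeating entries above are easier to scan when reduced to timestamp and message. A minimal sketch, not part of the original comment, assuming `jq` is available on the workstation (the pod and VMI names are the ones from the capture above):

```shell
# Condense the virt-handler JSON log stream to "timestamp  message" for one VMI.
# Assumes jq is installed locally; pod and VMI names are from the capture above.
oc logs virt-handler-7hn6d -c virt-handler \
  | jq -r 'select(.name == "must-gather-vm-2-1678748206-157084") | "\(.timestamp)  \(.msg)"'
```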
The virt-launcher pod crashes and vanishes less than 8 seconds after starting the VM, and it never reaches the Running state, so there are no virt-launcher pod logs to attach. I tried creating a new VM in this state and it shows the same symptoms.

Not sure if related, but in upstream CI we are now encountering:

```
{"component":"virt-launcher-monitor","level":"error","msg":"failed to run virt-launcher","pos":"virt-launcher-monitor.go:108","reason":"open /dev/null: permission denied","timestamp":"2023-03-13T08:44:56.159607Z"}
{"component":"virt-launcher-monitor","level":"error","msg":"monitoring virt-launcher failed","pos":"virt-launcher-monitor.go:90","reason":"open /dev/null: permission denied","timestamp":"2023-03-13T08:44:56.497455Z"}
```

with KubeVirt v0.58.1. Could this be a mismatch between userspace and kernelspace leading to a crash after qemu got spawned?

Kedar, in case this is what Simone suggests (which we also encountered in KubeVirt), can we get the audit logs (or you can search for AVC denials directly)?

In my case, in upstream CI, it's definitely due to SELinux:

```
time->Tue Mar 14 15:50:48 2023
type=PROCTITLE msg=audit(1678809048.553:766): proctitle=2F7573722F62696E2F766972742D6C61756E636865722D6D6F6E69746F72002D2D71656D752D74696D656F75740032373673002D2D6E616D650074657374766D692D6E6B763764002D2D7569640034633163346435622D326238392D343435332D383631642D313331343033303562303030002D2D6E616D657370616365006B
type=SYSCALL msg=audit(1678809048.553:766): arch=c000003e syscall=257 success=no exit=-13 a0=ffffffffffffff9c a1=c0000397a0 a2=80000 a3=0 items=0 ppid=19192 pid=19203 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="virt-launcher-m" exe="/usr/bin/virt-launcher-monitor" subj=system_u:system_r:virt_launcher.process:s0:c192,c492 key=(null)
type=AVC msg=audit(1678809048.553:766): avc: denied { open } for pid=19203 comm="virt-launcher-m" path="/dev/null" dev="tmpfs" ino=229423 scontext=system_u:system_r:virt_launcher.process:s0:c192,c492 tcontext=system_u:object_r:container_file_t:s0:c192,c492 tclass=chr_file permissive=0
```
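For reference, AVC denials like the one above can be pulled straight from a node, and the PROCTITLE hex decodes back into the audited command line. A sketch, not part of the original comments; the node name is a placeholder:

```shell
# Search a node's audit log for recent AVC denials (node name is a placeholder).
oc debug node/<node-name> -- chroot /host ausearch -m avc -ts recent

# Decode the leading bytes of the PROCTITLE hex above; in the full string,
# NUL bytes (00) separate the argv entries of the audited process.
echo 2F7573722F62696E2F766972742D6C61756E636865722D6D6F6E69746F72 | xxd -r -p; echo
# -> /usr/bin/virt-launcher-monitor
```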
VM Name: must-gather-vm-2

```
Mar 14 17:13:56.392036 c01-dbn-412-4q8rm-worker-0-n2jzb kubenswrapper[3499]: E0314 17:13:56.391561 3499 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"virt-launcher-must-gather-vm-2-1678748206-157084-8dncj_node-gather-unprivileged(44134ef2-c0d9-4592-ad56-f1992618bf96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"virt-launcher-must-gather-vm-2-1678748206-157084-8dncj_node-gather-unprivileged(44134ef2-c0d9-4592-ad56-f1992618bf96)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_virt-launcher-must-gather-vm-2-1678748206-157084-8dncj_node-gather-unprivileged_44134ef2-c0d9-4592-ad56-f1992618bf96_0(eeb84859942b50338224ca35793e735fd826c78cfdbd7bcbe8e2d1c673418e88): error adding pod node-gather-unprivileged_virt-launcher-must-gather-vm-2-1678748206-157084-8dncj to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): [node-gather-unprivileged/virt-launcher-must-gather-vm-2-1678748206-157084-8dncj/44134ef2-c0d9-4592-ad56-f1992618bf96:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": CNI request failed with status 400: '[node-gather-unprivileged/virt-launcher-must-gather-vm-2-1678748206-157084-8dncj eeb84859942b50338224ca35793e735fd826c78cfdbd7bcbe8e2d1c673418e88] [node-gather-unprivileged/virt-launcher-must-gather-vm-2-1678748206-157084-8dncj eeb84859942b50338224ca35793e735fd826c78cfdbd7bcbe8e2d1c673418e88] failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) for 0a:58:0a:81:02:09 [10.129.2.9/23]\\n'\"" pod="node-gather-unprivileged/virt-launcher-must-gather-vm-2-1678748206-157084-8dncj" podUID=44134ef2-c0d9-4592-ad56-f1992618bf96
```

VM Name: vm-rhel87-ocs

```
Mar 14 16:36:40.430509 c01-dbn-412-4q8rm-worker-0-n2jzb kubenswrapper[3499]: E0314 16:36:40.430291 3499 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"virt-launcher-vm-rhel87-ocs-cz9b6_kbidarka(8d2bbcdc-e9df-43f3-8879-f0ec5327d3f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"virt-launcher-vm-rhel87-ocs-cz9b6_kbidarka(8d2bbcdc-e9df-43f3-8879-f0ec5327d3f7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_virt-launcher-vm-rhel87-ocs-cz9b6_kbidarka_8d2bbcdc-e9df-43f3-8879-f0ec5327d3f7_0(0ffa69f5c91cfbbae24e7cc041bfaf578d2dc6fcacb9f0588b9759efb0b56e52): error adding pod kbidarka_virt-launcher-vm-rhel87-ocs-cz9b6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): [kbidarka/virt-launcher-vm-rhel87-ocs-cz9b6/8d2bbcdc-e9df-43f3-8879-f0ec5327d3f7:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": CNI request failed with status 400: '[kbidarka/virt-launcher-vm-rhel87-ocs-cz9b6 0ffa69f5c91cfbbae24e7cc041bfaf578d2dc6fcacb9f0588b9759efb0b56e52] [kbidarka/virt-launcher-vm-rhel87-ocs-cz9b6 0ffa69f5c91cfbbae24e7cc041bfaf578d2dc6fcacb9f0588b9759efb0b56e52] failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed) for 0a:58:0a:81:02:0b [10.129.2.11/23]\\n'\"" pod="kbidarka/virt-launcher-vm-rhel87-ocs-cz9b6" podUID=8d2bbcdc-e9df-43f3-8879-f0ec5327d3f7
```

```
$ oc get vm -A
NAMESPACE                  NAME                                 AGE
kbidarka                   vm-rhel87-ocs                        7h8m
node-gather-unprivileged   must-gather-vm-2-1678748206-157084   18h
```

Simone, can you open a new bug? With information such as what type of OS you are running on, and so on...

Could this bug be related to https://issues.redhat.com/browse/OCPBUGS-8479 ?

Setting TestOnly, as this looks like a symptom of bug 2178990.

So

```yaml
---
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 10-enable-container-use-devices-sebool
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
        - name: setsebool-container-use-devices.service
          enabled: true
          contents: |
            [Unit]
            Before=kubelet.service
            [Service]
            Type=oneshot
            ExecStart=setsebool container_use_devices true
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
```

looks like a valid workaround to continue the testing campaign until bug 2178990 is properly addressed. @dbasunag, can you please give it a try?
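For anyone trying it, a minimal sketch of applying and checking the workaround, not part of the original comment; the file and node names are placeholders, and the Machine Config Operator reboots workers during the rollout:

```shell
# Apply the MachineConfig above, wait for the worker pool to converge, then
# confirm the SELinux boolean on a worker node.
oc apply -f 10-enable-container-use-devices-sebool.yaml
oc get machineconfigpool worker            # wait until UPDATED is True
oc debug node/<worker-node> -- chroot /host getsebool container_use_devices
# expected output: container_use_devices --> on
```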
With the workaround (and the PSA one) in place, we are able to start a VM with CNV 4.12.2 on OCP 4.13. I fear we can also have something else on the upgrade path.

```
stirabos@t14s:~$ oc version
Client Version: 4.13.0-202303071354.p0.g0236210.assembly.stream-0236210
Kustomize Version: v4.5.7
Server Version: 4.13.0-0.nightly-2023-03-14-053612
Kubernetes Version: v1.26.2+06e8c46

stirabos@t14s:~$ oc get csv -n openshift-cnv
NAME                                       DISPLAY                       VERSION   REPLACES                                   PHASE
kubevirt-hyperconverged-operator.v4.12.2   OpenShift Virtualization      4.12.2    kubevirt-hyperconverged-operator.v4.12.1   Succeeded
openshift-pipelines-operator-rh.v1.9.2     Red Hat OpenShift Pipelines   1.9.2                                                Succeeded

stirabos@t14s:~$ oc get nodes
NAME                               STATUS   ROLES                  AGE   VERSION
iuo-dbn-412-9f4fd-master-0         Ready    control-plane,master   20h   v1.26.2+bc894ae
iuo-dbn-412-9f4fd-master-1         Ready    control-plane,master   20h   v1.26.2+bc894ae
iuo-dbn-412-9f4fd-master-2         Ready    control-plane,master   20h   v1.26.2+bc894ae
iuo-dbn-412-9f4fd-worker-0-7nfxq   Ready    worker                 20h   v1.26.2+bc894ae
iuo-dbn-412-9f4fd-worker-0-ggn5p   Ready    worker                 20h   v1.26.2+bc894ae
iuo-dbn-412-9f4fd-worker-0-v6gnc   Ready    worker                 20h   v1.26.2+bc894ae

stirabos@t14s:~$ oc get MachineConfig | grep container-use-devices
100-master-setsebool-container-use-devices   3.2.0   20h
100-worker-setsebool-container-use-devices   3.2.0   20h

stirabos@t14s:~$ oc get namespace simone -o json | jq .metadata.labels
{
  "kubernetes.io/metadata.name": "simone",
  "openshift-pipelines.tekton.dev/namespace-reconcile-version": "1.9.2",
  "pod-security.kubernetes.io/enforce": "privileged",
  "pod-security.kubernetes.io/enforce-version": "v1.24",
  "security.openshift.io/scc.podSecurityLabelSync": "false"
}

stirabos@t14s:~$ oc get vmis -n simone
NAME                      AGE    PHASE     IP            NODENAME                           READY
fedora-guk5hrp33fl7pkb6   2m8s   Running   10.129.2.24   iuo-dbn-412-9f4fd-worker-0-7nfxq   True

stirabos@t14s:~$ oc get pods -n simone
NAME                                          READY   STATUS    RESTARTS   AGE
virt-launcher-fedora-guk5hrp33fl7pkb6-lpmvm   1/1     Running   0          112s
```

Can we verify this, as we should now have the fixed CoreOS build?

Moving to ON_QE as I can't help anymore.

Verified using OCP 4.13.0-rc.2 + CNV 4.12.2 deployed from the redhat-operators source (aka production):

```
> oc version
Client Version: 4.13.0-rc.2
Kustomize Version: v4.5.7
Server Version: 4.13.0-rc.2
Kubernetes Version: v1.26.2+dc93b13

> cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.13

> rpm -q container-selinux
container-selinux-2.205.0-1.el9_2.noarch

> oc -n openshift-cnv get csv
NAME                                       DISPLAY                    VERSION
kubevirt-hyperconverged-operator.v4.12.2   OpenShift Virtualization   4.12.2
```

I created a basic cirros VM backed by a container disk:

```yaml
kind: VirtualMachine
apiVersion: kubevirt.io/v1
metadata:
  name: cirros-container-disk
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: '128M'
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo:latest
  runStrategy: RerunOnFailure
```

The VM is running properly:

```
> oc get vm
NAME                    AGE   STATUS    READY
cirros-container-disk   49s   Running   True
```
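To spot-check just the SELinux type without reading the full Pod dump that follows, a jsonpath query works. This is a sketch, not part of the original verification; the pod name is the one from this run:

```shell
# Print only the SELinux type of the launcher pod's pod-level security context.
oc get pod virt-launcher-cirros-container-disk-h46xj \
  -o jsonpath='{.spec.securityContext.seLinuxOptions.type}{"\n"}'
# expected output: virt_launcher.process
```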
The associated virt-launcher Pod is correctly using the virt_launcher.process SELinux type:

```yaml
> oc get pod virt-launcher-cirros-container-disk-h46xj -o yaml
kind: Pod
apiVersion: v1
metadata:
  name: virt-launcher-cirros-container-disk-h46xj
  namespace: default
  [...]
spec:
  [...]
  containers:
    - name: compute
      [...]
      securityContext:
        capabilities:
          add:
            - NET_BIND_SERVICE
            - SYS_PTRACE
          drop:
            - NET_RAW
        privileged: false
        runAsGroup: 107
        runAsNonRoot: true
        runAsUser: 107
      [...]
    - name: volumerootdisk
      [...]
      securityContext:
        runAsNonRoot: true
        runAsUser: 107
        seLinuxOptions:
          type: virt_launcher.process
      [...]
  initContainers:
    - name: container-disk-binary
      [...]
      securityContext:
        runAsNonRoot: true
        runAsUser: 107
      [...]
  securityContext:
    runAsGroup: 107
    runAsNonRoot: true
    runAsUser: 107
    seLinuxOptions:
      type: virt_launcher.process
```

This has been validated against https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/releasestream/4-stable/release/4.13.0-rc.3

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 4.12.3 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:3283