Bug 2216774
| Summary: | [RFE] HCO should remove option to run VMs as root | | |
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | Akriti Gupta <akrgupta> |
| Component: | Installation | Assignee: | Simone Tiraboschi <stirabos> |
| Status: | VERIFIED --- | QA Contact: | Debarati Basu-Nag <dbasunag> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.14.0 | CC: | dbasunag, fdeutsch, lpivarc, sgott, stirabos |
| Target Milestone: | --- | ||
| Target Release: | 4.14.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | hco-bundle-registry-container-v4.14.0.rhel9-1138 | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | Type: | Bug | |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
This is expected behavior.

I'd say the feature works as expected. It just happens that getting access to the privileged SCC (which would resolve this problem) is a task for the user/admin.

In 4.13 customers are running as non-root, so maybe by 4.14 we want to remove this option and only leave the jsonpatch approach as an escape hatch. After all, we want customers to run as non-root. Lubo, Stu, wdyt?

I agree. The only thing I will clarify is that the admin is tasked with labeling the namespace with the PSA label and is not required to manipulate the SCC. IMHO this is even better.

I believe we only want to support non-root deployments, and customers should not have any reason to run as root. The change should be unnoticeable to them.

*** Bug 2175135 has been marked as a duplicate of this bug. ***

Renamed BZ to reflect the path forward. Re-assigned to the Installation component accordingly. Added [RFE] to reflect that this is a requested behavior change.

As for https://bugzilla.redhat.com/show_bug.cgi?id=2174859, we recently (CNV 4.14) introduced the Root FG, marking NonRoot as deprecated. Having the NonRoot FG deprecated and a new Root FG that is new but already deprecated does not make much sense. Let's keep only NonRoot as deprecated in 4.14 and remove it in 4.15.

Validated against CNV-v4.14.0.rhel9-1238:
=====
(cnv-tests-4-14-py3.9) [cloud-user@ocp-ipi-executor-xl cnv-tests]$ oc get kubevirt kubevirt-kubevirt-hyperconverged -n openshift-cnv -o json | jq ".spec.configuration.developerConfiguration.featureGates"
[
"DataVolumes",
"SRIOV",
"CPUManager",
"CPUNodeDiscovery",
"Snapshot",
"HotplugVolumes",
"ExpandDisks",
"GPU",
"HostDevices",
"DownwardMetrics",
"NUMA",
"VMExport",
"DisableCustomSELinuxPolicy",
"KubevirtSeccompProfile",
"HotplugNICs",
"VMPersistentState",
"WithHostModelCPU",
"HypervStrictCheck"
]
(cnv-tests-4-14-py3.9) [cloud-user@ocp-ipi-executor-xl cnv-tests]$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".spec.featureGates"
{
"deployKubeSecondaryDNS": false,
"deployTektonTaskResources": false,
"disableMDevConfiguration": false,
"enableCommonBootImageImport": true,
"nonRoot": true,
"persistentReservation": false,
"withHostPassthroughCPU": false
}
(cnv-tests-4-14-py3.9) [cloud-user@ocp-ipi-executor-xl cnv-tests]$
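The transcript does not capture the step that flipped `nonRoot` between the two queries; presumably something like the following was run (a sketch only, the exact invocation and patch shape are assumptions based on the HCO CR fields shown above):

```shell
# Sketch: flip the HCO nonRoot feature gate (the step not shown between
# the two transcript queries). Requires a live cluster.
oc patch hco kubevirt-hyperconverged -n openshift-cnv --type=merge \
  -p '{"spec":{"featureGates":{"nonRoot":false}}}'
```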
(cnv-tests-4-14-py3.9) [cloud-user@ocp-ipi-executor-xl cnv-tests]$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".spec.featureGates"
{
"deployKubeSecondaryDNS": false,
"deployTektonTaskResources": false,
"disableMDevConfiguration": false,
"enableCommonBootImageImport": true,
"nonRoot": false,
"persistentReservation": false,
"withHostPassthroughCPU": false
}
(cnv-tests-4-14-py3.9) [cloud-user@ocp-ipi-executor-xl cnv-tests]$ oc get kubevirt kubevirt-kubevirt-hyperconverged -n openshift-cnv -o json | jq ".spec.configuration.developerConfiguration.featureGates"
[
"DataVolumes",
"SRIOV",
"CPUManager",
"CPUNodeDiscovery",
"Snapshot",
"HotplugVolumes",
"ExpandDisks",
"GPU",
"HostDevices",
"DownwardMetrics",
"NUMA",
"VMExport",
"DisableCustomSELinuxPolicy",
"KubevirtSeccompProfile",
"HotplugNICs",
"VMPersistentState",
"WithHostModelCPU",
"HypervStrictCheck",
"Root"
]
(cnv-tests-4-14-py3.9) [cloud-user@ocp-ipi-executor-xl cnv-tests]$
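The jsonpatch escape hatch mentioned in the comments could be exercised roughly like this. This is a sketch against a live cluster; the `kubevirt.kubevirt.io/jsonpatch` annotation name comes from HCO's unsupported-modifications mechanism and should be treated as an assumption here:

```shell
# Sketch (unsupported escape hatch): annotate the HCO CR so it forwards a
# JSON patch to the KubeVirt CR, re-adding the Root feature gate.
oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged \
  kubevirt.kubevirt.io/jsonpatch='[
    {"op": "add",
     "path": "/spec/configuration/developerConfiguration/featureGates/-",
     "value": "Root"}
  ]'
```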
Description of problem:
When creating a VM after setting root: true in the HCO CR, it gets stuck in the Starting state with the following message:

[akriti@fedora ~]$ oc describe vm vm3-rhel84-ocs | grep Message
Message: virt-launcher pod has not yet been scheduled
Message: failed to create pod for vmi default/vm3-rhel84-ocs, it needs a privileged namespace to run: pods "virt-launcher-vm3-rhel84-ocs-tfr7d" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "compute" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "compute" must set securityContext.capabilities.drop=["ALL"]; container "compute" must not include "SYS_NICE" in securityContext.capabilities.add), runAsNonRoot != true (container "compute" must not set securityContext.runAsNonRoot=false), runAsUser=0 (pod and container "compute" must not set runAsUser=0)

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Set root: true in the HCO CR
2. Create a VM
3. Start the VM

Actual results:
VM fails to reach the Running state

Expected results:
VM is running with the virt-launcher pod running as root

Additional info:
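The "needs a privileged namespace" error above refers to Pod Security Admission: per the comments, the admin is expected to label the VM's namespace rather than manipulate SCCs. A hedged sketch of that step follows; the `pod-security.kubernetes.io/enforce` label is the standard PSA enforcement label, and the namespace name merely matches the vmi namespace in the error message:

```shell
# Sketch: relax PSA enforcement on the namespace so a root virt-launcher
# pod is not rejected by the "restricted" profile. Requires a live cluster.
oc label namespace default \
  pod-security.kubernetes.io/enforce=privileged --overwrite
```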