Description of problem:
The hostpath-provisioner-csi logs show info-level messages about a security context issue.

Version-Release number of selected component (if applicable): 4.11

How reproducible: 100%

Expected results:
The security context configuration prevents the warning from occurring.

Additional info:
{"level":"info","ts":1652877318.6829474,"logger":"KubeAPIWarningLogger","msg":"would violate PodSecurity \"restricted:latest\": hostPath volumes (volumes \"socket-dir\", \"mountpoint-dir\", \"registration-dir\", \"plugins-dir\", \"hpp-csi-local-basic-data-dir\", \"hpp-csi-pvc-block-data-dir\"), privileged (containers \"hostpath-provisioner\", \"node-driver-registrar\", \"csi-provisioner\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \"hostpath-provisioner\", \"node-driver-registrar\", \"liveness-probe\", \"csi-provisioner\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"hostpath-provisioner\", \"node-driver-registrar\", \"liveness-probe\", \"csi-provisioner\" must set securityContext.capabilities.drop=[\"ALL\"]), restricted volume types (volumes \"socket-dir\", \"mountpoint-dir\", \"registration-dir\", \"plugins-dir\", \"hpp-csi-local-basic-data-dir\", \"hpp-csi-pvc-block-data-dir\" use restricted volume type \"hostPath\"), runAsNonRoot != true (pod or containers \"hostpath-provisioner\", \"node-driver-registrar\", \"liveness-probe\", \"csi-provisioner\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"hostpath-provisioner\", \"node-driver-registrar\", \"liveness-probe\", \"csi-provisioner\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"}
{"level":"info","ts":1652877321.0855155,"logger":"KubeAPIWarningLogger","msg":"would violate PodSecurity \"restricted:latest\": hostPath volumes (volume \"host-root\"), privileged (container \"mounter\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container \"mounter\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"mounter\" must set securityContext.capabilities.drop=[\"ALL\"]), restricted volume types (volume \"host-root\" uses restricted volume type \"hostPath\"), runAsNonRoot != true (pod or container \"mounter\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"mounter\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"}
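For reference, a container that satisfies the "restricted" Pod Security profile the warnings cite would carry a securityContext along these lines (a generic sketch, not the HPP manifest; HPP cannot adopt it as-is because it requires privileged access and hostPath volumes for bind mounts):

```yaml
# Sketch of a restricted-compliant securityContext, matching the
# requirements enumerated in the KubeAPIWarningLogger messages above.
securityContext:
  runAsNonRoot: true                 # runAsNonRoot != true violation
  allowPrivilegeEscalation: false    # allowPrivilegeEscalation != false violation
  capabilities:
    drop: ["ALL"]                    # unrestricted capabilities violation
  seccompProfile:
    type: RuntimeDefault             # seccompProfile violation
```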
HPP CSI needs to be privileged (for bind mounts), so I think we will need the label for the openshift-cnv namespace.
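The label referred to here would be the standard Pod Security Admission namespace labels; a sketch of what labeling the openshift-cnv namespace as privileged could look like (the exact labels HCO applies are not shown in this bug):

```yaml
# Namespace labeled to allow privileged workloads under Pod Security Admission.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged   # suppresses the warnings logged above
```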
Alexander, could you please update the bug?
With what? HPP needs privileges to do some things, so it will never comply with the restricted SCC.
Hi Alexander, is this expected, and can we close the bug?
What is the process to get an exception for this? This is a CSI driver that requires additional permissions, like all other CSI drivers.
After discussing with the reporter: HPP is installed in the same namespace as KubeVirt, and virt-handler already needs elevated privileges, so it doesn't matter that HPP's elevated privileges would mark the namespace as elevated, because KubeVirt already does. I believe HCO will label the namespace to indicate all this, and HPP can just use that label. We definitely need to have a discussion on whether we should keep shipping HPP with CNV.
Tested on CNV-v4.11.0-535; the security context warning is no longer present. The issue has been fixed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.11.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6526