Description of problem:
In the audit log of a freshly deployed 4.14.0 cluster, I see pod security violation messages due to the restricted profile for virt-operator, virt-launcher and virt-handler.

Version-Release number of selected component (if applicable):
4.14.0

How reproducible:
100%

Steps to Reproduce:
1. Deploy CNV
2. Check audit logs of 4.14.0

Actual results:
Sample entry (http://pastebin.test.redhat.com/1105334):
=============
User-agent: virt-operator/v0.0.0 (linux/amd64) kubernetes/$Format, Violations:
{'kind': 'Event', 'apiVersion': 'audit.k8s.io/v1', 'level': 'Metadata', 'auditID': '9cfee1fa-78eb-47ee-a927-c3836cea6d39', 'stage': 'ResponseComplete', 'requestURI': '/apis/apps/v1/namespaces/openshift-cnv/daemonsets/virt-handler', 'verb': 'patch', 'user': {'username': 'system:serviceaccount:openshift-cnv:kubevirt-operator', 'uid': 'f968dcac-34bb-481d-bd7e-b2cda614d888', 'groups': ['system:serviceaccounts', 'system:serviceaccounts:openshift-cnv', 'system:authenticated'], 'extra': {'authentication.kubernetes.io/pod-name': ['virt-operator-6749d94f-9h6bv'], 'authentication.kubernetes.io/pod-uid': ['47a3dd95-c28d-433c-97c9-be3278d9ec3c']}}, 'sourceIPs': ['10.9.96.49'], 'userAgent': 'virt-operator/v0.0.0 (linux/amd64) kubernetes/$Format', 'objectRef': {'resource': 'daemonsets', 'namespace': 'openshift-cnv', 'name': 'virt-handler', 'apiGroup': 'apps', 'apiVersion': 'v1'}, 'responseStatus': {'metadata': {}, 'code': 200}, 'requestReceivedTimestamp': '2023-07-18T13:09:59.697520Z', 'stageTimestamp': '2023-07-18T13:09:59.716472Z', 'annotations': {'authorization.k8s.io/decision': 'allow', 'authorization.k8s.io/reason': 'RBAC: allowed by ClusterRoleBinding "kubevirt-hyperconverged-operator.v4.14.0-68bd6f97f6" of ClusterRole "kubevirt-hyperconverged-operator.v4.14.0-68bd6f97f6" to ServiceAccount "kubevirt-operator/openshift-cnv"', 'pod-security.kubernetes.io/audit-violations': 'would violate PodSecurity "restricted:latest": host namespaces (hostPID=true), privileged (containers "virt-launcher", "virt-handler" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "virt-launcher", "virt-handler" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "virt-launcher", "virt-handler" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "libvirt-runtimes", "virt-share-dir", "virt-lib-dir", "virt-private-dir", "device-plugin", "kubelet-pods-shortened", "kubelet-pods", "node-labeller" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "virt-launcher", "virt-handler" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "virt-launcher", "virt-handler" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")'}}
=============
Details attached.

Expected results:
No pod security violation messages

Additional info:
http://pastebin.test.redhat.com/1105334
https://kubernetes.io/docs/concepts/security/pod-security-admission/
https://bugzilla.redhat.com/show_bug.cgi?id=2089744
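For reference, one way to pull such entries out of the audit logs (a sketch only; the grep patterns and the namespace filter are illustrative and not part of the original report) is to dump the kube-apiserver audit logs from the control-plane nodes and filter on the PodSecurity audit annotation:

$ oc adm node-logs --role=master --path=kube-apiserver/audit.log \
    | grep 'pod-security.kubernetes.io/audit-violations' \
    | grep 'openshift-cnv'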
Technically this is behaving as expected from our side. On our namespace we have:

$ oc get namespace openshift-cnv -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c27,c9
    openshift.io/sa.scc.supplemental-groups: 1000720000/10000
    openshift.io/sa.scc.uid-range: 1000720000/10000
  creationTimestamp: "2023-07-12T11:45:57Z"
  labels:
    kubernetes.io/metadata.name: openshift-cnv
    olm.operatorgroup.uid/1092e918-ae54-4bac-82ce-2bc3d255f802: ""
    olm.operatorgroup.uid/bb0d098d-b2da-4076-a3eb-384313089ba9: ""
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.24
    security.openshift.io/scc.podSecurityLabelSync: "true"
  name: openshift-cnv
  resourceVersion: "67019"
  uid: b8c86726-d7a1-41fc-a361-135c850cd0f8
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

The namespace carries `security.openshift.io/scc.podSecurityLabelSync: "true"` to let the OCP psalabelsyncer set the expected value for `pod-security.kubernetes.io/enforce` according to our SCCs. It is indeed setting `pod-security.kubernetes.io/enforce: privileged`, so our pods are correctly admitted.

The point is that the OCP psalabelsyncer is not setting `pod-security.kubernetes.io/warn: privileged` and `pod-security.kubernetes.io/audit: privileged` (see https://github.com/openshift/cluster-policy-controller/blob/0dff401e9819311c8f0de2792e913342c92883a1/pkg/psalabelsyncer/podsecurity_label_sync_controller.go#L257-L266), while the OCP-level default is now `pod-security.kubernetes.io/warn: restricted` and `pod-security.kubernetes.io/audit: restricted`. As a result the violations are still audited and warned about, although they are absolutely harmless.

Let's try to get this properly fixed on the OCP psalabelsyncer side.
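For completeness, a possible interim workaround (a sketch only, not something CNV itself does; depending on how the label syncer reconciles the namespace it may overwrite these values again) would be to set the warn and audit PSA labels on the namespace by hand so they match the enforced level:

$ oc label namespace openshift-cnv \
    pod-security.kubernetes.io/warn=privileged \
    pod-security.kubernetes.io/audit=privileged \
    --overwrite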
This is fixed on the OCP side and should be verified with the next available 4.14.0 build. Related to https://issues.redhat.com/browse/AUTH-413. Should be tested with the latest OCP CI build.
We should verify with OCP 4.14.0-0.nightly-2023-08-20-085537 or a newer nightly or CI build.
Verified against OCP-4.14.0-0.ci-2023-09-12-173607 and CNV-v4.14.0.rhel9-1949; I no longer see the pod security violation messages associated with virt-operator.
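As a quick sanity check (a sketch; the exact values depend on the level the syncer derives from the SCCs in the namespace), the namespace labels can be inspected to confirm that the warn and audit labels are now set alongside enforce, and the audit log grep from the description can be re-run to confirm no new virt-* violations appear:

$ oc get namespace openshift-cnv -o yaml | grep 'pod-security.kubernetes.io'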
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:6817