Description of problem:
When there are multiple containers in one pod, the number of SELinux profiles generated by log-based SELinux profile recording is not correct.

Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-05-20-213928 + security-profiles-operator-bundle-container-0.4.3-34

How reproducible:
Always

Steps to Reproduce:
1. Install SPO.
2. Enable the log enricher with the command below:
$ oc -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"enableLogEnricher":true}}'
3. Create a new namespace mytest. To record by using the enricher, create a ProfileRecording:
$ oc new-project mytest
$ oc apply -f -<<EOF
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: spo-recording
spec:
  kind: SelinuxProfile
  recorder: logs
  podSelector:
    matchLabels:
      name: hello-daemonset
EOF
4. Create the service account with privileged permission:
$ oc create -f -<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: spo-record-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: spo-record
  namespace: mytest
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  resourceNames:
  - privileged
  verbs:
  - use
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spo-record
  namespace: mytest
subjects:
- kind: ServiceAccount
  name: spo-record-sa
roleRef:
  kind: Role
  name: spo-record
  apiGroup: rbac.authorization.k8s.io
EOF
5.
create a daemonset:
$ oc apply -f -<<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-daemonset
spec:
  selector:
    matchLabels:
      name: hello-daemonset
  template:
    metadata:
      labels:
        name: hello-daemonset
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      serviceAccount: spo-record-sa
      initContainers:
      - name: wait
        image: quay.io/openshifttest/centos:centos7
        command: ["/bin/sh", "-c", "env"]
      containers:
      - name: hello-openshift
        image: quay.io/openshifttest/hello-openshift:multiarch
        ports:
        - containerPort: 80
      - name: hello-openshift2
        image: quay.io/openshifttest/hello-openshift:multiarch-fedora
        ports:
        - containerPort: 81
EOF
6. Try to curl the different containers:
$ oc get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE                              NOMINATED NODE   READINESS GATES
hello-daemonset-8xcww   2/2     Running   0          17s   10.129.2.22   xiyuan23-b-jfhk2-worker-0-ptpkv   <none>           <none>
hello-daemonset-ts4qj   2/2     Running   0          17s   10.131.0.26   xiyuan23-b-jfhk2-worker-0-9n7x7   <none>           <none>
hello-daemonset-vrxmj   2/2     Running   0          17s   10.128.2.38   xiyuan23-b-jfhk2-worker-0-vjwdm   <none>           <none>

$ oc debug node/xiyuan23-b-jfhk2-worker-0-ptpkv -- chroot /host curl 10.129.2.22:8080
W0523 17:54:44.276977    2984 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan23-b-jfhk2-worker-0-ptpkv-debug ...
To use host binaries, run `chroot /host`
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    17  100    17    0     0   8500      0 --:--:-- --:--:-- --:--:--  8500
Hello OpenShift!
Removing debug pod ...

$ oc debug node/xiyuan23-b-jfhk2-worker-0-ptpkv -- chroot /host curl 10.129.2.22:8081
W0523 17:54:50.890104    3005 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan23-b-jfhk2-worker-0-ptpkv-debug ...
To use host binaries, run `chroot /host`
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    17  100    17    0     0   8500      0 --:--:-- --:--:-- --:--:--  8500
Hello OpenShift!
Removing debug pod ...
$ oc debug node/xiyuan23-b-jfhk2-worker-0-9n7x7 -- chroot /host curl 10.131.0.26:8080
W0523 17:55:19.585032    3073 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan23-b-jfhk2-worker-0-9n7x7-debug ...
To use host binaries, run `chroot /host`
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    17  100    17    0     0   8500      0 --:--:-- --:--:-- --:--:-- 17000
Hello OpenShift!
Removing debug pod ...
$ oc debug node/xiyuan23-b-jfhk2-worker-0-9n7x7 -- chroot /host curl 10.131.0.26:8081
W0523 17:55:25.244576    3095 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan23-b-jfhk2-worker-0-9n7x7-debug ...
To use host binaries, run `chroot /host`
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    17  100    17    0     0  17000      0 --:--:-- --:--:-- --:--:-- 17000
Hello OpenShift!
Removing debug pod ...
$ oc debug node/xiyuan23-b-jfhk2-worker-0-vjwdm -- chroot /host curl 10.128.2.38:8080
W0523 17:55:52.126404    3128 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan23-b-jfhk2-worker-0-vjwdm-debug ...
To use host binaries, run `chroot /host`
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    17  100    17    0     0  17000      0 --:--:-- --:--:-- --:--:-- 17000
Hello OpenShift!
Removing debug pod ...

7. Delete the daemonset, and check the SELinux profiles recorded.

Actual results:
Only 3 selinuxprofiles were generated:
$ oc get selinuxprofiles
NAME                               USAGE                                             STATE
spo-recording-hello-openshift-1    spo-recording-hello-openshift-1_mytest.process    Installed
spo-recording-hello-openshift2-1   spo-recording-hello-openshift2-1_mytest.process   Installed
spo-recording-hello-openshift2-2   spo-recording-hello-openshift2-2_mytest.process   Installed

Expected results:
6 SELinux profiles should be generated, one for each container in each pod.

Additional info:
Sometimes only 1 selinuxprofile was generated, sometimes 3.
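To make the expected count explicit: log-based recording should produce one SelinuxProfile per recorded container per pod. A minimal sketch of that arithmetic, using the pod suffixes from the reproducer above (this runs without a cluster; the assumption that the short-lived "wait" init container is not recorded matches the verification output later in this bug):

```shell
# Sketch only: enumerate the profiles the recorder is expected to create,
# one per recorded container per pod. The "wait" init container runs `env`
# and exits immediately, so it is not counted here (assumption).
containers="hello-openshift hello-openshift2"
pods="8xcww ts4qj vrxmj"      # pod-name suffixes from the daemonset above

expected=0
for c in $containers; do
  for p in $pods; do
    echo "container $c in pod hello-daemonset-$p"
    expected=$((expected + 1))
  done
done
echo "expected SelinuxProfiles: $expected"   # 2 containers x 3 pods = 6
```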
We should first implement the policy merging and then come back to this bug.
https://github.com/kubernetes-sigs/security-profiles-operator/pull/1112
@jhrozek Looks like the PR has been merged. Should the bug be manually moved to ON_QA?
(In reply to Roshni from comment #6)
> @jhrozek Looks like the PR has been merged. Should the bug be
> manually moved to ON_QA?

It should be MODIFIED now that the patch has merged, but I'm struggling with downstream builds, so I'll hold off moving to ON_QA until I fix the build issues.
Verification passed with openshift-security-profiles-operator-bundle:0.5.0-42 + 4.13.0-0.nightly-2022-12-04-194803. It also passed with openshift-security-profiles-operator-bundle:0.5.0-42 + 4.12.0-0.nightly-2022-12-04-160656.

1. Install Security Profiles Operator into namespace openshift-security-profiles, watching all namespaces.
2. Enable the log enricher with the command below:
$ oc -n openshift-security-profiles patch spod spod --type=merge -p '{"spec":{"enableLogEnricher":true}}'
securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched
3. Create a new namespace mytest. To record by using the enricher, create a ProfileRecording:
$ oc new-project mytest
$ oc label ns mytest spo.x-k8s.io/enable-recording="true"
namespace/mytest labeled
$ oc apply -f -<<EOF
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: spo-recording
spec:
  kind: SelinuxProfile
  recorder: logs
  podSelector:
    matchLabels:
      name: hello-daemonset
EOF
profilerecording.security-profiles-operator.x-k8s.io/spo-recording created
4. Create the service account with privileged permission:
$ oc create -f -<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: spo-record-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: spo-record
  namespace: mytest
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  resourceNames:
  - privileged
  verbs:
  - use
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spo-record
  namespace: mytest
subjects:
- kind: ServiceAccount
  name: spo-record-sa
roleRef:
  kind: Role
  name: spo-record
  apiGroup: rbac.authorization.k8s.io
EOF
serviceaccount/spo-record-sa created
role.rbac.authorization.k8s.io/spo-record created
rolebinding.rbac.authorization.k8s.io/spo-record created
5.
create a daemonset:
$ oc apply -f -<<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-daemonset
spec:
  selector:
    matchLabels:
      name: hello-daemonset
  template:
    metadata:
      labels:
        name: hello-daemonset
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      serviceAccount: spo-record-sa
      initContainers:
      - name: wait
        image: quay.io/openshifttest/centos:centos7
        command: ["/bin/sh", "-c", "env"]
      containers:
      - name: hello-openshift
        image: quay.io/openshifttest/hello-openshift:multiarch
        ports:
        - containerPort: 80
      - name: hello-openshift2
        image: quay.io/openshifttest/hello-openshift:multiarch-fedora
        ports:
        - containerPort: 81
EOF
daemonset.apps/hello-daemonset created
$ oc get pod
NAME                    READY   STATUS    RESTARTS   AGE
hello-daemonset-8swjw   2/2     Running   0          3m55s
hello-daemonset-m4b5h   2/2     Running   0          3m54s
hello-daemonset-xt8rs   2/2     Running   0          3m55s
$ oc delete daemonset hello-daemonset
daemonset.apps "hello-daemonset" deleted
$ oc get selinuxprofiles.security-profiles-operator.x-k8s.io -w
NAME                                   USAGE                                                  STATE
spo-recording-hello-openshift-8swjw    spo-recording-hello-openshift-8swjw_mytest.process     InProgress
spo-recording-hello-openshift-m4b5h    spo-recording-hello-openshift-m4b5h_mytest.process     InProgress
spo-recording-hello-openshift-xt8rs    spo-recording-hello-openshift-xt8rs_mytest.process     InProgress
spo-recording-hello-openshift2-8swjw   spo-recording-hello-openshift2-8swjw_mytest.process    InProgress
spo-recording-hello-openshift2-m4b5h   spo-recording-hello-openshift2-m4b5h_mytest.process    InProgress
spo-recording-hello-openshift2-xt8rs   spo-recording-hello-openshift2-xt8rs_mytest.process    InProgress
spo-recording-hello-openshift2-8swjw   spo-recording-hello-openshift2-8swjw_mytest.process    Installed
spo-recording-hello-openshift-xt8rs    spo-recording-hello-openshift-xt8rs_mytest.process     Installed
^C
$ oc get selinuxprofiles
NAME                                   USAGE                                                  STATE
spo-recording-hello-openshift-8swjw    spo-recording-hello-openshift-8swjw_mytest.process     Installed
spo-recording-hello-openshift-m4b5h    spo-recording-hello-openshift-m4b5h_mytest.process     Installed
spo-recording-hello-openshift-xt8rs    spo-recording-hello-openshift-xt8rs_mytest.process     Installed
spo-recording-hello-openshift2-8swjw   spo-recording-hello-openshift2-8swjw_mytest.process    Installed
spo-recording-hello-openshift2-m4b5h   spo-recording-hello-openshift2-m4b5h_mytest.process    Installed
spo-recording-hello-openshift2-xt8rs   spo-recording-hello-openshift2-xt8rs_mytest.process    Installed
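Note how the profile names above now follow a per-pod scheme: <recording name>-<container name>-<pod suffix>, which is what yields exactly one profile per container per pod instead of the unstable numeric counter seen in the failing runs. A hypothetical sketch of that naming pattern (this only reproduces the pattern observed in the output; it is not SPO code):

```shell
# Sketch of the naming pattern observed above (assumption: the trailing
# segment is the owning pod's random suffix).
recording=spo-recording
pod=hello-daemonset-8swjw
container=hello-openshift2

suffix=${pod##*-}                  # strip everything up to the last "-"
profile="${recording}-${container}-${suffix}"
echo "$profile"                    # spo-recording-hello-openshift2-8swjw
echo "usage: ${profile}_mytest.process"
```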
Verified again with openshift-security-profiles-operator-bundle:0.5.0-39 + 4.13.0-0.nightly-2022-12-04-194803; that verification also passed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Security Profiles Operator release), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:8762