Bug 2088200 - Security Profiles Operator should comply with the restricted pod security level
Summary: Security Profiles Operator should comply with the restricted pod security level
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Security Profiles Operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Target Release: ---
Assignee: Jakub Hrozek
QA Contact: xiyuan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-19 02:03 UTC by xiyuan
Modified: 2023-01-18 11:37 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-01-18 11:36:58 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github kubernetes-sigs security-profiles-operator pull 944 0 None open deploy: Annotate the NS to allow running privileged 2022-05-23 20:36:02 UTC
Red Hat Product Errata RHBA-2022:8762 0 None None None 2023-01-18 11:37:02 UTC

Description xiyuan 2022-05-19 02:03:07 UTC
Description:
Security Profiles Operator workloads should comply with the restricted pod security level.


Version:
4.11.0-0.nightly-2022-05-11-054135 + compliance-operator v0.1.51-1

How reproducible?
Always.


Steps to Reproduce:
Install security profiles operator:
$ cat test.sh
# All workload creation is audited on the masters with the annotation below. The command checks all workloads that would violate PodSecurity.
cat > cmd.txt << EOF
grep -hir 'would violate PodSecurity' /var/log/kube-apiserver/ | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
EOF

CMD="`cat cmd.txt`"
oc new-project xxia-test

# With admin, run above cmd on all masters:
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS
do
  oc debug -n xxia-test no/$i -- chroot /host bash -c "$CMD || true"
done > all-violations.txt

cat all-violations.txt | grep -E 'namespaces/(openshift|kube|security)-' | sort | uniq > all-violations_system_components.txt
cat all-violations_system_components.txt
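For reference, the jq filter used in test.sh can be exercised against a single hand-written audit log entry; the file name and JSON content below are illustrative only, not taken from a real cluster:

```shell
# Write one sample kube-apiserver audit entry (illustrative content only).
cat > /tmp/sample-audit.log <<'EOF'
{"requestURI":"/api/v1/namespaces/demo/pods","annotations":{"pod-security.kubernetes.io/audit-violations":"would violate PodSecurity \"restricted:latest\": runAsNonRoot != true"}}
EOF

# Same pipeline as test.sh: keep matching lines, then print URI + violation text.
grep -h 'would violate PodSecurity' /tmp/sample-audit.log \
  | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
# Prints: /api/v1/namespaces/demo/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true
```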
 
In a 4.11.0-0.nightly-2022-05-11-054135 environment, run the above script as admin:
./test.sh
 
Got:
/apis/apps/v1/namespaces/security-profiles-operator/daemonsets/spod would violate PodSecurity "restricted:latest": hostPath volumes (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume"), seLinuxOptions (containers "non-root-enabler", "security-profiles-operator" set forbidden securityContext.seLinuxOptions: type "spc_t"), unrestricted capabilities (container "metrics" must set securityContext.capabilities.drop=["ALL"]; container "non-root-enabler" must not include "CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID" in securityContext.capabilities.add), restricted volume types (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "non-root-enabler", "security-profiles-operator", "metrics" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "non-root-enabler" must not set runAsUser=0), seccompProfile (pod or containers "non-root-enabler", "security-profiles-operator", "metrics" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/daemonsets would violate PodSecurity "restricted:latest": hostPath volumes (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume"), seLinuxOptions (containers "non-root-enabler", "selinux-shared-policies-copier", "security-profiles-operator", "selinuxd" set forbidden securityContext.seLinuxOptions: type "spc_t"), allowPrivilegeEscalation != false (container "selinuxd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "selinuxd", "metrics" must set securityContext.capabilities.drop=["ALL"]; containers "non-root-enabler", "selinux-shared-policies-copier", "selinuxd" must not include "CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID" in securityContext.capabilities.add), restricted volume types (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "non-root-enabler", "selinux-shared-policies-copier", "security-profiles-operator", "selinuxd", "metrics" must set securityContext.runAsNonRoot=true), runAsUser=0 (containers "non-root-enabler", "selinux-shared-policies-copier", "selinuxd" must not set runAsUser=0), seccompProfile (pod or containers "non-root-enabler", "selinux-shared-policies-copier", "security-profiles-operator", "selinuxd", "metrics" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/deployments/security-profiles-operator?fieldManager=kubectl-client-side-apply would violate PodSecurity "restricted:latest": unrestricted capabilities (container "security-profiles-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "security-profiles-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/deployments/security-profiles-operator-webhook would violate PodSecurity "restricted:latest": unrestricted capabilities (container "security-profiles-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "security-profiles-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/deployments/security-profiles-operator would violate PodSecurity "restricted:latest": unrestricted capabilities (container "security-profiles-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "security-profiles-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/deployments would violate PodSecurity "restricted:latest": unrestricted capabilities (container "security-profiles-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "security-profiles-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/replicasets/security-profiles-operator-7949ff48f5 would violate PodSecurity "restricted:latest": unrestricted capabilities (container "security-profiles-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "security-profiles-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/replicasets/security-profiles-operator-7f9d85fbdb would violate PodSecurity "restricted:latest": unrestricted capabilities (container "security-profiles-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "security-profiles-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/security-profiles-operator/replicasets would violate PodSecurity "restricted:latest": unrestricted capabilities (container "security-profiles-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "security-profiles-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/security-profiles-operator/pods would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/security-profiles-operator/pods would violate PodSecurity "restricted:latest": hostPath volumes (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume"), seLinuxOptions (containers "non-root-enabler", "security-profiles-operator" set forbidden securityContext.seLinuxOptions: type "spc_t"), unrestricted capabilities (container "metrics" must set securityContext.capabilities.drop=["ALL"]; container "non-root-enabler" must not include "CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID" in securityContext.capabilities.add), restricted volume types (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "non-root-enabler", "security-profiles-operator", "metrics" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "non-root-enabler" must not set runAsUser=0)
/api/v1/namespaces/security-profiles-operator/pods would violate PodSecurity "restricted:latest": hostPath volumes (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume"), seLinuxOptions (containers "non-root-enabler", "selinux-shared-policies-copier", "security-profiles-operator", "selinuxd" set forbidden securityContext.seLinuxOptions: type "spc_t"), allowPrivilegeEscalation != false (container "selinuxd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "selinuxd", "metrics" must set securityContext.capabilities.drop=["ALL"]; containers "non-root-enabler", "selinux-shared-policies-copier", "selinuxd" must not include "CHOWN", "DAC_OVERRIDE", "FOWNER", "FSETID" in securityContext.capabilities.add), restricted volume types (volumes "host-varlib-volume", "host-operator-volume", "host-fsselinux-volume", "host-etcselinux-volume", "host-varlibselinux-volume", "profile-recording-output-volume", "host-auditlog-volume", "host-syslog-volume", "sys-kernel-debug-volume", "host-etc-osrelease-volume" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "non-root-enabler", "selinux-shared-policies-copier", "security-profiles-operator", "selinuxd", "metrics" must set securityContext.runAsNonRoot=true), runAsUser=0 (containers "non-root-enabler", "selinux-shared-policies-copier", "selinuxd" must not set runAsUser=0)
/api/v1/namespaces/security-profiles-operator/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or container "security-profiles-operator" must set securityContext.runAsNonRoot=true)
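For context, the seccompProfile, runAsNonRoot, capabilities, and allowPrivilegeEscalation findings above correspond to a securityContext like the following. This is a hedged sketch of what the restricted profile demands, not the operator's actual manifest; the hostPath-volume and seLinuxOptions findings cannot be fixed this way and instead require labeling the namespace as privileged (the approach taken in the linked PR):

```yaml
# Minimal securityContext satisfying the "restricted" checks quoted above.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-ok                # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true               # fixes "runAsNonRoot != true"
    seccompProfile:
      type: RuntimeDefault           # fixes "seccompProfile"
  containers:
  - name: app
    image: example.com/app:latest    # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false  # fixes "allowPrivilegeEscalation != false"
      capabilities:
        drop: ["ALL"]                  # fixes "unrestricted capabilities"
```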

Expected results:
Security Profiles Operator workloads should comply with the restricted pod security level.

Additional info:

Comment 2 xiyuan 2022-05-30 15:17:18 UTC
Verification passed with 4.11.0-0.nightly-2022-05-25-193227 and security-profiles-operator-bundle-container-0.4.3-42.
This test covers only a namespace created from the command line. A separate bug will be filed to track installation through the GUI.

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-05-25-193227   True        False         13m     Cluster version is 4.11.0-0.nightly-2022-05-25-193227

1. Install the operator; remember to create the namespace with labels like below:

apiVersion: v1
kind: Namespace
metadata:
  name: security-profiles-operator
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged

$ oc get ns security-profiles-operator --show-labels 
NAME                         STATUS   AGE     LABELS
security-profiles-operator   Active   2m12s   kubernetes.io/metadata.name=security-profiles-operator,olm.operatorgroup.uid/2d6fd5c0-2942-4d28-9964-03e20d5eb215=,pod-security.kubernetes.io/audit=privileged,pod-security.kubernetes.io/enforce=privileged,pod-security.kubernetes.io/warn=privileged
$ oc get ip
NAME            CSV                                     APPROVAL    APPROVED
install-f9qm2   security-profiles-operator.v0.4.3-dev   Automatic   true
$ oc get csv
NAME                                    DISPLAY                            VERSION     REPLACES   PHASE
elasticsearch-operator.5.4.2            OpenShift Elasticsearch Operator   5.4.2                  Succeeded
security-profiles-operator.v0.4.3-dev   Security Profiles Operator         0.4.3-dev              Succeeded

$ cat sleep_sh_pod_p.yaml 
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: sleep-sh-pod
spec:
  defaultAction: SCMP_ACT_ERRNO
  architectures:
  - SCMP_ARCH_X86_64
  syscalls:
  - action: SCMP_ACT_ALLOW
    names:
    - arch_prctl
    - brk
    - capget
    - capset
    - chdir
    - clone
    - close
    - dup3
    - epoll_ctl
    - epoll_pwait
    - execve
    - exit_group
    - fchdir
    - fchown
    - fcntl
    - fstat
    - fstatfs
    - futex
    - getcwd
    - getdents64
    - getpid
    - getppid
    - getuid
    - ioctl
    - lseek
    - mmap
    - mount
    - mprotect
    - nanosleep
    - newfstatat
    - open
    - openat
    - pivot_root
    - prctl
    - read
    - rt_sigaction
    - rt_sigprocmask
    - rt_sigreturn
    - set_tid_address
    - setgid
    - setgroups
    - sethostname
    - setuid
    - stat
    - statfs
    - tgkill
    - time
    - umask
    - umount2
    - wait4
    - write
    #- mkdir
    #- mkdirat
$ oc apply -f sleep_sh_pod_p.yaml 
seccompprofile.security-profiles-operator.x-k8s.io/sleep-sh-pod created
$ oc get seccompprofile -w
NAME                 STATUS      AGE
log-enricher-trace   Installed   3m8s
nginx-1.19.1         Installed   3m8s
sleep-sh-pod         Installed   16s
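Once a SeccompProfile reports Installed, a pod can reference it as a Localhost seccomp profile. The pod below is illustrative, and the localhostProfile path follows the operator/<namespace>/<name>.json convention from the upstream SPO documentation, assuming the profile was created in the security-profiles-operator namespace (the current project at this point in the transcript):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleep-sh-demo                # hypothetical pod
spec:
  containers:
  - name: sleeper
    image: quay.io/openshifttest/busybox:latest
    command: ["sh", "-c", "sleep 60"]
    securityContext:
      seccompProfile:
        type: Localhost
        # Path written by SPO; also exposed in the profile's status.
        localhostProfile: operator/security-profiles-operator/sleep-sh-pod.json
```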
^C$ cat selinux_profile_errorlogger.yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha2
kind: SelinuxProfile
metadata:
  name: errorlogger
spec:
  inherit:
    - name: container
  allow:
    var_log_t:
      dir:
        - open
        - read
        - getattr
        - lock
        - search
        - ioctl
        - add_name
        - remove_name
        - write
      file:
        - getattr
        - read
        - write
        - append
        - ioctl
        - lock
        - map
        - open
        - create
      sock_file:
        - getattr
        - read
        - write
        - append
        - open
$ oc apply -f selinux_profile_errorlogger.yaml 
selinuxprofile.security-profiles-operator.x-k8s.io/errorlogger created
$ oc get selinuxprofile -w
NAME          USAGE                                            STATE
errorlogger   errorlogger_security-profiles-operator.process   InProgress
errorlogger   errorlogger_security-profiles-operator.process   Installed
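The USAGE column above is the value a workload places in its seLinuxOptions to run under the policy. An illustrative pod (modeled on the el-no-policy pod used later in this comment, with the SELinux type added) would look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: errorlogger-demo             # hypothetical pod
spec:
  containers:
  - name: errorlogger
    image: quay.io/openshifttest/busybox:latest
    command: ["sh", "-c", "echo test >> /var/log/test.log"]
    securityContext:
      seLinuxOptions:
        # Value taken from the selinuxprofile USAGE column above
        type: errorlogger_security-profiles-operator.process
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
      type: Directory
```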
^C$ oc new-project mytest
Now using project "mytest" on server "https://api.xiyuan30-112.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

$ oc -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"enableLogEnricher":true}}'
securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched

$ oc get pod -n security-profiles-operator
NAME                                                  READY   STATUS    RESTARTS   AGE
security-profiles-operator-55476645c5-ljsms           1/1     Running   0          6m29s
security-profiles-operator-55476645c5-r5vfj           1/1     Running   0          6m29s
security-profiles-operator-55476645c5-snrwv           1/1     Running   0          6m29s
security-profiles-operator-webhook-7cf76c76d7-76czf   1/1     Running   0          6m19s
security-profiles-operator-webhook-7cf76c76d7-x75l7   1/1     Running   0          6m19s
security-profiles-operator-webhook-7cf76c76d7-zhd9j   1/1     Running   0          6m19s
spod-72kzk                                            4/4     Running   0          90s
spod-8xff9                                            4/4     Running   0          90s
spod-hdknk                                            4/4     Running   0          90s
spod-ht6d9                                            4/4     Running   0          90s
spod-n9vpr                                            3/4     Running   0          89s
spod-vsfr9                                            4/4     Running   0          90s
$ oc apply -f sleep_sh_pod_p.yaml 
seccompprofile.security-profiles-operator.x-k8s.io/sleep-sh-pod created
$ oc apply -f selinux_profile_errorlogger.yaml 
selinuxprofile.security-profiles-operator.x-k8s.io/errorlogger created
$ oc apply -f -<<EOF
> apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
> kind: ProfileRecording
> metadata:
>   name: spo-recording
> spec:
>   kind: SeccompProfile
>   recorder: logs
>   podSelector:
>     matchLabels:
>        name: hello-daemonset
> EOF
profilerecording.security-profiles-operator.x-k8s.io/spo-recording created
$ oc get profile
profilebindings.security-profiles-operator.x-k8s.io    profilerecordings.security-profiles-operator.x-k8s.io  profiles.tuned.openshift.io
$ oc get seccompprofiles
NAME           STATUS      AGE
sleep-sh-pod   Installed   100s
$ oc get selinuxprofiles
NAME          USAGE                        STATE
errorlogger   errorlogger_mytest.process   Installed
$ oc apply -f -<<EOF
> apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
> kind: ProfileBinding
> metadata:
>   name: busybox-binding
> spec:
>   profileRef:
>     kind: SeccompProfile
>     name: sleep-sh-pod
>   image: quay.io/openshifttest/busybox:latest
> EOF
profilebinding.security-profiles-operator.x-k8s.io/busybox-binding created
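With the binding in place, any new pod in this namespace that uses the bound image should have the sleep-sh-pod seccomp profile injected by the SPO webhook. A minimal illustrative pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-bound                # hypothetical pod
spec:
  containers:
  - name: busybox
    # Image matches the ProfileBinding above, so the webhook is expected to
    # set securityContext.seccompProfile to the sleep-sh-pod Localhost profile.
    image: quay.io/openshifttest/busybox:latest
    command: ["sh", "-c", "sleep 30"]
```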
$ cat <<EOF>el-no-policy.yaml 
> apiVersion: v1
> kind: Pod
> metadata:
>   name: el-no-policy
> spec:
>   initContainers:
>   - name: errorlogger
>     #image: "registry.access.redhat.com/ubi8/ubi-minimal:latest"
>     image: quay.io/openshifttest/busybox
>     command: ['sh', '-c', 'echo "Time: Thu May 12 22:13:13 CST 2022. Some error info." >> /var/log/test.log || /bin/true']
>     volumeMounts:
>     - name: varlog
>       mountPath: /var/log
>   containers:
>   - name: pauser
>     image: "gcr.io/google_containers/pause:latest"
>   restartPolicy: Never
>   volumes:
>   - name: varlog
>     hostPath:
>       path: /var/log
>       type: Directory
> EOF
$ oc apply -f el-no-policy.yaml 
W0530 23:05:28.832183   11828 warnings.go:70] would violate PodSecurity "restricted:latest": hostPath volumes (volume "varlog"), allowPrivilegeEscalation != false (containers "errorlogger", "pauser" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "errorlogger", "pauser" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "varlog" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "errorlogger", "pauser" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "errorlogger", "pauser" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/el-no-policy created

2. Check with the script whether the workloads comply with the restricted pod security level:
$ cat test.sh
# All workload creation is audited on the masters with the annotation below. The command checks all workloads that would violate PodSecurity.
cat > cmd.txt << EOF
grep -hir 'would violate PodSecurity' /var/log/kube-apiserver/ | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
EOF

CMD="`cat cmd.txt`"
oc new-project xxia-test

# With admin, run above cmd on all masters:
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS
do
  oc debug -n xxia-test no/$i -- chroot /host bash -c "$CMD || true"
done > all-violations.txt

cat all-violations.txt | grep -E 'namespaces/(openshift|kube|security)-' | sort | uniq > all-violations_system_components.txt
cat all-violations_system_components.txt


$ ./test.sh
Now using project "xxia-test" on server "https://api.xiyuan30-112.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

W0530 23:05:58.401108   11958 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-131-158us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
W0530 23:06:08.357311   11980 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-180-130us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
W0530 23:06:19.958558   11995 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-204-225us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
/apis/apps/v1/namespaces/openshift-cloud-credential-operator/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-identity-webhook" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-identity-webhook" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-identity-webhook" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-identity-webhook" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-cloud-credential-operator/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-identity-webhook" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-identity-webhook" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-identity-webhook" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-identity-webhook" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/deployments/cluster-logging-operator would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments/elasticsearch-operator would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/batch/v1/namespaces/openshift-marketplace/jobs would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/openshift-cloud-credential-operator/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or container "pod-identity-webhook" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-logging/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-marketplace/pods would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "registry-server" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "registry-server" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "registry-server" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "registry-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/openshift-marketplace/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-operators-redhat/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true)
[xiyuan@MiWiFi-RA69-srv upg2]$ cat all-violations
all-violations_system_components.txt  all-violations.txt                    
[xiyuan@MiWiFi-RA69-srv upg2]$ cat all-violations_system_components.txt  | grep security
/apis/apps/v1/namespaces/openshift-cloud-credential-operator/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-identity-webhook" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-identity-webhook" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-identity-webhook" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-identity-webhook" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-cloud-credential-operator/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-identity-webhook" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-identity-webhook" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-identity-webhook" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-identity-webhook" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/deployments/cluster-logging-operator would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments/elasticsearch-operator would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/batch/v1/namespaces/openshift-marketplace/jobs would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/openshift-cloud-credential-operator/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or container "pod-identity-webhook" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-logging/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-marketplace/pods would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "registry-server" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "registry-server" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "registry-server" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "registry-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/openshift-marketplace/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-operators-redhat/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true)
$ cat all-violations_system_components.txt  | grep security-profiles-operator
$

Comment 3 Jakub Hrozek 2022-06-13 20:40:18 UTC
This was built some time ago but got stuck in POST.

Comment 4 xiyuan 2022-09-29 02:23:39 UTC
Verification passed with 4.12.0-0.nightly-2022-09-26-111919 and security-profiles-operator.v0.4.3-dev
$ oc apply -f -<<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: security-profiles-operator
  labels:
    openshift.io/cluster-monitoring: "true"
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: security-profiles-operator
  namespace: security-profiles-operator
spec:
  targetNamespaces:
  - security-profiles-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: security-profiles-operator-sub
  namespace: security-profiles-operator
spec:
  channel: release-0.4
  name: security-profiles-operator
  source: qe-app-registry
  sourceNamespace: openshift-marketplace
EOF
namespace/security-profiles-operator created
operatorgroup.operators.coreos.com/security-profiles-operator created
subscription.operators.coreos.com/security-profiles-operator-sub created
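
The namespace manifest above opts out of PSa label syncing and pins all three pod-security modes to "privileged". A minimal standalone sketch of that label check (hypothetical helper, not part of the operator; the label set mirrors the manifest applied above):

```python
# Sketch: verify that a namespace's labels disable PSa label syncing and
# set enforce/audit/warn to "privileged", as in the manifest above.
REQUIRED = {
    "security.openshift.io/scc.podSecurityLabelSync": "false",
    "pod-security.kubernetes.io/enforce": "privileged",
    "pod-security.kubernetes.io/audit": "privileged",
    "pod-security.kubernetes.io/warn": "privileged",
}

def missing_labels(labels):
    """Return the required label keys that are absent or have the wrong value."""
    return [k for k, v in REQUIRED.items() if labels.get(k) != v]

labels = {
    "openshift.io/cluster-monitoring": "true",
    "security.openshift.io/scc.podSecurityLabelSync": "false",
    "pod-security.kubernetes.io/enforce": "privileged",
    "pod-security.kubernetes.io/audit": "privileged",
    "pod-security.kubernetes.io/warn": "privileged",
}
print(missing_labels(labels))  # -> []
```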

$ oc get csv
NAME                                    DISPLAY                      VERSION     REPLACES   PHASE
security-profiles-operator.v0.4.3-dev   Security Profiles Operator   0.4.3-dev              Succeeded
$ oc get pod
NAME                                                  READY   STATUS    RESTARTS   AGE
security-profiles-operator-575df5bbd4-6wqvw           1/1     Running   0          20m
security-profiles-operator-575df5bbd4-86fs2           1/1     Running   0          20m
security-profiles-operator-575df5bbd4-kvwnk           1/1     Running   0          20m
security-profiles-operator-webhook-544f4f5978-2f59w   1/1     Running   0          20m
security-profiles-operator-webhook-544f4f5978-7wwc7   1/1     Running   0          20m
security-profiles-operator-webhook-544f4f5978-9t4dl   1/1     Running   0          20m
spod-bpsw4                                            3/3     Running   0          20m
spod-q828s                                            3/3     Running   0          20m
spod-shjhx                                            3/3     Running   0          20m
spod-vbxqq                                            3/3     Running   0          20m
spod-wf4b7                                            3/3     Running   0          20m
spod-z54vg                                            3/3     Running   0          20m

$ cat test.sh 
# All workloads creation is audited on masters with below annotation. Below cmd checks all workloads that would violate PodSecurity.
cat > cmd.txt << EOF
grep -hir 'would violate PodSecurity' /var/log/kube-apiserver/ | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
EOF

CMD="`cat cmd.txt`"
oc create ns xxia-test
oc label ns xxia-test security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite

# With admin, run above cmd on all masters:
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS
do
  oc debug -n xxia-test no/$i -- chroot /host bash -c "$CMD || true"
done > all-violations.txt

cat all-violations.txt | grep -E 'namespaces/(openshift|kube|security)-' | sort | uniq > all-violations_system_components.txt
cat all-violations_system_components.txt

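The jq filter in test.sh pulls the request URI and the pod-security audit-violations annotation out of each audit event. A minimal standalone sketch of the same extraction in Python (the sample event below is hypothetical, not taken from the cluster):

```python
import json

# One audit-log event of the shape the jq filter consumes (hypothetical sample).
line = json.dumps({
    "requestURI": "/api/v1/namespaces/xxia-test/pods",
    "annotations": {
        "pod-security.kubernetes.io/audit-violations":
            'would violate PodSecurity "restricted:latest": runAsNonRoot != true',
    },
})

def violation(line):
    """Mimic the jq filter: requestURI + " " + audit-violations annotation."""
    event = json.loads(line)
    note = event.get("annotations", {}).get(
        "pod-security.kubernetes.io/audit-violations")
    if note and "would violate PodSecurity" in note:
        return event["requestURI"] + " " + note
    return None

print(violation(line))
```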
$ ./test.sh
namespace/xxia-test created
namespace/xxia-test labeled
Starting pod/ip-10-0-159-219us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Starting pod/ip-10-0-189-160us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Starting pod/ip-10-0-197-37us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
$ cat all-violations.txt
$

Comment 8 xiyuan 2022-12-06 13:39:06 UTC
Verified with 4.13.0-0.nightly-2022-12-05-155739 + security-profiles-operator-bundle-container-0.5.0-39. Verification passed.
Note: add the pod-security.kubernetes.io/enforce=privileged label to the operator namespace, and also to any namespace where you want to do profile recording for seccompprofiles/selinuxprofiles; otherwise recording will not work as expected.

##### install SPO and enable log enricher #####
$ oc apply -f -<<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-security-profiles
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: security-profiles-operator
  namespace: openshift-security-profiles
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: security-profiles-operator-sub
  namespace: openshift-security-profiles
spec:
  channel: release-alpha-rhel-8
  name: security-profiles-operator
  source: spo
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: ENABLE_LOG_ENRICHER
      value: "true"
EOF
namespace/openshift-security-profiles created
operatorgroup.operators.coreos.com/security-profiles-operator created
subscription.operators.coreos.com/security-profiles-operator-sub created
$ oc project openshift-security-profiles 
Now using project "openshift-security-profiles" on server "https://api.xiyuan06-2.qe.gcp.devcluster.openshift.com:6443".
$ oc get ip
NAME            CSV                                 APPROVAL    APPROVED
install-qk9mm   security-profiles-operator.v0.5.0   Automatic   true
$ oc get csv
NAME                                DISPLAY                      VERSION   REPLACES   PHASE
security-profiles-operator.v0.5.0   Security Profiles Operator   0.5.0                Succeeded
$ oc get pod
NAME                                                  READY   STATUS    RESTARTS   AGE
security-profiles-operator-7c468c8949-77h74           1/1     Running   0          98s
security-profiles-operator-7c468c8949-htqch           1/1     Running   0          98s
security-profiles-operator-7c468c8949-rt2rm           1/1     Running   0          98s
security-profiles-operator-webhook-5799458f98-6f4cg   1/1     Running   0          91s
security-profiles-operator-webhook-5799458f98-dvxk6   1/1     Running   0          91s
security-profiles-operator-webhook-5799458f98-wtpnk   1/1     Running   0          91s
spod-7g2xr                                            4/4     Running   0          91s
spod-9npp2                                            4/4     Running   0          91s
spod-c8wrr                                            4/4     Running   0          91s
spod-ct4nh                                            4/4     Running   0          91s
spod-g5h5d                                            4/4     Running   0          91s
spod-qzvtl                                            4/4     Running   0          91s
$ oc -n openshift-security-profiles patch spod spod --type=merge -p '{"spec":{"enableLogEnricher":true}}'
securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched

############## profile recording for selinuxprofile, working as expected
$ oc new-project mytest
Already on project "mytest" on server "https://api.xiyuan06-2.qe.gcp.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname

$  oc label ns mytest spo.x-k8s.io/enable-recording="true" 
namespace/mytest labeled
$ oc label ns mytest security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged  --overwrite=true
namespace/mytest labeled
$ oc apply -f -<<EOF
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: spo-recording
spec:
  kind: SelinuxProfile
  recorder: logs
  podSelector:
    matchLabels:
       name: hello-daemonset
EOF
profilerecording.security-profiles-operator.x-k8s.io/spo-recording created
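
The ProfileRecording above selects which pods to record via spec.podSelector.matchLabels. A sketch of the underlying equality-based label matching (hypothetical helper, assuming standard Kubernetes selector semantics: a pod matches only if every selector key/value pair appears in its labels):

```python
# Sketch of how the ProfileRecording's podSelector chooses pods to record.
def matches(match_labels, pod_labels):
    """Equality-based selector: every selector pair must be in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

selector = {"name": "hello-daemonset"}
print(matches(selector, {"name": "hello-daemonset"}))  # True  -> pod is recorded
print(matches(selector, {"name": "other-app"}))        # False -> pod is ignored
```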
$ oc create -f -<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: spo-record-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: spo-record
  namespace: mytest
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  resourceNames:
  - privileged
  verbs:
  - use
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spo-record
  namespace: mytest
subjects:
- kind: ServiceAccount
  name: spo-record-sa
roleRef:
  kind: Role
  name: spo-record
  apiGroup: rbac.authorization.k8s.io
EOF
serviceaccount/spo-record-sa created
role.rbac.authorization.k8s.io/spo-record created
rolebinding.rbac.authorization.k8s.io/spo-record created
$ oc apply -f -<<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-daemonset
spec:
  selector:
      matchLabels:
        name: hello-daemonset
  template:
    metadata:
      labels:
        name: hello-daemonset
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      serviceAccount: spo-record-sa
      initContainers:
      - name: wait
        image: quay.io/openshifttest/centos:centos7
        command: ["/bin/sh", "-c", "env"]
      containers:
      - name: hello-openshift
        image: quay.io/openshifttest/hello-openshift:multiarch
        ports:
        - containerPort: 80
      - name: hello-openshift2
        image: quay.io/openshifttest/hello-openshift:multiarch-fedora
        ports:
        - containerPort: 81
EOF
daemonset.apps/hello-daemonset created

$ oc delete daemonset hello-daemonset
daemonset.apps "hello-daemonset" deleted
$ oc get selinuxprofiles
NAME                                   USAGE                                                 STATE
spo-recording-hello-openshift-gj9h8    spo-recording-hello-openshift-gj9h8_mytest.process    Installed
spo-recording-hello-openshift-j8jb7    spo-recording-hello-openshift-j8jb7_mytest.process    Installed
spo-recording-hello-openshift-p2brb    spo-recording-hello-openshift-p2brb_mytest.process    Installed
spo-recording-hello-openshift2-gj9h8   spo-recording-hello-openshift2-gj9h8_mytest.process   Installed
spo-recording-hello-openshift2-j8jb7   spo-recording-hello-openshift2-j8jb7_mytest.process   Installed
spo-recording-hello-openshift2-p2brb   spo-recording-hello-openshift2-p2brb_mytest.process   Installed
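
The recorded profile names and USAGE strings in the table follow a visible pattern: the NAME is "<recording>-<container>-<pod-suffix>" and the USAGE is "<name>_<namespace>.process". A small sketch of that naming convention (inferred from this output, not an official API):

```python
# Sketch: reconstruct the NAME and USAGE strings for a recorded
# SelinuxProfile, following the pattern visible in the table above.
def profile_name(recording, container, suffix):
    """NAME column: <recording>-<container>-<pod-suffix>."""
    return f"{recording}-{container}-{suffix}"

def usage(name, namespace):
    """USAGE column: <profile-name>_<namespace>.process."""
    return f"{name}_{namespace}.process"

name = profile_name("spo-recording", "hello-openshift", "j8jb7")
print(name)                   # spo-recording-hello-openshift-j8jb7
print(usage(name, "mytest"))  # spo-recording-hello-openshift-j8jb7_mytest.process
```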

Comment 11 errata-xmlrpc 2023-01-18 11:36:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Security Profiles Operator release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8762

