Bug 2088202

Summary: compliance operator workloads should comply with the restricted pod security level
Product: OpenShift Container Platform
Reporter: xiyuan
Component: Compliance Operator
Assignee: Jakub Hrozek <jhrozek>
Status: CLOSED ERRATA
QA Contact: xiyuan
Severity: high
Docs Contact:
Priority: high
Version: 4.11
CC: lbragsta, mrogers, suprs, wenshen, xiyuan
Target Milestone: ---
Target Release: 4.11.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The compliance-operator used administrative permissions in namespaces that were not labeled appropriately for privileged use. Consequence: Using the compliance operator resulted in warnings about Pod Security levels being violated. Fix: Upgrade to compliance-operator 0.1.53. Result: compliance-operator 0.1.53 applies the appropriate namespace labels and permission adjustments, so results can be accessed without Pod Security violations.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-07-14 12:40:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description xiyuan 2022-05-19 02:07:13 UTC
The compliance operator should comply with the restricted pod security level.


Version:
4.11.0-0.nightly-2022-05-11-054135 + compliance-operator v0.1.51-1

How reproducible?
Always.


Steps to Reproduce:
Install the Compliance Operator from Operators -> OperatorHub, then create a ScanSettingBinding for the test.
$ cat test.sh
# All workload creation is audited on the masters with the annotation below. The command below checks for workloads that would violate PodSecurity.
cat > cmd.txt << EOF
grep -hir 'would violate PodSecurity' /var/log/kube-apiserver/ | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
EOF

CMD="`cat cmd.txt`"
oc new-project xxia-test

# With admin, run above cmd on all masters:
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS
do
  oc debug -n xxia-test no/$i -- chroot /host bash -c "$CMD || true"
done > all-violations.txt

cat all-violations.txt | grep -E 'namespaces/(openshift|kube)-' | sort | uniq > all-violations_system_components.txt
cat all-violations_system_components.txt
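
For context, each audit-log line matched by the grep above is a JSON event, and the jq filter reads two of its fields. A simplified sketch of such an event (abridged, with illustrative values, not a verbatim log line):

```json
{
  "requestURI": "/apis/apps/v1/namespaces/openshift-compliance/deployments",
  "annotations": {
    "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"restricted:latest\": ..."
  }
}
```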
 
In a 4.11.0-0.nightly-2022-05-11-054135 env, run the above script as admin:
./test.sh

Actual results:
The following PodSecurity violations were reported for the compliance-operator workloads:
/apis/apps/v1/namespaces/openshift-compliance/deployments/compliance-operator would violate PodSecurity "restricted:latest": unrestricted capabilities (container "compliance-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "compliance-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "compliance-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/deployments/rhcos4-high-master-rs would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/deployments/rhcos4-high-worker-rs would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/deployments/test-node-master-rs would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/deployments/test-node-worker-rs would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/deployments would violate PodSecurity "restricted:latest": unrestricted capabilities (container "compliance-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "compliance-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "compliance-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/deployments would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/deployments would violate PodSecurity "restricted:latest": unrestricted capabilities (containers "content-container", "profileparser", "pauser" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "content-container", "profileparser", "pauser" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "content-container", "profileparser", "pauser" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/rhcos4-high-master-rs-6cd4fb9f9c would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/rhcos4-high-master-rs-db4dd76bb would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/rhcos4-high-worker-rs-64dbf76965 would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/rhcos4-high-worker-rs-79989ddc54 would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/test-node-master-rs-79488d7dd9 would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/test-node-master-rs-85c7c6d57d would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/test-node-worker-rs-57fdf9cfd would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets/test-node-worker-rs-6d9657684f would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets would violate PodSecurity "restricted:latest": unrestricted capabilities (container "compliance-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "compliance-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "compliance-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets would violate PodSecurity "restricted:latest": unrestricted capabilities (container "result-server" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "result-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-compliance/replicasets would violate PodSecurity "restricted:latest": unrestricted capabilities (containers "content-container", "profileparser", "pauser" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "content-container", "profileparser", "pauser" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "content-container", "profileparser", "pauser" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
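
The messages above spell out exactly which securityContext fields the restricted profile requires. A minimal pod-spec sketch that would satisfy those checks (illustrative names and image, not the operator's actual manifest) looks like:

```yaml
# Illustrative securityContext satisfying the "restricted" checks quoted above;
# not the compliance-operator's actual deployment manifest.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-example            # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true                # fixes "runAsNonRoot != true"
    seccompProfile:
      type: RuntimeDefault            # fixes the seccompProfile check
  containers:
    - name: example
      image: registry.example.com/example:latest   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]               # fixes "unrestricted capabilities"
```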

Expected results:
The compliance operator should comply with the restricted pod security level.

Additional info:

Comment 2 Jakub Hrozek 2022-05-19 12:11:20 UTC
This needs to be fixed before 4.11 goes out. I had a quick chat with Sascha Gruenert, who maintains the RuntimeDefault seccomp profile, and he said it should be available on all supported OCP versions (4.6+). But we need to test the operator on them, because while the profile is available, it differs between releases.

Comment 3 Jakub Hrozek 2022-05-19 14:15:28 UTC
We can't use the seccomp profile as suggested on aos-devel without using a privileged SCC on OCP < 4.11 (only 4.11 introduces the restricted-v2 SCC), but either way we need to label the namespace, because the node scanner pods mount the host filesystem.
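
For reference, marking a namespace for privileged workloads is done with the Kubernetes Pod Security Admission labels. A sketch of what such labeling could look like (the labels the operator actually sets are in its fix, not reproduced here):

```yaml
# Sketch: Pod Security Admission labels that exempt a namespace from
# restricted enforcement (needed for scanner pods that mount the host FS).
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-compliance
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```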

Comment 4 xiyuan 2022-05-20 05:06:36 UTC
Verification passed with the pre-merge PR code and 4.11.0-0.nightly-2022-05-18-171831. The compliance operator workloads comply with the restricted pod security level with the patch https://github.com/ComplianceAsCode/compliance-operator/pull/38.

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-05-18-171831   True        False         3h37m   Cluster version is 4.11.0-0.nightly-2022-05-18-171831

# git log | head
commit d97b9268e5cd58e157cf67432fd818d7b48b5f9b
Author: Jakub Hrozek <jhrozek>
Date:   Thu May 19 14:22:00 2022 +0200

    deploy: Tighten operator securityContext
    
    - makes the operator and some workloads run as non root explicitly
    - drops all caps where appropriate

commit 8d5d821b239825cbba63fec4f80b6e52792099c6

$ make deploy-local
Creating 'openshift-compliance' namespace/project
namespace/openshift-compliance created
podman build -t quay.io/compliance-operator/compliance-operator:latest -f build/Dockerfile .
STEP 1: FROM golang:1.17 AS builder
...
namespace/openshift-compliance unchanged
deployment.apps/compliance-operator created
role.rbac.authorization.k8s.io/compliance-operator created
clusterrole.rbac.authorization.k8s.io/compliance-operator created
role.rbac.authorization.k8s.io/resultscollector created
role.rbac.authorization.k8s.io/api-resource-collector created
role.rbac.authorization.k8s.io/resultserver created
role.rbac.authorization.k8s.io/remediation-aggregator created
clusterrole.rbac.authorization.k8s.io/remediation-aggregator created
role.rbac.authorization.k8s.io/rerunner created
role.rbac.authorization.k8s.io/profileparser created
clusterrole.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/compliance-operator created
clusterrolebinding.rbac.authorization.k8s.io/compliance-operator created
rolebinding.rbac.authorization.k8s.io/resultscollector created
rolebinding.rbac.authorization.k8s.io/remediation-aggregator created
clusterrolebinding.rbac.authorization.k8s.io/remediation-aggregator created
clusterrolebinding.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/rerunner created
rolebinding.rbac.authorization.k8s.io/profileparser created
rolebinding.rbac.authorization.k8s.io/resultserver created
serviceaccount/compliance-operator created
serviceaccount/resultscollector created
serviceaccount/remediation-aggregator created
serviceaccount/rerunner created
serviceaccount/api-resource-collector created
serviceaccount/profileparser created
serviceaccount/resultserver created
clusterrolebinding.rbac.authorization.k8s.io/compliance-operator-metrics created
clusterrole.rbac.authorization.k8s.io/compliance-operator-metrics created
deployment.apps/compliance-operator triggers updated
$ oc project openshift-compliance
Now using project "openshift-compliance" on server "https://api.xiyuan20-a.0520-os9.qe.rhcloud.com:6443".

$ oc get pod
NAME                                             READY   STATUS    RESTARTS        AGE
compliance-operator-6cf967dcdb-nlrhz             1/1     Running   1 (3m10s ago)   3m56s
ocp4-openshift-compliance-pp-7c85bc4ff-dp6dl     1/1     Running   0               2m28s
rhcos4-openshift-compliance-pp-95548cb79-76wxh   1/1     Running   0               2m28s

$ oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: my-ssb-r
> profiles:
>   - name: ocp4-moderate-node
>     kind: Profile
>     apiGroup: compliance.openshift.io/v1alpha1
> settingsRef:
>   name: default
>   kind: ScanSetting
>   apiGroup: compliance.openshift.io/v1alpha1
> EOF
scansettingbinding.compliance.openshift.io/my-ssb-r created
$ oc get suite -w
NAME       PHASE       RESULT
my-ssb-r   LAUNCHING   NOT-AVAILABLE
my-ssb-r   LAUNCHING   NOT-AVAILABLE
my-ssb-r   RUNNING     NOT-AVAILABLE
my-ssb-r   RUNNING     NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   DONE          NON-COMPLIANT
my-ssb-r   DONE          NON-COMPLIANT


$ cat test.sh
cat > cmd.txt << EOF
grep -hir 'would violate PodSecurity' /var/log/kube-apiserver/ | jq -r '.requestURI + " " + .annotations."pod-security.kubernetes.io/audit-violations"'
EOF

CMD="`cat cmd.txt`"
oc new-project xxia-test

# With admin, run above cmd on all masters:
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS
do
  oc debug -n xxia-test no/$i -- chroot /host bash -c "$CMD || true"
done > all-violations.txt

cat all-violations.txt | grep -E 'namespaces/(openshift|kube|security)-' | sort | uniq > all-violations_system_components.txt
cat all-violations_system_components.txt

$ ./test.sh
Now using project "xxia-test" on server "https://api.xiyuan20-a.0520-os9.qe.rhcloud.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

W0520 12:47:51.908365    7051 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan20-a-rpntb-master-0-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
W0520 12:48:55.681531    7118 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan20-a-rpntb-master-1-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
W0520 12:49:54.470522    7169 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/xiyuan20-a-rpntb-master-2-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
/apis/apps/v1/namespaces/openshift-logging/deployments/cluster-logging-operator would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "cluster-logging-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments/elasticsearch-operator would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/batch/v1/namespaces/openshift-marketplace/jobs would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/openshift-logging/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-marketplace/pods would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "registry-server" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "registry-server" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "registry-server" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "registry-server" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/openshift-marketplace/pods would violate PodSecurity "restricted:latest": runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true)
/api/v1/namespaces/openshift-openstack-infra/pods would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPath volumes (volumes "resource-dir", "kubeconfig", "conf-dir", "nm-resolv"), privileged (containers "coredns", "coredns-monitor" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "render-config-coredns", "coredns", "coredns-monitor" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "render-config-coredns", "coredns", "coredns-monitor" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "resource-dir", "kubeconfig", "conf-dir", "nm-resolv" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "render-config-coredns", "coredns", "coredns-monitor" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "render-config-coredns", "coredns", "coredns-monitor" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/api/v1/namespaces/openshift-openstack-infra/pods would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPath volumes (volumes "resource-dir", "script-dir", "kubeconfig", "kubeconfigvarlib", "conf-dir", "chroot-host"), privileged (containers "keepalived", "keepalived-monitor" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "render-config-keepalived", "keepalived", "keepalived-monitor" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "render-config-keepalived", "keepalived", "keepalived-monitor" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "resource-dir", "script-dir", "kubeconfig", "kubeconfigvarlib", "conf-dir", "chroot-host" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "render-config-keepalived", "keepalived", "keepalived-monitor" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "render-config-keepalived", "keepalived", "keepalived-monitor" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

Comment 5 Jakub Hrozek 2022-06-03 12:40:50 UTC
This is no longer a blocker, since the enforcement was moved to 4.12.

Comment 10 xiyuan 2022-07-08 03:15:29 UTC
Verification passed with compliance-operator.v0.1.53 + payload 4.11.0-rc.1.
$ oc project openshift-compliance
Now using project "openshift-compliance" on server "https://api.xiyuan07-bug.qe.devcluster.openshift.com:6443".
$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-hksfh   compliance-operator.v0.1.53   Automatic   true
$ oc get csv
NAME                            DISPLAY                            VERSION   REPLACES   PHASE
compliance-operator.v0.1.53     Compliance Operator                0.1.53               Succeeded
elasticsearch-operator.v5.5.0   OpenShift Elasticsearch Operator   5.5.0                Succeeded
$ oc get pod
NAME                                              READY   STATUS    RESTARTS      AGE
compliance-operator-7d97d85476-92wrd              1/1     Running   1 (53m ago)   54m
ocp4-openshift-compliance-pp-55647cf5d7-c9pt6     1/1     Running   0             52m
rhcos4-openshift-compliance-pp-598cc46495-pnzjg   1/1     Running   0             52m
$ oc get ssb
NAME       AGE
my-ssb-r   36m
$ oc get suite
NAME       PHASE   RESULT
my-ssb-r   DONE    INCONSISTENT

$ ./test.sh
Now using project "xxia-test" on server "https://api.xiyuan07-bug.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname

Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-137-93us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-185-124us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-221-7us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
/apis/apps/v1/namespaces/openshift-logging/deployments/cluster-logging-operator would violate PodSecurity "restricted:latest": unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/deployments would violate PodSecurity "restricted:latest": unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-logging/replicasets would violate PodSecurity "restricted:latest": unrestricted capabilities (container "cluster-logging-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "cluster-logging-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-logging-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments/elasticsearch-operator would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-operators-redhat/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "kube-rbac-proxy", "elasticsearch-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

Comment 12 errata-xmlrpc 2022-07-14 12:40:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5537