Bug 2086519

Summary: workloads must comply with the restricted security policy
Product: OpenShift Container Platform
Reporter: Sergiusz Urbaniak <surbania>
Component: apiserver-auth
Assignee: Standa Laznicka <slaznick>
Status: CLOSED ERRATA
QA Contact: Yash Tripathi <ytripath>
Severity: urgent
Priority: urgent
Version: 4.11
CC: mfojtik, surbania
Target Release: 4.11.0
Last Closed: 2022-08-10 11:12:07 UTC
Type: Bug

Description Sergiusz Urbaniak 2022-05-16 11:24:40 UTC
Starting with OpenShift 4.11, pod security admission is activated. In OpenShift, the default pod security admission level is going to be restricted.

Currently, some workloads do not yet comply with this policy. This bug tracks the changes made by the Auth team.
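For context, pod security admission levels are controlled per namespace via labels. A minimal sketch of opting a namespace into the restricted level explicitly (the namespace name "my-app" is illustrative):

$ oc label namespace my-app \
    pod-security.kubernetes.io/enforce=restricted \
    pod-security.kubernetes.io/audit=restricted \
    pod-security.kubernetes.io/warn=restricted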

Comment 13 Yash Tripathi 2022-06-24 14:03:33 UTC
Verified in 4.11.0-0.nightly-2022-06-23-153912

Testing https://github.com/openshift/apiserver-library-go/pull/85

1. $ oc create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - image: quay.io/openshifttest/hello-openshift:openshift
    name: node-hello
    securityContext:
      runAsUser: 100
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
      allowPrivilegeEscalation: false
EOF

2. $ oc get pod/testpod -o yaml
...
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 100
      seccompProfile:
        type: RuntimeDefault
...
Expected: runAsNonRoot: true (defaulted by admission even though the pod manifest did not set it)
Actual: runAsNonRoot: true
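
A quicker spot check of just that field (same pod as above):

$ oc get pod testpod -o jsonpath='{.spec.containers[0].securityContext.runAsNonRoot}'
true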

Verified
-----------
Testing https://github.com/openshift/cluster-kube-apiserver-operator/pull/1358

$ oc extract cm/config -n openshift-kube-apiserver --confirm --to=- | jq '.admission.pluginConfig.PodSecurity.configuration.exemptions'
Output:
{
  "usernames": [
    "system:serviceaccount:openshift-infra:build-controller"
  ]
}

Expected: system:serviceaccount:openshift-infra:build-controller
(per https://github.com/stlaz/cluster-kube-apiserver-operator/blob/aab0cba685e69889087e776e417a96137ab7cef5/bindata/assets/config/defaultconfig.yaml#L31)
Actual: system:serviceaccount:openshift-infra:build-controller
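
The enforced defaults can be confirmed the same way (same extraction as above, different jq path):

$ oc extract cm/config -n openshift-kube-apiserver --confirm --to=- | jq '.admission.pluginConfig.PodSecurity.configuration.defaults'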

Verified
-----------
Testing https://github.com/openshift/oc/pull/1155

1. $ oc edit kubeapiserver cluster
Update spec.unsupportedConfigOverrides to:
...
  unsupportedConfigOverrides:
    admission:
      pluginConfig:
        PodSecurity:
          configuration:
            apiVersion: pod-security.admission.config.k8s.io/v1beta1
            defaults:
              audit: restricted
              audit-version: latest
              enforce: restricted
              enforce-version: latest
              warn: restricted
              warn-version: latest
            exemptions:
              usernames:
              - system:serviceaccount:openshift-infra:build-controller
            kind: PodSecurityConfiguration
...
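
The same override can also be applied non-interactively; a sketch using a merge patch carrying the same content as the edit above:

$ oc patch kubeapiserver cluster --type=merge -p '{"spec":{"unsupportedConfigOverrides":{"admission":{"pluginConfig":{"PodSecurity":{"configuration":{"apiVersion":"pod-security.admission.config.k8s.io/v1beta1","kind":"PodSecurityConfiguration","defaults":{"audit":"restricted","audit-version":"latest","enforce":"restricted","enforce-version":"latest","warn":"restricted","warn-version":"latest"},"exemptions":{"usernames":["system:serviceaccount:openshift-infra:build-controller"]}}}}}}}}'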

2. Wait for all kube-apiserver (OKAS) pods to roll over to the new revision (check the REVISION column):
$ oc get po -n openshift-kube-apiserver -L revision -l apiserver -w
NAME                                                         READY   STATUS    RESTARTS   AGE     REVISION
kube-apiserver-ip-10-0-128-204.ap-south-1.compute.internal   5/5     Running   0          6m      14
kube-apiserver-ip-10-0-184-69.ap-south-1.compute.internal    5/5     Running   0          9m48s   14
kube-apiserver-ip-10-0-205-172.ap-south-1.compute.internal   5/5     Running   0          2m2s    14
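
The per-node revision can also be read from the operator status (a sketch):

$ oc get kubeapiserver cluster -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{" "}{.currentRevision}{"\n"}{end}'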

3. $ oc debug node/ip-10-0-149-137.ap-south-1.compute.internal
error: PodSecurity violation error:
Ensure the target namespace has the appropriate security level set or consider creating a dedicated privileged namespace using:
        "oc create ns <namespace> -o yaml | oc label -f - pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged".

Original error:
pods "ip-10-0-149-137ap-south-1computeinternal-debug" is forbidden: violates PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

The actual error matches the format string in the oc source:

	err = fmt.Errorf("PodSecurity violation error:\n"+
		"Ensure the target namespace has the appropriate security level set "+
		"or consider creating a dedicated privileged namespace using:\n"+
		"\t\"oc create ns <namespace> -o yaml | oc label -f - pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged\".\n\nOriginal error:\n%w", err)

oc adm must-gather has an issue; a non-blocker bug will be filed for it later.

Verified

Comment 14 errata-xmlrpc 2022-08-10 11:12:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069