Description of problem:
With a custom project, `oc run` hits an error:

$ oc run test --image=test -- sleep 300
Error from server (Forbidden): pods "test" is forbidden: violates PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

Version-Release number of selected component (if applicable):
$ oc version
Client Version: 4.12.0-0.nightly-2022-08-12-053438
Kustomize Version: v4.5.4
Server Version: 4.12.0-0.nightly-2022-08-15-150248
Kubernetes Version: v1.24.0+da80cd0

How reproducible:
Always

Steps to Reproduce:
1. Create a project: `oc new-project test`
2. Run the `oc run` command: `oc run test --image=test -- sleep 300`

Actual results:
2. The command hits an error:
$ oc run test --image=test -- sleep 300
Error from server (Forbidden): pods "test" is forbidden: violates PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

Expected results:
2. No error.

Additional info:
Cannot reproduce in projects with an `openshift-` prefix.
This is expected after https://github.com/openshift/cluster-policy-controller/pull/84. However, from a customer's point of view, commands like `oc run` now appear broken, so this should be treated as a valid bug and fixed.
oc run invokes kubectl run under the hood: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/generate/versioned/run.go#L325-L347
The workaround is to specify the securityContext explicitly, so we can remove the 'AutomationBlocker' keyword.

# oc run test --image=quay.io/openshifttest/busybox@sha256:c5439d7db88ab5423999530349d327b04279ad3161d7596d2126dfb5b02bfd1f --overrides='{"spec":{"securityContext":{"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}}}' -- sleep 300
pod/test created
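For reference, a pod spec that explicitly sets all four fields named in the error above could look like the following sketch (image and names are taken from the workaround command above; this only illustrates the "specify securityContext explicitly" approach, it is not an officially recommended manifest):

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  # pod-level settings cover the runAsNonRoot and seccompProfile checks
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test
    image: quay.io/openshifttest/busybox@sha256:c5439d7db88ab5423999530349d327b04279ad3161d7596d2126dfb5b02bfd1f
    command: ["sleep", "300"]
    # container-level settings cover the allowPrivilegeEscalation and capabilities checks
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]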
After talking to Stanislav Laznicka (auth team lead): pod security admission enforcement is namespace-driven; the admission is enforced on pods based on their namespace labels, or on the global configuration if those are not set. The recommended solution is to label the namespace properly. Specifying the security context explicitly in the spec file is acceptable but not recommended. More information can be found at https://docs.openshift.com/container-platform/4.11/authentication/understanding-and-managing-pod-security-admission.html.
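To illustrate the namespace-labeling approach, something along these lines could be used (namespace name and level are only examples; per the doc linked above, the OpenShift label syncer may manage these labels, so label syncing may need to be disabled for the namespace first, and whether relaxing the enforcement level is appropriate depends on the workload):

$ oc label namespace test security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label namespace test pod-security.kubernetes.io/enforce=privileged --overwrite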
$ oc run testpod --image openshift/hello-openshift
Error from server (Forbidden): pods "testpod" is forbidden: violates PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "testpod" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "testpod" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "testpod" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "testpod" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

$ oc create deployment testdeployment --image openshift/hello-openshift
deployment.apps/testdeployment created
[xxia@2022-09-22 22:15:14 CST my]$ oc get po
NAME                              READY   STATUS    RESTARTS   AGE
testdeployment-5f64bddbb6-d5x9z   1/1     Running   0          11s

Hi, `oc create deployment` automatically ends up with a Running pod; why can't `oc run` do the same?
Found that if a normal user runs `oc run testpod --image openshift/hello-openshift`, there is no issue:

$ oc login -u ... -p ...
Login successful.
...
$ oc new-project xxia
Now using project "xxia" on server ...
$ oc run testpod --image openshift/hello-openshift
pod/testpod created

$ oc run testpod-2 --image openshift/hello-openshift -n xxia --context admin
Error from server (Forbidden): pods "testpod-2" is forbidden: violates PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "testpod-2" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "testpod-2" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "testpod-2" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "testpod-2" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

A cluster admin can't do what a normal user can; this is even more weird.
It looks like the request goes through the SCC admission plugin first and then through the PSa plugin. When a cluster admin runs `oc run`, the pod matches the "anyuid" SCC, which does not mutate the pod's "securityContext", while a normal user's pod matches "restricted-v2", which mutates the pod's "securityContext" so that it passes PSa.
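One way to check which SCC actually admitted a pod is the openshift.io/scc annotation that the SCC admission plugin sets on admitted pods (pod name and output below are only illustrative):

$ oc get pod testpod -o yaml | grep 'openshift.io/scc'
    openshift.io/scc: restricted-v2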
(In reply to Xingxing Xia from comment #5)
> $ oc run testpod --image openshift/hello-openshift
> Error from server (Forbidden): pods "testpod" is forbidden: violates
> PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container
> "testpod" must set securityContext.allowPrivilegeEscalation=false),
> unrestricted capabilities (container "testpod" must set
> securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or
> container "testpod" must set securityContext.runAsNonRoot=true),
> seccompProfile (pod or container "testpod" must set
> securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
>
> $ oc create deployment testdeployment --image openshift/hello-openshift
> deployment.apps/testdeployment created
> [xxia@2022-09-22 22:15:14 CST my]$ oc get po
> NAME READY STATUS RESTARTS AGE
> testdeployment-5f64bddbb6-d5x9z 1/1 Running 0 11s
>
> Hi, oc create deployment can automatically have Running pod why not oc run
> can as well?

The create command works because there is a label syncer working behind the scenes. However, the run command tries to run a pod directly, and that is why it gets an error. I think this is not a bug.
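For reference, the pod-security labels that the syncer manages can be inspected on the namespace; the namespace name and label values below are only illustrative, since the actual levels depend on the SCC access of the service accounts in the namespace:

$ oc get namespace xxia --show-labels
NAME   STATUS   AGE   LABELS
xxia   Active   5m    ...,pod-security.kubernetes.io/audit=restricted,pod-security.kubernetes.io/warn=restricted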
(In reply to Xingxing Xia from comment #6)
> Found if normal user runs `oc run testpod --image
> openshift/hello-openshift`, no issue:
> $ oc login -u ... -p ...
> Login successful.
> ...
> $ oc new-project xxia
> Now using project "xxia" on server ...
> $ oc run testpod --image openshift/hello-openshift
> pod/testpod created
> $ oc run testpod-2 --image openshift/hello-openshift -n xxia --context admin
> Error from server (Forbidden): pods "testpod-2" is forbidden: violates
> PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container
> "testpod-2" must set securityContext.allowPrivilegeEscalation=false),
> unrestricted capabilities (container "testpod-2" must set
> securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or
> container "testpod-2" must set securityContext.runAsNonRoot=true),
> seccompProfile (pod or container "testpod-2" must set
> securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
>
> Cluster admin can't do what normal user can, this is more weird.

I'm not sure whether this is expected or not. I'd like to hear opinions from the auth team, and if it is a bug, I can gladly work on it. I'm assigning this to the auth team to get feedback. Feel free to reassign it to me if there is a bug.
This is expected behavior. A normal user can run pods because their pods' security context is defaulted by the "restricted-v2" SCC. Since the privileged user has broader permissions, more permissive SCCs apply, and so they are unable to run their pod in the restricted namespace.
> Since the privileged user has broader permissions, more permissive SCCs apply, and so they are unable to run their pod in the restricted namespace.

But why can the admin user create the pod via the deployment?

In a cluster with PSA enforcement enabled:
"defaults": {
  "audit": "restricted",
  "audit-version": "latest",
  "enforce": "restricted",
  "enforce-version": "latest",
  "warn": "restricted",
  "warn-version": "latest"
},

For the admin user:
MacBook-Pro:~ jianzhang$ oc whoami
system:admin

1. The pod fails to create because it violates PodSecurity:
MacBook-Pro:~ jianzhang$ oc create -f pod.yaml
Error from server (Forbidden): error when creating "test.yaml": pods "myapp-v16-1" is forbidden: violates PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "myapp-v16-1" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "myapp-v16-1" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "myapp-v16-1" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "myapp-v16-1" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

MacBook-Pro:~ jianzhang$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-v16-1
spec:
  containers:
  - name: myapp-v16-1
    image: quay.io/olmqe/myapp:v1.16-1

2. But after using a Deployment object to create the same pod, the pod is created fine, and the securityContext-related fields are set automatically:
MacBook-Pro:~ jianzhang$ oc create -f deploy.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "myapp-v16-1" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "myapp-v16-1" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "myapp-v16-1" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "myapp-v16-1" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/myapp created

MacBook-Pro:~ jianzhang$ cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-v16-1
        image: quay.io/olmqe/myapp:v1.16-1

MacBook-Pro:~ jianzhang$ oc get pods
NAME                    READY   STATUS    RESTARTS   AGE
myapp-746ccc976-ql8nq   1/1     Running   0          10m

MacBook-Pro:~ jianzhang$ oc get pod myapp-746ccc976-ql8nq -o yaml
...
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault

One more question: for the normal user, the pod can be created without any warning, but why do we get warnings when creating the deployment? Thanks!

MacBook-Pro:~ jianzhang$ oc whoami
testuser-0
MacBook-Pro:~ jianzhang$ oc create -f pod.yaml
pod/myapp-v16-1 created
MacBook-Pro:~ jianzhang$ oc get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-v16-1   1/1     Running   0          3s
MacBook-Pro:~ jianzhang$ oc get pods myapp-v16-1 -o yaml
apiVersion: v1
kind: Pod
...
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault

MacBook-Pro:~ jianzhang$ oc create -f deploy.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "myapp-v16-1" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "myapp-v16-1" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "myapp-v16-1" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "myapp-v16-1" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/myapp created
It's because your user is more privileged and can access the "anyuid" SCC, which matches your pod but does not default the fields to the PSa restricted level. That's different for the service account of the deployment, because it cannot use those SCCs.
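For reference, the scc-subject-review subcommand can report which SCC would admit a given pod spec for a particular user or service account; the user, service account, file, and output below are only illustrative, and the actual result depends on the cluster's RBAC:

$ oc policy scc-subject-review -u testuser-0 -f pod.yaml
RESOURCE          ALLOWED BY
Pod/myapp-v16-1   restricted-v2
$ oc policy scc-subject-review -z default -f pod.yaml
RESOURCE          ALLOWED BY
Pod/myapp-v16-1   restricted-v2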