Description of problem:
Starting from OpenShift 4.11, pod security admission is being activated. In OpenShift the default pod security admission level is going to be "restricted", so SSO should comply with the restricted pod security level.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
For more information please refer to PR https://github.com/openshift/cluster-kube-scheduler-operator/pull/421
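For context, pod security admission is enforced per namespace through labels; a minimal sketch of what the restricted profile looks like on a namespace (the namespace name here is illustrative, not taken from this bug):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: example-namespace        # hypothetical name, for illustration only
      labels:
        pod-security.kubernetes.io/enforce: restricted   # reject pods that violate the profile
        pod-security.kubernetes.io/audit: restricted     # record violations in the audit log
        pod-security.kubernetes.io/warn: restricted      # return "would violate" warnings to clients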
Having issues building the image, anli is currently looking into this and once issue is resolved i will test the bug.
Operator deployment still shows pod security violations, so moving the bug back to ASSIGNED state.

/apis/apps/v1/namespaces/openshift-secondary-scheduler-operator/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "secondary-scheduler" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "secondary-scheduler" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "secondary-scheduler" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "secondary-scheduler" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

/apis/apps/v1/namespaces/openshift-secondary-scheduler-operator/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "secondary-scheduler" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "secondary-scheduler" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "secondary-scheduler" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "secondary-scheduler" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

/apis/batch/v1/namespaces/openshift-marketplace/jobs would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

[knarra@knarra openshift-tests-private]$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-15-222801   True        False         4h54m   Cluster version is 4.11.0-0.nightly-2022-06-15-222801

[knarra@knarra openshift-tests-private]$ oc get csv -n openshift-secondary-scheduler-operator
NAME                                DISPLAY                                              VERSION   REPLACES   PHASE
secondaryscheduleroperator.v1.1.0   Secondary Scheduler Operator for Red Hat OpenShift   1.1.0                Succeeded
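Each violation above names the exact field the restricted profile requires, so the needed change follows directly from the messages. A minimal sketch of a pod template that would clear all four checks for the "secondary-scheduler" container (the surrounding Deployment spec is elided; this is derived from the admission output above, not from the operator's actual manifest):

    # Sketch only: fields taken from the PodSecurity messages above.
    spec:
      securityContext:
        runAsNonRoot: true                # addresses "runAsNonRoot != true" (pod-level is accepted)
        seccompProfile:
          type: RuntimeDefault            # addresses "seccompProfile"
      containers:
      - name: secondary-scheduler
        securityContext:
          allowPrivilegeEscalation: false # addresses "allowPrivilegeEscalation != false"
          capabilities:
            drop: ["ALL"]                 # addresses "unrestricted capabilities"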
Would you please share the must-gather tarball?
Hello Jan,

The link below contains the must-gather; let me know if you do not find something there and I can provide the cluster to you.

http://virt-openshift-05.lab.eng.nay.redhat.com/knarra/2079805/

Thanks,
kasturi
Verified with the latest operator and I do not see any pod security violations from the operator workloads, so moving the bug to VERIFIED state. (The "would violate" warnings below are emitted for the debug pods the test script starts, which use host namespaces and run privileged by design; they do not come from the operator.)

[knarra@knarra ~]$ ./test.sh
Already on project "xxia-test" on server "https://api.knarra0708.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname

Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-136-51us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...

Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-180-31us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...

Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-223-38us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
[knarra@knarra ~]$ oc get csv -n openshift-secondary-scheduler-operator
NAME                                DISPLAY                                              VERSION   REPLACES   PHASE
secondaryscheduleroperator.v1.1.0   Secondary Scheduler Operator for Red Hat OpenShift   1.1.0                Succeeded

[knarra@knarra ~]$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-rc.1   True        False         3h30m   Cluster version is 4.11.0-rc.1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069