Bug 2079805 - Secondary scheduler operator should comply to restricted pod security level
Summary: Secondary scheduler operator should comply to restricted pod security level
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.11.0
Assignee: Jan Chaloupka
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-04-28 09:51 UTC by RamaKasturi
Modified: 2022-08-10 11:09 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-10 11:09:16 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-kube-descheduler-operator pull 257 0 None Merged bug 2079805: manifests/deployment: comply to restricted pod security level 2022-06-02 06:56:11 UTC
Github openshift cluster-kube-descheduler-operator pull 258 0 None Merged bug 2079805: Update the list of descheduler configuration parameters 2022-06-02 06:56:10 UTC
Github openshift cluster-kube-descheduler-operator pull 262 0 None Merged bug 2079805: [operator] manifests/deployment: comply to restricted pod security level 2022-06-08 11:33:53 UTC
Github openshift secondary-scheduler-operator pull 46 0 None Merged bug 2079805: manifests/deployment: comply to restricted pod security level 2022-07-06 11:21:12 UTC
Red Hat Product Errata RHSA-2022:5069 0 None None None 2022-08-10 11:09:32 UTC

Description RamaKasturi 2022-04-28 09:51:42 UTC
Description of problem:
Starting with OpenShift 4.11, pod security admission is activated. In OpenShift the default pod security admission level is going to be restricted, so the Secondary Scheduler Operator (SSO) must comply with the restricted pod security level.
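For context, pod security admission is driven by labels on the namespace. A minimal sketch of what "restricted" enforcement looks like (illustrative only; the actual labels on the operator namespace are set by the platform and may differ):

```yaml
# Hypothetical namespace manifest showing pod security admission labels.
# With enforce=restricted, pods that violate the restricted profile are rejected;
# warn/audit only report violations without blocking.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-secondary-scheduler-operator
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```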

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
For more information please refer to PR https://github.com/openshift/cluster-kube-scheduler-operator/pull/421

Comment 3 RamaKasturi 2022-06-15 13:18:16 UTC
Having issues building the image; anli is currently looking into this, and once the issue is resolved I will test the bug.

Comment 4 RamaKasturi 2022-06-16 17:15:30 UTC
The operator deployment still shows pod security violations, so I am moving the bug back to the assigned state.

/apis/apps/v1/namespaces/openshift-secondary-scheduler-operator/deployments would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "secondary-scheduler" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "secondary-scheduler" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "secondary-scheduler" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "secondary-scheduler" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/apps/v1/namespaces/openshift-secondary-scheduler-operator/replicasets would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "secondary-scheduler" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "secondary-scheduler" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "secondary-scheduler" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "secondary-scheduler" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
/apis/batch/v1/namespaces/openshift-marketplace/jobs would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "util", "pull", "extract" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "util", "pull", "extract" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "util", "pull", "extract" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "util", "pull", "extract" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
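The warnings above map directly onto `securityContext` fields in the deployment manifest. A sketch of a compliant container spec that would satisfy each reported check (the manifests in the linked PRs are authoritative; container name and image here are placeholders):

```yaml
# Illustrative container spec satisfying the "restricted" pod security profile.
spec:
  containers:
    - name: secondary-scheduler
      image: example-image   # placeholder
      securityContext:
        allowPrivilegeEscalation: false   # fixes: allowPrivilegeEscalation != false
        capabilities:
          drop:
            - ALL                         # fixes: unrestricted capabilities
        runAsNonRoot: true                # fixes: runAsNonRoot != true
        seccompProfile:
          type: RuntimeDefault            # fixes: seccompProfile check
```

Note that `runAsNonRoot` and `seccompProfile` may alternatively be set at the pod level, as the violation messages themselves indicate ("pod or container ... must set ...").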

[knarra@knarra openshift-tests-private]$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-15-222801   True        False         4h54m   Cluster version is 4.11.0-0.nightly-2022-06-15-222801

[knarra@knarra openshift-tests-private]$ oc get csv -n openshift-secondary-scheduler-operator
NAME                                DISPLAY                                              VERSION   REPLACES   PHASE
secondaryscheduleroperator.v1.1.0   Secondary Scheduler Operator for Red Hat OpenShift   1.1.0                Succeeded

Comment 5 Jan Chaloupka 2022-06-17 07:41:23 UTC
Would you please share the must-gather tarball?

Comment 6 RamaKasturi 2022-06-17 09:15:04 UTC
Hello Jan,

   The link below contains the must-gather; let me know if anything is missing and I can provide the cluster to you. Thanks!!

http://virt-openshift-05.lab.eng.nay.redhat.com/knarra/2079805/

Thanks
kasturi

Comment 9 RamaKasturi 2022-07-08 11:24:06 UTC
Verified with the latest operator; I do not see any pod security violations, so I am moving the bug to the verified state.

[knarra@knarra ~]$ ./test.sh 
Already on project "xxia-test" on server "https://api.knarra0708.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname

Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-136-51us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-180-31us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Warning: would violate PodSecurity "restricted:v1.24": host namespaces (hostNetwork=true, hostPID=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/ip-10-0-223-38us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...


[knarra@knarra ~]$ oc get csv -n openshift-secondary-scheduler-operator
NAME                                DISPLAY                                              VERSION   REPLACES   PHASE
secondaryscheduleroperator.v1.1.0   Secondary Scheduler Operator for Red Hat OpenShift   1.1.0                Succeeded

[knarra@knarra ~]$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-rc.1   True        False         3h30m   Cluster version is 4.11.0-rc.1

Comment 11 errata-xmlrpc 2022-08-10 11:09:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

