Bug 2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
Summary: containers prometheus-operator/kube-rbac-proxy violate PodSecurity
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.11.0
Assignee: Joao Marcal
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-04-27 10:41 UTC by Junqi Zhao
Modified: 2022-08-10 11:08 UTC (History)
4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-10 11:08:39 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 1660 0 None open Bug 2079292: Adds PodsSecurity labels to UWM namespace 2022-05-05 16:42:05 UTC
Red Hat Product Errata RHSA-2022:5069 0 None None None 2022-08-10 11:08:50 UTC

Description Junqi Zhao 2022-04-27 10:41:06 UTC
Description of problem:
from https://kubernetes.io/docs/concepts/security/pod-security-admission/
In v1.23, the PodSecurity feature gate is a Beta feature and is enabled by default.

Checked the cluster; PodSecurity is enabled by default:
# oc -n openshift-kube-apiserver exec -c kube-apiserver kube-apiserver-ip-10-0-131-61.us-east-2.compute.internal -- kube-apiserver -h | grep "default enabled ones"
      --enable-admission-plugins strings       admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
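The PodSecurity admission plugin evaluates pods against the Pod Security Standards profile selected by labels on each namespace. A generic sketch of those labels (the namespace name and values here are illustrative, not taken from this cluster):

```yaml
# Illustrative namespace using PodSecurity admission labels.
# enforce: violating pods are rejected
# audit:   violations are recorded in the audit log
# warn:    violations produce client-facing warnings (as in the logs below)
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```

The "would violate PodSecurity \"restricted:latest\"" messages below are warn-mode output: the pods still run, but each admission request logs what the "restricted" profile would reject.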

# oc -n openshift-monitoring get pod | grep -E "cluster-monitoring-operator|prometheus-operator"
cluster-monitoring-operator-674b449d86-68kwg   2/2     Running   0          6h30m
prometheus-operator-6fc8d765fd-wwl87           2/2     Running   0          6h28m

The prometheus-operator and kube-rbac-proxy containers violate PodSecurity:
# oc -n openshift-monitoring logs -c cluster-monitoring-operator cluster-monitoring-operator-674b449d86-68kwg | grep "allowPrivilegeEscalation != false" | tail
W0427 09:52:28.128970       1 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "prometheus-operator", "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "prometheus-operator", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "prometheus-operator", "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "prometheus-operator", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
... (the same warning repeats with identical container names; only the timestamps differ)
# oc -n openshift-monitoring logs -c cluster-monitoring-operator cluster-monitoring-operator-674b449d86-68kwg | grep "allowPrivilegeEscalation != false" | wc -l
98

securityContext is empty ({}) in the prometheus-operator deployment:
# oc -n openshift-monitoring get deploy prometheus-operator -oyaml
...
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 5m
            memory: 150Mi
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /etc/tls/private
          name: prometheus-operator-tls
      - args:
        - --logtostderr
        - --secure-listen-address=:8443
        - --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
        - --upstream=https://prometheus-operator.openshift-monitoring.svc:8080/
        - --tls-cert-file=/etc/tls/private/tls.crt
        - --tls-private-key-file=/etc/tls/private/tls.key
        - --client-ca-file=/etc/tls/client/client-ca.crt
        - --upstream-ca-file=/etc/configmaps/operator-cert-ca-bundle/service-ca.crt
        - --config-file=/etc/kube-rbac-policy/config.yaml
        - --tls-min-version=VersionTLS12
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:352b9150d76987c29c7ec9a634b848553c363de7b6ab7b5587b3e93aeed858cd
        imagePullPolicy: IfNotPresent
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        resources:
          requests:
            cpu: 1m
            memory: 15Mi
        securityContext: {}
...
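The empty securityContext above is what trips every check quoted in the warnings. A container securityContext satisfying the "restricted" profile would look roughly like this (a sketch of what the quoted checks demand, not necessarily the exact change that was shipped):

```yaml
# Sketch of a securityContext meeting the four "restricted" checks
# named in the warnings: no privilege escalation, all capabilities
# dropped, non-root, and a RuntimeDefault seccomp profile.
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
```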



Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-04-26-181148
# oc -n openshift-monitoring exec -c prometheus-operator prometheus-operator-6fc8d765fd-wwl87 -- operator --version
prometheus-operator, version 0.55.1 (branch: rhaos-4.11-rhel-8, revision: c694d68)
  build user:       root
  build date:       20220422-10:46:29
  go version:       go1.17.5
  platform:         linux/amd64

How reproducible:
always

Steps to Reproduce:
1. See the description.

Actual results:
Warnings that the prometheus-operator/kube-rbac-proxy containers violate PodSecurity.

Expected results:
No warnings.

Additional info:

Comment 1 Joao Marcal 2022-05-05 16:47:09 UTC
After investigating the issue I discovered that the openshift-monitoring namespace already has the labels needed to suppress these warnings. The warnings are in fact generated only when UWM is enabled, because the openshift-user-workload-monitoring namespace doesn't have any PodSecurity labels. The warnings started appearing because in [1] the Auth Team set a default that (IIUC) now applies to all namespaces.

[1] https://github.com/openshift/cluster-kube-apiserver-operator/pull/1308
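Per the linked PR title ("Adds PodsSecurity labels to UWM namespace"), the fix labels the openshift-user-workload-monitoring namespace. The exact labels are in the PR; the sketch below only illustrates the shape of such a change, and the label values shown are assumptions:

```yaml
# Hypothetical sketch only; see openshift/cluster-monitoring-operator
# PR 1660 for the actual labels applied to the UWM namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-user-workload-monitoring
  labels:
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```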

Comment 2 Junqi Zhao 2022-05-09 06:59:57 UTC
After enabling UWM, the same warnings appear in prometheus-operator:
# oc -n openshift-user-workload-monitoring get pod
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-5f8bf6594d-pl88l   2/2     Running   0          4h33m
prometheus-user-workload-0             6/6     Running   0          4h31m
prometheus-user-workload-1             6/6     Running   0          4h31m
thanos-ruler-user-workload-0           3/3     Running   0          4h31m
thanos-ruler-user-workload-1           3/3     Running   0          4h31m


# oc -n openshift-user-workload-monitoring logs -c prometheus-operator prometheus-operator-5f8bf6594d-pl88l | grep allowPrivilegeEscalation
level=warn ts=2022-05-09T02:21:18.788365775Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"thanos-ruler-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"thanos-ruler-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"thanos-ruler\", \"config-reloader\", \"thanos-ruler-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
level=warn ts=2022-05-09T02:21:18.856629872Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"thanos-ruler-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"thanos-ruler-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"thanos-ruler\", \"config-reloader\", \"thanos-ruler-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
level=warn ts=2022-05-09T02:21:18.898681075Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"init-config-reloader\", \"prometheus\", \"config-reloader\", \"thanos-sidecar\", \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
level=warn ts=2022-05-09T02:21:19.179255308Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"init-config-reloader\", \"prometheus\", \"config-reloader\", \"thanos-sidecar\", \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"

prometheus-operator-5f8bf6594d-pl88l containers: [prometheus-operator, kube-rbac-proxy]
prometheus-user-workload containers: [prometheus, config-reloader, thanos-sidecar, kube-rbac-proxy-federate, kube-rbac-proxy-metrics, kube-rbac-proxy-thanos], initContainers[init-config-reloader]
thanos-ruler-user-workload containers: [thanos-ruler, config-reloader, thanos-ruler-proxy]

Comment 7 Junqi Zhao 2022-06-01 07:29:33 UTC
Verified with 4.11.0-0.nightly-2022-05-31-155315.
UWM not enabled:
# oc -n openshift-monitoring get pod | grep -E "cluster-monitoring-operator|prometheus-operator"
cluster-monitoring-operator-64756d5597-z6d5v             2/2     Running   0          61m
prometheus-operator-6669c6f67d-dpnfg                     2/2     Running   0          47m
prometheus-operator-admission-webhook-5ff78b9574-jrhr4   1/1     Running   0          59m
prometheus-operator-admission-webhook-5ff78b9574-klq2n   1/1     Running   0          59m

# oc -n openshift-monitoring logs -c cluster-monitoring-operator cluster-monitoring-operator-64756d5597-z6d5v | grep "allowPrivilegeEscalation != false" | tail
no result



UWM enabled:
$ oc -n openshift-user-workload-monitoring get pod
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-56cf84dbf8-vb4r9   2/2     Running   0          11m
prometheus-user-workload-0             6/6     Running   0          11m
prometheus-user-workload-1             6/6     Running   0          11m
thanos-ruler-user-workload-0           3/3     Running   0          11m
thanos-ruler-user-workload-1           3/3     Running   0          11m


$ oc -n openshift-user-workload-monitoring logs -c prometheus-operator prometheus-operator-56cf84dbf8-vb4r9 | grep "allowPrivilegeEscalation != false" | tail
no result

Comment 13 errata-xmlrpc 2022-08-10 11:08:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

