Bug 2079292
| Summary: | containers prometheus-operator/kube-rbac-proxy violate PodSecurity | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Junqi Zhao <juzhao> |
| Component: | Monitoring | Assignee: | Joao Marcal <jmarcal> |
| Status: | CLOSED ERRATA | QA Contact: | Junqi Zhao <juzhao> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.11 | CC: | amuller, anpicker, obochan, spasquie |
| Target Milestone: | --- | | |
| Target Release: | 4.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-10 11:08:39 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Junqi Zhao, 2022-04-27 10:41:06 UTC)
After investigating the issue I discovered that the openshift-monitoring namespace has the necessary labels for us not to see these warnings; they are in fact generated only when UWM is enabled, because the openshift-user-workload-monitoring namespace does not carry any PodSecurity labels. The warnings started showing up because in [1] the Auth Team set a default that now (IIUC) applies to all namespaces.

[1] https://github.com/openshift/cluster-kube-apiserver-operator/pull/1308

After enabling UWM, the same warnings appear in prometheus-operator:

```
# oc -n openshift-user-workload-monitoring get pod
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-5f8bf6594d-pl88l   2/2     Running   0          4h33m
prometheus-user-workload-0             6/6     Running   0          4h31m
prometheus-user-workload-1             6/6     Running   0          4h31m
thanos-ruler-user-workload-0           3/3     Running   0          4h31m
thanos-ruler-user-workload-1           3/3     Running   0          4h31m

# oc -n openshift-user-workload-monitoring logs -c prometheus-operator prometheus-operator-5f8bf6594d-pl88l | grep allowPrivilegeEscalation
level=warn ts=2022-05-09T02:21:18.788365775Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"thanos-ruler-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"thanos-ruler-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"thanos-ruler\", \"config-reloader\", \"thanos-ruler-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
level=warn ts=2022-05-09T02:21:18.856629872Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"thanos-ruler-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"thanos-ruler-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"thanos-ruler\", \"config-reloader\", \"thanos-ruler-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
level=warn ts=2022-05-09T02:21:18.898681075Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"init-config-reloader\", \"prometheus\", \"config-reloader\", \"thanos-sidecar\", \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
level=warn ts=2022-05-09T02:21:19.179255308Z caller=klog.go:96 component=k8s_client_runtime func=Warning msg="would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.capabilities.drop=[\"ALL\"]), seccompProfile (pod or containers \"init-config-reloader\", \"prometheus\", \"config-reloader\", \"thanos-sidecar\", \"kube-rbac-proxy-federate\", \"kube-rbac-proxy-metrics\", \"kube-rbac-proxy-thanos\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
```

Affected pods and containers:

- prometheus-operator-5f8bf6594d-pl88l containers: [prometheus-operator, kube-rbac-proxy]
- prometheus-user-workload containers: [prometheus, config-reloader, thanos-sidecar, kube-rbac-proxy-federate, kube-rbac-proxy-metrics, kube-rbac-proxy-thanos], initContainers: [init-config-reloader]
- thanos-ruler-user-workload containers: [thanos-ruler, config-reloader, thanos-ruler-proxy]
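For reference, the three checks quoted in the warnings above map onto a single securityContext shape. This is only an illustrative sketch of what the "restricted" profile asks for, not the actual patch that fixed this bug; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-example                        # placeholder name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault                        # or Localhost, per the warning
  containers:
    - name: kube-rbac-proxy
      image: example.com/kube-rbac-proxy:latest   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false           # fixes "allowPrivilegeEscalation != false"
        capabilities:
          drop: ["ALL"]                           # fixes "unrestricted capabilities"
```

Note that seccompProfile may be set at the pod level (inherited by all containers, as above) or per container, which is why the warning says "pod or containers".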
Verified with 4.11.0-0.nightly-2022-05-31-155315.

UWM not enabled:

```
# oc -n openshift-monitoring get pod | grep -E "cluster-monitoring-operator|prometheus-operator"
cluster-monitoring-operator-64756d5597-z6d5v             2/2   Running   0   61m
prometheus-operator-6669c6f67d-dpnfg                     2/2   Running   0   47m
prometheus-operator-admission-webhook-5ff78b9574-jrhr4   1/1   Running   0   59m
prometheus-operator-admission-webhook-5ff78b9574-klq2n   1/1   Running   0   59m

# oc -n openshift-monitoring logs -c cluster-monitoring-operator cluster-monitoring-operator-64756d5597-z6d5v | grep "allowPrivilegeEscalation != false" | tail
```

no result

UWM enabled (see the ConfigMap sketch at the end of this report):

```
$ oc -n openshift-user-workload-monitoring get pod
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-56cf84dbf8-vb4r9   2/2     Running   0          11m
prometheus-user-workload-0             6/6     Running   0          11m
prometheus-user-workload-1             6/6     Running   0          11m
thanos-ruler-user-workload-0           3/3     Running   0          11m
thanos-ruler-user-workload-1           3/3     Running   0          11m

$ oc -n openshift-user-workload-monitoring logs -c prometheus-operator prometheus-operator-56cf84dbf8-vb4r9 | grep "allowPrivilegeEscalation != false" | tail
```

no result

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069
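A note for anyone reproducing the verification steps above: user workload monitoring (UWM) is switched on through the cluster-monitoring-config ConfigMap. A minimal sketch following the standard documented shape (not taken from this bug's test environment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Deploys the prometheus-operator, prometheus-user-workload and
    # thanos-ruler-user-workload pods listed in the transcripts above
    # into the openshift-user-workload-monitoring namespace.
    enableUserWorkload: true
```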