Description of problem:

The init-config-reloader container of the prometheus-k8s-* pods requests too many resources. Example from a CI job:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_prometheus-operator/155/pull-ci-openshift-prometheus-operator-master-e2e-agnostic-cmo/1490739893649805312/artifacts/e2e-agnostic-cmo/gather-extra/artifacts/statefulsets.json | jq '.items[1].spec.template.spec.initContainers[0].resources.requests'
{
  "cpu": "100m",
  "memory": "50Mi"
}

For comparison, this is the resource requests definition of the config-reloader container:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_prometheus-operator/155/pull-ci-openshift-prometheus-operator-master-e2e-agnostic-cmo/1490739893649805312/artifacts/e2e-agnostic-cmo/gather-extra/artifacts/statefulsets.json | jq '.items[1].spec.template.spec.containers[1].resources.requests'
{
  "cpu": "1m",
  "memory": "10Mi"
}

Version-Release number of selected component (if applicable):
4.9

How reproducible:
Always

Steps to Reproduce:
1. Launch a cluster and check the resource requests of the prometheus-k8s-0 pod.

Actual results:
The init-config-reloader container requests 100m CPU and 50Mi memory.

Expected results:
The init-config-reloader container requests the same resources as the config-reloader container (e.g. 1m CPU and 10Mi memory).

Additional info:
The regression was introduced when the init-config-reloader container was added in Prometheus operator v0.49.0 (https://github.com/prometheus-operator/prometheus-operator/pull/3955). This change fixed bug 1950173. See also bug 2026311.
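For reference, selecting the containers by name rather than by array index makes the same check more robust against reordering of the statefulset spec. This is a sketch against the same CI artifact, assuming the statefulset is named prometheus-k8s:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_prometheus-operator/155/pull-ci-openshift-prometheus-operator-master-e2e-agnostic-cmo/1490739893649805312/artifacts/e2e-agnostic-cmo/gather-extra/artifacts/statefulsets.json \
  | jq '.items[]
        | select(.metadata.name == "prometheus-k8s")
        | (.spec.template.spec.initContainers + .spec.template.spec.containers)[]
        | select(.name | test("config-reloader"))
        | {name, requests: .resources.requests}'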
Checked with 4.11.0-0.nightly-2022-02-23-185405: the resource request of init-config-reloader is now the same as that of config-reloader (cpu: 1m, memory: 10Mi).

# for i in prometheus-k8s-0; do echo $i; oc -n openshift-monitoring get pod $i -o go-template='{{range.spec.initContainers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'; echo -e "\n"; done
prometheus-k8s-0
Container Name: init-config-reloader
resources: map[requests:map[cpu:1m memory:10Mi]]

# for i in prometheus-k8s-0; do echo $i; oc -n openshift-monitoring get pod $i -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'; echo -e "\n"; done
prometheus-k8s-0
Container Name: prometheus
resources: map[requests:map[cpu:70m memory:1Gi]]
Container Name: config-reloader
resources: map[requests:map[cpu:1m memory:10Mi]]
Container Name: thanos-sidecar
resources: map[requests:map[cpu:1m memory:25Mi]]
Container Name: prometheus-proxy
resources: map[requests:map[cpu:1m memory:20Mi]]
Container Name: kube-rbac-proxy
resources: map[requests:map[cpu:1m memory:15Mi]]
Container Name: kube-rbac-proxy-thanos
resources: map[requests:map[cpu:1m memory:10Mi]]
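The same comparison can be made in a single oc call using jsonpath output instead of a go-template loop. This is a sketch assuming the same pod name (prometheus-k8s-0) and namespace as above:

$ oc -n openshift-monitoring get pod prometheus-k8s-0 \
  -o jsonpath='{range .spec.initContainers[*]}{.name}{": "}{.resources.requests}{"\n"}{end}{range .spec.containers[*]}{.name}{": "}{.resources.requests}{"\n"}{end}'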
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069