Bug 2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.11.0
Assignee: Simon Pasquier
QA Contact: Junqi Zhao
Docs Contact: Brian Burt
Depends On:
Reported: 2022-02-22 15:21 UTC by Simon Pasquier
Modified: 2022-08-10 10:51 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Before this update, the init-config-reloader container of the Prometheus pods requested 100m of CPU and 50Mi of memory, although in practice the container needed far fewer resources. With this update, the container requests 1m of CPU and 10Mi of memory, consistent with the settings of the config-reloader container.
Clone Of:
Last Closed: 2022-08-10 10:50:45 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 1563 0 None Draft Bug 2057025: fix init-config-reloader resource requests 2022-02-22 16:26:28 UTC
Red Hat Product Errata RHSA-2022:5069 0 None None None 2022-08-10 10:51:02 UTC

Description Simon Pasquier 2022-02-22 15:21:18 UTC
Description of problem:
The init-config-reloader container of the prometheus-k8s-* pods requests more resources than it needs.

Example from a CI job:
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_prometheus-operator/155/pull-ci-openshift-prometheus-operator-master-e2e-agnostic-cmo/1490739893649805312/artifacts/e2e-agnostic-cmo/gather-extra/artifacts/statefulsets.json | jq '.items[1].spec.template.spec.initContainers[0].resources.requests'
{
  "cpu": "100m",
  "memory": "50Mi"
}

As a comparison, this is the resource requests definition for the config-reloader container:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_prometheus-operator/155/pull-ci-openshift-prometheus-operator-master-e2e-agnostic-cmo/1490739893649805312/artifacts/e2e-agnostic-cmo/gather-extra/artifacts/statefulsets.json | jq '.items[1].spec.template.spec.containers[1].resources.requests'
{
  "cpu": "1m",
  "memory": "10Mi"
}
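The same comparison can be scripted. Below is a minimal sketch that flags an init-config-reloader container whose requests differ from the config-reloader container; the embedded sample reproduces only the relevant fields with the values shown above (the CI artifact URLs are transient), and the helper name `requests_for` is illustrative, not part of any tool:

```python
import json

# Minimal sample of the fields from the statefulsets.json artifact above;
# the real file carries full StatefulSet objects.
sample = json.loads("""
{
  "items": [
    {},
    {
      "spec": {
        "template": {
          "spec": {
            "initContainers": [
              {"name": "init-config-reloader",
               "resources": {"requests": {"cpu": "100m", "memory": "50Mi"}}}
            ],
            "containers": [
              {"name": "prometheus",
               "resources": {"requests": {"cpu": "70m", "memory": "1Gi"}}},
              {"name": "config-reloader",
               "resources": {"requests": {"cpu": "1m", "memory": "10Mi"}}}
            ]
          }
        }
      }
    }
  ]
}
""")

def requests_for(containers, name):
    """Return the resource requests of the container with the given name."""
    for c in containers:
        if c["name"] == name:
            return c["resources"]["requests"]
    raise KeyError(name)

pod_spec = sample["items"][1]["spec"]["template"]["spec"]
init_req = requests_for(pod_spec["initContainers"], "init-config-reloader")
main_req = requests_for(pod_spec["containers"], "config-reloader")

print("init-config-reloader:", init_req)
print("config-reloader:     ", main_req)
print("match:", init_req == main_req)
```

On a live cluster, the same JSON can be fetched with `oc -n openshift-monitoring get statefulset prometheus-k8s -o json` in place of the embedded sample.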

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Launch a cluster and check the resource requests of the prometheus-k8s-0 pod.

Actual results:
The init-config-reloader container requests 100m CPU and 50Mi memory.

Expected results:
The init-config-reloader container requests the same resources as the config-reloader container (i.e. 1m CPU and 10Mi memory).

Additional info:
The regression was introduced when the init-config-reloader container was added in Prometheus operator v0.49.0 (https://github.com/prometheus-operator/prometheus-operator/pull/3955). That change fixed bug 1950173.
See also bug 2026311.

Comment 3 Junqi Zhao 2022-02-24 09:16:16 UTC
Checked with 4.11.0-0.nightly-2022-02-23-185405; the resource request for init-config-reloader is the same as for config-reloader (cpu: 1m, memory: 10Mi).
# for i in prometheus-k8s-0; do echo $i; oc -n openshift-monitoring get pod $i -o go-template='{{range.spec.initContainers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'; echo -e "\n"; done
Container Name: init-config-reloader
resources: map[requests:map[cpu:1m memory:10Mi]]

# for i in prometheus-k8s-0; do echo $i; oc -n openshift-monitoring get pod $i -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'; echo -e "\n"; done
Container Name: prometheus
resources: map[requests:map[cpu:70m memory:1Gi]]
Container Name: config-reloader
resources: map[requests:map[cpu:1m memory:10Mi]]
Container Name: thanos-sidecar
resources: map[requests:map[cpu:1m memory:25Mi]]
Container Name: prometheus-proxy
resources: map[requests:map[cpu:1m memory:20Mi]]
Container Name: kube-rbac-proxy
resources: map[requests:map[cpu:1m memory:15Mi]]
Container Name: kube-rbac-proxy-thanos
resources: map[requests:map[cpu:1m memory:10Mi]]

Comment 8 errata-xmlrpc 2022-08-10 10:50:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

