Description of problem:
This is a copy of https://github.com/openshift/origin/issues/24247, more specifically https://github.com/openshift/origin/issues/24247#issuecomment-560419826

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Start a cluster
2. Check resource requests for `alertmanager-proxy`, `grafana-proxy`, `node-exporter`, `thanos-sidecar`

Actual results:
No resource requests are set on these containers.

Expected results:
All containers in `openshift-monitoring` have resource requests set.

Additional info:
A one-shot scan for containers missing resource requests is sketched below.
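For convenience, here is a sketch of a namespace-wide scan for containers whose `resources.requests` is unset, in the same go-template style used in the verification comment below. This command is not part of the original report; it assumes all pods in `openshift-monitoring` carry a `resources` stanza (the API server always serializes one, possibly empty):

# kubectl -n openshift-monitoring get pods -o go-template='{{range .items}}{{$pod := .metadata.name}}{{range .spec.containers}}{{if not .resources.requests}}{{$pod}}/{{.name}}{{"\n"}}{{end}}{{end}}{{end}}'

Each output line names a pod/container pair with no resource requests; an empty output means every container has requests set.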
Tested with 4.4.0-0.nightly-2020-01-09-013524; of the containers from Comment 0, only `thanos-sidecar` still has no resource requests set.

# kubectl -n openshift-monitoring get pod prometheus-k8s-0 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'
Container Name: prometheus
resources: map[requests:map[cpu:200m memory:1Gi]]
Container Name: prometheus-config-reloader
resources: map[limits:map[cpu:100m memory:25Mi] requests:map[cpu:100m memory:25Mi]]
Container Name: rules-configmap-reloader
resources: map[limits:map[cpu:100m memory:25Mi] requests:map[cpu:100m memory:25Mi]]
Container Name: thanos-sidecar
resources: map[]
Container Name: prometheus-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]
Container Name: kube-rbac-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]
Container Name: prom-label-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]

# kubectl -n openshift-monitoring get pod alertmanager-main-0 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'
Container Name: alertmanager
resources: map[requests:map[memory:200Mi]]
Container Name: config-reloader
resources: map[limits:map[cpu:100m memory:25Mi] requests:map[cpu:100m memory:25Mi]]
Container Name: alertmanager-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]

# kubectl -n openshift-monitoring get pod grafana-6d7b8895f9-ttvs8 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'
Container Name: grafana
resources: map[requests:map[cpu:100m memory:100Mi]]
Container Name: grafana-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]

# kubectl -n openshift-monitoring get pod node-exporter-dfj72 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'
Container Name: node-exporter
resources: map[requests:map[cpu:102m memory:180Mi]]
Container Name: kube-rbac-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]
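For the remaining `thanos-sidecar` gap, a minimal re-check once a fix lands could target that container directly. This is a sketch, not part of the original verification; it assumes the installed kubectl supports JSONPath filter expressions:

# kubectl -n openshift-monitoring get pod prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name=="thanos-sidecar")].resources.requests}'

Non-empty output means the sidecar now has resource requests set.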
Created attachment 1651209: prometheus-k8s pod info
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581