Bug 1781066

Summary: Not every pod in the vanilla OpenShift installation sets a CPU resource request
Product: OpenShift Container Platform
Component: Monitoring
Reporter: Pawel Krupa <pkrupa>
Assignee: Pawel Krupa <pkrupa>
QA Contact: Junqi Zhao <juzhao>
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
Version: 4.4
Target Release: 4.4.0
CC: alegrand, anpicker, erooth, kakkoyun, lcosic, mloibl, pkrupa, surbania
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Type: Bug
Bug Blocks: 1807775
Last Closed: 2020-05-04 11:19:04 UTC
Attachments: prometheus-k8s pod info

Description Pawel Krupa 2019-12-09 09:03:17 UTC
Description of problem:
This is a copy from https://github.com/openshift/origin/issues/24247. More specifically https://github.com/openshift/origin/issues/24247#issuecomment-560419826

Version-Release number of selected component (if applicable):


How reproducible:
Always


Steps to Reproduce:
1. Start a cluster
2. Check resource requests for `alertmanager-proxy`, `grafana-proxy`, `node-exporter`, `thanos-sidecar`

Actual results:
No resource requests.

Expected results:
All containers in `openshift-monitoring` have resource requests set.
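The expected-results check can be scripted rather than read off the go-template output by hand. A minimal sketch (the function name and the abbreviated sample data are illustrative; in practice the pod dict would be `json.load`-ed from `kubectl get pod <name> -o json`):

```python
def containers_missing_cpu_requests(pod):
    """Return names of containers whose resources.requests has no 'cpu' key.

    `pod` is the parsed JSON of `kubectl get pod <name> -o json`.
    """
    return [
        c["name"]
        for c in pod.get("spec", {}).get("containers", [])
        if "cpu" not in c.get("resources", {}).get("requests", {})
    ]

# Sample abbreviated from the prometheus-k8s-0 output below; feed real data
# with: kubectl -n openshift-monitoring get pod <name> -o json
sample_pod = {
    "spec": {
        "containers": [
            {"name": "prometheus",
             "resources": {"requests": {"cpu": "200m", "memory": "1Gi"}}},
            {"name": "thanos-sidecar", "resources": {}},
        ]
    }
}

print(containers_missing_cpu_requests(sample_pod))  # ['thanos-sidecar']
```

An empty list means every container in the pod has a CPU request, which is the expected state for all pods in `openshift-monitoring`.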


Additional info:

Comment 2 Junqi Zhao 2020-01-10 09:26:16 UTC
Tested with 4.4.0-0.nightly-2020-01-09-013524; of the containers listed in Comment 0, only the `thanos-sidecar` container still has no resource request set:
# kubectl -n openshift-monitoring get pod prometheus-k8s-0 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'

Container Name: prometheus
resources: map[requests:map[cpu:200m memory:1Gi]]
Container Name: prometheus-config-reloader
resources: map[limits:map[cpu:100m memory:25Mi] requests:map[cpu:100m memory:25Mi]]
Container Name: rules-configmap-reloader
resources: map[limits:map[cpu:100m memory:25Mi] requests:map[cpu:100m memory:25Mi]]
Container Name: thanos-sidecar
resources: map[]
Container Name: prometheus-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]
Container Name: kube-rbac-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]
Container Name: prom-label-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]

# kubectl -n openshift-monitoring get pod alertmanager-main-0 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'
Container Name: alertmanager
resources: map[requests:map[memory:200Mi]]
Container Name: config-reloader
resources: map[limits:map[cpu:100m memory:25Mi] requests:map[cpu:100m memory:25Mi]]
Container Name: alertmanager-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]

# kubectl -n openshift-monitoring get pod grafana-6d7b8895f9-ttvs8 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'
Container Name: grafana
resources: map[requests:map[cpu:100m memory:100Mi]]
Container Name: grafana-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]


# kubectl -n openshift-monitoring get pod node-exporter-dfj72 -o go-template='{{range.spec.containers}}{{"Container Name: "}}{{.name}}{{"\r\nresources: "}}{{.resources}}{{"\n"}}{{end}}'
Container Name: node-exporter
resources: map[requests:map[cpu:102m memory:180Mi]]
Container Name: kube-rbac-proxy
resources: map[requests:map[cpu:10m memory:20Mi]]
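For reference, in the prometheus-operator API the sidecar's resources are configured through the Prometheus custom resource's `spec.thanos.resources` field, so a fix for the remaining `thanos-sidecar` gap would plausibly look like the following fragment (the request values here are illustrative, not necessarily the ones shipped in the errata):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: openshift-monitoring
spec:
  thanos:
    resources:
      requests:
        # Illustrative values; the actual requests chosen by the fix may differ.
        cpu: 25m
        memory: 100Mi
```

In a vanilla installation this resource is managed by the cluster-monitoring-operator, so the change would land there rather than be applied by hand.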

Comment 3 Junqi Zhao 2020-01-10 09:27:06 UTC
Created attachment 1651209 [details]
prometheus-k8s pod info

Comment 7 errata-xmlrpc 2020-05-04 11:19:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581