Bug 1887799 - User workload monitoring prometheus-config-reloader OOM
Summary: User workload monitoring prometheus-config-reloader OOM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.7.0
Assignee: Pawel Krupa
QA Contact: Junqi Zhao
URL:
Whiteboard: aos-scalability-46
Depends On:
Blocks:
 
Reported: 2020-10-13 11:28 UTC by Raul Sevilla
Modified: 2021-02-24 15:26 UTC
CC: 10 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:25:34 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 959 0 None closed Bug 1887799: Unset memory limits on config reloader container 2021-02-09 15:02:16 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:26:00 UTC

Description Raul Sevilla 2020-10-13 11:28:45 UTC
With a high number of ServiceMonitor objects (2000+), the prometheus-config-reloader container from the Prometheus user workload monitoring stack is OOM-killed by the kernel because it exceeds its memory cgroup limit. I realized that the memory usage of this container is limited to 25 MiB.

sh-4.4# systemctl status 25868 
Warning: The unit file, source configuration file or drop-ins of crio-b15f21dd2a580453516a5979d17b744e6ecce0a7c81f309f78e532d78ced4952.scope changed on disk. Run 'systemctl daemon-reload' to reload units.
● crio-b15f21dd2a580453516a5979d17b744e6ecce0a7c81f309f78e532d78ced4952.scope - libcontainer container b15f21dd2a580453516a5979d17b744e6ecce0a7c81f309f78e532d78ced4952
   Loaded: loaded (/run/systemd/transient/crio-b15f21dd2a580453516a5979d17b744e6ecce0a7c81f309f78e532d78ced4952.scope; transient)
Transient: yes
  Drop-In: /run/systemd/transient/crio-b15f21dd2a580453516a5979d17b744e6ecce0a7c81f309f78e532d78ced4952.scope.d
           └─50-DevicePolicy.conf, 50-DeviceAllow.conf, 50-MemoryLimit.conf, 50-CPUShares.conf, 50-CPUQuota.conf, 50-TasksAccounting.conf, 50-TasksMax.conf
   Active: active (running) since Tue 2020-10-13 10:42:16 UTC; 10min ago
    Tasks: 11 (limit: 1024)
   Memory: 23.8M (limit: 25.0M)
      CPU: 89ms
   CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod140bf972_2678_4117_b21e_b3b40c3aba75.slice/crio-b15f21dd2a580453516a5979d17b744e6ecce0a7c81f309f78e532d78ced4952.scope
           └─25868 /bin/prometheus-config-reloader --log-format=logfmt --reload-url=http://localhost:9090/-/reload --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.y>
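
The same limit should also be visible from the pod spec, since the cgroup value is derived from the container's resources.limits.memory. A minimal check, assuming the pod is named prometheus-user-workload-0 (adjust to the actual pod name):

# list each container of the user workload Prometheus pod with its memory limits
oc -n openshift-user-workload-monitoring get pod prometheus-user-workload-0 \
  -o jsonpath='{range .spec.containers[*]}{.name}{" "}{.resources.limits}{"\n"}{end}'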


Still have to confirm whether the number of targets selected by a ServiceMonitor object also affects memory usage.

Note: The same container in the cluster monitoring stack doesn't have this resource limitation.
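
For comparison, the equivalent pod in openshift-monitoring can be inspected the same way (the pod name prometheus-k8s-0 is an assumption here):

# the reloader container in the cluster monitoring stack should show no memory limit
oc -n openshift-monitoring get pod prometheus-k8s-0 \
  -o jsonpath='{range .spec.containers[*]}{.name}{" "}{.resources.limits}{"\n"}{end}'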

Comment 1 Simon Pasquier 2020-10-13 13:26:27 UTC
Good catch! Somehow we never set "--config-reloader-memory=0" for the Prometheus operator running in the openshift-user-workload-monitoring namespace (unlike what is done in openshift-monitoring).
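
For reference, a quick way to check whether that flag is passed to each operator, assuming both deployments are named prometheus-operator:

# look for the --config-reloader-memory argument on both Prometheus operator deployments
oc -n openshift-monitoring get deployment prometheus-operator -o yaml | grep config-reloader-memory
oc -n openshift-user-workload-monitoring get deployment prometheus-operator -o yaml | grep config-reloader-memory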

Comment 7 errata-xmlrpc 2021-02-24 15:25:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

