Bug 1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.8.0
Assignee: Simon Pasquier
QA Contact: Junqi Zhao
URL:
Whiteboard:
Duplicates: 1929764
Depends On:
Blocks: 1945856
 
Reported: 2021-03-03 12:02 UTC by Simon Pasquier
Modified: 2021-07-27 22:51 UTC
CC: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1945856
Environment:
Last Closed: 2021-07-27 22:49:27 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Github openshift cluster-monitoring-operator pull 1110 (open): Bug 1934516: Change prometheus priority class to system-cluster-critical again (last updated 2021-04-09 07:44:08 UTC)
Red Hat Product Errata RHSA-2021:2438 (last updated 2021-07-27 22:51:00 UTC)

Description Simon Pasquier 2021-03-03 12:02:15 UTC
Description of problem:
prometheus-k8s and prometheus-user-workload pods have the same priority class, so the OOM killer might decide to kill a prometheus-k8s pod instead of a prometheus-user-workload pod.

Version-Release number of selected component (if applicable):
4.7

How reproducible:
Occasionally

Steps to Reproduce:
1. Enable user-workload monitoring (see the ConfigMap sketch below the reference links).
2. Deploy the sample application [1] with 3 replicas in many (> 20) namespaces.
3. Randomly and repeatedly terminate application pods to generate series churn (using chaoskube [2] for instance).
4. Wait until kubelet reports memory pressure and pods get OOM-killed.

[1] https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/monitoring/managing-metrics#setting-up-metrics-collection-for-user-defined-projects_managing-metrics
[2] https://github.com/linki/chaoskube
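
For step 1, a minimal enablement ConfigMap could look like the sketch below (an assumption based on the documented 4.6/4.7 mechanism, not taken from this report; check the product documentation for the authoritative version):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      enableUserWorkload: true

Applying it with "oc apply -f" and waiting for the prometheus-user-workload pods to come up in the openshift-user-workload-monitoring namespace is enough to move on to step 2.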

Actual results:
prometheus-k8s-0 and/or prometheus-k8s-1 pods get killed.

Expected results:
prometheus-user-workload-0 and prometheus-user-workload-1 pods should get killed first.

Additional info:
This bug is a consequence of the mitigation put in place to avoid nodes going unready when Prometheus uses an excessive amount of memory (see bug 1929277).
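
For illustration (example oc invocations, not part of the original report; pod ordinals may differ), the priority class carried by each pod can be checked with:

  oc -n openshift-monitoring get pod prometheus-k8s-0 \
    -o jsonpath='{.spec.priorityClassName}{"\n"}'
  oc -n openshift-user-workload-monitoring get pod prometheus-user-workload-0 \
    -o jsonpath='{.spec.priorityClassName}{"\n"}'

At the time of the report, both commands are expected to print the same class, matching the problem described above.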

Comment 1 Simon Pasquier 2021-03-03 14:59:09 UTC
We've introduced the "openshift-user-critical" priority class [1] in 4.7 to apply different priorities to openshift-monitoring and openshift-user-workload pods: the former would be assigned "system-cluster-critical" and the latter "openshift-user-critical". The rationale is that, under memory pressure, the node should preferentially evict openshift-user-workload pods.

Just before the 4.7 release, it was discovered that during upgrades, prometheus-k8s pods could consume large chunks of memory, eventually leading to nodes becoming unready (see bug 1913532 for details). Initially we thought about creating a new "openshift-system-critical" priority class for openshift-monitoring pods that would sit between "system-cluster-critical" and "openshift-user-critical" [2]. This didn't work because "openshift-user-critical" uses the highest possible value (1000000000), so we would have had to decrease its value to make room for the new "openshift-system-critical" class. Unfortunately CVO cannot update an existing priority class (as reported in bug 1929741). As a stop-gap, we decided to downgrade the priority of the prometheus-k8s pods and make them "openshift-user-critical" (i.e. the same level as the prometheus-user-workload pods).

Fast forward to now: we want to ensure that prometheus-k8s pods are less likely to be evicted than user workload components. We have identified 2 options so far:
1. Have CVO support priority class updates.
2. Allow higher values for user-defined priority classes.

[1] https://github.com/openshift/cluster-monitoring-operator/pull/987
[2] https://github.com/openshift/cluster-monitoring-operator/pull/1055
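
To illustrate the constraint described above, a sketch (the 1000000000 value is quoted from this comment; the 2000000000 value of system-cluster-critical and the cap on non-system classes are standard Kubernetes behaviour, not something specific to this bug):

  # openshift-user-critical as currently shipped:
  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: openshift-user-critical
  value: 1000000000   # the maximum allowed for a class outside the reserved system range

  # A new openshift-system-critical class would need a value above 1000000000
  # but below system-cluster-critical (2000000000). The API only accepts such
  # values for the built-in system classes, so openshift-user-critical itself
  # would have to be lowered first, which CVO cannot do (bug 1929741).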

Comment 2 Simon Pasquier 2021-03-18 14:45:55 UTC
*** Bug 1929764 has been marked as a duplicate of this bug. ***

Comment 7 Junqi Zhao 2021-04-14 11:50:38 UTC
Based on Comment 5 and Comment 6, setting to VERIFIED.
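
Comments 5 and 6 are not quoted here. A check along the following lines reflects the state after the linked pull request, which moves prometheus-k8s back to system-cluster-critical; the exact commands are illustrative, not taken from the verification comments:

  oc -n openshift-monitoring get statefulset prometheus-k8s \
    -o jsonpath='{.spec.template.spec.priorityClassName}{"\n"}'
  # expected: system-cluster-critical

  oc -n openshift-user-workload-monitoring get statefulset prometheus-user-workload \
    -o jsonpath='{.spec.template.spec.priorityClassName}{"\n"}'
  # expected: openshift-user-critical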

Comment 10 errata-xmlrpc 2021-07-27 22:49:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

