Description of problem:
prometheus-k8s and prometheus-user-workload pods have the same priority class, so under memory pressure the OOM killer might decide to kill a prometheus-k8s pod instead of a prometheus-user-workload pod.
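For illustration, both stacks currently set the same class in their pod specs (a trimmed sketch; the pod names and namespaces match the deployed workloads, everything else is abbreviated):

  apiVersion: v1
  kind: Pod
  metadata:
    name: prometheus-k8s-0
    namespace: openshift-monitoring
  spec:
    priorityClassName: openshift-user-critical
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: prometheus-user-workload-0
    namespace: openshift-user-workload-monitoring
  spec:
    priorityClassName: openshift-user-critical  # identical class: neither pod ranks above the other for eviction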
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Enable user-workload monitoring
2. Deploy the sample application with 3 replicas in many (> 20) namespaces.
3. Randomly and repeatedly terminate application pods to generate series churn (using chaoskube, for instance; a deployment sketch follows these steps).
4. Wait until kubelet reports memory pressure and pods get OOM-killed.
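For step 3, a chaoskube deployment along these lines can drive the churn (a minimal sketch: the image tag and the pod label are assumptions to be adapted to the sample application actually deployed; the RBAC that lets chaoskube list and delete pods is omitted):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: chaoskube
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: chaoskube
    template:
      metadata:
        labels:
          app: chaoskube
      spec:
        containers:
        - name: chaoskube
          image: quay.io/linki/chaoskube:v0.21.0  # assumed tag
          args:
          - --interval=30s                        # terminate a matching pod every 30 seconds
          - --labels=app=prometheus-example-app   # hypothetical label of the sample application
          - --no-dry-run                          # chaoskube only logs by default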
Actual results:
prometheus-k8s-0 and/or prometheus-k8s-1 pods get killed.

Expected results:
prometheus-user-workload-0 and prometheus-user-workload-1 pods should get killed first.
This bug is a consequence of the mitigation put in place to avoid nodes going unready when Prometheus uses an excessive amount of memory (see bug 1929277).
We introduced the "openshift-user-critical" priority class in 4.7 to apply different priorities to openshift-monitoring and openshift-user-workload pods: the former would be assigned "system-cluster-critical" and the latter "openshift-user-critical". The rationale was that under memory pressure, the node should preferentially evict openshift-user-workload pods.
Just before the 4.7 release, it was discovered that during upgrades, prometheus-k8s pods could consume large amounts of memory, eventually leading to nodes becoming unready (see bug 1913532 for details). Initially we considered creating a new "openshift-system-critical" priority class for openshift-monitoring pods that would sit between "system-cluster-critical" and "openshift-user-critical". That didn't work because "openshift-user-critical" already uses the highest value allowed for a user-defined priority class (1000000000), so we would have had to decrease its value to make room for the new "openshift-system-critical" class. Unfortunately, it isn't possible for CVO to update an existing priority class (as reported in bug 1929741). As a stop-gap, we decided to downgrade the priority of the prometheus-k8s pods to "openshift-user-critical" (i.e. the same level as the prometheus-user-workload pods).
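To make the constraint concrete: "system-cluster-critical" is a built-in class pinned at 2000000000, and the API server rejects any user-defined class with a value above 1000000000, so there is no room for an in-between class without first lowering "openshift-user-critical". A sketch of that class as it would be defined (description omitted):

  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: openshift-user-critical
  value: 1000000000  # the maximum the API server accepts for a user-defined class
  globalDefault: false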
Fast-forward to today: we want to ensure that prometheus-k8s pods are less likely to be evicted than user-workload components. We have identified two options so far:
1. Have CVO support priority class updates.
2. Allow higher numbers for user-defined priority classes.
*** Bug 1929764 has been marked as a duplicate of this bug. ***
Based on Comment 5 and Comment 6, setting the status to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.