+++ This bug was initially created as a clone of Bug #1929278 +++
+++ This bug was initially created as a clone of Bug #1929277 +++
Description of problem:
The monitoring workloads run with a system-critical priority class, which causes problems when monitoring uses excessive memory: the nodes can't evict them. The monitoring priority will be lowered to give the scheduler more flexibility to move these heavy workloads around and keep critical nodes alive.
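For context, eviction ordering under node memory pressure is driven by the PriorityClass assigned to a pod. A minimal sketch of such a class is below; the name and value are illustrative assumptions, not the exact class this fix introduces. The point is only that a value below the built-in system-cluster-critical class lets the kubelet evict these pods first:

```yaml
# Illustrative PriorityClass (hypothetical name and value).
# A value below system-cluster-critical means the kubelet can evict
# pods in this class before node-critical workloads under memory pressure.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: example-monitoring-priority   # hypothetical; not the class the fix uses
value: 1000000000                     # assumed; merely lower than system-cluster-critical
globalDefault: false
description: "Lower priority for heavy monitoring workloads so nodes can evict them."
```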
Version-Release number of selected component (if applicable):
4.7 (probably affects older versions also)
How reproducible:
Relatively easily, during upgrades
Steps to Reproduce:
1. Create a 4.6 cluster
2. upgrade it to 4.7
3. If Prometheus uses excessive memory during the upgrade (due to the WAL re-read and excessive time-series creation), the nodes will struggle to evict the Prometheus workload, causing nodes to go unready.
The fix for this bug is a mitigation for:
I think we're going to want this change in 4.6, though it is not needed for 4.7 GA.
Reassigning to the CVO team as we are blocked by lack of ability to change the existing value of the class.
CVO already has bug 1929741 in this space, and that's blocked on upstream work (although sounds like CVO could delete/recreate as a temporary workaround if necessary). CVO doesn't need two bugs in this space, so I'm sending this back to monitoring. If you don't need this bug either, close it as a dup of bug 1929741, or explain the distinction I'm missing?
Reassigning to kube-apiserver for the time being, as we can't merge the PR as-is.
Tested with 4.6.0-0.nightly-2021-03-21-131139:
# oc -n openshift-monitoring get sts prometheus-k8s -oyaml | grep priorityClassName
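If the fix has landed, the grep above should surface the priorityClassName field in the StatefulSet's pod template. For reference, the field it matches sits here (a sketch of the relevant fragment; the class name is intentionally left as a placeholder, since this report does not state the final value):

```yaml
# Fragment of the prometheus-k8s StatefulSet; the grep above matches this field.
spec:
  template:
    spec:
      priorityClassName: <lowered-priority-class>  # set by the fix; actual name not given here
```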
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (OpenShift Container Platform 4.6.23 bug fix update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.