+++ This bug was initially created as a clone of Bug #1929277 +++

Description of problem:
The monitoring workloads use system-critical as their priority class, which causes problems when monitoring uses excessive memory and the nodes can't evict them. The monitoring priority will be lowered to give the scheduler more flexibility to move these heavy workloads around and keep critical nodes alive.

Version-Release number of selected component (if applicable):
4.7 (probably affects older versions as well)

How reproducible:
Relatively easily, during upgrades

Steps to Reproduce:
1. Create a 4.6 cluster
2. Upgrade it to 4.7
3. If Prometheus uses excessive memory during the upgrade (due to WAL re-read and excessive time-series creation), the nodes will struggle to evict the Prometheus workload, causing node-unready failures.

Additional info:
The fix for this bug is a mitigation for:
https://bugzilla.redhat.com/show_bug.cgi?id=1925061
https://bugzilla.redhat.com/show_bug.cgi?id=1913532
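As a rough illustration of the situation described above (a minimal sketch, not the actual cluster-monitoring-operator manifest; the exact pre-fix class name is assumed to be the built-in system-cluster-critical):

```yaml
# Sketch only: a Prometheus statefulset pod spec pinned to a system priority
# class. The built-in system classes (system-cluster-critical,
# system-node-critical) have values around 2000000000, so the kubelet ranks
# such pods last for node-pressure eviction, even when they are the ones
# consuming the memory.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus-k8s
  namespace: openshift-monitoring
spec:
  template:
    spec:
      priorityClassName: system-cluster-critical  # assumed pre-fix value
```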
Reassigning to the CVO team, as we are blocked by the lack of any ability to change the value of an existing priority class.
Moving this back to the monitoring team. I will create a new bug against the CVO for the fact that we have no mechanism today to update (delete+recreate) priority classes, but the monitoring team needs to deliver the fix that reduces the Prometheus priority. Note that I don't think anyone knows the implications of deleting and recreating priority classes on existing workloads in those classes, so that path may not even be viable for the CVO to introduce.

The monitoring team can fix this 4.7.0 blocker bug by one of:
1) Just use the existing user class for cluster Prometheus (it would have the same priority as UWM Prometheus).
2) Introduce a new user class with a lower priority, move UWM to that class, and use the existing user class (or a new one) for cluster Prometheus.
3) Get the CVO to deliver a fix that allows priority class updates (per the bug I am about to create).
4) Get kube to carry a patch that allows higher user-defined priority class values.

(I may have missed some options, but those are the main ones I'm aware of.)
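Option 2 could look roughly like the following (a hedged sketch: the value shown is an illustrative assumption, though the verification comment later in this bug shows the shipped class is indeed named openshift-user-critical):

```yaml
# Sketch of a user-level priority class below the system classes, so the
# kubelet can evict Prometheus under node memory pressure.
# The value 1000000000 is an assumption for illustration; it only needs to
# sit below the system-* classes (~2000000000) and above default workloads.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: openshift-user-critical
value: 1000000000
globalDefault: false
description: "Priority class for user-important workloads such as monitoring."
```

Pods opt in via spec.priorityClassName. Note that a PriorityClass's value field is immutable once created, which is exactly why changing an existing class requires delete+recreate and why the CVO limitation above matters.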
Upgraded from 4.6.18 to 4.7.0-0.nightly-2021-02-17-224627; see the attached picture. The prometheus pods consumed at most 3.27GiB of memory during the upgrade, and no nodes became unhealthy.

# oc -n openshift-monitoring get sts prometheus-k8s -oyaml | grep priorityClassName
      priorityClassName: openshift-user-critical
Created attachment 1757790 [details]
prometheus-k8s pods memory usage
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633