Bug 1929278 - Monitoring workloads using too high a priorityclass
Summary: Monitoring workloads using too high a priorityclass
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.7.0
Assignee: Lili Cosic
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On: 1929277
Blocks: 1929354
 
Reported: 2021-02-16 15:57 UTC by Ben Parees
Modified: 2021-02-24 15:58 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1929277
Clones: 1929354
Environment:
Last Closed: 2021-02-24 15:58:19 UTC
Target Upstream Version:
Embargoed:


Attachments
prometheus-k8s pods memory usage (78.18 KB, image/png)
2021-02-18 13:05 UTC, Junqi Zhao


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 1062 0 None closed Bug 1929278: [4.7]: jsonnet/prometheus.jsonnet: Apply openshift-user-critical class to cluster Prometheus 2021-02-17 23:17:07 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:58:38 UTC

Description Ben Parees 2021-02-16 15:57:53 UTC
+++ This bug was initially created as a clone of Bug #1929277 +++

Description of problem:

The monitoring workloads use system-critical as their priority class, which causes problems when monitoring consumes excessive memory and the nodes cannot evict the pods.

The monitoring priority will be lowered to give the scheduler more flexibility to move these heavy workloads around and to keep critical nodes alive.
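
For context, a PriorityClass value is immutable once created (hence the difficulty discussed in the comments below), and the system classes sit above the one-billion cap that applies to user-defined classes. A rough picture of the values involved, assuming the stock Kubernetes/OpenShift defaults:

# oc get priorityclass system-cluster-critical openshift-user-critical
NAME                      VALUE        GLOBAL-DEFAULT   AGE
system-cluster-critical   2000000000   false            ...
openshift-user-critical   1000000000   false            ...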


Version-Release number of selected component (if applicable):
4.7 (probably affects older versions also)

How reproducible:
Relatively easily, during upgrades

Steps to Reproduce:
1. Create a 4.6 cluster
2. Upgrade it to 4.7.
3. If Prometheus uses excessive memory during the upgrade (due to the WAL re-read and excessive time-series creation), the nodes will struggle to evict the Prometheus workload, causing node-unready failures; memory usage can be watched with the command below.
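
Prometheus memory consumption can be watched while the upgrade runs, for example (assuming the metrics API is available, since oc adm top depends on it):

# oc adm top pods -n openshift-monitoring | grep prometheus-k8s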


Additional info:
The fix for this bug is a mitigation for:

https://bugzilla.redhat.com/show_bug.cgi?id=1925061
https://bugzilla.redhat.com/show_bug.cgi?id=1913532

Comment 1 Lili Cosic 2021-02-17 09:44:34 UTC
Reassigning to the CVO team, as we are blocked by the lack of any ability to change the existing value of the class.

Comment 2 Ben Parees 2021-02-17 14:38:40 UTC
Moving this back to the monitoring team. I will create a new bug against the CVO for the fact that we have no mechanism to update (delete+recreate) priorityclasses today, but the monitoring team needs to deliver the fix that reduces the prom priority. Note that I don't think anyone knows the implications of deleting and recreating priorityclasses on existing workloads in those classes, so that path may not even be viable for the CVO to introduce.

The monitoring team can fix this 4.7.0 blocker bug in one of the following ways:

1) Use the existing user class for cluster prom (it would then have the same priority as UWM prom); a sketch of this option follows the list.
2) Introduce a new user class with a lower priority, move UWM to that class, and use the existing user class (or a new one) for cluster prom.
3) Get the CVO to deliver a fix that allows priorityclass updates (per the bug I am about to create).
4) Get kube to carry a patch that allows for higher user-defined priorityclass values.

(I may have missed some options, but those are the main ones I'm aware of.)
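
For illustration, option 1 amounts to setting priorityClassName in the cluster Prometheus custom resource (per its title, this is what the linked cluster-monitoring-operator PR 1062 implements). A hand-written sketch, not the actual diff:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: openshift-monitoring
spec:
  priorityClassName: openshift-user-critical

The prometheus-operator then propagates spec.priorityClassName into the pod template of the generated statefulset.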

Comment 3 Junqi Zhao 2021-02-18 13:05:00 UTC
Upgraded from 4.6.18 to 4.7.0-0.nightly-2021-02-17-224627; see the attached picture. The prometheus pods consumed at most 3.27 GiB of memory during the upgrade, and no nodes changed to an unhealthy status.
# oc -n openshift-monitoring get sts prometheus-k8s -oyaml | grep priorityClassName
      priorityClassName: openshift-user-critical
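
For a broader check, the priority class of every pod in the namespace can be listed with a generic jsonpath query (nothing here is specific to this fix):

# oc -n openshift-monitoring get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.priorityClassName}{"\n"}{end}'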

Comment 4 Junqi Zhao 2021-02-18 13:05:44 UTC
Created attachment 1757790 [details]
prometheus-k8s pods memory usage

Comment 7 errata-xmlrpc 2021-02-24 15:58:19 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

