Bug 1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
Summary: update discovery burst to reflect lots of CRDs on openshift clusters
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.7.0
Assignee: Damien Grisonnet
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-19 15:42 UTC by Simon Reber
Modified: 2021-02-24 15:35 UTC
CC: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:34:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 999 0 None closed Bug 1899582: Increase rest config burst and QPS rate limits 2021-02-12 09:51:13 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:35:06 UTC

Description Simon Reber 2020-11-19 15:42:55 UTC
Description of problem:

In large OpenShift 4 environments we are seeing around 83,000 messages per day of the form "Throttling request took 1.183079627s, request: GET: ..." in the cluster-monitoring-operator logs. This is related to the number of objects and CRDs in the OpenShift 4 cluster.

Since increasing `config.Burst` in the `oc` client seems to address this, we are requesting the same change in the cluster-monitoring-operator (probably `cfg.Burst`).

Version-Release number of selected component (if applicable):

 - 4.5 and 4.6

How reproducible:

 - Always (depending on the number of CRDs in the OpenShift cluster)

Steps to Reproduce:
1. Add many CRDs to OpenShift 4 and add workloads
2. Watch the logs of the cluster-monitoring-operator

Actual results:

The `cluster-monitoring-operator` constantly reports `Throttling request took 1.183079627s, request: GET: ...`, i.e. requests are being throttled on the client side.

Expected results:

No `Throttling request took 1.183079627s, request: GET: ...` messages being reported, even with lots of CRDs and workload on the cluster.

Additional info:
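The linked PR ("Increase rest config burst and QPS rate limits") takes the approach of raising the limits on the operator's REST config. A hypothetical sketch of what that looks like in client-go follows; `QPS` and `Burst` are real `rest.Config` fields, but the values and the `kubeconfig` variable are illustrative, not the ones the fix chose:

```go
// Hypothetical sketch: raise the client-side rate limits on the
// operator's *rest.Config before building any clients from it.
cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
	return err
}
cfg.QPS = 50    // default is 5 requests/sec once the burst is spent
cfg.Burst = 100 // default is 10; raise to absorb discovery bursts over many CRDs
```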

Comment 3 Lili Cosic 2020-11-23 08:34:44 UTC
Thanks for the bug report, the solution makes sense. Lowered priority to medium from high, as from what I understand this does not actually have an effect on any component? It just logs a lot, that is all? Feel free to correct me!

Comment 4 Simon Reber 2020-11-23 08:44:33 UTC
(In reply to Lili Cosic from comment #3)
> Thanks for the bug report, the solution makes sense. Lowered priority to
> medium from high, as from what I understand this does not actually have an
> effect on any component? It just logs a lot, that is all? Feel free to
> correct me!
That is fine. But please make sure to fix it in a timely manner, or at least add documentation/an explanation to help people understand that it does not harm the environment and is only informational (client-side throttling).

Comment 5 Damien Grisonnet 2020-12-01 18:28:38 UTC
Lowering the severity of this bug to low as it doesn't impact the use of any component of the monitoring stack.

Comment 7 Junqi Zhao 2020-12-09 09:08:02 UTC
Tested with 4.7.0-0.nightly-2020-12-08-141245: added many CRDs to OpenShift 4 plus workloads; no `Throttling request took **` messages in cluster-monitoring-operator, and no error with `oc get pod -A`.

Comment 11 errata-xmlrpc 2021-02-24 15:34:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

