Bug 1960002 - Reduce number of kubelet WATCH requests
Summary: Reduce number of kubelet WATCH requests
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.6.z
Assignee: Elana Hashman
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On: 1951815
Blocks:
 
Reported: 2021-05-12 19:05 UTC by OpenShift BugZilla Robot
Modified: 2021-05-26 06:28 UTC (History)
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The kubelet can sometimes open a large number of WATCH requests for secrets and configmaps, particularly on node reboot.
Consequence: The API servers may be overwhelmed under load.
Fix: Reduce the number of kubelet WATCH requests.
Result: Load on the API servers is reduced.
Clone Of:
Environment:
Last Closed: 2021-05-26 06:27:55 UTC
Target Upstream Version:
Embargoed:
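The WATCH load described in the Doc Text can be observed on the API-server side. As an illustration only, the sketch below greps a small, fabricated sample of API-server audit-log events (the field names `verb`, `objectRef.resource`, and `user.username` follow the Kubernetes audit Event schema, but real audit-log paths, policy, and exact contents vary per cluster) to count secret/configmap WATCH requests issued by kubelets, which authenticate as `system:node:<name>`.

```shell
# Fabricated sample of audit-log events, one JSON event per line.
# Real clusters write these under the audit-log path configured on the
# kube-apiserver; this file exists only to make the grep reproducible.
cat > /tmp/audit-sample.log <<'EOF'
{"verb":"watch","objectRef":{"resource":"secrets"},"user":{"username":"system:node:worker-0"}}
{"verb":"watch","objectRef":{"resource":"configmaps"},"user":{"username":"system:node:worker-0"}}
{"verb":"get","objectRef":{"resource":"pods"},"user":{"username":"system:node:worker-0"}}
{"verb":"watch","objectRef":{"resource":"secrets"},"user":{"username":"system:node:worker-1"}}
EOF

# Count WATCH events for secrets/configmaps issued by node (kubelet) users.
grep '"verb":"watch"' /tmp/audit-sample.log \
  | grep -E '"resource":"(secrets|configmaps)"' \
  | grep -c '"username":"system:node:'
# -> 3
```

The same pipeline pointed at a real audit log, before and after a node reboot, gives a rough per-node picture of the watch volume this fix is meant to reduce.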




Links
- Github openshift/kubernetes pull 745 (closed): [release-4.6] Bug 1960002: UPSTREAM: 99393: kubelet: reduce configmap and secret watch (last updated 2021-05-17 18:20:00 UTC)
- Red Hat Product Errata RHBA-2021:1565 (last updated 2021-05-26 06:28:06 UTC)

Comment 3 Sunil Choudhary 2021-05-20 08:53:55 UTC
Checked on 4.6.0-0.nightly-2021-05-19-154420; rebooted the node multiple times.
The number of watch calls is now low.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2021-05-19-154420   True        False         60m     Cluster version is 4.6.0-0.nightly-2021-05-19-154420

$ oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-138-205.us-east-2.compute.internal   Ready    worker   81m   v1.19.0+9caf8fe
ip-10-0-149-244.us-east-2.compute.internal   Ready    master   87m   v1.19.0+9caf8fe
ip-10-0-176-207.us-east-2.compute.internal   Ready    worker   76m   v1.19.0+9caf8fe
ip-10-0-182-198.us-east-2.compute.internal   Ready    master   82m   v1.19.0+9caf8fe
ip-10-0-214-102.us-east-2.compute.internal   Ready    master   82m   v1.19.0+9caf8fe
ip-10-0-223-174.us-east-2.compute.internal   Ready    worker   76m   v1.19.0+9caf8fe

$ oc debug node/ip-10-0-138-205.us-east-2.compute.internal
Starting pod/ip-10-0-138-205us-east-2computeinternal-debug ...
...

sh-4.4# journalctl | grep -i "Starting reflector" | wc -l
209
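The `journalctl` count above can be broken down by watched type to see which reflectors dominate. The sketch below is illustrative only: it runs the breakdown against a fabricated three-line journal sample (the message text mimics the client-go reflector log line `Starting reflector *v1.Secret ...`, but the exact format varies by kubelet version), rather than against a live node.

```shell
# Fabricated kubelet journal lines; on a real node you would pipe
# `journalctl` output instead of this sample file.
cat > /tmp/journal-sample.log <<'EOF'
May 19 10:00:01 node kubelet[1234]: Starting reflector *v1.Secret (0s) from object-"openshift-dns"/"dns-default"
May 19 10:00:01 node kubelet[1234]: Starting reflector *v1.ConfigMap (0s) from object-"openshift-dns"/"dns-default"
May 19 10:00:02 node kubelet[1234]: Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go
EOF

# Tally reflector starts per watched type (highest count first).
grep -i "Starting reflector" /tmp/journal-sample.log \
  | grep -oE '\*v1\.[A-Za-z]+' \
  | sort | uniq -c | sort -rn
```

On an affected node before the fix, the `*v1.Secret` and `*v1.ConfigMap` tallies would be expected to dominate such a breakdown.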

Comment 5 errata-xmlrpc 2021-05-26 06:27:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.30 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1565

