Bug 1909280 - oc and other clients experiencing api throttling
Summary: oc and other clients experiencing api throttling
Keywords:
Status: CLOSED DUPLICATE of bug 1899575
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: ravig
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-12-18 19:56 UTC by Neil Girard
Modified: 2024-03-25 17:38 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-05 12:08:55 UTC
Target Upstream Version:
Embargoed:



Description Neil Girard 2020-12-18 19:56:22 UTC
Description of problem:
When running oc commands (and during must-gather), the output contains a large number of client-side throttling messages.

Example:
[cjan@fedora:troubleshoot]$ oc adm must-gather
[must-gather      ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f67b1a7d5ded1db536247d7e8baf4b31ba9defe84a390fb63074db44150411de
[must-gather      ] OUT namespace/openshift-must-gather-5dk4n created
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xft68 created
[must-gather      ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f67b1a7d5ded1db536247d7e8baf4b31ba9defe84a390fb63074db44150411de created
[must-gather-xzkqz] POD I1214 04:05:28.548683      28 request.go:621] Throttling request took 1.055151198s, request: GET:https://172.30.0.1:443/apis/k8s.cni.cncf.io/v1?timeout=32s
[must-gather-xzkqz] POD Wrote inspect data to must-gather.
[must-gather-xzkqz] POD Gathering data for ns/openshift-cluster-version...
[must-gather-xzkqz] POD I1214 04:05:35.908694      50 request.go:621] Throttling request took 1.184218823s, request: GET:https://172.30.0.1:443/apis/nodemaintenance.kubevirt.io/v1beta1?timeout=32s
[must-gather-xzkqz] POD Wrote inspect data to must-gather.
[must-gather-xzkqz] POD I1214 04:05:41.160161      70 request.go:621] Throttling request took 1.150392935s, request: GET:https://172.30.0.1:443/apis/config.openshift.io/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-config...
[must-gather-xzkqz] POD Gathering data for ns/openshift-config-managed...
[must-gather-xzkqz] POD Gathering data for ns/openshift-authentication...
[must-gather-xzkqz] POD I1214 04:05:53.650397      70 request.go:621] Throttling request took 1.18458617s, request: GET:https://172.30.0.1:443/apis/k8s.cni.cncf.io/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-authentication-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-ingress...
[must-gather-xzkqz] POD Gathering data for ns/openshift-cloud-credential-operator...
[must-gather-xzkqz] POD I1214 04:06:06.067388      70 request.go:621] Throttling request took 1.187796208s, request: GET:https://172.30.0.1:443/apis/crunchydata.com/v1?timeout=32s
[must-gather-xzkqz] POD I1214 04:06:16.967303      70 request.go:621] Throttling request took 1.152879156s, request: GET:https://172.30.0.1:443/apis/ssp.kubevirt.io/v1?timeout=32s
[must-gather-xzkqz] POD I1214 04:06:27.066687      70 request.go:621] Throttling request took 1.146580995s, request: GET:https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-machine-api...
[must-gather-xzkqz] POD I1214 04:06:38.179811      70 request.go:621] Throttling request took 1.182212294s, request: GET:https://172.30.0.1:443/apis/crunchydata.com/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-config-operator...
[must-gather-xzkqz] POD I1214 04:06:48.586737      70 request.go:621] Throttling request took 1.183081918s, request: GET:https://172.30.0.1:443/apis/k8s.cni.cncf.io/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-console-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-console...
[must-gather-xzkqz] POD I1214 04:06:59.303676      70 request.go:621] Throttling request took 1.188475476s, request: GET:https://172.30.0.1:443/apis/metering.openshift.io/v1alpha1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-cluster-storage-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-dns-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-dns...
[must-gather-xzkqz] POD I1214 04:07:33.184993      70 request.go:621] Throttling request took 1.183924598s, request: GET:https://172.30.0.1:443/apis/snapshot.storage.k8s.io/v1beta1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-etcd-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-etcd...
[must-gather-xzkqz] POD I1214 04:08:07.323811      70 request.go:621] Throttling request took 1.184622448s, request: GET:https://172.30.0.1:443/apis/portworx.io/v1beta2?timeout=32s
[must-gather-xzkqz] POD I1214 04:08:17.335867      70 request.go:621] Throttling request took 1.360852174s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-image-registry...
[must-gather-xzkqz] POD I1214 04:08:46.952155      70 request.go:621] Throttling request took 1.183061354s, request: GET:https://172.30.0.1:443/apis/console.openshift.io/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-ingress-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-insights...
[must-gather-xzkqz] POD I1214 04:08:57.198363      70 request.go:621] Throttling request took 1.17873313s, request: GET:https://172.30.0.1:443/apis/image.openshift.io/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-apiserver-operator...
[must-gather-xzkqz] POD I1214 04:09:09.258120      70 request.go:621] Throttling request took 1.186311669s, request: GET:https://172.30.0.1:443/apis/core.libopenstorage.org/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-apiserver...
[must-gather-xzkqz] POD I1214 04:09:39.345130      70 request.go:621] Throttling request took 1.183485555s, request: GET:https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-controller-manager...
[must-gather-xzkqz] POD I1214 04:09:59.308482      70 request.go:621] Throttling request took 1.187549076s, request: GET:https://172.30.0.1:443/apis/portworx.io/v1beta2?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-controller-manager-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-scheduler...
[must-gather-xzkqz] POD I1214 04:10:11.281310      70 request.go:621] Throttling request took 1.185456191s, request: GET:https://172.30.0.1:443/apis/appprotect.f5.com/v1beta1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-scheduler-operator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-storage-version-migrator...
[must-gather-xzkqz] POD Gathering data for ns/openshift-kube-storage-version-migrator-operator...
[must-gather-xzkqz] POD I1214 04:10:22.463615      70 request.go:621] Throttling request took 1.179593995s, request: GET:https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-cluster-machine-approver...
[must-gather-xzkqz] POD I1214 04:10:33.395779      70 request.go:621] Throttling request took 1.186185997s, request: GET:https://172.30.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-machine-config-operator...
[must-gather-xzkqz] POD I1214 04:10:55.283476      70 request.go:621] Throttling request took 1.184540423s, request: GET:https://172.30.0.1:443/apis/autoscaling.internal.knative.dev/v1alpha1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-marketplace...
[must-gather-xzkqz] POD I1214 04:11:06.051802      70 request.go:621] Throttling request took 1.168238502s, request: GET:https://172.30.0.1:443/apis/stork.libopenstorage.org/v1alpha1?timeout=32s
[must-gather-xzkqz] POD Gathering data for ns/openshift-monitoring...
[must-gather-xzkqz] OUT gather logs unavailable: unexpected EOF
[must-gather-xzkqz] OUT waiting for gather to complete

Most operations seem unfazed by this, but I cannot determine whether the throttling is playing a role in some of the timeouts being seen.
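
For context, these messages come from client-go's client-side token-bucket rate limiter: when a cluster exposes many API groups (as the GETs against kubevirt.io, crunchydata.com, portworx.io, etc. above suggest), discovery alone can exhaust the request budget and queue requests, producing the "Throttling request took" lines. Below is a minimal sketch of how a Go client can raise the limits on its rest.Config; the QPS/Burst values are illustrative, not oc's actual defaults.

~~~
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way oc/kubectl would.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// client-go enforces a client-side token-bucket rate limiter. With low
	// QPS/Burst and a cluster that serves many API groups, requests queue
	// up behind the limiter, which is what produces the throttling messages
	// seen in the log above. Raising the limits shortens the client-side
	// wait. These values are illustrative, not oc's defaults.
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d nodes without tripping the client-side limiter\n", len(nodes.Items))
}
~~~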

Version-Release number of selected component (if applicable):
4.5.19

How reproducible:
The customer sees this all the time. I'm unable to reproduce it in my own 4.5 cluster.

Steps to Reproduce:
N/A

Actual results:
Throttling messages appear constantly in the oc client's terminal output.

Expected results:
oc commands run without any throttling messages in the terminal output.

Additional info:
The customer is hitting this. Many log gathers are available on supportshell and attached to the case. Case number: 02823948.

Comment 1 David Hernández Fernández 2020-12-20 09:49:07 UTC
Could this be a local setup issue? The throttling log spam could be related to the local setup where you are running the project. See:
~~~
if latency > extraLongThrottleLatency {
	// If the rate limiter latency is very high, the log message should be printed at a higher log level,
	// but we use a throttled logger to prevent spamming.
	globalThrottledLogger.Infof("Throttling request took %v, request: %s:%s", latency, r.verb, r.URL().String())
}
~~~

From: https://github.com/kubernetes/client-go/blob/3d5c80942cce510064da1ab62c579e190a0230fd/rest/request.go#L574-L575 - https://github.com/operator-framework/operator-sdk/issues/3216
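
For reference, the quoted snippet relies on a throttled logger so that even a burst of over-threshold requests produces at most one log line per interval. Here is a minimal standalone sketch of that pattern; the names and structure are illustrative, not client-go's real implementation.

~~~
package main

import (
	"fmt"
	"sync"
	"time"
)

// throttledLogger mirrors the idea behind client-go's globalThrottledLogger:
// even when many requests exceed the latency threshold at once, only one
// message per interval reaches the log, preventing spam.
type throttledLogger struct {
	mu       sync.Mutex
	last     time.Time
	interval time.Duration
}

func (l *throttledLogger) Infof(format string, args ...interface{}) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if time.Since(l.last) < l.interval {
		return // drop the message; we logged too recently
	}
	l.last = time.Now()
	fmt.Printf(format+"\n", args...)
}

func main() {
	logger := &throttledLogger{interval: time.Second}
	// extraLongThrottleLatency is 1s in the quoted client-go code.
	const extraLongThrottleLatency = time.Second

	for i := 0; i < 100; i++ {
		latency := 1100 * time.Millisecond // pretend every request waited >1s
		if latency > extraLongThrottleLatency {
			logger.Infof("Throttling request took %v, request: GET:%s",
				latency, "https://172.30.0.1:443/apis/...")
		}
		time.Sleep(50 * time.Millisecond)
	}
	// Despite 100 over-threshold requests, only a handful of lines print.
}
~~~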

Comment 3 Stefan Schimanski 2021-01-05 09:44:24 UTC
Throttling messages are normal and expected. oc adm must-gather might be a little too verbose here; a message after a second of latency seems to be too much for a CLI command.
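
For illustration, one way a CLI could keep such messages out of default output is to gate them behind a klog verbosity level, so users only see them when explicitly asking for verbose logs. A minimal sketch, assuming klog/v2; the level chosen here is illustrative, not what oc or client-go actually uses.

~~~
package main

import (
	"flag"
	"time"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog flags so -v controls verbosity (e.g. run with -v=3).
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	latency := 1184 * time.Millisecond // a latency like the ones in the log above

	// Gating the message behind V(3) keeps default CLI output clean:
	// at the default verbosity this prints nothing, and a user who wants
	// the detail opts in with -v=3.
	klog.V(3).Infof("Throttling request took %v, request: GET:%s",
		latency, "https://172.30.0.1:443/apis/config.openshift.io/v1?timeout=32s")
}
~~~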

Comment 4 Maciej Szulik 2021-01-05 12:08:55 UTC

*** This bug has been marked as a duplicate of bug 1899575 ***

