Bug 1894574
Summary: | deployment issue and throttling request alerts | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | mchebbi <mchebbi>
Component: | oc | Assignee: | Maciej Szulik <maszulik>
Status: | CLOSED DUPLICATE | QA Contact: | zhou ying <yinzhou>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 4.5 | CC: | aos-bugs, jokerman, mfojtik, xxia
Target Milestone: | --- | |
Target Release: | 4.7.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-11-26 14:03:36 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
mchebbi@redhat.com
2020-11-04 14:29:05 UTC
Throttling requests is normal, expected, and meant to be handled by the client. The operators do that. If oc doesn't, this is a bug. Changing component.

As Stefan mentioned in the previous comment, the throttling is perfectly normal behavior, and oc also respects it since it is built on the same primitives as the operators, namely the client-go library. For the `oc apply` command, I'd need logs from an execution with -v=8 to be able to diagnose what's preventing `oc apply` from succeeding (a minimal sketch of capturing those logs is appended at the end of this report). It would also be helpful to have the YAML definitions of those deployments.

shorturl.at/azI49 ==> requested information. I have attached all of the requested information at the link above.

It looks like the throttling messages appearing on screen were bug https://bugzilla.redhat.com/show_bug.cgi?id=1894574. Could you confirm? The customer has upgraded the Dev cluster to 4.5.16 and is still seeing throttling messages on stdout. He doesn't see those messages on other clusters that are also on 4.5.16. The Dev cluster carries more load than any other cluster at this point. I have attached a must-gather for the upgraded Dev cluster, the YAML file used for the apply, and the output of the apply command, which fails the IBM UCD plugin because the plugin doesn't expect any messages on stdout/stderr. Is there a way to suppress these messages?

The customer also wanted to pass along an observation in case it helps the investigation. It looks like the oc CLI itself may play a role in displaying the throttling messages. He has a cluster on 4.4.16 and used two versions of the oc CLI against it. He deliberately requested an invalid resource because that consistently reproduces the throttling message. See below:

1. oc CLI version 4.4

```
$ ./oc version
Client Version: openshift-clients-4.4.0-202006211643.p0-2-gd89e458c3
Server Version: 4.4.16
Kubernetes Version: v1.17.1+b83bc57

$ ./oc get ppv     <=== invalid resource
error: the server doesn't have a resource type "ppv"
```

2. oc CLI version 4.5.16

```
$ oc version
Client Version: 4.5.16
Server Version: 4.4.16
Kubernetes Version: v1.17.1+b83bc57

$ oc get ppv       <=== invalid resource
I1117 10:55:20.750698    9336 request.go:621] Throttling request took 1.023522471s, request: GET:https://api.aws2.ocplb.travp.net:6443/apis/kibana.k8s.elastic.co/v1?timeout=32s
error: the server doesn't have a resource type "ppv"
```

With oc CLI 4.5.16 he consistently gets the throttling message. Another observation: the message appears the first time a particular API call is made, but then does not appear again for some time on subsequent calls if the previous call succeeded.

Hello, could you please check the requested information at shorturl.at/azI49 and the description in my previous comment?

The throttling is caused by large numbers of CRDs during discovery; compare https://bugzilla.redhat.com/show_bug.cgi?id=1899575 (a rough way to gauge the discovery size is also appended below). Currently there is no way to bypass these messages; in 4.7 we are increasing the limit so that they should not happen as frequently.

*** This bug has been marked as a duplicate of bug 1899575 ***
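A minimal sketch of capturing the -v=8 logs requested above; `deployment.yaml` and `oc-apply-v8.log` are hypothetical placeholders for the customer's actual manifest and output path, not names taken from this case:

```
# -v=8 makes oc log every HTTP request/response it issues; that verbose
# output goes to stderr, so redirecting stderr captures it without mixing
# it into the command's normal stdout.
oc apply -f deployment.yaml -v=8 2> oc-apply-v8.log
```

The resulting log file, together with the deployment YAML, is the material asked for in the comment above.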
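For anyone hitting the same symptom, a rough way to gauge how much discovery data the client has to walk, using standard oc commands (the counts are illustrative, not taken from this cluster):

```
# Count the CustomResourceDefinitions installed on the cluster; each CRD's
# group/version adds to the discovery requests the client has to make.
oc get crds --no-headers | wc -l

# Count every resource type the client sees during discovery.
oc api-resources --no-headers | wc -l
```

On a cluster with many CRDs, such as the heavily loaded Dev cluster described above, the burst of discovery requests can exceed the client-side rate limit, which is what produces the "Throttling request took ..." lines.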