Created attachment 1702509 [details]
cluster-monitoring-operator deploy file

Description of problem:
Found a lot of "Throttling request" messages in the 4.6 cluster-monitoring-operator (CMO) logs:

# oc -n openshift-monitoring logs $(oc -n openshift-monitoring get po | grep cluster-monitoring-operator | awk '{print $1}') -c cluster-monitoring-operator | grep "Throttling request took" | tail
I0727 10:46:50.279312 1 request.go:557] Throttling request took 195.286579ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/secrets/prometheus-k8s-grpc-tls-eng7e4r7vqpm7
I0727 10:46:50.479313 1 request.go:557] Throttling request took 195.488823ms, request: PUT:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/secrets/prometheus-k8s-grpc-tls-eng7e4r7vqpm7
I0727 10:46:50.679332 1 request.go:557] Throttling request took 191.594194ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/secrets?labelSelector=monitoring.openshift.io%2Fname%3Dprometheus-k8s-grpc-tls%2Cmonitoring.openshift.io%2Fhash%21%3Deng7e4r7vqpm7
I0727 10:46:50.879313 1 request.go:557] Throttling request took 190.747872ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle
I0727 10:46:51.079299 1 request.go:557] Throttling request took 167.908961ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle
I0727 10:46:51.279341 1 request.go:557] Throttling request took 178.821893ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle-d34s91lhv300e
I0727 10:46:51.479325 1 request.go:557] Throttling request took 182.213621ms, request: PUT:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle-d34s91lhv300e
I0727 10:46:51.679335 1 request.go:557] Throttling request took 174.716322ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps?labelSelector=monitoring.openshift.io%2Fname%3Dprometheus%2Cmonitoring.openshift.io%2Fhash%21%3Dd34s91lhv300e
I0727 10:47:01.910380 1 request.go:557] Throttling request took 113.001737ms, request: DELETE:https://172.30.0.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/servicemonitors/kube-scheduler
I0727 10:47:02.110482 1 request.go:557] Throttling request took 195.571121ms, request: DELETE:https://172.30.0.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/servicemonitors/openshift-apiserver

Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-07-25-091217

How reproducible:
Always

Steps to Reproduce:
1. Check the cluster-monitoring-operator logs as shown in the description.

Actual results:
The CMO log contains many "Throttling request took" messages.

Expected results:
The CMO is not throttled by its own client-side rate limiter during normal reconciliation.

Additional info:
The issue is fixed with 4.6.0-0.nightly-2020-08-25-234625:

# oc -n openshift-monitoring get deploy cluster-monitoring-operator -oyaml
...
- -logtostderr=true
- -v=2
...

# oc -n openshift-monitoring logs $(oc -n openshift-monitoring get po | grep cluster-monitoring-operator | awk '{print $1}') -c cluster-monitoring-operator | grep "Throttling request took" | tail
No output; the throttling messages are gone.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196