Bug 1860889 - reduce the loglevel for CMO logs
Summary: reduce the loglevel for CMO logs
Keywords:
Status: VERIFIED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.6.0
Assignee: Simon Pasquier
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-07-27 10:58 UTC by Junqi Zhao
Modified: 2020-08-26 07:36 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:


Attachments
cluster-monitoring-operator deploy file (11.04 KB, text/plain)
2020-07-27 10:58 UTC, Junqi Zhao


Links
GitHub openshift/cluster-monitoring-operator pull 913 (closed): Bug 1860889: decrease CMO log verbosity from 3 to 2 (last updated 2020-09-16 10:01:52 UTC)

Description Junqi Zhao 2020-07-27 10:58:54 UTC
Created attachment 1702509 [details]
cluster-monitoring-operator deploy file

Description of problem:
There are a lot of "Throttling request took" messages in the 4.6 CMO logs:
# oc -n openshift-monitoring logs $(oc -n openshift-monitoring get po | grep cluster-monitoring-operator | awk '{print $1}') -c cluster-monitoring-operator | grep "Throttling request took" | tail
I0727 10:46:50.279312       1 request.go:557] Throttling request took 195.286579ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/secrets/prometheus-k8s-grpc-tls-eng7e4r7vqpm7
I0727 10:46:50.479313       1 request.go:557] Throttling request took 195.488823ms, request: PUT:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/secrets/prometheus-k8s-grpc-tls-eng7e4r7vqpm7
I0727 10:46:50.679332       1 request.go:557] Throttling request took 191.594194ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/secrets?labelSelector=monitoring.openshift.io%2Fname%3Dprometheus-k8s-grpc-tls%2Cmonitoring.openshift.io%2Fhash%21%3Deng7e4r7vqpm7
I0727 10:46:50.879313       1 request.go:557] Throttling request took 190.747872ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle
I0727 10:46:51.079299       1 request.go:557] Throttling request took 167.908961ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle
I0727 10:46:51.279341       1 request.go:557] Throttling request took 178.821893ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle-d34s91lhv300e
I0727 10:46:51.479325       1 request.go:557] Throttling request took 182.213621ms, request: PUT:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/prometheus-trusted-ca-bundle-d34s91lhv300e
I0727 10:46:51.679335       1 request.go:557] Throttling request took 174.716322ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps?labelSelector=monitoring.openshift.io%2Fname%3Dprometheus%2Cmonitoring.openshift.io%2Fhash%21%3Dd34s91lhv300e
I0727 10:47:01.910380       1 request.go:557] Throttling request took 113.001737ms, request: DELETE:https://172.30.0.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/servicemonitors/kube-scheduler
I0727 10:47:02.110482       1 request.go:557] Throttling request took 195.571121ms, request: DELETE:https://172.30.0.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/servicemonitors/openshift-apiserver
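
These messages come from client-go's client-side rate limiter: requests beyond the configured burst are queued and released at the configured QPS, and any request delayed more than about 50 ms is logged by client-go's request.go at verbosity 3, which is exactly the line number shown in the excerpt above. A minimal Go sketch of where those limits live; the namespace loop and the QPS/Burst values are illustrative, not CMO's actual settings:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside a pod with a service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}

	// Client-side rate limiting: requests beyond Burst are queued and
	// released at QPS per second. When a queued request waits more than
	// ~50 ms, client-go logs "Throttling request took ..." at V(3).
	// These are the client-go defaults, stated here explicitly.
	cfg.QPS = 5
	cfg.Burst = 10

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A burst of reads, like the ones CMO's reconcile loop issues against
	// secrets and configmaps, quickly exhausts the bucket and starts
	// hitting the throttle path above.
	for i := 0; i < 20; i++ {
		_, err := client.CoreV1().ConfigMaps("openshift-monitoring").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
	}
	fmt.Println("done")
}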


Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-07-25-091217

How reproducible:
always

Steps to Reproduce:
1. Check the cluster-monitoring-operator logs as shown in the description.

Actual results:
The CMO log is flooded with "Throttling request took" messages because the operator runs at verbosity -v=3.

Expected results:
CMO runs at a lower verbosity so that routine client-go throttling messages are not logged.

Additional info:

Comment 8 Junqi Zhao 2020-08-26 07:36:44 UTC
The issue is fixed in 4.6.0-0.nightly-2020-08-25-234625; CMO now runs with -v=2:
# oc -n openshift-monitoring get deploy cluster-monitoring-operator -oyaml
...
        - -logtostderr=true
        - -v=2
...

# oc -n openshift-monitoring logs $(oc -n openshift-monitoring get po | grep cluster-monitoring-operator | awk '{print $1}') -c cluster-monitoring-operator | grep "Throttling request took" | tail
no output, the throttling messages are gone
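
The fix (PR 913) does not remove the throttling itself; it lowers CMO's klog verbosity from 3 to 2, so the V(3) messages are no longer emitted. A minimal sketch of how klog's -v flag gates these messages; the log text is illustrative:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// klog registers its flags, including -v, on the default FlagSet.
	klog.InitFlags(nil)
	flag.Set("v", "2") // what CMO now runs with; previously 3
	flag.Parse()

	// Emitted: level 2 is enabled at -v=2.
	klog.V(2).Info("reconcile loop finished") // hypothetical message

	// Suppressed: client-go logs its throttling message at V(3),
	// so it disappears once the operator runs at -v=2.
	klog.V(3).Info("Throttling request took 195ms, request: GET:...")

	klog.Flush()
}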

