Bug 1414813

Summary: OCP 3.5: log spam: at loglevel=2 all REST calls are being logged in the master logs
Product: OpenShift Container Platform Reporter: Mike Fiedler <mifiedle>
Component: Node Assignee: Avesh Agarwal <avagarwa>
Status: CLOSED NOTABUG QA Contact: Mike Fiedler <mifiedle>
Severity: medium Docs Contact:
Priority: low    
Version: 3.5.0 CC: aos-bugs, avagarwa, clichybi, decarr, eparis, jokerman, jrosenta, mmccomas, sjenning, sreber, tparsons
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Previously, REST API calls were logged at log level 2, which is the default level. As a result, a large volume of logs was generated, and it was hard to see during debugging what had gone wrong. With this fix, REST API call logging is moved to log level 3 so that these calls are not logged by default.
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-08-24 15:10:00 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Mike Fiedler 2017-01-19 14:19:34 UTC
Description of problem:

At loglevel=2, all REST API calls seem to be logged in the atomic-openshift-master logs


Version-Release number of selected component (if applicable):

3.5.0.5

How reproducible: Always


Steps to Reproduce:
1.  Install 3.5 cluster
2.  Create some projects and applications
3.  journalctl -lf -u atomic-openshift-master



Actual results:

root       2429      1  7 09:00 ?        00:01:20 /usr/bin/openshift start master --config=/etc/origin/master/master-config.yaml --loglevel=2


Jan 19 09:18:11 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:11.565341    2429 panics.go:76] GET /api/v1/nodes?resourceVersion=0: (918.371µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.302373    2429 panics.go:76] GET /apis/extensions/v1beta1/thirdpartyresources: (1.050177ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.395607    2429 panics.go:76] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-45-230.us-west-2.compute.internal&resourceVersion=0: (808.75µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44462]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.469082    2429 panics.go:76] PUT /api/v1/nodes/ip-172-31-45-230.us-west-2.compute.internal/status: (5.089357ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44462]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.678314    2429 panics.go:76] GET /api/v1/watch/persistentvolumes?resourceVersion=84985&timeoutSeconds=574: (9m34.002922289s) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:pv-binder-controller] 172.31.45.230:44510]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.001977    2429 panics.go:76] GET /api/v1/namespaces/default/services/kubernetes: (1.061663ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.377285    2429 panics.go:76] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-8-135.us-west-2.compute.internal&resourceVersion=0: (829.055µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.462134    2429 panics.go:76] PUT /api/v1/nodes/ip-172-31-8-135.us-west-2.compute.internal/status: (6.833231ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.567375    2429 panics.go:76] GET /api/v1/nodes?resourceVersion=0: (946.941µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.395887    2429 panics.go:76] GET /apis/batch/v2alpha1/cronjobs: (3.208792ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:job-controller] 172.31.45.230:44510]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.399267    2429 panics.go:76] GET /apis/batch/v2alpha1/jobs: (2.665742ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:job-controller] 172.31.45.230:44510]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.470564    2429 panics.go:76] GET /api/v1/namespaces/svt20/secrets/default-token-8bf2p: (2.211554ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.470724    2429 panics.go:76] GET /api/v1/namespaces/svt12/secrets/default-token-8h349: (2.334479ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.670533    2429 panics.go:76] GET /api/v1/namespaces/svt20/secrets/default-dockercfg-g5zfq: (2.179758ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.670539    2429 panics.go:76] GET /api/v1/namespaces/svt12/secrets/default-dockercfg-4jjtf: (2.263993ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.391647    2429 panics.go:76] GET /api/v1/namespaces/svt12/secrets/nodejs-mongodb-example: (1.530316ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.391721    2429 panics.go:76] GET /api/v1/namespaces/svt20/secrets/nodejs-mongodb-example: (1.620302ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.393438    2429 panics.go:76] PATCH /api/v1/namespaces/svt12/events/mongodb-1-h6d6v.149b313da39762a6: (3.056549ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.397173    2429 panics.go:76] PATCH /api/v1/namespaces/svt20/events/nodejs-mongodb-example-17-2x470.149b31392c637e17: (2.696075ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.550496    2429 panics.go:76] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-48.us-west-2.compute.internal&resourceVersion=0: (795.727µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.18.48:56108]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.624877    2429 panics.go:76] PUT /api/v1/nodes/ip-172-31-18-48.us-west-2.compute.internal/status: (5.884497ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.18.48:56108]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.953578    2429 panics.go:76] GET /api/v1/namespaces/svt18/pods/mongodb-1-cwvd5: (1.701622ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.960525    2429 panics.go:76] PUT /api/v1/namespaces/svt18/pods/mongodb-1-cwvd5/status: (4.388732ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.965823    2429 panics.go:76] GET /api/v1/namespaces/svt18/endpoints/mongodb: (3.83974ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:endpoint-controller] 172.31.45.230:44510]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.970679    2429 panics.go:76] PUT /api/v1/namespaces/svt18/endpoints/mongodb: (4.260576ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:endpoint-controller] 172.31.45.230:44510]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.002497    2429 panics.go:76] GET /api/v1/namespaces/svt13/secrets/default-token-fqtjj: (2.105831ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.002502    2429 panics.go:76] GET /api/v1/namespaces/svt18/secrets/default-token-qsxql: (2.90195ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.253143    2429 panics.go:76] GET /api/v1/namespaces/svt18/secrets/default-dockercfg-gdgwz: (1.901794ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.253573    2429 panics.go:76] GET /api/v1/namespaces/svt13/secrets/default-dockercfg-t17kw: (1.717725ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]


Expected results:

Minimal informational logging at loglevel=2

Additional info:

Comment 1 Seth Jennings 2017-01-27 15:37:30 UTC
The --loglevel option is really a misleading name, as people associate it with syslog-type log levels. It is actually a "debug verbosity level", more like -vv in other commands.

--loglevel should only be set when debugging where verbose logs are desired.  When --loglevel is not set, you still get ERROR, WARNING, and INFO level messages in the log.

Is there a particular reason you are running with --loglevel=2?
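
For context, here is a minimal sketch of how glog-style verbosity gating works, assuming the standard github.com/golang/glog package; the messages and layout are illustrative, not the actual OpenShift sources. Error, warning, and plain info lines are always emitted; only V(n) lines are gated by the verbosity that --loglevel maps to.

package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers -v, -vmodule, and -logtostderr on the standard flag set.
	flag.Parse()

	// Always emitted, regardless of verbosity (the ERROR/WARNING/INFO
	// messages mentioned above).
	glog.Error("something failed")
	glog.Warning("something looks off")
	glog.Info("normal informational message")

	// Only emitted when the verbosity threshold is high enough, e.g. -v=2
	// (which is what --loglevel=2 corresponds to).
	glog.V(2).Info("debug detail, visible at -v>=2")
	glog.V(3).Info("more verbose detail, visible at -v>=3")
}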

Comment 2 Mike Fiedler 2017-01-27 15:40:52 UTC
That is the default loglevel QE has run with in the past. Generally we want enough logging so that if problems occur, the logs are useful to development in case the problem is not easily reproducible.

Comment 3 Seth Jennings 2017-01-27 16:10:00 UTC
Ok, let me see if someone changed it upstream for 1.5 and, if so, determine the reason for doing so.

Comment 4 Derek Carr 2017-01-31 22:40:01 UTC
Avesh - can you track down if there was a change in behavior and why?

Comment 5 Avesh Agarwal 2017-01-31 23:16:25 UTC
yeah sure, will look into it soon.

Comment 6 Avesh Agarwal 2017-02-03 15:24:35 UTC
I think I have found a solution and I am doing some testing and will report back soon.

Comment 7 Avesh Agarwal 2017-02-03 18:52:57 UTC
I sent the following PR to Kubernetes upstream to address this issue:

https://github.com/kubernetes/kubernetes/pull/40933
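
Conceptually, and only as a rough sketch rather than the actual diff, the change described in the Doc Text amounts to gating the per-request access log line behind verbosity 3 instead of 2, so it no longer appears at the default --loglevel=2. The names below are illustrative.

package httplogsketch

import "github.com/golang/glog"

// logRequest is a hypothetical stand-in for the apiserver's request logger.
func logRequest(method, uri string, durationMs float64, status int) {
	// Previously this detail was effectively emitted at V(2); gating it at
	// V(3) keeps it out of the logs at the default verbosity.
	if glog.V(3) {
		glog.Infof("%s %s: (%.3fms) %d", method, uri, durationMs, status)
	}
}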

Comment 8 Mike Fiedler 2017-03-01 22:00:09 UTC
Verified these messages do not appear in OCP 3.4 when the master is at loglevel 3.4

Comment 9 Mike Fiedler 2017-03-01 22:00:39 UTC
comment 8 should say "loglevel 2"

Comment 10 Eric Paris 2017-04-12 13:27:04 UTC
same as https://github.com/openshift/origin/issues/13724

Comment 11 Avesh Agarwal 2017-04-12 13:43:44 UTC
(In reply to Eric Paris from comment #10)
> same as https://github.com/openshift/origin/issues/13724

Help get this https://github.com/kubernetes/kubernetes/pull/40933 merged.

Comment 13 Avesh Agarwal 2017-04-20 19:41:01 UTC
https://github.com/openshift/origin/pull/13844

Comment 14 Mike Fiedler 2017-06-07 19:41:04 UTC
Verified on 3.6.99

Comment 17 Mike Fiedler 2017-08-22 16:50:44 UTC
This regressed in the 1.7 rebase. I opened https://bugzilla.redhat.com/show_bug.cgi?id=1484095 to track it.

Comment 21 Eric Paris 2017-08-24 15:10:00 UTC
Marking as closed, since this was fixed in 3.6 and the BZ was targeting a fix in 3.6.

I think it would be --v=2 --vmodule="panics=4"

What was the error message?

Comment 23 Seth Jennings 2017-08-24 18:52:44 UTC
It is --loglevel in openshift

Comment 24 Avesh Agarwal 2017-08-24 19:00:59 UTC
(In reply to Seth Jennings from comment #23)
> It is --loglevel in openshift

loglevel only maps to --v; the other question here is how to configure vmodule in OpenShift.

Comment 25 Seth Jennings 2017-08-24 21:14:28 UTC
Yes, I was replying specifically to the --v error.  I've looked in the code and it doesn't look like openshift allows vmodule to be set by users, which is unfortunate.

Comment 26 Joel Rosental R. 2017-08-25 08:19:08 UTC
As per Seth's last comment, I understand there is no current workaround for this bug in OCP 3.5 apart from setting loglevel=1, then?

Comment 27 Avesh Agarwal 2017-08-28 14:26:10 UTC
(In reply to Joel Rosental R. from comment #26)
> As per Seth last comment I understand there is no current workaround for
> this bug in OCP 3.5 apart from setting loglevel=1 then?

Hi Joel,

David (@deads) told me that there is another flag, --logspec, which maps to vmodule and can be used to configure per-module/per-file logging. So it seems you could try: --logspec='log=3'. Let me know whether or not this works for you.

Comment 40 Avesh Agarwal 2017-12-15 18:30:33 UTC
So I have investigated this, and I am seeing the same behaviour: using logspec does not lower the log level to 1.

The reason appears to be glog's behaviour: logspec can raise the effective level above the specified loglevel for a particular file, but it never lowers it below the specified loglevel. For example, if loglevel is 2, logspec can raise a particular file to levels above 2, but it never drops below 2. Similarly, if loglevel is 3, logspec can go above 3, but never down to 2 or lower.

I am not sure whether glog does this intentionally or whether it is a bug in glog, but that is the current behaviour.

Right now, the only way I see to handle this issue is to patch OpenShift as we have done for 3.6/3.7.
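
For reference, this matches glog's V() decision logic as I read it: the global verbosity check comes first, so a vmodule/logspec entry can only enable extra lines for a matching file, never suppress lines the global level already allows. A simplified Go sketch, illustrative only and not the actual glog source:

package glogsketch

// vEnabled sketches whether a glog.V(requested) call would log, given the
// global -v / --loglevel value and any per-file level from vmodule/logspec.
func vEnabled(requested, globalVerbosity, perFileVerbosity int, vmoduleMatches bool) bool {
	// Fast path: if the global level already covers the requested level,
	// the line is logged no matter what vmodule says.
	if globalVerbosity >= requested {
		return true
	}
	// Only when the global level is too low does vmodule get a say, and then
	// only to *raise* the effective level for matching files.
	if vmoduleMatches {
		return perFileVerbosity >= requested
	}
	return false
}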