Description of problem:
At loglevel=2, all REST API calls seem to be logged in the atomic-openshift-master logs

Version-Release number of selected component (if applicable):
3.5.0.5

How reproducible:
Always

Steps to Reproduce:
1. Install 3.5 cluster
2. Create some projects and applications
3. journalctl -lf -u atomic-openshift-master

Actual results:

root 2429 1 7 09:00 ? 00:01:20 /usr/bin/openshift start master --config=/etc/origin/master/master-config.yaml --loglevel=2

Jan 19 09:18:11 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:11.565341 2429 panics.go:76] GET /api/v1/nodes?resourceVersion=0: (918.371µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.302373 2429 panics.go:76] GET /apis/extensions/v1beta1/thirdpartyresources: (1.050177ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.395607 2429 panics.go:76] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-45-230.us-west-2.compute.internal&resourceVersion=0: (808.75µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44462]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.469082 2429 panics.go:76] PUT /api/v1/nodes/ip-172-31-45-230.us-west-2.compute.internal/status: (5.089357ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44462]
Jan 19 09:18:15 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:15.678314 2429 panics.go:76] GET /api/v1/watch/persistentvolumes?resourceVersion=84985&timeoutSeconds=574: (9m34.002922289s) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:pv-binder-controller] 172.31.45.230:44510]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.001977 2429 panics.go:76] GET /api/v1/namespaces/default/services/kubernetes: (1.061663ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.377285 2429 panics.go:76] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-8-135.us-west-2.compute.internal&resourceVersion=0: (829.055µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.462134 2429 panics.go:76] PUT /api/v1/nodes/ip-172-31-8-135.us-west-2.compute.internal/status: (6.833231ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:16 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:16.567375 2429 panics.go:76] GET /api/v1/nodes?resourceVersion=0: (946.941µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.45.230:44446]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.395887 2429 panics.go:76] GET /apis/batch/v2alpha1/cronjobs: (3.208792ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:job-controller] 172.31.45.230:44510]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.399267 2429 panics.go:76] GET /apis/batch/v2alpha1/jobs: (2.665742ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:job-controller] 172.31.45.230:44510]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.470564 2429 panics.go:76] GET /api/v1/namespaces/svt20/secrets/default-token-8bf2p: (2.211554ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.470724 2429 panics.go:76] GET /api/v1/namespaces/svt12/secrets/default-token-8h349: (2.334479ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.670533 2429 panics.go:76] GET /api/v1/namespaces/svt20/secrets/default-dockercfg-g5zfq: (2.179758ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:18 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:18.670539 2429 panics.go:76] GET /api/v1/namespaces/svt12/secrets/default-dockercfg-4jjtf: (2.263993ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.391647 2429 panics.go:76] GET /api/v1/namespaces/svt12/secrets/nodejs-mongodb-example: (1.530316ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.391721 2429 panics.go:76] GET /api/v1/namespaces/svt20/secrets/nodejs-mongodb-example: (1.620302ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.393438 2429 panics.go:76] PATCH /api/v1/namespaces/svt12/events/mongodb-1-h6d6v.149b313da39762a6: (3.056549ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.397173 2429 panics.go:76] PATCH /api/v1/namespaces/svt20/events/nodejs-mongodb-example-17-2x470.149b31392c637e17: (2.696075ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.30.114:47452]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.550496 2429 panics.go:76] GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-48.us-west-2.compute.internal&resourceVersion=0: (795.727µs) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.18.48:56108]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.624877 2429 panics.go:76] PUT /api/v1/nodes/ip-172-31-18-48.us-west-2.compute.internal/status: (5.884497ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.18.48:56108]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.953578 2429 panics.go:76] GET /api/v1/namespaces/svt18/pods/mongodb-1-cwvd5: (1.701622ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.960525 2429 panics.go:76] PUT /api/v1/namespaces/svt18/pods/mongodb-1-cwvd5/status: (4.388732ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.965823 2429 panics.go:76] GET /api/v1/namespaces/svt18/endpoints/mongodb: (3.83974ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:endpoint-controller] 172.31.45.230:44510]
Jan 19 09:18:19 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:19.970679 2429 panics.go:76] PUT /api/v1/namespaces/svt18/endpoints/mongodb: (4.260576ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:endpoint-controller] 172.31.45.230:44510]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.002497 2429 panics.go:76] GET /api/v1/namespaces/svt13/secrets/default-token-fqtjj: (2.105831ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.002502 2429 panics.go:76] GET /api/v1/namespaces/svt18/secrets/default-token-qsxql: (2.90195ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.253143 2429 panics.go:76] GET /api/v1/namespaces/svt18/secrets/default-dockercfg-gdgwz: (1.901794ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]
Jan 19 09:18:20 ip-172-31-45-230.us-west-2.compute.internal atomic-openshift-master[2429]: I0119 09:18:20.253573 2429 panics.go:76] GET /api/v1/namespaces/svt13/secrets/default-dockercfg-t17kw: (1.717725ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4] 172.31.8.135:50922]

Expected results:
Minimal informational logging at loglevel=2

Additional info:
The --loglevel option is really a bad name as people associate it with syslog-type loglevels. Really, it is a "debug verbosity level", more like -vv in other commands. --loglevel should only be set when debugging where verbose logs are desired. When --loglevel is not set, you still get ERROR, WARNING, and INFO level messages in the log. Is there a particular reason you are running with --loglevel=2?
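For reference, here is a minimal standalone Go sketch, an illustration of my own rather than OpenShift code, using the github.com/golang/glog package that (as I understand it) backs --loglevel. It shows the distinction described above: severity messages (ERROR/WARNING/INFO) are always emitted, while V(n) messages are gated by the -v / --loglevel verbosity.

package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers -v, -vmodule, -logtostderr, etc. in its init().
	flag.Set("logtostderr", "true")
	flag.Parse()

	glog.Error("emitted at any verbosity")
	glog.Warning("emitted at any verbosity")
	glog.Info("emitted at any verbosity")
	glog.V(2).Info("emitted only when verbosity (-v, i.e. --loglevel) is >= 2")
	glog.Flush()
}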
That is the default loglevel QE has run with in the past. Generally we want enough logging so that if problems occur, the logs are useful to development in case the problem is not easily reproducible.
Ok, let me see if someone changed it upstream for 1.5 and, if so, determine the reason for doing so.
Avesh - can you track down if there was a change in behavior and why?
yeah sure, will look into it soon.
I think I have found a solution and I am doing some testing and will report back soon.
I sent the following PR to kube upstream to address this issue: https://github.com/kubernetes/kubernetes/pull/40933
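To make the intent easier to follow, here is a rough sketch of the general shape of such a change; it is not the actual diff from that PR, and requestRecord/logRequest are hypothetical names used only for illustration. The idea is that the per-request trace line is gated behind a higher verbosity, so it no longer appears at the default --loglevel.

package main

import (
	"flag"
	"time"

	"github.com/golang/glog"
)

// requestRecord is a hypothetical stand-in for the data the apiserver logs per request.
type requestRecord struct {
	method, uri string
	status      int
	elapsed     time.Duration
}

// logRequest gates the per-request trace line behind a higher verbosity,
// so it only shows up when request tracing is explicitly asked for.
func logRequest(r requestRecord) {
	if glog.V(3) {
		glog.Infof("%s %s: (%v) %d", r.method, r.uri, r.elapsed, r.status)
	}
}

func main() {
	flag.Parse()
	logRequest(requestRecord{"GET", "/api/v1/nodes?resourceVersion=0", 200, 918 * time.Microsecond})
	glog.Flush()
}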
Verified these messages do not appear in OCP 3.4 when the master is at loglevel 3.4
comment 8 should say "loglevel 2"
same as https://github.com/openshift/origin/issues/13724
(In reply to Eric Paris from comment #10)
> same as https://github.com/openshift/origin/issues/13724

Help get this https://github.com/kubernetes/kubernetes/pull/40933 merged.
https://github.com/openshift/origin/pull/13844
Verified on 3.6.99
This regressed in the 1.7 rebase. I opened https://bugzilla.redhat.com/show_bug.cgi?id=1484095 to track it.
Marking as closed, since this was fixed in 3.6 and the BZ was targeting a fix in 3.6.

I think it would be --v=2 --vmodule="panics=4"

What was the error message?
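For anyone trying those flags outside of OpenShift, here is a small standalone glog demo (again an illustration of my own, not OpenShift code) showing how -v and -vmodule interact; run it with, for example, -logtostderr -v=2 -vmodule=main=4.

package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	flag.Parse() // glog registers -v, -vmodule and -logtostderr on the default flag set

	glog.V(2).Info("emitted whenever -v >= 2")
	glog.V(4).Info("emitted from this file only when a matching -vmodule entry is >= 4")
	glog.Flush()
}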
It is --loglevel in openshift
(In reply to Seth Jennings from comment #23)
> It is --loglevel in openshift

loglevel is only for --v; the other thing here is how to configure vmodule in openshift.
Yes, I was replying specifically to the --v error. I've looked in the code and it doesn't look like openshift allows vmodule to be set by users, which is unfortunate.
As per Seth's last comment, I understand there is no current workaround for this bug in OCP 3.5 apart from setting loglevel=1 then?
(In reply to Joel Rosental R. from comment #26)
> As per Seth's last comment, I understand there is no current workaround for
> this bug in OCP 3.5 apart from setting loglevel=1 then?

Hi Joel, David (@deads) told me that there is another flag, --logspec, that maps to vmodule and can be used to configure per-module/per-file logging. So it seems you could try: --logspec='log=3'. Let me know whether this works for you or not.
I have investigated this and I see the same behaviour: using logspec does not lower the log level to 1. The reason appears to be glog's behaviour: logspec can raise the verbosity above the specified loglevel for a particular file, but it never goes below the specified loglevel. For example, if loglevel is 2, logspec can raise a particular file to levels > 2, but it never goes below 2. Similarly, if loglevel is 3, logspec can go > 3 but never down to 2 or lower. I am not sure whether glog does this intentionally or it is a bug in glog, but it is what it is. Right now, the only way I see to handle this issue is to patch openshift like we have done for 3.6/3.7.
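To illustrate the behaviour described above, here is a simplified model of my reading of glog's V() check (not the verbatim library source): the global verbosity test comes first and short-circuits to true, so a per-file override from -vmodule / --logspec can only raise the effective level for a file, never pull it below the global --loglevel.

package main

import "fmt"

// enabled models glog's V(level) decision: the global verbosity wins first, and the
// per-file (vmodule/logspec) level is only consulted when the global check fails.
func enabled(level, globalVerbosity, perFileLevel int, perFileSet bool) bool {
	if globalVerbosity >= level {
		return true // allowed by --loglevel; the per-file setting is never even consulted
	}
	if perFileSet {
		return perFileLevel >= level // a per-file entry can only enable more logging
	}
	return false
}

func main() {
	// --loglevel=2 with a per-file override of 1: the V(2) request log still fires.
	fmt.Println(enabled(2, 2, 1, true)) // true
	// --loglevel=1 with a per-file override of 3: V(2) fires only in that file...
	fmt.Println(enabled(2, 1, 3, true)) // true
	// ...and stays off everywhere else.
	fmt.Println(enabled(2, 1, 0, false)) // false
}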