Bug 1885245 - kube-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
Summary: kube-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Kemal Akkoyun
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks: 1896304
 
Reported: 2020-10-05 13:44 UTC by Sergiusz Urbaniak
Modified: 2021-06-07 09:12 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-07 09:11:58 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Github openshift/kube-state-metrics pull 38
Status: closed
Summary: Bug 1885245: Bump klog to v2 and client-go to v0.18.9
Last Updated: 2021-02-18 02:31:23 UTC

Description Sergiusz Urbaniak 2020-10-05 13:44:00 UTC
This bug was initially created as a copy of Bug #1883461

I am copying this bug because: 



k8s.io/klog moved to v2 in client-go and friends. The operator does not use v2 yet, hence we lose one half of the logging output (the v2 half).
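
For illustration, here is a minimal, hypothetical Go sketch (not the kube-state-metrics source) of the situation described above: the binary's own code configures klog v1 while newer vendored code logs through klog/v2. Each package keeps its own flag set and flush daemon, so -v=4 only raises verbosity on the stack that was initialized, and the other half of the output is lost.

package main

import (
	"flag"

	klogv1 "k8s.io/klog"    // what the operator's own code still uses
	klogv2 "k8s.io/klog/v2" // what newer client-go code logs through
)

func main() {
	// Only the klog v1 flags are registered and parsed, so "-v=4" raises
	// verbosity for klog v1 alone; klog/v2 keeps its default verbosity of 0.
	klogv1.InitFlags(flag.CommandLine)
	flag.Parse()

	klogv1.V(4).Info("visible with -v=4, because v1 was configured")
	klogv2.V(4).Info("silently dropped: klog/v2 never saw the -v flag")

	// Each package's init() also starts its own flushDaemon goroutine,
	// which is exactly the pair of goroutines visible in the dumps below.
	klogv1.Flush()
	klogv2.Flush()
}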

Comment 4 Junqi Zhao 2020-11-05 02:59:56 UTC
Tested with 4.7.0-0.nightly-2020-11-05-010603; there is still a mix of k8s.io/klog v1 and v2, see the "goroutine 6 [chan receive]" part below.

# oc -n openshift-monitoring  rsh -c kube-state-metrics kube-state-metrics-6f757d55c8-jbbzg
sh-4.4$ kube-state-metrics --kubeconfig=asdf -v=4
I1105 02:57:27.021294      27 main.go:86] Using default collectors
I1105 02:57:27.021351      27 main.go:98] Using all namespace
I1105 02:57:27.021361      27 main.go:139] metric white-blacklisting: blacklisting the following items: 
F1105 02:57:27.021377      27 main.go:149] Failed to create client: stat asdf: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0000b0001, 0xc000194320, 0x69, 0x98)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x21702c0, 0xc000000003, 0x0, 0x0, 0xc00037f7a0, 0x20cb922, 0x7, 0x95, 0x0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x21702c0, 0x3, 0x0, 0x0, 0x16adc4f, 0x1b, 0xc0001a9cb0, 0x1, 0x1)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1456
main.main()
	/go/src/k8s.io/kube-state-metrics/main.go:149 +0x8fa

goroutine 7 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x21702c0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 6 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21701e0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8
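
As a side note on why the full goroutine dump shows up at all: klog's Fatal severity logs the message, prints the stacks of every running goroutine, and then exits, which is what makes the bogus --kubeconfig a convenient way to see every logging flush daemon linked into the binary. A minimal sketch, assuming a standalone program:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(flag.CommandLine)
	flag.Parse()

	// Fatalf writes the message, dumps a stack trace of all goroutines
	// (including any flushDaemon started by a vendored klog v1), and exits,
	// mirroring the "Failed to create client" output above.
	klog.Fatalf("Failed to create client: %v", "stat asdf: no such file or directory")
}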

Comment 5 Lili Cosic 2020-11-10 09:11:07 UTC
Yes, if you pass an invalid kubeconfig we exit with a fatal error, since a kubeconfig is required. Can you try passing a valid kubeconfig and validating again? Thanks!

Comment 6 Junqi Zhao 2020-11-10 09:24:17 UTC
(In reply to Lili Cosic from comment #5)
> Yes, if you pass an invalid kubeconfig we exit with fatal as we require it.
> Can you try passing a valid kubeconfig and trying to validate again? Thanks!

This has nothing to do with the invalid kubeconfig. What I mean is that the "goroutine 7 [chan receive]" part comes from klog/v2, but the "goroutine 6 [chan receive]" part comes from klog v1, which should not appear in the log:
*************************************************
goroutine 7 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x21702c0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 6 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21701e0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8
*************************************************
It seems to be the same issue as Comment 2 and Comment 3 in bug 1885241.
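
For reference, klog's own examples document a coexistence pattern for exactly this situation, where older vendored dependencies still log through v1: redirect klog v1 output into klog/v2 so that only one logging stack actually writes. The following is only a rough sketch of that pattern, not the fix merged in the linked PR (which bumps klog and client-go instead):

package main

import (
	"flag"

	klogv1 "k8s.io/klog"    // still pulled in by older vendored dependencies
	klogv2 "k8s.io/klog/v2" // the logger the binary actually configures
)

// klogv1Writer forwards everything klog v1 tries to write into klog/v2.
// A fuller version (as in klog's coexistence example) would also strip
// klog v1's own line prefix and map severities individually.
type klogv1Writer struct{}

func (klogv1Writer) Write(p []byte) (int, error) {
	klogv2.InfoDepth(6, string(p))
	return len(p), nil
}

func main() {
	// Configure klog/v2 from the usual command-line flags (-v, etc.).
	klogv2.InitFlags(nil)
	flag.Parse()

	// Give klog v1 its own flag set, stop it from writing to stderr itself,
	// and point its output at the forwarding writer above.
	v1Flags := flag.NewFlagSet("klog-v1", flag.ExitOnError)
	klogv1.InitFlags(v1Flags)
	v1Flags.Set("logtostderr", "false")
	v1Flags.Set("stderrthreshold", "FATAL")
	klogv1.SetOutputBySeverity("INFO", klogv1Writer{})

	klogv1.Info("this v1 line now ends up in the klog/v2 output")
	klogv2.Info("and this is a plain v2 line")
	klogv2.Flush()
}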

Comment 11 Junqi Zhao 2021-01-08 02:52:32 UTC
Tested with 4.7.0-0.nightly-2021-01-07-181010. The image is built from commit https://github.com/openshift/kube-state-metrics/commit/900e9b1c8ac3de13f02113e4b6d6901207a7a799, which already contains the fix (merged two months ago), but the klog v1 flush daemon can still be seen in the goroutine dump (a sketch for locating the remaining v1 importers follows the log below):
***********************************
goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21710a0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8
***********************************
# oc -n openshift-monitoring rsh -c kube-state-metrics kube-state-metrics-6f9495ff98-zhpds
sh-4.4$ kube-state-metrics --kubeconfig=asdf -v=4
I0108 02:41:32.250561      21 main.go:86] Using default collectors
I0108 02:41:32.250616      21 main.go:98] Using all namespace
I0108 02:41:32.250629      21 main.go:139] metric white-blacklisting: blacklisting the following items: 
F0108 02:41:32.250668      21 main.go:149] Failed to create client: stat asdf: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be3c0, 0x69, 0x98)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2171180, 0xc000000003, 0x0, 0x0, 0xc0004357a0, 0x20cce92, 0x7, 0x95, 0x0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2171180, 0x3, 0x0, 0x0, 0x16aed8f, 0x1b, 0xc0000d9cb0, 0x1, 0x1)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1456
main.main()
	/go/src/k8s.io/kube-state-metrics/main.go:149 +0x8fa

goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21710a0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2171180)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
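
The lingering "k8s.io/klog.(*loggingT).flushDaemon" goroutine means some vendored package still imports the v1 path, since klog v1 starts that daemon from its package init(). A rough way to find the remaining importers, assuming the sketch below is run from the root of a kube-state-metrics module checkout:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask `go list` for every dependency package together with its imports,
	// then keep only the packages that import the klog v1 path.
	out, err := exec.Command("go", "list", "-deps",
		"-f", `{{.ImportPath}}: {{join .Imports " "}}`, "./...").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		parts := strings.SplitN(line, ": ", 2)
		if len(parts) != 2 {
			continue
		}
		for _, imp := range strings.Fields(parts[1]) {
			if imp == "k8s.io/klog" { // v1 import path; v2 is "k8s.io/klog/v2"
				fmt.Println(parts[0], "still imports klog v1")
				break
			}
		}
	}
}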

Comment 19 Junqi Zhao 2021-04-19 03:41:10 UTC
Tested with 4.8.0-0.nightly-2021-04-18-101412; the issue is fixed. Only klog v2 appears in the goroutine dump:
# oc -n openshift-monitoring rsh -c kube-state-metrics kube-state-metrics-fb6d9bb6-6dk7v
sh-4.4$ kube-state-metrics --kubeconfig=asdf -v=4
I0419 03:38:08.403063      23 main.go:95] Using default resources
I0419 03:38:08.403120      23 main.go:107] Using all namespace
I0419 03:38:08.403131      23 main.go:128] metric allow-denylisting: Excluding the following lists that were on denylist: 
F0419 03:38:08.403166      23 main.go:138] Failed to create client: stat asdf: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc0000c4320, 0x69, 0x97)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2242220, 0xc000000003, 0x0, 0x0, 0xc0004ca690, 0x1bc260d, 0x7, 0x8a, 0x0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2242220, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1728e5a, 0x1b, 0xc0004b46d0, 0x1, ...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:751 +0x191
k8s.io/klog/v2.Fatalf(...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1509
main.main()
	/go/src/k8s.io/kube-state-metrics/main.go:138 +0xc55

goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2242220)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1164 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:418 +0xdf

Comment 20 Kemal Akkoyun 2021-06-07 09:11:58 UTC
It is resolved after upgrading to kube-state-metrics (KSM) v2, see comment 19.

