Bug 1885245
| Summary: | kube-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2 | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Sergiusz Urbaniak <surbania> |
| Component: | Monitoring | Assignee: | Kemal Akkoyun <kakkoyun> |
| Status: | CLOSED NEXTRELEASE | QA Contact: | Junqi Zhao <juzhao> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.6 | CC: | alegrand, anpicker, erooth, juzhao, kakkoyun, lcosic, pkrupa, spasquie |
| Target Milestone: | --- | | |
| Target Release: | 4.8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-06-07 09:11:58 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1896304 | | |
Description
Sergiusz Urbaniak
2020-10-05 13:44:00 UTC
Tested with 4.7.0-0.nightly-2020-11-05-010603; there is still a mix of k8s.io/klog v1 and v2, see the "goroutine 6 [chan receive]" part:

```
# oc -n openshift-monitoring rsh -c kube-state-metrics kube-state-metrics-6f757d55c8-jbbzg
sh-4.4$ kube-state-metrics --kubeconfig=asdf -v=4
I1105 02:57:27.021294 27 main.go:86] Using default collectors
I1105 02:57:27.021351 27 main.go:98] Using all namespace
I1105 02:57:27.021361 27 main.go:139] metric white-blacklisting: blacklisting the following items:
F1105 02:57:27.021377 27 main.go:149] Failed to create client: stat asdf: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0000b0001, 0xc000194320, 0x69, 0x98)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x21702c0, 0xc000000003, 0x0, 0x0, 0xc00037f7a0, 0x20cb922, 0x7, 0x95, 0x0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x21702c0, 0x3, 0x0, 0x0, 0x16adc4f, 0x1b, 0xc0001a9cb0, 0x1, 0x1)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1456
main.main()
	/go/src/k8s.io/kube-state-metrics/main.go:149 +0x8fa

goroutine 7 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x21702c0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 6 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21701e0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8
```
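For context on the trace above: each klog major version lives at its own import path and starts its own flushDaemon goroutine in its package init, and only the version the binary actually configures honors flags such as `-v`. A minimal sketch, assuming a module that depends on both k8s.io/klog and k8s.io/klog/v2 (illustrative code, not from kube-state-metrics):

```go
package main

import (
	"flag"

	klogv1 "k8s.io/klog"    // still vendored, e.g. via transitive dependencies
	klogv2 "k8s.io/klog/v2" // the logger the binary itself has migrated to
)

func main() {
	// Only klog/v2 is wired to the command-line flags, mirroring
	// `kube-state-metrics -v=4`.
	klogv2.InitFlags(flag.CommandLine)
	flag.Parse()

	// Honors -v=4, because klog/v2 saw the flag above.
	klogv2.V(4).Info("visible with -v=4")

	// Dropped: the klog v1 instance never saw -v, so its verbosity stays at 0.
	klogv1.V(4).Info("silently discarded even with -v=4")

	// Both packages also started a flushDaemon goroutine from their init,
	// which is exactly the v1/v2 pair visible in the goroutine dump above.
	klogv2.Flush()
	klogv1.Flush()
}
```

Running this with `-v=4` prints only the klog/v2 line, which is the "logging is broken" symptom from the bug summary: messages routed through the leftover v1 logger ignore the configured verbosity and output settings.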
Yes, if you pass an invalid kubeconfig we exit with fatal as we require it. Can you try passing a valid kubeconfig and trying to validate again? Thanks!

(In reply to Lili Cosic from comment #5)
> Yes, if you pass an invalid kubeconfig we exit with fatal as we require it.
> Can you try passing a valid kubeconfig and trying to validate again? Thanks!

This has nothing to do with the invalid kubeconfig. The "goroutine 7 [chan receive]" part is klog/v2, but "goroutine 6 [chan receive]" is klog v1, which should not be in the log:

```
goroutine 7 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x21702c0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 6 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21701e0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8
```

It seems to be the same issue as Comment 2 and Comment 3 in bug 1885241.

Tested with 4.7.0-0.nightly-2021-01-07-181010. The commit is https://github.com/openshift/kube-state-metrics/commit/900e9b1c8ac3de13f02113e4b6d6901207a7a799 and the fix is already there (it landed two months ago), but the v1 klog goroutine can still be seen:

```
goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21710a0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8
```

```
# oc -n openshift-monitoring rsh -c kube-state-metrics kube-state-metrics-6f9495ff98-zhpds
sh-4.4$ kube-state-metrics --kubeconfig=asdf -v=4
I0108 02:41:32.250561 21 main.go:86] Using default collectors
I0108 02:41:32.250616 21 main.go:98] Using all namespace
I0108 02:41:32.250629 21 main.go:139] metric white-blacklisting: blacklisting the following items:
F0108 02:41:32.250668 21 main.go:149] Failed to create client: stat asdf: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc0000be3c0, 0x69, 0x98)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2171180, 0xc000000003, 0x0, 0x0, 0xc0004357a0, 0x20cce92, 0x7, 0x95, 0x0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2171180, 0x3, 0x0, 0x0, 0x16aed8f, 0x1b, 0xc0000d9cb0, 0x1, 0x1)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1456
main.main()
	/go/src/k8s.io/kube-state-metrics/main.go:149 +0x8fa

goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x21710a0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2171180)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
```
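The fix tracked here is to get every dependency onto klog/v2, but for reference, a commonly used interim workaround while transitive dependencies still import klog v1 is to silence v1's own output and forward it through v2. This is a hedged sketch of that pattern, not the change that landed in kube-state-metrics:

```go
package main

import (
	"flag"

	klogv1 "k8s.io/klog"
	klogv2 "k8s.io/klog/v2"
)

// klogWriter forwards anything klog v1 would have written to klog/v2, so v1
// messages end up in the same, properly configured output stream.
type klogWriter struct{}

func (klogWriter) Write(p []byte) (int, error) {
	klogv2.InfoDepth(1, string(p))
	return len(p), nil
}

func main() {
	// Expose only the klog/v2 flags to users.
	klogv2.InitFlags(flag.CommandLine)

	// Give klog v1 a private flag set so its flags never collide with v2's,
	// stop it from writing to stderr on its own, and redirect its output.
	var v1Flags flag.FlagSet
	klogv1.InitFlags(&v1Flags)
	_ = v1Flags.Set("logtostderr", "false")
	_ = v1Flags.Set("stderrthreshold", "FATAL")
	klogv1.SetOutputBySeverity("INFO", klogWriter{})

	flag.Parse()

	klogv1.Info("emitted via klog v1, but routed through klog/v2")
	klogv2.Flush()
}
```

Note that this only redirects the output; the extra klog v1 flushDaemon goroutine still exists, so the goroutine dump remains a reliable way to tell whether v1 is vendored at all.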
Tested with 4.8.0-0.nightly-2021-04-18-101412; the issue is fixed:

```
# oc -n openshift-monitoring rsh -c kube-state-metrics kube-state-metrics-fb6d9bb6-6dk7v
sh-4.4$ kube-state-metrics --kubeconfig=asdf -v=4
I0419 03:38:08.403063 23 main.go:95] Using default resources
I0419 03:38:08.403120 23 main.go:107] Using all namespace
I0419 03:38:08.403131 23 main.go:128] metric allow-denylisting: Excluding the following lists that were on denylist:
F0419 03:38:08.403166 23 main.go:138] Failed to create client: stat asdf: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc0000c4320, 0x69, 0x97)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2242220, 0xc000000003, 0x0, 0x0, 0xc0004ca690, 0x1bc260d, 0x7, 0x8a, 0x0)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2242220, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1728e5a, 0x1b, 0xc0004b46d0, 0x1, ...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:751 +0x191
k8s.io/klog/v2.Fatalf(...)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1509
main.main()
	/go/src/k8s.io/kube-state-metrics/main.go:138 +0xc55

goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2242220)
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:1164 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/k8s.io/kube-state-metrics/vendor/k8s.io/klog/v2/klog.go:418 +0xdf
```

Only the klog/v2 flush daemon remains in the dump. It is resolved after upgrading to kube-state-metrics (KSM) v2; see comment 19.
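The manual check above (looking for a k8s.io/klog flushDaemon in the goroutine dump) could also be automated. A hypothetical sketch of such a check, not part of the kube-state-metrics test suite:

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

func main() {
	// Capture the stacks of all goroutines, like the dump printed on Fatalf.
	buf := make([]byte, 1<<20)
	n := runtime.Stack(buf, true)

	// Goroutines in the dump are separated by blank lines.
	for _, g := range strings.Split(string(buf[:n]), "\n\n") {
		// "k8s.io/klog." only matches v1 frames; v2 frames read
		// "k8s.io/klog/v2." and do not contain this substring.
		if strings.Contains(g, "k8s.io/klog.") {
			fmt.Println("klog v1 goroutine still present:")
			fmt.Println(g)
		}
	}
}
```

When compiled into a binary that still vendors klog v1, this prints the leftover v1 flushDaemon goroutine; after the kube-state-metrics v2 upgrade it prints nothing.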