Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1885241

Summary: kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
Product: OpenShift Container Platform Reporter: Sergiusz Urbaniak <surbania>
Component: Monitoring    Assignee: Pawel Krupa <pkrupa>
Status: CLOSED ERRATA QA Contact: hongyan li <hongyli>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 4.6    CC: alegrand, anpicker, erooth, kakkoyun, lcosic, pkrupa, surbania
Target Milestone: ---   
Target Release: 4.7.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:    Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2021-02-24 15:23:10 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Sergiusz Urbaniak 2020-10-05 13:41:21 UTC
This bug was initially created as a copy of Bug #1883461

I am copying this bug because: 



k8s.io/klog moved to v2 in client-go and friends. The operator does not use v2 yet, and hence we lose one half of the logging output (everything routed through v2).
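The failure mode can be sketched with a stdlib-only toy (the `logger` type below is hypothetical, not the real klog API): when v1 and v2 are both linked into one binary, each generation keeps its own verbosity threshold, and only the generation actually bound to the `--v` flag emits anything.

```go
package main

import "fmt"

// Minimal stdlib-only sketch (NOT the klog API) of why mixing two
// logging generations loses output: each package instance keeps its
// own verbosity threshold, and only the one wired to -v is raised.
type logger struct {
	name string
	v    int // verbosity threshold; normally bound to the -v flag
}

// V emits msg when level is within the threshold and reports whether
// the message was emitted.
func (l *logger) V(level int, msg string) bool {
	if level > l.v {
		return false // silently dropped, as v1 output was in this bug
	}
	fmt.Printf("[%s] %s\n", l.name, msg)
	return true
}

func main() {
	klogV1 := &logger{name: "klog"}    // stuck at its zero default
	klogV2 := &logger{name: "klog/v2"} // wired to -v=4 by the binary
	klogV2.v = 4

	klogV2.V(4, "Reading config file: asfd") // emitted
	klogV1.V(4, "client-go internals")       // lost: v1 never saw -v
}
```

With `--v=4`, the v2-routed message appears while the v1-routed one is silently dropped, which matches the "one half of the logging output" symptom above.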

Comment 2 Junqi Zhao 2020-10-13 09:04:25 UTC
Tested with 4.7.0-0.ci-2020-10-12-222453 using a nonexistent config file; we can see klog/v2 in the log:
sh-4.4$ kube-rbac-proxy --config-file=asfd --v=4
I1013 09:02:38.607167      64 main.go:159] Reading config file: asfd
F1013 09:02:38.607220      64 main.go:162] Failed to read resource-attribute file: open asfd: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc00012a001, 0xc00051e280, 0x78, 0x99)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x1e95820, 0xc000000003, 0x0, 0x0, 0xc000322230, 0x1e00259, 0x7, 0xa2, 0x0)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x1e95820, 0x3, 0x0, 0x0, 0x14f7501, 0x2a, 0xc000607dc0, 0x1, 0x1)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/github.com/brancz/kube-rbac-proxy/main.go:162 +0x2fba

goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x1e95740)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x1e95820)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

Comment 3 Junqi Zhao 2020-10-13 09:08:58 UTC
As seen in Comment 2, I think it is still a mix of k8s.io/klog v1 and v2; please correct me if I am wrong:
goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x1e95740)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x1e95820)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

Comment 4 Pawel Krupa 2020-10-13 10:33:51 UTC
Great catch. It seems that one of the libraries used by kube-rbac-proxy is pinned at an older version that hasn't moved to klog v2 yet.

I created an upstream PR in https://github.com/brancz/kube-rbac-proxy/pull/95 and will port it downstream after it is merged.
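The usual shape of such a fix in Go modules, sketched as a hypothetical `go.mod` fragment (module versions here are illustrative, not the actual contents of the PR): bump the dependency that still imports k8s.io/klog (v1) to a release built against k8s.io/klog/v2, so only one klog generation ends up in the binary.

```go
module github.com/brancz/kube-rbac-proxy

go 1.15

require (
	// Hypothetical versions for illustration; the real fix bumps
	// whichever dependency still imported k8s.io/klog (v1) to a
	// release that imports k8s.io/klog/v2 instead.
	k8s.io/component-base v0.19.2
	k8s.io/klog/v2 v2.3.0
)
```

After such a bump, `go mod graph | grep 'k8s.io/klog$'` should come back empty, confirming no v1 consumer remains.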

Comment 6 hongyan li 2020-10-23 08:40:48 UTC
The fix is in 4.7.0-0.nightly-2020-10-23-024149 or later payloads.

Comment 7 hongyan li 2020-10-23 10:38:13 UTC
Logging is back to normal, and there are no v1 log entries now:

sh-4.4$ kube-rbac-proxy --config-file=asfd --v=4
I1023 10:36:55.879406   16106 main.go:159] Reading config file: asfd
F1023 10:36:55.879467   16106 main.go:162] Failed to read resource-attribute file: open asfd: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc00013e140, 0x78, 0x99)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x22a52c0, 0xc000000003, 0x0, 0x0, 0xc000372690, 0x21fc6a1, 0x7, 0xa2, 0x0)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x22a52c0, 0x3, 0x0, 0x0, 0x177e3bc, 0x2a, 0xc000397dc0, 0x1, 0x1)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/github.com/brancz/kube-rbac-proxy/main.go:162 +0x2fba

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x22a52c0)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

Comment 11 errata-xmlrpc 2021-02-24 15:23:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633