This bug was initially created as a copy of Bug #1883461.

I am copying this bug because: k8s.io/klog moved to v2 in client-go and friends. The operator does not use v2 yet, and hence we lose one half of the logging output (the v2 half).
Tested with 4.7.0-0.ci-2020-10-12-222453. Using a non-existent config file, we can see klog/v2 in the log:

sh-4.4$ kube-rbac-proxy --config-file=asfd --v=4
I1013 09:02:38.607167      64 main.go:159] Reading config file: asfd
F1013 09:02:38.607220      64 main.go:162] Failed to read resource-attribute file: open asfd: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc00012a001, 0xc00051e280, 0x78, 0x99)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x1e95820, 0xc000000003, 0x0, 0x0, 0xc000322230, 0x1e00259, 0x7, 0xa2, 0x0)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x1e95820, 0x3, 0x0, 0x0, 0x14f7501, 0x2a, 0xc000607dc0, 0x1, 0x1)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/github.com/brancz/kube-rbac-proxy/main.go:162 +0x2fba

goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x1e95740)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x1e95820)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
As seen in Comment 2, I think this is still a mix of k8s.io/klog v1 and v2; please correct me if I am wrong:

goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x1e95740)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x1e95820)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
Great catch. It seems that one of the libraries used by kube-rbac-proxy is at an older version that has not moved to klog v2 yet. I created an upstream PR at https://github.com/brancz/kube-rbac-proxy/pull/95 and will port it downstream after it is merged.
The fix is in 4.7.0-0.nightly-2020-10-23-024149 or later payloads.
The log is back to normal, with no v1 output now:

sh-4.4$ kube-rbac-proxy --config-file=asfd --v=4
I1023 10:36:55.879406   16106 main.go:159] Reading config file: asfd
F1023 10:36:55.879467   16106 main.go:162] Failed to read resource-attribute file: open asfd: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc00013e140, 0x78, 0x99)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x22a52c0, 0xc000000003, 0x0, 0x0, 0xc000372690, 0x21fc6a1, 0x7, 0xa2, 0x0)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x22a52c0, 0x3, 0x0, 0x0, 0x177e3bc, 0x2a, 0xc000397dc0, 0x1, 0x1)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/github.com/brancz/kube-rbac-proxy/main.go:162 +0x2fba

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x22a52c0)
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633