Bug 1885243

Summary: prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
Product: OpenShift Container Platform
Component: Monitoring
Version: 4.6
Target Release: 4.7.0
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Sergiusz Urbaniak <surbania>
Assignee: Sergiusz Urbaniak <surbania>
QA Contact: Junqi Zhao <juzhao>
CC: alegrand, anpicker, erooth, kakkoyun, lcosic, pkrupa, surbania
Doc Type: No Doc Update
Last Closed: 2021-02-24 15:23:10 UTC

Description Sergiusz Urbaniak 2020-10-05 13:42:47 UTC
This bug was initially created as a copy of Bug #1883461

I am copying this bug because: 



k8s.io/klog moved to v2 in client-go and friends. The operator does not use v2 yet, so we lose the half of the logging output that goes through klog v2.
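For context, the usual way to make klog v1 and v2 coexist in one binary (this follows klog's upstream coexistence example, and is only an illustrative sketch, not the actual prometheus-adapter patch) is to register both flag sets and copy the parsed v1 flag values over to v2, so a single -v controls both loggers:

package main

import (
	"flag"

	klogv1 "k8s.io/klog"
	klogv2 "k8s.io/klog/v2"
)

func main() {
	// Register klog v1 flags (used by the component's own log calls) on the
	// default flag set, and klog v2 flags (used by client-go and friends) on
	// a separate set, because both packages define flags with the same names.
	klogv1.InitFlags(flag.CommandLine)
	klogv2Flags := flag.NewFlagSet("klog-v2", flag.ExitOnError)
	klogv2.InitFlags(klogv2Flags)

	flag.Parse()

	// Copy every parsed v1 flag value (e.g. -v=4, -logtostderr) to the
	// matching v2 flag so both halves of the output obey the same settings.
	// Skipping this step is the failure mode described in this bug: the
	// klog v2 output from client-go effectively disappears.
	flag.CommandLine.VisitAll(func(f1 *flag.Flag) {
		if f2 := klogv2Flags.Lookup(f1.Name); f2 != nil {
			f2.Value.Set(f1.Value.String())
		}
	})

	klogv1.V(4).Info("logged via klog v1")
	klogv2.V(4).Info("logged via klog v2 (client-go's logger)")

	klogv1.Flush()
	klogv2.Flush()
}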

Comment 4 Junqi Zhao 2020-11-05 03:08:55 UTC
Tested with 4.7.0-0.nightly-2020-11-05-010603; the issue is fixed.
# oc -n openshift-monitoring rsh prometheus-adapter-586945776f-mj9hb
sh-4.4$ cm-adapter --config=asdf -v=4
I1105 03:04:03.846666      28 adapter.go:98] successfully using in-cluster auth
F1105 03:04:03.846730      28 adapter.go:272] unable to load metrics discovery config: unable to load metrics discovery configuration: unable to load metrics discovery config file: open asdf: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc00035e600, 0xda, 0x1ef)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2b865e0, 0xc000000003, 0x0, 0x0, 0xc000496690, 0x2ab27db, 0xa, 0x110, 0x0)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2b865e0, 0x3, 0x0, 0x0, 0x1d3db8c, 0x2b, 0xc00047df40, 0x1, 0x1)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/github.com/directxman12/k8s-prometheus-adapter/cmd/adapter/adapter.go:272 +0x445

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2b865e0)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 51 [select]:
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1dd6c18, 0x1f68f00, 0xc000340000, 0x1, 0xc0000960c0)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1dd6c18, 0x12a05f200, 0x0, 0xc000320701, 0xc0000960c0)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/apimachinery/pkg/util/wait.Forever(0x1dd6c18, 0x12a05f200)
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
created by k8s.io/component-base/logs.InitLogs
	/go/src/github.com/directxman12/k8s-prometheus-adapter/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a

Comment 8 errata-xmlrpc 2021-02-24 15:23:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633