Bug 1883497 - [Cloud Team] Logging is broken due to mix of k8s.io/klog v1 and v2
Summary: [Cloud Team] Logging is broken due to mix of k8s.io/klog v1 and v2
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Joel Speed
QA Contact: sunzhaohua
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-29 11:57 UTC by Joel Speed
Modified: 2020-10-27 16:46 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1883461
Clones: 1885151
Environment:
Last Closed: 2020-10-27 16:46:19 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-api-provider-aws pull 356 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:27:13 UTC
Github openshift cluster-api-provider-aws pull 357 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:26:33 UTC
Github openshift cluster-api-provider-azure pull 169 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:26:33 UTC
Github openshift cluster-api-provider-baremetal pull 118 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:27:13 UTC
Github openshift cluster-api-provider-gcp pull 123 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:26:33 UTC
Github openshift cluster-api-provider-gcp pull 126 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:27:13 UTC
Github openshift cluster-api-provider-openstack pull 130 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:26:35 UTC
Github openshift cluster-autoscaler-operator pull 168 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:26:35 UTC
Github openshift cluster-machine-approver pull 89 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:26:35 UTC
Github openshift machine-api-operator pull 714 0 None closed Bug 1883497: Fix missing logs due to mixed klog versions 2021-01-19 10:27:15 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:46:52 UTC

Description Joel Speed 2020-09-29 11:57:52 UTC
We use klog throughout our controllers, and because the machine-api-operator (MAO) depends on both klog v1 and v2, all of our components will need an update.

+++ This bug was initially created as a clone of Bug #1883461 +++

k8s.io/klog moved to v2 in client-go and friends. The operator does not use v2 yet, and hence we lose one half of the logging output (the v2 half).
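
[Editorial note: a minimal, hypothetical sketch of the failure mode, not the operator's actual code. klog v1 and klog v2 are separate packages with separate flag sets and separate verbosity levels, so wiring up only one version's flags silences the other version's V() output.]

package main

import (
	"flag"

	klogv1 "k8s.io/klog"    // v1 lives in module k8s.io/klog
	klogv2 "k8s.io/klog/v2" // v2 lives in module k8s.io/klog/v2
)

func main() {
	// Register only the v2 flags (-v, -logtostderr, ...), which is what
	// happens when the flags come from a dependency that moved to v2.
	klogv2.InitFlags(nil)
	flag.Parse() // run with -v=2

	klogv2.V(2).Info("printed: -v=2 raised the v2 verbosity")
	klogv1.V(2).Info("lost: the v1 verbosity is still at its default of 0")
}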

--- Additional comment from Stefan Schimanski on 2020-09-29 11:09:06 UTC ---

I did some tests using openshift-apiserver, which uses kube component-base to register the klog command-line flags:

$ ./openshift-apiserver start --config asfd --v=4
I0929 13:04:18.354425   53628 cmd.go:57] v1 klog
I0929 13:04:18.354506   53628 cmd.go:58] v2 klog
I0929 13:04:18.354512   53628 cmd.go:61] v2 V(2) klog
I0929 13:04:18.354515   53628 cmd.go:64] v2 V(4) klog
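
[Editorial note: judging by the cmd.go line numbers in the output (57, 58, 61, 64), the test block presumably also contained v1 V() lines that never printed. A reconstruction of the test lines, not the actual openshift-apiserver patch:]

klogv1.Infof("v1 klog")
klogv2.Infof("v2 klog")

klogv1.V(2).Infof("v1 V(2) klog") // never printed: v1 verbosity left at 0
klogv2.V(2).Infof("v2 V(2) klog")

klogv1.V(4).Infof("v1 V(4) klog") // never printed
klogv2.V(4).Infof("v2 V(4) klog")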

So this means that we DO NOT set verbosity for v1 klog, and hence lose all the klogv1.V(2).Infof output.
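
[Editorial note: a common mitigation for this, sketched as the general pattern under the assumption that both klog versions are vendored; the attached PRs may differ in detail. The idea is to register the v1 flags on a private flag set and mirror the parsed v2 flag values into it, so a single -v governs both versions.]

package main

import (
	"flag"

	klogv1 "k8s.io/klog"
	klogv2 "k8s.io/klog/v2"
)

func main() {
	// Register the v2 flags on the default flag set and parse -v etc.
	klogv2.InitFlags(nil)
	flag.Parse()

	// Registering the v1 flags on flag.CommandLine as well would panic on
	// the duplicate names, so give v1 a private flag set and copy every
	// matching value across.
	var klogv1Flags flag.FlagSet
	klogv1.InitFlags(&klogv1Flags)
	flag.CommandLine.VisitAll(func(f *flag.Flag) {
		if v1Flag := klogv1Flags.Lookup(f.Name); v1Flag != nil {
			_ = v1Flag.Value.Set(f.Value.String())
		}
	})

	klogv1.V(2).Info("printed with -v=2 via klog v1")
	klogv2.V(2).Info("printed with -v=2 via klog v2")
}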

Comment 1 Joel Speed 2020-10-02 10:50:21 UTC
Now that the MAO fix has merged, more PRs are needed to revendor MAO in the dependent components; moving back to Assigned.

Comment 2 Joel Speed 2020-10-05 09:06:53 UTC
One PR didn't make it into the 4.6 branch before the cutover. I'm going to remove that PR from this bug and create a clone to track it.

Comment 3 Joel Speed 2020-10-05 09:10:36 UTC
All PRs attached to this BZ are now merged; moving to Modified.

Comment 6 sunzhaohua 2020-10-13 07:21:20 UTC
Verified; the operator now uses klog v2.
clusterversion: 4.6.0-0.nightly-2020-10-12-223649

Verified on AWS (the "address already in use" fatal error below is expected, because the controller is already running in the pod; the relevant point is that the fatal log and stack trace are emitted through k8s.io/klog/v2):
$ oc rsh -c machine-controller machine-api-controllers-57968448cb-x2pcq
sh-4.4$ ./machine-controller-manager -v=4
F1013 04:20:18.270007      34 main.go:127] Error creating manager: error listening on :8081: listen tcp :8081: bind: address already in use
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0001ae001, 0xc00000a700, 0x8c, 0xfa)
	/go/src/sigs.k8s.io/cluster-api-provider-aws/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2b26ca0, 0xc000000003, 0x0, 0x0, 0xc0003d2230, 0x2a73fe5, 0x7, 0x7f, 0x0)
	/go/src/sigs.k8s.io/cluster-api-provider-aws/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2b26ca0, 0xc000000003, 0x0, 0x0, 0x1c980cc, 0x1a, 0xc0006b7d80, 0x1, 0x1)
	/go/src/sigs.k8s.io/cluster-api-provider-aws/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/sigs.k8s.io/cluster-api-provider-aws/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/sigs.k8s.io/cluster-api-provider-aws/cmd/manager/main.go:127 +0x616
Verified on GCP:
$ oc rsh -c machine-controller machine-api-controllers-65fbf64c45-nk55m
sh-4.4$ ./machine-controller-manager -v=4
k8s.io/klog/v2.stacks(0xc000010001, 0xc000800000, 0xa1, 0xf0)
	/go/src/github.com/openshift/cluster-api-provider-gcp/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2a3c940, 0xc000000003, 0x0, 0x0, 0xc0006d6690, 0x298428d, 0x7, 0x6c, 0x0)
	/go/src/github.com/openshift/cluster-api-provider-gcp/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2a3c940, 0xc000000003, 0x0, 0x0, 0x1cdc8ff, 0x2f, 0xc00091fd90, 0x1, 0x1)
	/go/src/github.com/openshift/cluster-api-provider-gcp/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/openshift/cluster-api-provider-gcp/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/github.com/openshift/cluster-api-provider-gcp/cmd/manager/main.go:108 +0x51d

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2a3c940)
	/go/src/github.com/openshift/cluster-api-provider-gcp/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/openshift/cluster-api-provider-gcp/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

Verified on Azure:
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000012001, 0xc00089e000, 0xa1, 0xef)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2740480, 0xc000000003, 0x0, 0x0, 0xc000138000, 0x268e433, 0x7, 0x6f, 0x0)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2740480, 0xc000000003, 0x0, 0x0, 0x1a5b233, 0x2f, 0xc00082bda0, 0x1, 0x1)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/sigs.k8s.io/cluster-api-provider-azure/cmd/manager/main.go:111 +0x4bd

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2740480)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
Verified on OSP (OpenStack):
k8s.io/klog/v2.stacks(0xc0000d4001, 0xc000522000, 0x74, 0xef)
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x25a3340, 0xc000000003, 0x0, 0x0, 0xc000463110, 0x24edb83, 0x7, 0x6d, 0x1b52d00)
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printDepth(0x25a3340, 0xc000000003, 0x0, 0x0, 0x1, 0xc000a9fe00, 0x1, 0x1)
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/vendor/k8s.io/klog/v2/klog.go:718 +0x165
k8s.io/klog/v2.(*loggingT).print(...)
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/vendor/k8s.io/klog/v2/klog.go:703
k8s.io/klog/v2.Fatal(...)
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/vendor/k8s.io/klog/v2/klog.go:1443
main.main()
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/cmd/manager/main.go:109 +0x546

goroutine 21 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x25a3340)
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/sigs.k8s.io/cluster-api-provider-openstack/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

Verified on vSphere:
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000182001, 0xc0005f2000, 0xa1, 0xec)
	/go/src/github.com/openshift/machine-api-operator/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2e0b7e0, 0xc000000003, 0x0, 0x0, 0xc00063e070, 0x2d4d3bd, 0x7, 0x66, 0x0)
	/go/src/github.com/openshift/machine-api-operator/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x2e0b7e0, 0xc000000003, 0x0, 0x0, 0x1e56fc7, 0x2f, 0xc0001a3da0, 0x1, 0x1)
	/go/src/github.com/openshift/machine-api-operator/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/openshift/machine-api-operator/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/github.com/openshift/machine-api-operator/cmd/vsphere/main.go:102 +0x576

goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2e0b7e0)
	/go/src/github.com/openshift/machine-api-operator/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/openshift/machine-api-operator/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

Comment 8 errata-xmlrpc 2020-10-27 16:46:19 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

