+++ This bug was initially created as a clone of Bug #1883497 +++

We have klog throughout our controllers, and because MAO depends on both klog v1 and v2, all of our components will need an update.

+++ This bug was initially created as a clone of Bug #1883461 +++

k8s.io/klog moved to v2 in client-go and friends. The operator does not use v2 yet, and hence we lose one half of the logging output (v2).

--- Additional comment from Stefan Schimanski on 2020-09-29 11:09:06 UTC ---

I did some tests using openshift-apiserver, which uses kube component-base to add the klog command line flags:

$ ./openshift-apiserver start --config asfd --v=4
I0929 13:04:18.354425   53628 cmd.go:57] v1 klog
I0929 13:04:18.354506   53628 cmd.go:58] v2 klog
I0929 13:04:18.354512   53628 cmd.go:61] v2 V(2) klog
I0929 13:04:18.354515   53628 cmd.go:64] v2 V(4) klog

So this means that we DO NOT set verbosity for v1 klog, and hence lose all of the klogv1.V(2).Infof output.

--- Additional comment from Joel Speed on 2020-10-02 10:50:21 UTC ---

Need more PRs to revendor MAO now it's merged; moving back to Assigned.

--- Additional comment from Joel Speed on 2020-10-05 09:06:53 UTC ---

One PR didn't make it into the 4.6 branch before the cutover. I am going to remove that PR from this bug and create a clone for that particular PR.
Verified on clusterversion 4.7.0-0.nightly-2020-10-24-155529:

$ oc rsh -c machine-controller machine-api-controllers-77cb79cc5f-7h78m
sh-4.4$ ./machine-controller-manager -v=4
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0000cc001, 0xc0000a4b00, 0xa1, 0xfa)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x27361c0, 0xc000000003, 0x0, 0x0, 0xc0004ec310, 0x26845d3, 0x7, 0x6f, 0x0)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x27361c0, 0xc000000003, 0x0, 0x0, 0x1a550d3, 0x2f, 0xc0000e1da0, 0x1, 0x1)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:1463
main.main()
	/go/src/sigs.k8s.io/cluster-api-provider-azure/cmd/manager/main.go:111 +0x4bd

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x27361c0)
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/sigs.k8s.io/cluster-api-provider-azure/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633