Sorry, I was in training for a week and could only afford time for this bug today. Per the test in bug 1919968#c3 and the guide in bug 1919968#c5, I could verify by checking whether a panic with "webhook.(*WebhookAuthorizer).Authorize" is logged. So I searched apiserver.*Observed a panic: runtime error: invalid memory address or nil pointer dereference in 4\.8 jobs within the last 7 days:

$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=apiserver.*Observed+a+panic%3A+runtime+error%3A+invalid+memory+address+or+nil+pointer+dereference&maxAge=336h&context=1&type=junit&name=4%5C.8&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job'

This only returned:

periodic-ci-openshift-release-master-ci-4.8-e2e-aws-serial (all) - 180 runs, 48% failed, 1% of failures match = 1% impact
#1370497671634096128 junit 4 days ago
# Undiagnosed panic detected in pod
pods/openshift-apiserver_apiserver-5f468f667-lm2jq_openshift-apiserver-check-endpoints.log.gz:E0312 23:07:49.116023 1 runtime.go:76] Observed a panic: runtime error: invalid memory address or nil pointer dereference

So there are no oauth-apiserver or openshift-apiserver *container* logs with such a panic now; moving this to VERIFIED.

BTW, I looked at the above openshift-apiserver-check-endpoints *container* logs in #1370497671634096128 and saw:

2021-03-12T22:47:24.790757114Z E0312 22:47:24.790733 1 reflector.go:138] k8s.io/client-go.1/tools/cache/reflector.go:167: Failed to watch *v1alpha1.PodNetworkConnectivityCheck: failed to list *v1alpha1.PodNetworkConnectivityCheck: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
2021-03-12T22:47:25.885156029Z I0312 22:47:25.885120 1 base_controller.go:72] Caches are synced for check-endpoints
2021-03-12T22:47:25.885156029Z I0312 22:47:25.885142 1 base_controller.go:109] Starting #1 worker of check-endpoints controller
...
2021-03-12T23:07:49.116080699Z E0312 23:07:49.116023 1 runtime.go:76] Observed a panic: runtime error: invalid memory address or nil pointer dereference
2021-03-12T23:07:49.116080699Z goroutine 3797 [running]:
2021-03-12T23:07:49.116080699Z k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1(0xc001dba7e0)
2021-03-12T23:07:49.116080699Z 	k8s.io/apiserver.1/pkg/server/filters/timeout.go:106 +0x113
2021-03-12T23:07:49.116080699Z panic(0x2175920, 0x379acb0)
2021-03-12T23:07:49.116080699Z 	runtime/panic.go:969 +0x1b9
2021-03-12T23:07:49.116080699Z k8s.io/apiserver/plugin/pkg/authorizer/webhook.(*WebhookAuthorizer).Authorize(0xc000417200, 0x2838d40, 0xc000afc990, 0x2859000, 0xc0008a9180, 0x2, 0x0, 0x0, 0x0, 0x0)
2021-03-12T23:07:49.116080699Z 	k8s.io/apiserver.1/plugin/pkg/authorizer/webhook/webhook.go:208 +0x8b9
2021-03-12T23:07:49.116080699Z k8s.io/apiserver/pkg/authorization/union.unionAuthzHandler.Authorize(0xc000ad36c0, 0x3, 0x4, 0x2838d40, 0xc000afc990, 0x2859000, 0xc0008a9180, 0x1, 0x3, 0xc000684e60, ...)
2021-03-12T23:07:49.116080699Z 	k8s.io/apiserver.1/pkg/authorization/union/union.go:52 +0xfe
2021-03-12T23:07:49.116080699Z k8s.io/apiserver/pkg/authorization/union.unionAuthzHandler.Authorize(0xc000110940, 0x2, 0x2, 0x2838d40, 0xc000afc990, 0x2859000, 0xc0008a9180, 0x27a4eb8, 0x20397c0, 0xc00110a000, ...)
2021-03-12T23:07:49.116080699Z 	k8s.io/apiserver.1/pkg/authorization/union/union.go:52 +0xfe

openshift-apiserver-check-endpoints still uses 0.20.1. This container runs `cluster-kube-apiserver-operator check-endpoints`, so I'll file a bug against KAS.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438