Description of problem:

The console operator can panic due to a nil pointer dereference on the cm parameter in DefaultDeployment:

E0304 21:01:06.429157       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 585 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1df1200, 0x35f3b70)
	/go/src/github.com/openshift/console-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/console-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x1df1200, 0x35f3b70)
	/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/openshift/console-operator/pkg/console/subresource/deployment.DefaultDeployment(0xc00013f680, 0x0, 0xc001067cc0, 0xc0008ae8c0, 0xc000cbc280, 0xc000b4a3c0, 0xc00127e4e0, 0x203000, 0x0)
	/go/src/github.com/openshift/console-operator/pkg/console/subresource/deployment/deployment.go:66 +0x17c
github.com/openshift/console-operator/pkg/console/operator.(*consoleOperator).SyncDeployment(0xc0002602d0, 0xc00013f680, 0x0, 0xc001067cc0, 0xc0008ae8c0, 0xc000cbc280, 0xc000b4a3c0, 0xc00127e4e0, 0x0, 0x0, ...)
	/go/src/github.com/openshift/console-operator/pkg/console/operator/sync_v400.go:228 +0xad
github.com/openshift/console-operator/pkg/console/operator.(*consoleOperator).sync_v400(0xc0002602d0, 0xc000f0f8b8, 0xc001223180, 0xc00013f680, 0xc00012a960, 0xc00127e4e0, 0x0, 0x0)
	/go/src/github.com/openshift/console-operator/pkg/console/operator/sync_v400.go:119 +0xc2a
github.com/openshift/console-operator/pkg/console/operator.(*consoleOperator).handleSync(0xc0002602d0, 0xc001223180, 0xc00013f680, 0xc00012a960, 0xc00127e4e0, 0x0, 0x0)
	/go/src/github.com/openshift/console-operator/pkg/console/operator/operator.go:228 +0x370
github.com/openshift/console-operator/pkg/console/operator.(*consoleOperator).Sync(0xc0002602d0, 0x251d300, 0xc00013f680, 0x0, 0x0)
	/go/src/github.com/openshift/console-operator/pkg/console/operator/operator.go:204 +0x7f0
monis.app/go/openshift/controller.(*controller).handleSync(0xc000211730, 0x20d7b39, 0x7, 0x20d7b39, 0x7, 0x0, 0x0)
	/go/src/github.com/openshift/console-operator/vendor/monis.app/go/openshift/controller/controller.go:118 +0x112
monis.app/go/openshift/controller.(*controller).processNextWorkItem(0xc000211730, 0x203000)
	/go/src/github.com/openshift/console-operator/vendor/monis.app/go/openshift/controller/controller.go:104 +0x17c
monis.app/go/openshift/controller.(*controller).runWorker(0xc000211730)
	/go/src/github.com/openshift/console-operator/vendor/monis.app/go/openshift/controller/controller.go:91 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009e8e00)
	/go/src/github.com/openshift/console-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009e8e00, 0x2480460, 0xc0001f9080, 0x385141434f414101, 0xc00010e5a0)
	/go/src/github.com/openshift/console-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009e8e00, 0x3b9aca00, 0x0, 0x6632706b71686501, 0xc00010e5a0)
	/go/src/github.com/openshift/console-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0009e8e00, 0x3b9aca00, 0xc00010e5a0)
	/go/src/github.com/openshift/console-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by monis.app/go/openshift/controller.(*controller).Run
	/go/src/github.com/openshift/console-operator/vendor/monis.app/go/openshift/controller/controller.go:66 +0x282

Version-Release number of selected component (if applicable):
Only observed on 4.5.

How reproducible:
Observed on two CI runs.

Additional info:

I observed the panic in the following job:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_sdn/268/pull-ci-openshift-sdn-release-4.5-e2e-aws/1367571325169700864

Searching for additional occurrences, I found the panic in the following job:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/16498/rehearse-16498-pull-ci-openshift-ovn-kubernetes-release-4.5-e2e-gcp-ovn-upgrade/1367547256189751296

I could not find any other jobs that had the same panic.
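For illustration only, here is a minimal Go sketch of the kind of nil guard that avoids this class of panic. The parameter name cm comes from the trace above; everything else (the function defaultDeploymentSketch, its signature, and the caller) is hypothetical and is not the operator's actual code or the actual fix.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// defaultDeploymentSketch is a hypothetical stand-in for
// deployment.DefaultDeployment; only the nil guard on the ConfigMap
// parameter is the point of this sketch.
func defaultDeploymentSketch(cm *corev1.ConfigMap) error {
	// In the trace above the cm argument passed into DefaultDeployment
	// was nil (0x0), so dereferencing it panicked. Checking the pointer
	// up front turns that into an ordinary sync error instead.
	if cm == nil {
		return fmt.Errorf("console ConfigMap is nil, skipping deployment sync")
	}
	// Safe to dereference once the guard has passed.
	fmt.Printf("building deployment from ConfigMap %s/%s\n", cm.Namespace, cm.Name)
	return nil
}

func main() {
	// With a nil ConfigMap the sketch returns an error rather than panicking.
	if err := defaultDeploymentSketch(nil); err != nil {
		fmt.Println("sync error:", err)
	}
}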
I observed and searched job logs in pull-ci-openshift-* on https://prow.ci.openshift.org, and it looks like the panic is no longer occurring.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438