The ingress operator crashes with a nil pointer dereference if the cluster infrastructure config's status.platformStatus field is unset:

% oc -n openshift-ingress-operator logs ingress-operator-7cf6c4f489-js47n
2019-07-29T21:42:30.645Z  INFO  operator  log/log.go:26  started zapr logger
2019-07-29T21:42:32.661Z  INFO  operator.entrypoint  ingress-operator/main.go:61  using operator namespace  {"namespace": "openshift-ingress-operator"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x150c461]

goroutine 1 [running]:
main.getPlatformStatus(0x1bb7d60, 0xc0001bb930, 0xc00009e340, 0x0, 0x0, 0x18a8798)
        /ingress-operator/cmd/ingress-operator/main.go:174 +0x41
main.main()
        /ingress-operator/cmd/ingress-operator/main.go:89 +0x368

+++ This bug was initially created as a clone of Bug #1731323 +++
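The traceback points at getPlatformStatus dereferencing status.platformStatus without a nil check. As a rough illustration only (not the operator's actual code), a nil-safe lookup could fall back to the deprecated status.platform field, which clusters installed before 4.2 may still be the only platform information available; the function shape and error handling below are assumptions for the sketch:

// Hypothetical sketch, assuming the openshift/api config/v1 types; not the
// actual ingress-operator fix.
package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

// getPlatformStatus returns the infrastructure platform status, falling back
// to the deprecated status.platform field when status.platformStatus is unset
// instead of dereferencing a nil pointer.
func getPlatformStatus(infra *configv1.Infrastructure) (*configv1.PlatformStatus, error) {
	if status := infra.Status.PlatformStatus; status != nil {
		return status, nil
	}
	// Cluster created by a release that never populated platformStatus:
	// synthesize a minimal status from the legacy field.
	if infra.Status.Platform == "" {
		return nil, fmt.Errorf("infrastructure %q has no platform information", infra.Name)
	}
	return &configv1.PlatformStatus{Type: infra.Status.Platform}, nil
}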
Verified with 4.2.0-0.nightly-2019-07-30-073644 and the issue has been fixed. There is no panic in the ingress operator log, and the cluster can be upgraded from 4.1 to 4.2.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-07-30-073644   True        False         6m5s    Cluster version is 4.2.0-0.nightly-2019-07-30-073644

$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      59m
cloud-credential                           4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
cluster-autoscaler                         4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
console                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      60m
dns                                        4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
image-registry                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
ingress                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      62m
kube-apiserver                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      66m
kube-controller-manager                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      65m
kube-scheduler                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      65m
machine-api                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
machine-config                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
marketplace                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      7m35s
monitoring                                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      7m53s
network                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
node-tuning                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
openshift-apiserver                        4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
openshift-controller-manager               4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
openshift-samples                          4.2.0-0.nightly-2019-07-30-073644   True        False         False      30m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-07-30-073644   True        False         False      10m
service-ca                                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
service-catalog-apiserver                  4.2.0-0.nightly-2019-07-30-073644   True        False         False      64m
service-catalog-controller-manager         4.2.0-0.nightly-2019-07-30-073644   True        False         False      64m
storage                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      40m
support                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      39m
This was verified on the AWS platform.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922