Bug 1734192

Summary: Ingress operator cannot be upgraded from 4.1 to 4.2 on AWS
Product: OpenShift Container Platform
Reporter: Miciah Dashiel Butler Masters <mmasters>
Component: Networking
Assignee: Miciah Dashiel Butler Masters <mmasters>
Networking sub component: router
QA Contact: Hongan Li <hongli>
Status: CLOSED ERRATA
Severity: high
Priority: high
CC: aos-bugs, cdc, dmace, gpei, hongli, jokerman, mmccomas, sponnaga, wzheng, yapei
Version: 4.2.0
Keywords: TestBlocker
Target Milestone: ---
Target Release: 4.2.0
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Clone Of: 1731323
Last Closed: 2019-10-16 06:33:52 UTC

Description Miciah Dashiel Butler Masters 2019-07-29 22:28:04 UTC
The ingress operator crashes with a nil pointer dereference if the cluster infrastructure config's status.platformStatus field is unset:

    % oc -n openshift-ingress-operator logs ingress-operator-7cf6c4f489-js47n
    2019-07-29T21:42:30.645Z        INFO    operator        log/log.go:26   started zapr logger
    2019-07-29T21:42:32.661Z        INFO    operator.entrypoint     ingress-operator/main.go:61     using operator namespace        {"namespace": "openshift-ingress-operator"}
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x150c461]
    
    goroutine 1 [running]:
    main.getPlatformStatus(0x1bb7d60, 0xc0001bb930, 0xc00009e340, 0x0, 0x0, 0x18a8798)
            /ingress-operator/cmd/ingress-operator/main.go:174 +0x41
    main.main()
            /ingress-operator/cmd/ingress-operator/main.go:89 +0x368
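
The crash happens because the operator dereferences status.platformStatus without checking for nil; on clusters installed with 4.1 that field does not exist, and only the older, deprecated status.platform field is populated. A minimal sketch of the kind of guard that avoids the panic (using simplified stand-in types rather than the real config.openshift.io/v1 API, and not the actual patch):

```go
package main

import "fmt"

// Simplified stand-ins for the config.openshift.io/v1 Infrastructure types;
// field names mirror the real API but are trimmed for illustration.
type PlatformType string

type PlatformStatus struct {
	Type PlatformType
}

type InfrastructureStatus struct {
	// Platform is the deprecated pre-4.2 field; PlatformStatus may be nil
	// on clusters originally installed with 4.1.
	Platform       PlatformType
	PlatformStatus *PlatformStatus
}

type Infrastructure struct {
	Status InfrastructureStatus
}

// getPlatformStatus returns the platform status, synthesizing one from the
// deprecated status.platform field when status.platformStatus is unset.
// Dereferencing infra.Status.PlatformStatus without this nil check is the
// kind of bug that produces the panic in the log above.
func getPlatformStatus(infra *Infrastructure) *PlatformStatus {
	if s := infra.Status.PlatformStatus; s != nil {
		return s
	}
	return &PlatformStatus{Type: infra.Status.Platform}
}

func main() {
	// A 4.1-era cluster: only the deprecated field is populated.
	old := &Infrastructure{Status: InfrastructureStatus{Platform: "AWS"}}
	fmt.Println(getPlatformStatus(old).Type) // no panic
}
```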

+++ This bug was initially created as a clone of Bug #1731323 +++

Comment 2 Hongan Li 2019-07-30 09:11:57 UTC
Verified with 4.2.0-0.nightly-2019-07-30-073644; the issue has been fixed.
There is no panic in the ingress operator log, and the cluster can be upgraded from 4.1 to 4.2.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-07-30-073644   True        False         6m5s    Cluster version is 4.2.0-0.nightly-2019-07-30-073644
$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      59m
cloud-credential                           4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
cluster-autoscaler                         4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
console                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      60m
dns                                        4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
image-registry                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
ingress                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      62m
kube-apiserver                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      66m
kube-controller-manager                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      65m
kube-scheduler                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      65m
machine-api                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
machine-config                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
marketplace                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      7m35s
monitoring                                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      7m53s
network                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
node-tuning                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
openshift-apiserver                        4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
openshift-controller-manager               4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
openshift-samples                          4.2.0-0.nightly-2019-07-30-073644   True        False         False      30m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-07-30-073644   True        False         False      10m
service-ca                                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
service-catalog-apiserver                  4.2.0-0.nightly-2019-07-30-073644   True        False         False      64m
service-catalog-controller-manager         4.2.0-0.nightly-2019-07-30-073644   True        False         False      64m
storage                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      40m
support                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      39m

Comment 3 Hongan Li 2019-07-30 10:13:54 UTC
This was verified on AWS platform.

Comment 4 errata-xmlrpc 2019-10-16 06:33:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922