Bug 1734192 - Ingress operator cannot be upgraded from 4.1 to 4.2 on AWS
Summary: Ingress operator cannot be upgraded from 4.1 to 4.2 on AWS
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.2.0
Assignee: Miciah Dashiel Butler Masters
QA Contact: Hongan Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-29 22:28 UTC by Miciah Dashiel Butler Masters
Modified: 2022-08-04 22:24 UTC (History)
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1731323
Environment:
Last Closed: 2019-10-16 06:33:52 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-ingress-operator pull 275 0 None None None 2019-07-29 22:29:39 UTC
Red Hat Product Errata RHBA-2019:2922 0 None None None 2019-10-16 06:34:02 UTC

Description Miciah Dashiel Butler Masters 2019-07-29 22:28:04 UTC
The ingress operator crashes with a nil pointer dereference if the cluster infrastructure config's status.platformStatus field is unset:

    % oc -n openshift-ingress-operator logs ingress-operator-7cf6c4f489-js47n
    2019-07-29T21:42:30.645Z        INFO    operator        log/log.go:26   started zapr logger
    2019-07-29T21:42:32.661Z        INFO    operator.entrypoint     ingress-operator/main.go:61     using operator namespace        {"namespace": "openshift-ingress-operator"}
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x150c461]
    
    goroutine 1 [running]:
    main.getPlatformStatus(0x1bb7d60, 0xc0001bb930, 0xc00009e340, 0x0, 0x0, 0x18a8798)
            /ingress-operator/cmd/ingress-operator/main.go:174 +0x41
    main.main()
            /ingress-operator/cmd/ingress-operator/main.go:89 +0x368

+++ This bug was initially created as a clone of Bug #1731323 +++
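For context, the sketch below illustrates the kind of guard that avoids this crash; it uses the openshift/api config v1 types (Infrastructure, InfrastructureStatus, PlatformStatus) and is an illustration only, not necessarily the exact change made in pull 275. The idea is to stop dereferencing status.platformStatus unconditionally and fall back to the deprecated status.platform field, which is the only one populated on clusters installed as 4.1.

    // Illustrative sketch only (not necessarily the exact fix in pull 275):
    // return a non-nil PlatformStatus even on clusters upgraded from 4.1,
    // where status.platformStatus is unset and only the deprecated
    // status.platform field is populated.
    package main

    import (
        "fmt"

        configv1 "github.com/openshift/api/config/v1"
    )

    func getPlatformStatus(infra *configv1.Infrastructure) *configv1.PlatformStatus {
        if status := infra.Status.PlatformStatus; status != nil {
            return status
        }
        // Synthesize a PlatformStatus from the legacy field so callers
        // (e.g. AWS-specific DNS setup) never hit a nil pointer.
        return &configv1.PlatformStatus{Type: infra.Status.Platform}
    }

    func main() {
        // Simulate a cluster upgraded from 4.1: platformStatus is unset,
        // only the deprecated platform field carries the value.
        infra := &configv1.Infrastructure{
            Status: configv1.InfrastructureStatus{Platform: configv1.AWSPlatformType},
        }
        fmt.Println(getPlatformStatus(infra).Type) // prints "AWS" instead of panicking
    }
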

Comment 2 Hongan Li 2019-07-30 09:11:57 UTC
Verified with 4.2.0-0.nightly-2019-07-30-073644; the issue has been fixed.
No panic appears in the ingress operator log, and the cluster can be upgraded from 4.1 to 4.2.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-07-30-073644   True        False         6m5s    Cluster version is 4.2.0-0.nightly-2019-07-30-073644
$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      59m
cloud-credential                           4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
cluster-autoscaler                         4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
console                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      60m
dns                                        4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
image-registry                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
ingress                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      62m
kube-apiserver                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      66m
kube-controller-manager                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      65m
kube-scheduler                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      65m
machine-api                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
machine-config                             4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
marketplace                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      7m35s
monitoring                                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      7m53s
network                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
node-tuning                                4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
openshift-apiserver                        4.2.0-0.nightly-2019-07-30-073644   True        False         False      12m
openshift-controller-manager               4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
openshift-samples                          4.2.0-0.nightly-2019-07-30-073644   True        False         False      30m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-07-30-073644   True        False         False      67m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-07-30-073644   True        False         False      10m
service-ca                                 4.2.0-0.nightly-2019-07-30-073644   True        False         False      68m
service-catalog-apiserver                  4.2.0-0.nightly-2019-07-30-073644   True        False         False      64m
service-catalog-controller-manager         4.2.0-0.nightly-2019-07-30-073644   True        False         False      64m
storage                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      40m
support                                    4.2.0-0.nightly-2019-07-30-073644   True        False         False      39m

Comment 3 Hongan Li 2019-07-30 10:13:54 UTC
This was verified on the AWS platform.

Comment 4 errata-xmlrpc 2019-10-16 06:33:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

