Bug 1695210 - ingress operator does not reset progressing transition timestamp when it upgrades
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: Daneyon Hansen
QA Contact: Hongan Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-04-02 16:24 UTC by Clayton Coleman
Modified: 2022-08-04 22:24 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:46:54 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:0758 (last updated 2019-06-04 10:47:00 UTC)

Description Clayton Coleman 2019-04-02 16:24:11 UTC
During an upgrade, operators are required to update the lastTransitionTime of the Progressing condition when they start upgrading and when they complete. Even if there is no need to set Progressing, the last transition time should still be reset at the end of the upgrade, when the operator reaches the new "level".

After starting an upgrade 39 minutes ago, I see:

clusteroperator.config.openshift.io/ingress                              0.0.1     True        False         False     55m

which means the operator did not reset the Progressing condition's lastTransitionTime.

This value is used to tell the admin when "something happened" and an upgrade is "something".
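
For reference, the timestamp behind that column can also be read directly with a jsonpath query (a generic oc invocation, assuming access to the cluster):

$ oc get clusteroperator.config.openshift.io/ingress \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")].lastTransitionTime}'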

Comment 1 Clayton Coleman 2019-04-02 16:45:28 UTC
https://github.com/openshift/cluster-version-operator/pull/154 will document this requirement, and in the future an e2e test will verify it post-upgrade.

Comment 2 Daneyon Hansen 2019-04-08 01:26:43 UTC
This PR fixes the bug: https://github.com/openshift/cluster-ingress-operator/pull/198

With the PR:

Before deleting the default ingress controller:
$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "True"
    type: Available
<SNIP>
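
The delete itself can be done with something like the following (assuming the 4.1 layout, where the default IngressController lives in the openshift-ingress-operator namespace):

$ oc -n openshift-ingress-operator delete ingresscontroller default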

After deleting the default ingress controller, no ingress controllers exist and the status conditions change with new timestamps:
$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:39:14Z
    message: 0 ingress controllers available, want 1
    reason: Reconciling
    status: "True"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:39:14Z
    message: 0 ingress controllers available, want 1
    reason: IngressUnavailable
    status: "False"
    type: Available
<SNIP>

Lastly, the default ingress controller is recreated. The 'Progressing' and 'Available' status conditions and their timestamps are updated:
$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:39:45Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:39:45Z
    status: "True"
    type: Available
<SNIP>

The same behavior should be reflected during an operator upgrade.
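
A rough sketch of checking that behavior across an upgrade (capture the Progressing timestamp before the upgrade starts, then compare once the operator reports the new version; exact timing will vary):

$ before=$(oc get clusteroperator.config.openshift.io/ingress \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")].lastTransitionTime}')
$ # ...perform the upgrade and wait for the ingress operator to level...
$ after=$(oc get clusteroperator.config.openshift.io/ingress \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")].lastTransitionTime}')
$ [ "$before" != "$after" ] && echo "Progressing lastTransitionTime was reset"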

Comment 7 Hongan Li 2019-04-30 09:59:19 UTC
Verified with 4.1.0-0.nightly-2019-04-28-064010; the issue has been fixed.

Test steps are as in Comment 2.

Comment 9 errata-xmlrpc 2019-06-04 10:46:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

