Bug 1695210
| Summary: | ingress operator does not reset progressing transition timestamp when it upgrades | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Clayton Coleman <ccoleman> |
| Component: | Networking | Assignee: | Daneyon Hansen <dhansen> |
| Networking sub component: | router | QA Contact: | Hongan Li <hongli> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | aos-bugs, bbennett, dhansen, dmace, wsun |
| Version: | 4.1.0 | Keywords: | BetaBlocker |
| Target Milestone: | --- | | |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-04 10:46:54 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description
Clayton Coleman 2019-04-02 16:24:11 UTC
https://github.com/openshift/cluster-version-operator/pull/154 will document this, and an e2e test will verify it post-upgrade in the future.

This PR fixes the bug: https://github.com/openshift/cluster-ingress-operator/pull/198

With the PR, before deleting the default ingress controller:

```
$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "True"
    type: Available
<SNIP>
```

Now, no ingress controllers exist. After deleting the default ingress controller, the status conditions change with new timestamps:

```
$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:39:14Z
    message: 0 ingress controllers available, want 1
    reason: Reconciling
    status: "True"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:39:14Z
    message: 0 ingress controllers available, want 1
    reason: IngressUnavailable
    status: "False"
    type: Available
<SNIP>
```

Lastly, the default ingress controller is recreated, and the 'Progressing' and 'Available' status conditions and timestamps are updated:

```
$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:39:45Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:39:45Z
    status: "True"
    type: Available
<SNIP>
```

The same behavior should be reflected during an operator upgrade.

[1] supersedes [2] to fix this bug.

[1] https://github.com/openshift/cluster-ingress-operator/pull/201
[2] https://github.com/openshift/cluster-ingress-operator/pull/198

Verified with 4.1.0-0.nightly-2019-04-28-064010; the issue has been fixed. Test steps as in #Comment 2.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
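For context on the mechanism, the fix comes down to bumping a condition's lastTransitionTime only when that condition's status actually flips, and preserving it when the status is unchanged. Below is a minimal sketch of that pattern in Go; the Condition struct and setCondition helper are illustrative stand-ins, not the cluster-ingress-operator's actual types or code:

```go
package main

import (
	"fmt"
	"time"
)

// Condition mirrors the rough shape of a ClusterOperator status condition.
// This struct and the helper below are an illustrative sketch only.
type Condition struct {
	Type               string
	Status             string // "True" | "False" | "Unknown"
	Reason             string
	Message            string
	LastTransitionTime time.Time
}

// setCondition updates (or appends) a condition in conds. The crux of this
// bug: LastTransitionTime must be reset whenever Status changes, and
// preserved when it does not.
func setCondition(conds []Condition, updated Condition) []Condition {
	now := time.Now()
	for i := range conds {
		if conds[i].Type != updated.Type {
			continue
		}
		if conds[i].Status != updated.Status {
			// Status flipped: record the transition time.
			updated.LastTransitionTime = now
		} else {
			// Status unchanged: keep the old transition time.
			updated.LastTransitionTime = conds[i].LastTransitionTime
		}
		conds[i] = updated
		return conds
	}
	// New condition type: its first transition happens now.
	updated.LastTransitionTime = now
	return append(conds, updated)
}

func main() {
	conds := setCondition(nil, Condition{Type: "Progressing", Status: "False"})
	// Simulate the operator starting to reconcile: Progressing flips to True,
	// so its LastTransitionTime must move forward.
	conds = setCondition(conds, Condition{
		Type: "Progressing", Status: "True",
		Reason: "Reconciling", Message: "0 ingress controllers available, want 1",
	})
	fmt.Println(conds[0].Status, conds[0].LastTransitionTime.Format(time.RFC3339))
}
```

The essential invariant is that lastTransitionTime tracks status flips rather than being rewritten on every reconcile, which is what makes the timestamps in the outputs above meaningful.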
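As a quick spot check on a live cluster (an assumed workflow, not the recorded test steps from #Comment 2), one can read back just the Progressing condition's timestamp before and after deleting the default ingress controller:

```
# Print the Progressing condition's lastTransitionTime (jsonpath filter on .type).
$ oc get clusteroperator/ingress \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")].lastTransitionTime}{"\n"}'

# Delete the default ingress controller; the operator recreates it.
$ oc -n openshift-ingress-operator delete ingresscontroller/default

# Read the timestamp again; with the fix it should have moved forward.
$ oc get clusteroperator/ingress \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")].lastTransitionTime}{"\n"}'
```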