During an upgrade, operators are required to update the lastTransitionTime of the Progressing condition when they start upgrading and again when they complete. Even if there is no need to set Progressing to True, the last transition time should still be reset at the end of the upgrade, when the operator reaches "level".

When I started an upgrade (the upgrade began 39 minutes ago) I see:

clusteroperator.config.openshift.io/ingress   0.0.1   True   False   False   55m

which means the operator did not reset the Progressing lastTransitionTime. This value is used to tell the admin when "something happened", and an upgrade is "something".
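For illustration only, here is a minimal sketch of the expected behavior, not the cluster-ingress-operator's actual code; the setProgressing helper and package name are hypothetical, and the types come from github.com/openshift/api/config/v1:

package status

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	configv1 "github.com/openshift/api/config/v1"
)

// setProgressing is a hypothetical helper illustrating the expectation above:
// whenever the Progressing status flips (for example False -> True when the
// upgrade starts and True -> False when it completes), lastTransitionTime must
// be updated so the admin can see when the operator last did "something".
func setProgressing(co *configv1.ClusterOperator, status configv1.ConditionStatus, reason, message string) {
	now := metav1.Now()
	for i := range co.Status.Conditions {
		cond := &co.Status.Conditions[i]
		if cond.Type != configv1.OperatorProgressing {
			continue
		}
		// Bump the transition time only when the status actually changes.
		if cond.Status != status {
			cond.LastTransitionTime = now
		}
		cond.Status = status
		cond.Reason = reason
		cond.Message = message
		return
	}
	// No Progressing condition yet; add one with a fresh transition time.
	co.Status.Conditions = append(co.Status.Conditions, configv1.ClusterOperatorStatusCondition{
		Type:               configv1.OperatorProgressing,
		Status:             status,
		Reason:             reason,
		Message:            message,
		LastTransitionTime: now,
	})
}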
https://github.com/openshift/cluster-version-operator/pull/154 will document this requirement, and in the future an e2e test will verify it after an upgrade.
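As a sketch of what such a post-upgrade check could look like (this is not the actual e2e test; it assumes a recent openshift/client-go and a hypothetical verifyProgressingTransitioned helper):

package upgrade

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
)

// verifyProgressingTransitioned checks that, after an upgrade that began at
// upgradeStart, every ClusterOperator's Progressing condition carries a
// lastTransitionTime newer than the start of the upgrade.
func verifyProgressingTransitioned(client configclient.Interface, upgradeStart time.Time) error {
	operators, err := client.ConfigV1().ClusterOperators().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, co := range operators.Items {
		for _, cond := range co.Status.Conditions {
			if cond.Type != configv1.OperatorProgressing {
				continue
			}
			if cond.LastTransitionTime.Time.Before(upgradeStart) {
				return fmt.Errorf("clusteroperator %s did not update Progressing lastTransitionTime during the upgrade", co.Name)
			}
		}
	}
	return nil
}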
This PR fixes the bug: https://github.com/openshift/cluster-ingress-operator/pull/198

With the PR:

Before deleting the default ingress controller:

$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "True"
    type: Available
<SNIP>

After deleting the default ingress controller, no ingress controllers exist and the status conditions change with new timestamps:

$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:39:14Z
    message: 0 ingress controllers available, want 1
    reason: Reconciling
    status: "True"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:39:14Z
    message: 0 ingress controllers available, want 1
    reason: IngressUnavailable
    status: "False"
    type: Available
<SNIP>

Lastly, the default ingress controller is recreated, and the Progressing and Available status conditions and their timestamps are updated:

$ oc get clusteroperator.config.openshift.io/ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
<SNIP>
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-04-08T00:37:29Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T00:39:45Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-04-08T00:39:45Z
    status: "True"
    type: Available
<SNIP>

The same behavior should be reflected during an operator upgrade.
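For context, a simplified, hypothetical version of the status logic shown above; the computeOperatorConditions function is illustrative only, though the reasons and message text mirror the output in this comment. A real operator would merge the result with the existing conditions and only bump LastTransitionTime on an actual transition, as in the earlier helper sketch:

package status

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	configv1 "github.com/openshift/api/config/v1"
)

// computeOperatorConditions derives Available and Progressing from how many
// ingress controllers are available versus how many are wanted, producing the
// "0 ingress controllers available, want 1" style message seen above.
func computeOperatorConditions(available, want int) []configv1.ClusterOperatorStatusCondition {
	now := metav1.Now()
	msg := fmt.Sprintf("%d ingress controllers available, want %d", available, want)
	if available < want {
		return []configv1.ClusterOperatorStatusCondition{
			{Type: configv1.OperatorProgressing, Status: configv1.ConditionTrue, Reason: "Reconciling", Message: msg, LastTransitionTime: now},
			{Type: configv1.OperatorAvailable, Status: configv1.ConditionFalse, Reason: "IngressUnavailable", Message: msg, LastTransitionTime: now},
		}
	}
	return []configv1.ClusterOperatorStatusCondition{
		{Type: configv1.OperatorProgressing, Status: configv1.ConditionFalse, LastTransitionTime: now},
		{Type: configv1.OperatorAvailable, Status: configv1.ConditionTrue, LastTransitionTime: now},
	}
}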
[1] supersedes [2] to fix this bug.

[1] https://github.com/openshift/cluster-ingress-operator/pull/201
[2] https://github.com/openshift/cluster-ingress-operator/pull/198
Verified with 4.1.0-0.nightly-2019-04-28-064010; the issue has been fixed. Test steps as in comment 2.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758