During an upgrade, operators are required to update the lastTransitionTime of the Progressing condition both when they start upgrading and when they complete. Even if there is no need to set Progressing to True, the last transition time should still be reset at the end of an upgrade, when the operator hits "level". I started an upgrade 39m ago, and I see:

  clusteroperator.config.openshift.io/network   0.0.1   True   False   False   57m

which means the operator did not reset the Progressing lastTransitionTime. This value is used to tell the admin when "something happened", and an upgrade is "something".
https://github.com/openshift/cluster-version-operator/pull/154 will document this, and an e2e test will verify it post-upgrade in the future.
Fixed in https://github.com/openshift/cluster-network-operator/pull/143 and merged.
hi, I did an upgrade from 4.1.0-0.nightly-2019-04-18-170154 to 4.1.0-0.nightly-2019-04-18-210657:

  history:
  - completionTime: "2019-04-19T07:25:10Z"
    image: registry.svc.ci.openshift.org/ocp/release:4.1.0-0.nightly-2019-04-18-210657
    startedTime: "2019-04-19T06:50:46Z"
    state: Completed
    version: 4.1.0-0.nightly-2019-04-18-210657
  - completionTime: "2019-04-19T06:50:46Z"
    image: registry.svc.ci.openshift.org/ocp/release@sha256:41d6f271eeadb23632b4a8b173f5ba2a22fc02a69bf77eaf834fbd462d9fdb80
    startedTime: "2019-04-18T19:31:02Z"
    state: Completed
    version: 4.1.0-0.nightly-2019-04-18-170154
  observedGeneration: 3
  versionHash: O_Lv92WtZTw=

I found the 'lastTransitionTime' of Progressing has been updated, see:

  # oc get co network -o yaml
  apiVersion: config.openshift.io/v1
  kind: ClusterOperator
  metadata:
    creationTimestamp: "2019-04-18T19:31:15Z"
    generation: 1
    name: network
    resourceVersion: "255928"
    selfLink: /apis/config.openshift.io/v1/clusteroperators/network
    uid: 87e4cd08-6210-11e9-a0be-061050670bc0
  spec: {}
  status:
    conditions:
    - lastTransitionTime: "2019-04-18T19:31:26Z"
      status: "False"
      type: Failing
    - lastTransitionTime: "2019-04-19T07:17:05Z"
      status: "False"
      type: Progressing
    - lastTransitionTime: "2019-04-18T19:31:50Z"
      status: "True"
      type: Available
    - lastTransitionTime: "2019-04-19T06:51:43Z"
      status: "False"
      type: Degraded
    extension: null
    versions:
    - name: operator
      version: 4.1.0-0.nightly-2019-04-18-210657

but I found the 'SINCE' was not updated:

  # oc get co
  NAME                      VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
  authentication            4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  cloud-credential          4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  cluster-autoscaler        4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  console                   4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  dns                       4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  image-registry            4.1.0-0.nightly-2019-04-18-210657   True        False         False     49m
  ingress                   4.1.0-0.nightly-2019-04-18-210657   True        False         False     63m
  kube-apiserver            4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  kube-controller-manager   4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  kube-scheduler            4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  machine-api               4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  machine-config            4.1.0-0.nightly-2019-04-18-210657   True        False         False     53m
  marketplace               4.1.0-0.nightly-2019-04-18-210657   True        False         False     47m
  monitoring                4.1.0-0.nightly-2019-04-18-210657   True        False         False     57m
  network                   4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
  node-tuning               4.1.0-0.nightly-2019-04-18-210657   True        False         False     49m
  openshift-apiserver       4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
Interesting. I'm not sure where the SINCE comes from. Let me look.
SINCE is just the lastTransitionTime of the Available condition. Since the network operator never goes unavailable, it won't change. So this should be fixed.
Thanks Casey. I thought the SINCE should be updated when upgrading; so now the SINCE will not be updated unless I recreate the clusteroperator network resource. Verified this bug according to comment 6.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758