Bug 1695200 - network operator does not reset progressing transition timestamp when it upgrades
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: Casey Callendrello
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-04-02 16:15 UTC by Clayton Coleman
Modified: 2019-06-04 10:46 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:46:50 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:46:57 UTC

Description Clayton Coleman 2019-04-02 16:15:45 UTC
During an upgrade, operators are required to update the lastTransitionTime of Progressing when they start upgrading and when they complete.  If there is no need to set Progressing, the last transition time should still be reset at the end of an upgrade when they hit "level".

When I started an upgrade (it began 39m ago), I see:

clusteroperator.config.openshift.io/network                              0.0.1                          True        False         False     57m

which means the operator did not reset the Progressing lastTransitionTime.

This value is used to tell the admin when "something happened" and an upgrade is "something".

Comment 1 Clayton Coleman 2019-04-02 17:36:44 UTC
https://github.com/openshift/cluster-version-operator/pull/154 will document this, and an e2e test will verify it post-upgrade in the future.

Comment 2 Casey Callendrello 2019-04-10 14:25:33 UTC
Fixed in https://github.com/openshift/cluster-network-operator/pull/143 and merged.

Comment 4 zhaozhanqi 2019-04-19 08:24:55 UTC
Hi, I upgraded from 4.1.0-0.nightly-2019-04-18-170154 to 4.1.0-0.nightly-2019-04-18-210657:

    history:
    - completionTime: "2019-04-19T07:25:10Z"
      image: registry.svc.ci.openshift.org/ocp/release:4.1.0-0.nightly-2019-04-18-210657
      startedTime: "2019-04-19T06:50:46Z"
      state: Completed
      version: 4.1.0-0.nightly-2019-04-18-210657
    - completionTime: "2019-04-19T06:50:46Z"
      image: registry.svc.ci.openshift.org/ocp/release@sha256:41d6f271eeadb23632b4a8b173f5ba2a22fc02a69bf77eaf834fbd462d9fdb80
      startedTime: "2019-04-18T19:31:02Z"
      state: Completed
      version: 4.1.0-0.nightly-2019-04-18-170154
    observedGeneration: 3
    versionHash: O_Lv92WtZTw=


I found the 'lastTransitionTime' of Progressing has been updated; see:
# oc get co network -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-04-18T19:31:15Z"
  generation: 1
  name: network
  resourceVersion: "255928"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/network
  uid: 87e4cd08-6210-11e9-a0be-061050670bc0
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-04-18T19:31:26Z"
    status: "False"
    type: Failing
  - lastTransitionTime: "2019-04-19T07:17:05Z"
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-04-18T19:31:50Z"
    status: "True"
    type: Available
  - lastTransitionTime: "2019-04-19T06:51:43Z"
    status: "False"
    type: Degraded
  extension: null
  versions:
  - name: operator
    version: 4.1.0-0.nightly-2019-04-18-210657

But I found the 'SINCE' column was not updated:

# oc get co
NAME                                 VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                       4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
cloud-credential                     4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
cluster-autoscaler                   4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
console                              4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
dns                                  4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
image-registry                       4.1.0-0.nightly-2019-04-18-210657   True        False         False     49m
ingress                              4.1.0-0.nightly-2019-04-18-210657   True        False         False     63m
kube-apiserver                       4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
kube-controller-manager              4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
kube-scheduler                       4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
machine-api                          4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
machine-config                       4.1.0-0.nightly-2019-04-18-210657   True        False         False     53m
marketplace                          4.1.0-0.nightly-2019-04-18-210657   True        False         False     47m
monitoring                           4.1.0-0.nightly-2019-04-18-210657   True        False         False     57m
network                              4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h
node-tuning                          4.1.0-0.nightly-2019-04-18-210657   True        False         False     49m
openshift-apiserver                  4.1.0-0.nightly-2019-04-18-210657   True        False         False     12h

Comment 5 Casey Callendrello 2019-04-23 08:59:01 UTC
Interesting. I'm not sure where the SINCE comes from. Let me look.

Comment 6 Casey Callendrello 2019-04-23 15:50:20 UTC
SINCE is just the lastTransitionTime of Available. Since the network doesn't go unavailable, it won't change. So this should be fixed.

Comment 7 zhaozhanqi 2019-04-24 02:10:33 UTC
Thanks Casey. I thought SINCE would be updated when upgrading; so now SINCE will not be updated unless I recreate the clusteroperator network resource.
Verified this bug according to comment 6.

Comment 9 errata-xmlrpc 2019-06-04 10:46:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

