Bug 1690747 - [networking_operator] The 'FAILING' status becomes nil when the clusteroperator network resource is deleted and then recovered by the network operator
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.1.0
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Ricardo Carrillo Cruz
QA Contact: Meng Bo
URL:
Whiteboard:
Duplicates: 1696331 (view as bug list)
Depends On:
Blocks:
 
Reported: 2019-03-20 07:28 UTC by zhaozhanqi
Modified: 2019-06-04 10:46 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:46:13 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:46:19 UTC

Description zhaozhanqi 2019-03-20 07:28:08 UTC
Description of problem:
When the clusteroperator network resource is deleted manually, the network operator pod recreates it within about 3 minutes, but the 'FAILING' status becomes nil.

# oc get clusteroperator network
NAME      VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
network   4.0.0-0.nightly-2019-03-18-200009   True        False                   2m28s
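The missing condition can be seen programmatically as well. A minimal sketch (hypothetical helper, not part of the operator) that parses the status JSON shape shown in the operator logs below and reports which expected condition types are absent:

```python
import json

# Hypothetical sample mirroring the status the operator logs after recreating
# the ClusterOperator: only Progressing and Available are set, Failing is absent.
status_json = """
{
  "conditions": [
    {"lastTransitionTime": "2019-03-20T07:09:53Z", "status": "False", "type": "Progressing"},
    {"lastTransitionTime": "2019-03-20T07:09:53Z", "status": "True", "type": "Available"}
  ]
}
"""

def missing_condition_types(status, expected=("Available", "Progressing", "Failing")):
    """Return the expected ClusterOperator condition types absent from a status dict."""
    present = {c["type"] for c in status.get("conditions", [])}
    return [t for t in expected if t not in present]

print(missing_condition_types(json.loads(status_json)))  # → ['Failing']
```

The same check could be run against real output from `oc get clusteroperator network -o json` by reading the `.status` field instead of the sample string.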

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-18-200009

How reproducible:
always

Steps to Reproduce:
1. Delete the clusteroperator network resource
   oc delete clusteroperator network
2. Check that the clusteroperator network resource is recreated within about 3 minutes
   oc get clusteroperator 
3. Check the logs of network operator pod
   

Actual results:
step 2: The 'FAILING' column is nil

# oc get clusteroperator network
NAME      VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
network   4.0.0-0.nightly-2019-03-18-200009   True        False                   2m28s

step 3: Logs from the network operator pod:

2019/03/20 07:09:53 Created ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:53Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:53Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
2019/03/20 07:09:58 Reconciling update to openshift-sdn/sdn-controller
2019/03/20 07:09:58 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
2019/03/20 07:10:00 Reconciling update to openshift-sdn/ovs
2019/03/20 07:10:00 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
2019/03/20 07:10:03 Reconciling update to openshift-sdn/sdn
2019/03/20 07:10:03 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
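The logs show that on each status write the operator sets only the Progressing and Available conditions, so a recreated ClusterOperator never gets a Failing entry. A plausible shape for the fix (a hedged sketch using plain dicts, not the operator's real Go types; the actual change is in the PR linked in comment 1) is to default every expected condition type before writing status:

```python
from datetime import datetime, timezone

def set_condition(conditions, cond_type, status):
    """Set or update one condition in place, bumping lastTransitionTime only on a status change."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    for c in conditions:
        if c["type"] == cond_type:
            if c["status"] != status:
                c["status"] = status
                c["lastTransitionTime"] = now
            return conditions
    conditions.append({"type": cond_type, "status": status, "lastTransitionTime": now})
    return conditions

def ensure_default_conditions(conditions):
    """Guarantee every expected condition type exists; absent ones get a safe default."""
    defaults = {"Available": "False", "Progressing": "False", "Failing": "False"}
    present = {c["type"] for c in conditions}
    for cond_type, status in defaults.items():
        if cond_type not in present:
            set_condition(conditions, cond_type, status)
    return conditions

# Conditions as logged after the ClusterOperator was recreated: no Failing entry.
conds = [
    {"type": "Progressing", "status": "False", "lastTransitionTime": "2019-03-20T07:09:53Z"},
    {"type": "Available", "status": "True", "lastTransitionTime": "2019-03-20T07:09:53Z"},
]
ensure_default_conditions(conds)
print(sorted(c["type"] for c in conds))  # → ['Available', 'Failing', 'Progressing']
```

With defaulting in place, `oc get clusteroperator network` would show 'False' in the FAILING column for a healthy recreated resource instead of an empty cell.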

Expected results:

The 'FAILING' column should show the correct value (e.g. 'False' when the operator is healthy) rather than being nil.

Additional info:

Comment 1 Ricardo Carrillo Cruz 2019-04-03 12:17:25 UTC
https://github.com/openshift/cluster-network-operator/pull/136

Comment 2 Casey Callendrello 2019-04-04 16:16:15 UTC
*** Bug 1696331 has been marked as a duplicate of this bug. ***

Comment 4 zhaozhanqi 2019-04-10 05:43:59 UTC
Verified this bug on 4.0.0-0.ci-2019-04-09-225415.

Comment 6 errata-xmlrpc 2019-06-04 10:46:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

