Bug 1690747

Summary: [networking_operator] The status of 'FAILING' becomes nil when the clusteroperator network resource is deleted and then recovered by the network operator
Product: OpenShift Container Platform Reporter: zhaozhanqi <zzhao>
Component: Networking    Assignee: Ricardo Carrillo Cruz <ricarril>
Status: CLOSED ERRATA QA Contact: Meng Bo <bmeng>
Severity: medium Docs Contact:
Priority: medium    
Version: 4.1.0    CC: anusaxen, aos-bugs, bbennett, ricarril
Target Milestone: ---   
Target Release: 4.1.0   
Hardware: All   
OS: All   
Whiteboard:
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:    Environment:
Last Closed: 2019-06-04 10:46:13 UTC Type: Bug

Description zhaozhanqi 2019-03-20 07:28:08 UTC
Description of problem:
When the clusteroperator network resource is deleted manually, it is recreated after about 3 minutes by the network operator pod, but the status of 'FAILING' becomes nil (the column is empty).

# oc get clusteroperator network
NAME      VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
network   4.0.0-0.nightly-2019-03-18-200009   True        False                   2m28s
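
For reference, the missing value can also be read directly from the resource status; a query along these lines (standard oc/kubectl jsonpath filter syntax, shown as an illustration rather than taken from this report) prints an empty string when the Failing condition is absent:

# oc get clusteroperator network -o jsonpath='{.status.conditions[?(@.type=="Failing")].status}'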

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-18-200009

How reproducible:
always

Steps to Reproduce:
1. Delete the clusteroperator network resource
   oc delete clusteroperator network
2. Check that the clusteroperator network resource is recreated within about 3 minutes
   oc get clusteroperator
3. Check the logs of the network operator pod (see the example command below)
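
A command along these lines can be used for step 3 (the openshift-network-operator namespace and network-operator deployment name are the usual OCP 4.x defaults, assumed here rather than quoted from this report):

   oc logs -n openshift-network-operator deployment/network-operator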
   

Actual results:
Step 2: the 'FAILING' column is empty (nil).

# oc get clusteroperator network
NAME      VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
network   4.0.0-0.nightly-2019-03-18-200009   True        False                   2m28s

Step 3: logs from the network operator pod:

2019/03/20 07:09:53 Created ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:53Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:53Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
2019/03/20 07:09:58 Reconciling update to openshift-sdn/sdn-controller
2019/03/20 07:09:58 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
2019/03/20 07:10:00 Reconciling update to openshift-sdn/ovs
2019/03/20 07:10:00 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
2019/03/20 07:10:03 Reconciling update to openshift-sdn/sdn
2019/03/20 07:10:03 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-03-20T07:09:58Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.0.0-0.nightly-2019-03-18-200009
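
Note that each Created/Updated status in the log above carries only the Progressing and Available conditions; a Failing condition is never written back to the recreated resource, which matches the empty column in step 2. To list every condition actually present on the live object, a query like the following (standard jsonpath range syntax, shown as an illustration) can be used:

# oc get clusteroperator network -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'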

Expected results:

The 'FAILING' column should be populated with the correct value (normally 'False' when the operator is healthy) after the resource is recreated.

Additional info:

Comment 1 Ricardo Carrillo Cruz 2019-04-03 12:17:25 UTC
https://github.com/openshift/cluster-network-operator/pull/136

Comment 2 Casey Callendrello 2019-04-04 16:16:15 UTC
*** Bug 1696331 has been marked as a duplicate of this bug. ***

Comment 4 zhaozhanqi 2019-04-10 05:43:59 UTC
Verified this bug on 4.0.0-0.ci-2019-04-09-225415.
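
A quick re-check along these lines (illustrative commands, not the exact verification steps) is to delete the resource again, wait for the operator to recreate it, and confirm that the FAILING column is populated:

# oc delete clusteroperator network
  (wait about 3 minutes for the network operator to recreate the resource)
# oc get clusteroperator network

With the fix, the FAILING column should show False rather than an empty value.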

Comment 6 errata-xmlrpc 2019-06-04 10:46:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758