Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1765456

Summary: ingresscontroller should show proper status when LoadBalancerReady or DNSReady is False
Product: OpenShift Container Platform
Component: Networking
Networking sub component: router
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Version: 4.3.0
Target Milestone: ---
Target Release: 4.3.0
Hardware: Unspecified
OS: Unspecified
Reporter: Hongan Li <hongli>
Assignee: Dan Mace <dmace>
QA Contact: Hongan Li <hongli>
Docs Contact:
CC: aos-bugs
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-10-25 11:14:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Hongan Li 2019-10-25 07:22:23 UTC
Description of problem:
See https://bugzilla.redhat.com/show_bug.cgi?id=1765044#c2: the Degraded status is False even though DNSReady is False.

Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2019-10-24-203507

How reproducible:
100%

Steps to Reproduce:
1.
2.
3.

Actual results:
The Degraded status is False even though DNSReady is False:
  conditions:
  - lastTransitionTime: "2019-10-23T07:49:05Z"
    reason: Valid
    status: "True"
    type: Admitted
  - lastTransitionTime: "2019-10-23T07:53:11Z"
    status: "True"
    type: Available
  - lastTransitionTime: "2019-10-23T07:49:09Z"
    message: The endpoint publishing strategy supports a managed load balancer
    reason: WantedByEndpointPublishingStrategy
    status: "True"
    type: LoadBalancerManaged
  - lastTransitionTime: "2019-10-23T07:49:12Z"
    message: The LoadBalancer service is provisioned
    reason: LoadBalancerProvisioned
    status: "True"
    type: LoadBalancerReady
  - lastTransitionTime: "2019-10-23T07:49:09Z"
    message: DNS management is supported and zones are specified in the cluster DNS
      config.
    reason: Normal
    status: "True"
    type: DNSManaged
  - lastTransitionTime: "2019-10-23T07:53:11Z"
    message: 'The record failed to provision in some zones: [{ map[Name:qe-piqin-1023-dvgh2-int
      kubernetes.io/cluster/qe-piqin-1023-dvgh2:owned]} {Z3B3KOVA3TRCWP map[]}]'
    reason: FailedZones
    status: "False"
    type: DNSReady
  - lastTransitionTime: "2019-10-23T07:49:09Z"
    status: "False"
    type: Degraded


Expected results:
If LoadBalancerManaged and DNSManaged are "True" and either LoadBalancerReady or DNSReady is "False", then Degraded should be set to "True".
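The expected behavior can be sketched as a small condition-aggregation function. This is an illustrative Go sketch, not the ingress operator's actual implementation; the `cond` type and `degraded` helper are hypothetical names, and the condition type strings come from the status dump above.

```go
package main

import "fmt"

// cond mirrors the two fields of an ingresscontroller status condition
// that matter here. Illustrative only, not the operator's real types.
type cond struct {
	Type   string
	Status string
}

// degraded computes the expected Degraded status: if a subsystem is
// managed (LoadBalancerManaged / DNSManaged is "True") but not ready
// (LoadBalancerReady / DNSReady is "False"), Degraded should be "True".
func degraded(conds []cond) string {
	get := func(t string) string {
		for _, c := range conds {
			if c.Type == t {
				return c.Status
			}
		}
		return "Unknown"
	}
	if get("LoadBalancerManaged") == "True" && get("LoadBalancerReady") == "False" {
		return "True"
	}
	if get("DNSManaged") == "True" && get("DNSReady") == "False" {
		return "True"
	}
	return "False"
}

func main() {
	// The conditions reported under "Actual results" above.
	conds := []cond{
		{"LoadBalancerManaged", "True"},
		{"LoadBalancerReady", "True"},
		{"DNSManaged", "True"},
		{"DNSReady", "False"},
	}
	fmt.Println(degraded(conds)) // prints "True", unlike the observed "False"
}
```

With the reported conditions (DNSManaged "True", DNSReady "False"), this logic yields Degraded="True", whereas the controller reported "False".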

Additional info:

Comment 1 Dan Mace 2019-10-25 11:14:38 UTC
Thanks for the report. Since this problem has most visibly manifested as auth operator degraded conditions, we're tracking the work in https://bugzilla.redhat.com/show_bug.cgi?id=1765282. I'm going to mark this report as a duplicate to avoid confusion!

*** This bug has been marked as a duplicate of bug 1765282 ***