Bug 1683765 - Conflicting clusteringress resources are allowed
Summary: Conflicting clusteringress resources are allowed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Routing
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: Daneyon Hansen
QA Contact: Hongan Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-27 18:21 UTC by Dan Mace
Modified: 2019-06-04 10:44 UTC (History)
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:44:42 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:44:49 UTC

Description Dan Mace 2019-02-27 18:21:34 UTC
Description of problem:

Creating a clusteringress with a `.spec.ingressDomain` that conflicts with another clusteringress is allowed, causing the new conflicting clusteringress to effectively assume ownership of resources owned by another clusteringress (for example, DNS records for the LB service).

A clusteringress whose ingressDomain conflicts with that of an older clusteringress should be considered invalid, rejected, and not acted upon.

Version-Release number of selected component (if applicable):


How reproducible:

Create a new clusteringress with a `spec.ingressDomain` that matches another clusteringress's (creating one with a nil ingressDomain is a good way to demonstrate a conflict with the default clusteringress).
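For illustration, the reproduction step above could look roughly like the following manifest (a hypothetical sketch; the resource name is a placeholder, and this uses the later IngressController API shown in the verification below — leaving `spec.domain` unset causes it to default to the cluster ingress domain, which conflicts with the "default" resource):

```yaml
# Hypothetical example for reproducing the conflict.
# Omitting spec.domain makes it default to the cluster ingress
# domain, conflicting with the "default" ingresscontroller.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: conflicting        # placeholder name
  namespace: openshift-ingress-operator
spec: {}
```

Applying it with `oc apply -f` should trigger the conflict described above.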


Steps to Reproduce:
1.
2.
3.

Actual results:

The new conflicting clusteringress is reconciled as if it were valid.

Expected results:

The new conflicting clusteringress should be rejected in some way; at a minimum it should be ignored and the condition should be logged or reported through an event. Better still would be to report the state as a status condition.
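A status condition of the kind suggested above might take roughly this shape (a hypothetical sketch of one possible schema, not necessarily what was implemented; the condition type, reason, and message are placeholders):

```yaml
# Hypothetical status condition marking a conflicting resource
# as not admitted, following Kubernetes condition conventions.
status:
  conditions:
  - type: Admitted            # placeholder condition type
    status: "False"
    reason: DomainConflict    # placeholder reason
    message: domain conflicts with existing clusteringress "default"
```

Reporting the conflict this way would let `oc get ... -o yaml` show the rejection directly, rather than requiring users to search the operator logs.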

Additional info:

Comment 4 Hongan Li 2019-04-03 03:22:19 UTC
Verified with 4.0.0-0.nightly-2019-04-02-081046; the issue has been fixed.

When a conflicting ingresscontroller is created, the message "domain conflicts" is shown in the operator logs, and no status is set for the new ingresscontroller.


$ oc -n openshift-ingress-operator get ingresscontroller/test0 -o yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: 2019-04-03T02:57:50Z
  generation: 1
  name: test0
  namespace: openshift-ingress-operator
  resourceVersion: "295171"
  selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/test0
  uid: 44befad2-55bc-11e9-a9b0-0a20a2ee6b90


operator logs:
2019-04-03T02:57:50.814Z	INFO	operator.controller	controller/controller.go:222	domain conflicts with existing IngressController	{"domain": "apps.hongli402.qe.devcluster.openshift.com", "namespace": "openshift-ingress-operator", "name": "default"}
2019-04-03T02:57:50.814Z	INFO	operator.controller	controller/controller.go:196	domain not unique, not setting status domain for IngressController	{"namespace": "openshift-ingress-operator", "name": "test0"}

Comment 6 errata-xmlrpc 2019-06-04 10:44:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

