Bug 1682922 - Deleting a user-defined clusteringress does not remove child resources
Summary: Deleting a user-defined clusteringress does not remove child resources
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Routing
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: Dan Mace
QA Contact: Hongan Li
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-02-25 20:17 UTC by Daneyon Hansen
Modified: 2019-06-04 10:44 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:44:27 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:44:33 UTC
Github openshift cluster-ingress-operator pull 146 None None None 2019-03-07 21:57:50 UTC

Description Daneyon Hansen 2019-02-25 20:17:59 UTC
Description of problem:
Deleting a user-defined clusteringress does not remove its child resources. For example, the deployment and service still exist.

Version-Release number of selected component (if applicable):
master

How reproducible:
always

Steps to Reproduce:
1. Create a clusteringress:
$ BASE_DOMAIN=$(oc get dns/cluster -o jsonpath='{.spec.baseDomain}')
$ cat > test0-clusteringress.yaml <<EOF
apiVersion: ingress.openshift.io/v1alpha1
kind: ClusterIngress
metadata:
  name: test0
  namespace: openshift-ingress-operator
spec:
  highAvailability:
    type: Cloud
  ingressDomain: test0.$BASE_DOMAIN
  replicas: 2
EOF
$ oc create -f test0-clusteringress.yaml

2. Verify the clusteringress is created, along with the child resources:
$ oc get clusteringress/test0 -n openshift-ingress-operator
NAME      AGE
test0     8s
$ oc get deploy/router-test0 -n openshift-ingress
NAME           READY     UP-TO-DATE   AVAILABLE   AGE
router-test0   0/2       2            0           18s
$ oc get svc/router-test0 -n openshift-ingress
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)                      AGE
router-test0   LoadBalancer   172.30.88.72   ab6ec6b07393911e9ae0b0a6b3bd6b3e-2040644037.us-west-2.elb.amazonaws.com   80:30405/TCP,443:30614/TCP   29s

3. Delete the clusteringress, then verify it has been deleted while some of its child resources (deploy and svc) still exist:
$ oc delete clusteringress/test0 -n openshift-ingress-operator
$ oc get clusteringress/test0 -n openshift-ingress-operator
No resources found.
Error from server (NotFound): clusteringresses.ingress.openshift.io "test0" not found
$ oc get deploy/router-test0 -n openshift-ingress
NAME           READY     UP-TO-DATE   AVAILABLE   AGE
router-test0   2/2       2            2           2m26s
$ oc get svc/router-test0 -n openshift-ingress
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP                                                               PORT(S)                      AGE
router-test0   LoadBalancer   172.30.88.72   ab6ec6b07393911e9ae0b0a6b3bd6b3e-2040644037.us-west-2.elb.amazonaws.com   80:30405/TCP,443:30614/TCP   2m31s

Actual results:
Child resources (deploy/router-test0 and svc/router-test0) still exist after the clusteringress is deleted.

Expected results:
All child resources are removed along with the clusteringress (no child left behind ;-))

Additional info:
# Operator logs produced by creating the test0 clusteringress:
2019-02-25T11:58:12.979-0800	INFO	operator.controller	controller/controller.go:92	reconciling	{"request": "openshift-ingress-operator/test0"}
2019-02-25T11:58:13.551-0800	INFO	operator.dns	aws/dns.go:271	skipping DNS record update	{"record": {"Zone":{"tags":{"Name":"danehans-9nggd-int","kubernetes.io/cluster/danehans-9nggd":"owned"}},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:13.551-0800	INFO	operator.controller	controller/controller_dns.go:26	ensured DNS record for clusteringress	{"namespace": "openshift-ingress-operator", "name": "test0", "record": {"Zone":{"tags":{"Name":"danehans-9nggd-int","kubernetes.io/cluster/danehans-9nggd":"owned"}},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:13.551-0800	INFO	operator.dns	aws/dns.go:271	skipping DNS record update	{"record": {"Zone":{"id":"Z3URY6TWQ91KVV"},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:13.551-0800	INFO	operator.controller	controller/controller_dns.go:26	ensured DNS record for clusteringress	{"namespace": "openshift-ingress-operator", "name": "test0", "record": {"Zone":{"id":"Z3URY6TWQ91KVV"},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:14.326-0800	INFO	operator.controller	controller/controller_ca_configmap.go:117	created configmap	{"namespace": "openshift-config-managed", "name": "router-ca"}
2019-02-25T11:58:14.541-0800	DEBUG	operator.init.kubebuilder.controller	controller/controller.go:236	Successfully Reconciled	{"controller": "operator-controller", "request": "openshift-ingress-operator/test0"}
2019-02-25T11:58:14.541-0800	INFO	operator.controller	controller/controller.go:92	reconciling	{"request": "openshift-ingress-operator/test0"}
2019-02-25T11:58:15.063-0800	INFO	operator.dns	aws/dns.go:271	skipping DNS record update	{"record": {"Zone":{"tags":{"Name":"danehans-9nggd-int","kubernetes.io/cluster/danehans-9nggd":"owned"}},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:15.063-0800	INFO	operator.controller	controller/controller_dns.go:26	ensured DNS record for clusteringress	{"namespace": "openshift-ingress-operator", "name": "test0", "record": {"Zone":{"tags":{"Name":"danehans-9nggd-int","kubernetes.io/cluster/danehans-9nggd":"owned"}},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:15.063-0800	INFO	operator.dns	aws/dns.go:271	skipping DNS record update	{"record": {"Zone":{"id":"Z3URY6TWQ91KVV"},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:15.063-0800	INFO	operator.controller	controller/controller_dns.go:26	ensured DNS record for clusteringress	{"namespace": "openshift-ingress-operator", "name": "test0", "record": {"Zone":{"id":"Z3URY6TWQ91KVV"},"Type":"ALIAS","Alias":{"Domain":"*.test0.danehans.devcluster.openshift.com","Target":"ae09b648b393611e9ae0b0a6b3bd6b3e-94099517.us-west-2.elb.amazonaws.com"}}}
2019-02-25T11:58:15.747-0800	DEBUG	operator.init.kubebuilder.controller	controller/controller.go:236	Successfully Reconciled	{"controller": "operator-controller", "request": "openshift-ingress-operator/test0"}

# Operator logs produced by deleting the test0 clusteringress:
2019-02-25T12:00:04.789-0800	INFO	operator.controller	controller/controller.go:92	reconciling	{"request": "openshift-ingress-operator/test0"}
2019-02-25T12:00:04.827-0800	INFO	operator.controller	controller/controller.go:102	clusteringress not found; reconciliation will be skipped	{"request": "openshift-ingress-operator/test0"}
2019-02-25T12:00:05.025-0800	INFO	operator.controller	controller/controller_ca_configmap.go:146	deleted configmap	{"namespace": "openshift-config-managed", "name": "router-ca"}
2019-02-25T12:00:05.239-0800	DEBUG	operator.init.kubebuilder.controller	controller/controller.go:236	Successfully Reconciled	{"controller": "operator-controller", "request": "openshift-ingress-operator/test0"}

Comment 1 Dan Mace 2019-02-25 21:15:26 UTC
I suspect the root cause is that we only enforce finalizers on the default clusteringress (which the operator creates), not on user-defined clusteringresses.

Comment 2 Daneyon Hansen 2019-02-27 05:08:01 UTC
I will be submitting a PR that fixes this bug; I am now able to CRUD multiple clusteringresses. However, I have uncovered another bug that I will file a separate bugzilla for and link to this one.

Comment 3 Daneyon Hansen 2019-02-27 05:24:29 UTC
Here is a link to the related bug: https://bugzilla.redhat.com/show_bug.cgi?id=1683515

Comment 6 Weibin Liang 2019-03-12 19:43:50 UTC
https://github.com/openshift/cluster-ingress-operator/commit/8246fb3e170ae6796ab9bcd852dc22cf9609b9a8 changed the API from ClusterIngress to IngressController (operator.openshift.io/v1). Here is an updated example manifest:

$ cat test0-ing.yaml 
kind: IngressController
apiVersion: operator.openshift.io/v1
metadata:
  name: test0
  namespace: openshift-ingress-operator
spec:
  domain: test0.<your_ingress_domain>

Tested in v4.0.0-0.177.0 and the test passed:

[root@dhcp-41-193 openshift-4.0]# oc delete -f test0-ing.yaml
ingresscontroller.operator.openshift.io "test0" deleted
[root@dhcp-41-193 openshift-4.0]# oc get svc/router-test0 -n openshift-ingress
Error from server (NotFound): services "router-test0" not found
[root@dhcp-41-193 openshift-4.0]# oc get deploy/router-test0 -n openshift-ingress
Error from server (NotFound): deployments.extensions "router-test0" not found
[root@dhcp-41-193 openshift-4.0]# oc get IngressController/test0 -n openshift-ingress-operator
Error from server (NotFound): ingresscontrollers.operator.openshift.io "test0" not found

Comment 8 errata-xmlrpc 2019-06-04 10:44:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

