Description of problem:
When a project is deleted, its bindings and instances still exist. If another user then creates a project with the same name, that user can see the old instance.

Version-Release number of selected component (if applicable):
openshift v3.6.172.0.0
kubernetes v1.6.1+5115d708d7
etcd 3.2.1

How reproducible:
Always

Steps to Reproduce:
1. Create a project, then create a binding and an instance in it:

[root@host-8-241-22 dma]# oadm new-project ups
Created project ups
[root@host-8-241-22 dma]# oc create -f bindings.yaml -n ups
binding "ups-binding" created
[root@host-8-241-22 dma]# oc create -f instance.yaml -n ups
instance "ups-instance" created
[root@host-8-241-22 dma]# oc get instance -n ups
NAME           KIND
ups-instance   Instance.v1alpha1.servicecatalog.k8s.io
[root@host-8-241-22 dma]# oc get bindings.servicecatalog.k8s.io -n ups
NAME          KIND
ups-binding   Binding.v1alpha1.servicecatalog.k8s.io

2. Delete the project, then check the binding and instance:

[root@host-8-241-22 dma]# oc delete project ups
project "ups" deleted
[root@host-8-241-22 dma]# oc get project ups
Error from server (NotFound): namespaces "ups" not found
[root@host-8-241-22 dma]# oc get instance -n ups
NAME           KIND
ups-instance   Instance.v1alpha1.servicecatalog.k8s.io
[root@host-8-241-22 dma]# oc get bindings.servicecatalog.k8s.io -n ups
NAME          KIND
ups-binding   Binding.v1alpha1.servicecatalog.k8s.io

3. Log in as a different user, create a project with the same name, and get the instance and binding.

Actual results:
2. The binding and instance still exist after the project is deleted.
3. A different user can get the instance and binding.

Expected results:
2. The binding and instance should be removed when the project is deleted.
3. A different user should not be able to get the instance and binding.

Additional info:
The environment was installed by openshift-ansible.
Paul - please triage further. We possibly want to document this as a known issue for 3.6; it definitely must be fixed for 3.7.
Hi guys, I'm experiencing the same issue. Is there currently a workaround to remove the remaining bindings and instances? Using oc delete does not work at all:

[root@ocp-master01 archi]# oc get bindings.servicecatalog.k8s.io
NAME                            KIND
mongodb-ephemeral-4dzlt-d5nwm   Binding.v1alpha1.servicecatalog.k8s.io
[root@ocp-master01 archi]# oc delete bindings.servicecatalog.k8s.io mongodb-ephemeral-4dzlt-d5nwm
binding "mongodb-ephemeral-4dzlt-d5nwm" deleted
[root@ocp-master01 archi]# oc get bindings.servicecatalog.k8s.io
NAME                            KIND
mongodb-ephemeral-4dzlt-d5nwm   Binding.v1alpha1.servicecatalog.k8s.io

Thank you in advance.
Maxime- When you delete these resources, they have to be finalized before they are fully deleted. This requires communication with the service broker that offers the service, so you will rarely see these resources be fully deleted immediately. You can check the status of the ServiceBinding to get information about what's happening to the object.
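One way to see why a delete appears to hang is to inspect the binding's metadata: a pending deletion carries a deletionTimestamp alongside a not-yet-removed finalizer. The sketch below uses a fabricated sample manifest (not captured from a real cluster), and the finalizer name shown is an assumption; on a live cluster you would instead run `oc get bindings.servicecatalog.k8s.io <name> -o yaml` and look for the same two fields.

```shell
# Sketch: detect a deletion that is blocked on a finalizer.
# The heredoc is a hypothetical manifest; on a real cluster replace it with:
#   oc get bindings.servicecatalog.k8s.io mongodb-ephemeral-4dzlt-d5nwm -o yaml
cat > /tmp/binding.yaml <<'EOF'
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  name: mongodb-ephemeral-4dzlt-d5nwm
  deletionTimestamp: "2017-10-25T09:00:00Z"   # set: a delete was requested
  finalizers:
  - kubernetes-incubator/service-catalog      # assumed finalizer name
EOF

# Both fields present means the API server accepted the delete but is
# waiting for the service catalog controller to finalize the object.
if grep -q 'deletionTimestamp' /tmp/binding.yaml \
   && grep -q 'finalizers' /tmp/binding.yaml; then
  msg="deletion pending: waiting on a finalizer to be removed"
  echo "$msg"
fi
```

If the finalizer never clears, the broker call that finalization depends on is failing, which is exactly the situation described in the comments below.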
This seems to work properly with the current latest build, but I will retest after https://github.com/openshift/origin/pull/16908 is merged.

Several notes: if there is an error with the binding, the error condition may block the deletion of the instance and binding. I have at least two example error conditions:

1) Create an instance that references a serviceclass from TSB such as mysql-persistent (update service-catalog/go/src/github.com/kubernetes-incubator/service-catalog/contrib/examples/walkthrough/ups-instance.yaml and then oc create -f ups-instance.yaml). Then create a binding for this instance. If you look at the controller logs, you will see errors around the serviceclass, and the instance and binding are in an error condition, Ready=False.

2) I installed the Ansible Service Broker and then created a hastebin application. Once this was deployed I created a binding for it. In the controller logs, there are errors/warnings like this:

type: 'Warning' reason: 'ErrorNonbindableServiceClass' References a non-bindable ClusterServiceClass (K8S: "ab24ffd54da0aefdea5277e0edce8425" ExternalName: "dh-hastebin-apb") and Plan ("default") combination

In both cases, because the binding is in an error state, you can't delete it, and it blocks cleaning up the associated instance. I'll create an upstream service catalog bug for this and come back with a link.
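A quick way to spot the stuck state described in both cases above is to check the Ready condition on the instance or binding status. The canned JSON below is an illustrative assumption, not output captured from the reporter's cluster; on a live cluster the equivalent query would be something like `oc get instance ups-instance -n ups -o yaml` and reading status.conditions.

```shell
# Sketch: flag an instance/binding whose Ready condition is False.
# "$status" stands in for the conditions JSON a live query would return;
# this is hypothetical sample data matching the error in case 2 above.
status='{"conditions":[{"type":"Ready","status":"False","reason":"ErrorNonbindableServiceClass"}]}'

case "$status" in
  *'"type":"Ready","status":"False"'*)
    verdict="stuck: Ready=False, deletion of the binding/instance will be blocked" ;;
  *)
    verdict="healthy" ;;
esac
echo "$verdict"
```

A Ready=False condition with a reason such as ErrorNonbindableServiceClass is the signature of the blocked-cleanup scenario tracked in the upstream issue below.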
Upstream issue for the items in comment 5: https://github.com/kubernetes-incubator/service-catalog/issues/1423
This all looks to work properly. If using an Ansible-installed OpenShift, please ensure it includes https://github.com/openshift/openshift-ansible/pull/5746, which was merged today.
I'm not sure I understand this comment: > I found there will always leave a new namespace after delete a target namespace. Can you clarify what you meant here? The pod in question appears to be a pod that the ansible broker left running. The ansible broker runs pods in transient namespaces and these are unrelated to the namespace that the service catalog resources (ServiceInstances, ServiceBindings) were created in.
Yeah... but the transient namespaces persist indefinitely. As shown below, the transient namespace "a5fc66c0-e4da-4b9b-a23c-ff7609068e7e" has been alive for 17 hours; is that normal?

[root@preserve-jiazha-1024master-etcd-1 ~]# oc get ns | grep a5fc66c0-e4da-4b9b-a23c-ff7609068e7e
a5fc66c0-e4da-4b9b-a23c-ff7609068e7e   Active    17h

In my opinion, the transient namespace should be deleted immediately after the user-created namespace is deleted, right?
Shawn, Please see if you can reproduce and gain insight into what might be wrong.
1. The user deletes the project/namespace.
2. The service catalog sends us a deprovision request.
3. We attempt to create a service account that has access to the deleted namespace.
4. We create the transient namespace and run the pod, which cannot access the target namespace because it does not exist.
5. The pod fails, and we keep the transient namespace around due to configuration.

On a deprovision request we can check whether the target namespace is gone and, if so, report the deprovision as successful.
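The proposed check can be sketched as a short-circuit: if the target namespace no longer exists, report the deprovision as successful instead of launching an APB pod at all. This is illustrative shell pseudo-logic under stated assumptions, not the actual fix (which landed in Go in the ansible-service-broker PRs below); ns_exists is a hypothetical stub standing in for an API lookup.

```shell
# Hypothetical sketch of the deprovision short-circuit. ns_exists is a stub;
# on a real cluster it would be something like:
#   ns_exists() { oc get namespace "$1" >/dev/null 2>&1; }
ns_exists() { false; }   # simulate: the user already deleted the project

deprovision() {
  target_ns="$1"
  if ! ns_exists "$target_ns"; then
    # Nothing left to clean up in the cluster: succeed without running a pod,
    # so no transient namespace is created and left behind.
    result="target namespace $target_ns is gone: mark deprovision successful, skip the APB pod"
  else
    result="run the deprovision APB pod against $target_ns"
  fi
  echo "$result"
}

deprovision ups
```

This avoids step 4 entirely in the deleted-namespace case, which is what leaves the transient namespaces behind today.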
https://github.com/openshift/ansible-service-broker/pull/520
https://github.com/openshift/ansible-service-broker/pull/529
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188