Force deletion should work here, but it's correct that the namespace should not be deleted when this happens. Errors with controller reconciliation should block deletion of the resource _until the force delete is used as a clean-up operation_. So if a namespace is deleted and an instance in it can't be fully deleted for some reason, that should block deletion of the namespace until an administrator cleans it up. Jay has a WIP PR up for this here: https://github.com/kubernetes-incubator/service-catalog/pull/1708
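As a rough illustration of that expected flow (a sketch only; the names test-ns and ups-instance are borrowed from a typical UPS-broker test and are hypothetical here), a namespace that still contains an instance which can't be deprovisioned should hang in Terminating rather than disappear:

```shell
# Sketch, not a verified reproduction: names are illustrative.
# Deleting a namespace that still holds a ServiceInstance whose broker is
# unreachable should leave the namespace stuck in Terminating...
oc delete project test-ns
oc get project test-ns               # STATUS should show Terminating

# ...until an administrator cleans up the stuck instance.
oc get serviceinstance -n test-ns
```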
Per a quick test while I work on revendoring Service Catalog 0.1.5 into Origin master, this issue is fixed. I'll update the bug as such once the PR is completed.
Fixed by https://github.com/openshift/origin/pull/18480 (it does not require a force)
Test failed with OpenShift v3.9.0-0.47.0 + service-catalog v0.1.8.

Steps:

# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/svc-catalog/ups-broker-deploy.yaml -n kube-service-catalog
deployment "ups-broker" created
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/svc-catalog/ups-broker-svc.yaml -n kube-service-catalog
service "ups-broker" created
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/svc-catalog/ups-broker-3.7.yaml
clusterservicebroker "ups-broker" created
# oc delete deployment ups-broker -n kube-service-catalog
deployment "ups-broker" deleted
# oc new-project test-ns
Now using project "test-ns" on server "https://172.16.120.149:8443".
You can add applications to this project with the 'new-app' command. For example, try:
    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
to build a new example application in Ruby.
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/svc-catalog/ups-instance.yaml
serviceinstance "ups-instance" created
# oc get serviceinstance
NAME           AGE
ups-instance   10s
# oc delete serviceinstance ups-instance -n test-ns
serviceinstance "ups-instance" deleted
# oc get serviceinstance
NAME           AGE
ups-instance   26s
# oc delete --force=true serviceinstance ups-instance
serviceinstance "ups-instance" deleted
# oc get serviceinstance
NAME           AGE
ups-instance   16m
# oc delete --force=true --grace-period=0 serviceinstance ups-instance
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
serviceinstance "ups-instance" deleted
# oc get serviceinstance
NAME           AGE
ups-instance   16m
# oc describe serviceinstance ups-instance -n test-ns
Name:         ups-instance
Namespace:    test-ns
Labels:       <none>
Annotations:  <none>
API Version:  servicecatalog.k8s.io/v1beta1
Kind:         ServiceInstance
Metadata:
  Creation Timestamp:             2018-02-23T03:01:09Z
  Deletion Grace Period Seconds:  0
  Deletion Timestamp:             2018-02-23T03:01:32Z
  Finalizers:
    kubernetes-incubator/service-catalog
  Generation:        2
  Resource Version:  146641
  Self Link:         /apis/servicecatalog.k8s.io/v1beta1/namespaces/test-ns/serviceinstances/ups-instance
  UID:               cc6ccaed-1845-11e8-b0a2-0a580a800006
Spec:
  Cluster Service Class External Name:  user-provided-service
  Cluster Service Class Ref:
    Name:  4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468
  Cluster Service Plan External Name:  default
  Cluster Service Plan Ref:
    Name:           86064792-7ea2-467b-af93-ac9694d96d52
  External ID:      b8a7375e-962f-4a66-aef9-3211bd064b18
  Update Requests:  0
  User Info:
    Groups:
      system:cluster-admins
      system:masters
      system:authenticated
    UID:
    Username:  system:admin
Status:
  Async Op In Progress:  false
  Conditions:
    Last Transition Time:  2018-02-23T03:01:36Z
    Message:               Error deprovisioning, ClusterServiceClass (K8S: "4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468" ExternalName: "user-provided-service") at ClusterServiceBroker "ups-broker": Delete http://ups-broker.kube-service-catalog.svc.cluster.local/v2/service_instances/b8a7375e-962f-4a66-aef9-3211bd064b18?accepts_incomplete=true&plan_id=86064792-7ea2-467b-af93-ac9694d96d52&service_id=4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468: dial tcp 172.30.197.23:80: getsockopt: no route to host
    Reason:                DeprovisionCallFailed
    Status:                Unknown
    Type:                  Ready
  Current Operation:              Deprovision
  Deprovision Status:             Required
  Operation Start Time:           2018-02-23T03:01:33Z
  Orphan Mitigation In Progress:  false
  Reconciled Generation:          0
Events:
  Type     Reason                 Age                From                                Message
  ----     ------                 ----               ----                                -------
  Warning  ErrorCallingProvision  14m (x6 over 14m)  service-catalog-controller-manager  The provision call failed and will be retried: Error communicating with broker for provisioning: Put http://ups-broker.kube-service-catalog.svc.cluster.local/v2/service_instances/b8a7375e-962f-4a66-aef9-3211bd064b18?accepts_incomplete=true: dial tcp 172.30.197.23:80: getsockopt: no route to host
  Warning  DeprovisionCallFailed  4m (x25 over 14m)  service-catalog-controller-manager  Error deprovisioning, ClusterServiceClass (K8S: "4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468" ExternalName: "user-provided-service") at ClusterServiceBroker "ups-broker": Delete http://ups-broker.kube-service-catalog.svc.cluster.local/v2/service_instances/b8a7375e-962f-4a66-aef9-3211bd064b18?accepts_incomplete=true&plan_id=86064792-7ea2-467b-af93-ac9694d96d52&service_id=4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468: dial tcp 172.30.197.23:80: getsockopt: no route to host
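For what it's worth, the describe output above already shows why the instance survives '--force': the object carries a Deletion Timestamp plus the kubernetes-incubator/service-catalog finalizer, so the API server keeps it until that finalizer is cleared. A quick way to see just those two fields (a sketch using the same instance and namespace names as above):

```shell
# Sketch: print only the fields that keep the instance pinned after deletion.
# A set deletionTimestamp with a non-empty finalizers list means the object
# is waiting on the service-catalog controller before it can be removed.
oc get serviceinstance ups-instance -n test-ns \
  -o jsonpath='{.metadata.deletionTimestamp}{"  "}{.metadata.finalizers}{"\n"}'
```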
This is the expected behavior when the deprovision operation can't be carried out at the broker. You can remove the finalizer to unblock full deletion of the service instance, but you run the risk that resources associated with the instance aren't fully deleted. For the record, '--force' doesn't have any effect on service catalog resources. My earlier comment that it should work was a statement that it ideally should work, not that we expect it to work now. Since I wrote that comment, I've discovered that '--force' is basically a client-side notion only, and we won't be able to add custom behavior to make '--force' work for service catalog resources. I believe the specific behavior this bug was created for is fixed.
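For anyone who hits this and accepts the orphaning risk described above, removing the finalizer by hand can be sketched with a JSON merge patch (names taken from the reproduction in Comment 4; this deliberately skips deprovisioning at the broker):

```shell
# WARNING: clearing the finalizer bypasses broker-side deprovisioning, so
# any resources the broker provisioned for this instance may be orphaned.
# Setting finalizers to null in a merge patch removes the field entirely.
oc patch serviceinstance ups-instance -n test-ns --type merge \
  -p '{"metadata":{"finalizers":null}}'
```

Once the finalizer list is empty, the API server completes the pending deletion and the instance disappears from 'oc get serviceinstance'.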
Paul also made some important concepts clear here; refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1541350#c5
@Zhang, do you believe this is still a bug?
Changing status to ON_QA since this is ready for test.
Based on Comment 4 and Comment 5, this is OK for QE; changing status to "VERIFIED". Furthermore, we will improve the related documentation to clarify this for customers in bug https://bugzilla.redhat.com/show_bug.cgi?id=1548618. @Paul, thanks.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3748