Bug 1583495
Summary: | bundlebindings and binding credentials cannot be deleted when the ASB loses its connection to the registry | | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Zihan Tang <zitang> |
Component: | Service Broker | Assignee: | David Zager <dzager> |
Status: | CLOSED ERRATA | QA Contact: | Zihan Tang <zitang> |
Severity: | medium | Docs Contact: | |
Priority: | medium | | |
Version: | 3.10.0 | CC: | aos-bugs, chezhang, dzager, jiazha |
Target Milestone: | --- | | |
Target Release: | 3.11.0 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | No Doc Update |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2018-10-11 07:20:02 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Zihan Tang
2018-05-29 06:46:19 UTC
This is similar to an issue already filed upstream: https://github.com/openshift/ansible-service-broker/issues/813

The broker's bootstrap procedure (at a high level):

1. Remove "all" APB specs from our backing store (etcd or CRDs)
2. Go get all APBs based on our configuration
3. Write the APB specs back to the backing store (etcd or CRDs)

There is a window here in which a request from the servicecatalog to provision|update|bind|unbind|deprovision would fail because the referenced APB (or bundle) would not exist.

Talked with shurley; the root cause of ansible-service-broker#813 is different. Reference https://github.com/openshift/ansible-service-broker/issues/970

The related issue https://github.com/openshift/ansible-service-broker/issues/970 is closed, and those changes should be in the latest broker release "openshift-enterprise-asb-container-v3.11.0-0.10.0.1".

Verified steps:

1. Provision mariadb-apb and create a binding.
2. Edit the broker-config registry tag to an invalid tag to simulate the ASB losing its connection to this registry, then restart the ASB pod.
3. Check the bundles and clusterserviceclass resources:

```
# oc get bundles -o=custom-columns=NAME:.metadata.name,FQ\ NAME:.spec.fq_name,Delete:.spec.delete
NAME                               FQ NAME               Delete
0300d1ae1841c23a9df0a179ad0605fd   brew-mariadb-apb      true
0e5dbb6592fec99057f94fbb095ec558   brew-mediawiki-apb    true
48749329dd289591e11ba737f15fc71b   brew-postgresql-apb   true
bd8dff760b959264f3ab38d42ba5e7a8   brew-mysql-apb        true

# oc get clusterserviceclass -o=custom-columns=NAME:.metadata.name,EXTERNAL\ NAME:.spec.externalName,REMOVED:.status.removedFromBrokerCatalog
NAME                               EXTERNAL NAME         REMOVED
0300d1ae1841c23a9df0a179ad0605fd   brew-mariadb-apb      false
0e5dbb6592fec99057f94fbb095ec558   brew-mediawiki-apb    false
48749329dd289591e11ba737f15fc71b   brew-postgresql-apb   false
bd8dff760b959264f3ab38d42ba5e7a8   brew-mysql-apb        false
```

4. Delete the servicebinding, then check the bundlebindings and the secret in the ASB namespace; they are all deleted.
In v3.11, bundles will not be directly deleted. I want to confirm that when the ASB loses connection to the registry, or finds that a bundle has been deleted from the registry, the ASB will:

1. On the first registry sync, mark the bundle as deleted.
2. On the second sync (when the bundle has already been marked as deleted), delete the bundle directly.

Are the above scenarios right, as designed?

The broker, by default, runs the bootstrap procedure on startup and simply adds all of the APBs it finds into its datastore. On subsequent calls to bootstrap:

1. If an APB, for which I have a bundle, is not found in the registry, we mark it for deletion.
2. If an APB, for which I have a bundle marked for deletion, is not found, we remove the bundle.

I believe this agrees with what you are suggesting; I simply want to be careful with "first time" and "second time". Hope this helps.

David, thanks for your clarification.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652