Fixed by https://github.com/openshift/cluster-svcat-apiserver-operator/pull/56

This applies only to the Service Catalog API Server operator; the Service Catalog Controller Manager is independent and continues to work as before.

When a cluster admin changes the ServiceCatalogAPIServer resource to `managementState: Removed`, the operator logs a new condition and does *not* delete the namespace:

  message: Unable to automatically remove Service Catalog once installed. Request to remove will be ignored. See documentation for more details.
  reason: Unable to automatically remove Service Catalog once installed.
  status: "True"
  type: RemovalRequestIgnored

The admin can then either change the managementState back to Managed (which clears the RemovalRequestIgnored condition), or consult the documentation, which explains that if they proceed, garbage collection will delete all secrets that were created by service catalog bindings. Alternatively, we may provide a script or oc commands they can run that would remove the ownerRef from these secrets.

If the cluster admin wants to proceed with removing Service Catalog, they should execute `oc delete namespace openshift-service-catalog`. This triggers finalizer code in the operator and results in the Service Catalog API server being completely removed. The last step is for the cluster admin to run `oc delete APIService v1beta1.servicecatalog.k8s.io`. At that point the operator reflects that the Service Catalog API server has been removed, and the admin can re-install if desired.

This will be better addressed in 4.1.z.
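The script mentioned above was not provided in this comment; a minimal sketch of what such ownerRef cleanup could look like (hypothetical, not the official tooling) is:

```shell
# Hypothetical sketch (not an official script): strip ServiceBinding
# ownerReferences from secrets so garbage collection does not delete them
# after Service Catalog is removed. Requires jq; the oc loop additionally
# needs a live cluster and cluster-admin rights.
owned_by_binding='.items[] | select(.metadata.ownerReferences[]?.kind == "ServiceBinding") | .metadata.name'

if command -v oc >/dev/null 2>&1; then
  for ns in $(oc get namespaces -o jsonpath='{.items[*].metadata.name}'); do
    for secret in $(oc get secrets -n "$ns" -o json | jq -r "$owned_by_binding"); do
      # Drop the ownerReferences list so GC leaves the secret alone.
      oc patch secret "$secret" -n "$ns" --type=json \
        -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'
    done
  done
fi
```

Removing the whole ownerReferences list is the simplest option here; a finer-grained variant could remove only the ServiceBinding entry.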
The fix PR was not merged in payload 4.1.0-0.nightly-2019-05-16-223922, but it is in 4.1.0-0.nightly-2019-05-17-041605.

1. Enable Service Catalog by changing `managementState: Removed` to `managementState: Managed`.

  mac:~ jianzhang$ oc get pods -n openshift-service-catalog-apiserver
  NAME              READY   STATUS    RESTARTS   AGE
  apiserver-45tbw   1/1     Running   0          36m
  apiserver-cn5sm   1/1     Running   0          36m
  apiserver-wxhnh   1/1     Running   0          36m

  mac:~ jianzhang$ oc get pods -n openshift-service-catalog-controller-manager
  NAME                       READY   STATUS    RESTARTS   AGE
  controller-manager-g9srl   1/1     Running   0          36m
  controller-manager-hkkgc   1/1     Running   0          36m
  controller-manager-l4s2s   1/1     Running   0          36m

2. Deploy a sample broker.

  mac:~ jianzhang$ oc get clusterservicebroker
  NAME         URL                                                        STATUS   AGE
  ups-broker   http://ups-broker.kube-service-catalog.svc.cluster.local   Ready    1m

  mac:~ jianzhang$ oc get clusterserviceclass
  NAME                                   EXTERNAL-NAME                        BROKER       AGE
  4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468   user-provided-service                ups-broker   1m
  5f6e6cf6-ffdd-425f-a2c7-3c9258ad2468   user-provided-service-single-plan    ups-broker   1m
  8a6229d4-239e-4790-ba1f-8367004d0473   user-provided-service-with-schemas   ups-broker   1m

  mac:~ jianzhang$ oc get pods -n kube-service-catalog
  NAME                          READY   STATUS    RESTARTS   AGE
  ups-broker-5f8568bc95-fzp9q   1/1     Running   0          4m16s

3. Disable Service Catalog by changing `managementState: Managed` to `managementState: Removed`. The API server ignores the `Removed` state, and the controller-manager worked as before. The `openshift-service-catalog-controller-manager` project was deleted as expected.
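Step 3 above can be spot-checked from the command line. A sketch, assuming jq is available (the `oc` call itself needs a live cluster):

```shell
# Confirm the operator reports the RemovalRequestIgnored condition after
# managementState is set to Removed. The jq filter extracts that
# condition's status ("True" means the removal request was ignored).
ignored_filter='.status.conditions[] | select(.type == "RemovalRequestIgnored") | .status'

if command -v oc >/dev/null 2>&1; then
  oc get servicecatalogapiserver cluster -o json | jq -r "$ignored_filter"
fi
```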
mac:~ jianzhang$ oc get servicecatalogapiserver cluster -o yaml
apiVersion: operator.openshift.io/v1
kind: ServiceCatalogAPIServer
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2019-05-17T08:51:17Z"
  generation: 3
  name: cluster
  resourceVersion: "29211"
  selfLink: /apis/operator.openshift.io/v1/servicecatalogapiservers/cluster
  uid: ef5a050b-7880-11e9-aad8-02aeb8a97e12
spec:
  logLevel: Normal
  managementState: Removed
status:
  conditions:
  - lastTransitionTime: "2019-05-17T09:19:13Z"
    status: "True"
    type: Available
  - lastTransitionTime: "2019-05-17T09:18:02Z"
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-05-17T08:57:55Z"
    reason: Removed
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-05-17T09:17:53Z"
    reason: NoUnsupportedConfigOverrides
    status: "True"
    type: UnsupportedConfigOverridesUpgradeable
  - lastTransitionTime: "2019-05-17T09:17:53Z"
    status: "False"
    type: ResourceSyncControllerDegraded
  - lastTransitionTime: "2019-05-17T09:17:59Z"
    status: "False"
    type: WorkloadDegraded
  - lastTransitionTime: "2019-05-17T09:57:53Z"
    message: Unable to automatically remove Service Catalog once installed. Request
      to remove will be ignored. See documentation for more details.
    reason: Unable to automatically remove Service Catalog once installed.
    status: "True"
    type: RemovalRequestIgnored
  generations:
  - group: apps
    hash: ""
    lastGeneration: 2
    name: apiserver
    namespace: openshift-service-catalog-apiserver
    resource: daemonsets
  observedGeneration: 2
  readyReplicas: 0

mac:~ jianzhang$ oc get servicecatalogcontrollermanager cluster -o yaml
apiVersion: operator.openshift.io/v1
kind: ServiceCatalogControllerManager
metadata:
  creationTimestamp: "2019-05-17T08:57:55Z"
  generation: 3
  name: cluster
  resourceVersion: "29341"
  selfLink: /apis/operator.openshift.io/v1/servicecatalogcontrollermanagers/cluster
  uid: dc6d898d-7881-11e9-952c-022716466838
spec:
  logLevel: Normal
  managementState: Removed
status:
  conditions:
  - lastTransitionTime: "2019-05-17T09:19:20Z"
    reason: Removed
    status: "True"
    type: Available
  - lastTransitionTime: "2019-05-17T09:18:25Z"
    reason: Removed
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-05-17T08:57:55Z"
    reason: Removed
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-05-17T09:18:22Z"
    status: "False"
    type: WorkloadDegraded
  generations:
  - group: apps
    hash: ""
    lastGeneration: 2
    name: controller-manager
    namespace: openshift-service-catalog-controller-manager
    resource: daemonsets
  observedGeneration: 2
  readyReplicas: 0
  version: 4.1.0-0.nightly-2019-05-17-041605

LGTM for the current fix. Verify it.
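The condition lists in the dumps above can be summarized into a compact table. A sketch, assuming jq is installed (the `oc` call needs a live cluster):

```shell
# Summarize operator conditions as type/status/reason rows for a quicker
# read than the full YAML; "-" marks conditions that carry no reason.
conditions_table='.status.conditions[] | [.type, .status, (.reason // "-")] | @tsv'

if command -v oc >/dev/null 2>&1; then
  oc get servicecatalogcontrollermanager cluster -o json | jq -r "$conditions_table"
fi
```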
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758