Description of problem:
Installing the Service Catalog via a Subscription failed. The controller-manager pod reported the following error:

F1130 05:14:51.103281 1 controller_manager.go:232] error running controllers: failed to get api versions from server: failed to get supported resources from server: unable to retrieve the complete list of server APIs: servicecatalog.k8s.io/v1beta1: the server is currently unable to handle the request

Version-Release number of selected component (if applicable):
[core@jian-master-0 ~]$ oc exec apiserver-685f9b5fdd-bmr6m -- service-catalog --version
v4.0.0-0.74.0;Upstream:v0.1.31
[core@jian-master-0 ~]$ oc exec olm-operator-796dc97869-ddgq5 -- olm -version
OLM version: 0.8.0
git commit: bb46d55

How reproducible:
Always

Steps to Reproduce:
1. Create an OCP 4.0 cluster via openshift-installer.
2. Create a project named "kube-service-catalog":
   # oc adm new-project kube-service-catalog
3. Install the Service Catalog by creating a Subscription like the one below:
[core@jian-master-0 ~]$ cat svcat.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  namespace: kube-service-catalog
  generateName: svcat-
spec:
  source: rh-operators
  name: svcat
  startingCSV: svcat.v0.1.34
  channel: alpha

Actual results:
[core@jian-master-0 ~]$ oc get csv
NAME            DISPLAY           VERSION   REPLACES   PHASE
svcat.v0.1.34   Service Catalog   0.1.34               Failed
[core@jian-master-0 ~]$ oc get pods
NAME                                  READY   STATUS             RESTARTS   AGE
apiserver-685f9b5fdd-bmr6m            2/2     Running            0          11m
controller-manager-794679fbc4-hmqkb   0/1     CrashLoopBackOff   7          11m

Expected results:
The Service Catalog pods should run without errors.

Additional info:
The APIService for the Service Catalog does exist (see below). I'm not sure why the above errors occurred; if you think this is an issue with OLM, please feel free to change the component.
[core@jian-master-0 ~]$ oc get apiservice v1beta1.servicecatalog.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: 2018-11-29T14:49:35Z
  name: v1beta1.servicecatalog.k8s.io
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: ClusterServiceVersion
    name: svcat.v0.1.34
    uid: fb183843-f3e5-11e8-bc41-ae3985d81a66
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: ClusterServiceVersion
    name: svcat.v0.1.34
    uid: b6ec1ff7-f45e-11e8-bc41-ae3985d81a66
  resourceVersion: "2389617"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.servicecatalog.k8s.io
  uid: fd2ce729-f3e5-11e8-bc41-ae3985d81a66
spec:
  caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJhVENDQVE2Z0F3SUJBZ0lJU2U5WFV6QzMzd2d3Q2dZSUtvWkl6ajBFQXdJd0dERVdNQlFHQTFVRUNoTU4KVW1Wa0lFaGhkQ3dnU1c1akxqQWVGdzB4T0RFeE16QXdOVEV6TlRCYUZ3MHlNREV4TWprd05URXpOVEJhTUJneApGakFVQmdOVkJBb1REVkpsWkNCSVlYUXNJRWx1WXk0d1dUQVRCZ2NxaGtqT1BRSUJCZ2dxaGtqT1BRTUJCd05DCkFBVHZaZGRGT2svaWRXWFFQRGpXaUtseDJMcmVtdXdRajdYNHF2VW5CZERFd3FrUVNGWTJVdnN5elVVdFhub0oKNy9iWGMrbVZoU1pFWGd1a2pjLy91Q1JBbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZdwpGQVlJS3dZQkJRVUhBd0lHQ0NzR0FRVUZCd01CTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3Q2dZSUtvWkl6ajBFCkF3SURTUUF3UmdJaEFOQVpMN0toQUxGeDY1bXZndHhXcGEzM2VVNUtER3ZMRjNFTHFaNXNmcHRVQWlFQWkydjYKYW16b0xrcFpHZ1FzZUV6ZFRSMzZ1Mm9PNndsL2hHYmRvVVNzandZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  group: servicecatalog.k8s.io
  groupPriorityMinimum: 2000
  service:
    name: v1beta1-servicecatalog-k8s-io
    namespace: kube-service-catalog
  version: v1beta1
  versionPriority: 15
I'm glad to see you attempting to install the Service Catalog, Jian. I was also looking at this yesterday, and it looks like there is a core problem with the master API server. It appears to be associated with https://github.com/openshift/installer/issues/755, https://github.com/openshift/installer/issues/751, and https://github.com/openshift/cluster-kube-apiserver-operator/issues/143. I'll leave this open for now and will update once we make some progress.
I believe this issue has been resolved. Last week, engineers on the Broker team and I performed multiple installs with multiple current builds and had general success with the Service Catalog.
Verified; the Service Catalog pods work well. Details:

[jzhang@dhcp-140-18 aws-cluster]$ oc version
oc v4.0.0-0.41.0
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://xxia2nd-api.devcluster.openshift.com:6443
kubernetes v1.11.0+231d012
[jzhang@dhcp-140-18 aws-cluster]$ oc exec olm-operator-85f7b8f886-zx8z4 -- olm -version
OLM version: 0.8.0
git commit: 8cdb2cc
[jzhang@dhcp-140-18 aws-cluster]$ oc get csv
NAME            DISPLAY           VERSION   REPLACES   PHASE
svcat.v0.1.34   Service Catalog   0.1.34               Succeeded
[jzhang@dhcp-140-18 aws-cluster]$ oc get pods
NAME                                  READY   STATUS    RESTARTS   AGE
apiserver-7fd856b8f5-c6plj            2/2     Running   0          6m
controller-manager-6cf8b9867d-f8jlt   1/1     Running   3          6m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758