Description of problem:
I can pull the ose-service-catalog:v4.0 image correctly, but the version of the service-catalog binary inside it is not correct.

Version-Release number of selected component (if applicable):
OCP 4.0
brewregistry.stage.redhat.io/openshift/ose-service-catalog:v4.0

How reproducible:
Always

Steps to Reproduce:
1. Pull the 4.0 image of Service Catalog.
2. Check the version of the service-catalog binary.

Actual results:
[root@qe-jfan-3 ~]# docker run -ti --rm brewregistry.stage.redhat.io/openshift/ose-service-catalog:v4.0 service-catalog --version
v4.0.0-0.66.0;Upstream:v0.1.31

Expected results:
The 3.11 release ships "Upstream:v0.1.35", so the 4.0 image should carry at least that upstream version, if not newer.

Additional info:
[root@qe-jfan-3 ~]# docker run -ti --rm registry.redhat.io/openshift3/ose-service-catalog:v3.11 service-catalog --version
v3.11.43;Upstream:v0.1.35
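The Actual/Expected mismatch can be sketched as a plain string comparison. The version strings below are the sample outputs quoted in this report; using `sort -V` to order the upstream versions is an assumption for illustration, not part of the original reproduction steps:

```shell
# Compare the Upstream versions reported by the two images, using the
# sample `service-catalog --version` outputs quoted in this report.
ver_4_0="v4.0.0-0.66.0;Upstream:v0.1.31"
ver_3_11="v3.11.43;Upstream:v0.1.35"

# Strip everything up to and including "Upstream:v".
upstream_4_0="${ver_4_0##*Upstream:v}"
upstream_3_11="${ver_3_11##*Upstream:v}"

# sort -V orders version strings numerically; if the 4.0 upstream sorts
# first (and differs), the 4.0 image ships an older upstream than 3.11.
oldest="$(printf '%s\n' "$upstream_4_0" "$upstream_3_11" | sort -V | head -n1)"
if [ "$oldest" = "$upstream_4_0" ] && [ "$upstream_4_0" != "$upstream_3_11" ]; then
    echo "4.0 upstream ($upstream_4_0) is older than 3.11's ($upstream_3_11)"
fi
```

With the quoted sample strings this reports that the 4.0 upstream (0.1.31) is older than 3.11's (0.1.35), which is the bug being filed.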
Added the TestBlocker keyword since this blocks the functional testing of 4.0. Correct me if I'm wrong.
When I install service-catalog with the Next-gen installer 4.0 (0.7.0), the default service catalog image is quay.io/openshift/origin-service-catalog:v4.0.0, but its version is old: v3.11.0+5e975ea-2-dirty;Upstream:v0.1.31

After creating an ASB or TSB via the operator, the controller-manager pod goes into CrashLoopBackOff.

# oc logs -f controller-manager-7fc4d64bcc-fqtrc
I1218 09:33:18.622794 1 feature_gate.go:194] feature gates: map[OriginatingIdentity:true]
I1218 09:33:18.622903 1 feature_gate.go:194] feature gates: map[OriginatingIdentity:true AsyncBindingOperations:true]
I1218 09:33:18.622926 1 feature_gate.go:194] feature gates: map[OriginatingIdentity:true AsyncBindingOperations:true NamespacedServiceBroker:true]
I1218 09:33:18.622950 1 hyperkube.go:192] Service Catalog version v3.11.0+5e975ea-2-dirty;Upstream:v0.1.31 (built 2018-11-27T03:45:41Z)
I1218 09:33:18.719018 1 leaderelection.go:185] attempting to acquire leader lease kube-service-catalog/service-catalog-controller-manager...
I1218 09:33:18.924366 1 leaderelection.go:194] successfully acquired lease kube-service-catalog/service-catalog-controller-manager
I1218 09:33:18.924855 1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-service-catalog", Name:"service-catalog-controller-manager", UID:"f5ee8652-0274-11e9-bdd1-0639d6b2fa2a", APIVersion:"v1", ResourceVersion:"263444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-7fc4d64bcc-fqtrc-external-service-catalog-controller became leader
F1218 09:33:19.421052 1 controller_manager.go:232] error running controllers: failed to get api versions from server: failed to get supported resources from server: unable to retrieve the complete list of server APIs: servicecatalog.k8s.io/v1beta1: the server is currently unable to handle the request

I'm not sure whether this is caused by the old version. Can you help update the service catalog image to the latest version, upstream or downstream?
Otherwise it blocks the installation testing of the service broker.
As a workaround for comment 2: remove all resource limits from the service-catalog containers in the rh-operator config map; the controller-manager pod will then be more stable.
Merged v0.1.38 (the latest upstream version) into openshift/service-catalog and verified that the updated build is available on Quay and Docker Hub.
Jay, yes, the version of the Service Catalog is correct now. LGTM, verifying it. The image: quay.io/openshift/origin-service-catalog:v4.0.0

mac:20-payload jianzhang$ oc get pods -n kube-service-catalog
NAME                                  READY   STATUS    RESTARTS   AGE
apiserver-6b8d478d77-58vkr            2/2     Running   0          4m
controller-manager-78744c7ccc-w8knh   1/1     Running   7          14h
mac:20-payload jianzhang$ oc exec apiserver-6b8d478d77-58vkr -- service-catalog --version
Defaulting container name to apiserver.
Use 'oc describe pod/apiserver-6b8d478d77-58vkr -n kube-service-catalog' to see all of the containers in this pod.
v4.0.0-v0.1.38+7a95e74-2-dirty;Upstream:v0.1.38

@Zihan Please have a try with the latest version, and please file a new bug to track that issue if the pod still goes into CrashLoopBackOff.
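The verification step can be sketched as a string check on the reported version. The version string below is the sample output quoted in this comment; on a live cluster it would instead be captured from `oc exec <apiserver-pod> -n kube-service-catalog -- service-catalog --version`:

```shell
# Sample version string from the verification above; on a cluster this
# would be captured from the service-catalog binary itself.
ver="v4.0.0-v0.1.38+7a95e74-2-dirty;Upstream:v0.1.38"

# Extract the upstream part and confirm it matches the merged v0.1.38.
upstream="${ver##*Upstream:}"
if [ "$upstream" = "v0.1.38" ]; then
    result="upstream version OK: $upstream"
else
    result="unexpected upstream version: $upstream"
fi
echo "$result"
```

With the sample string this prints "upstream version OK: v0.1.38", matching the fix merged from upstream.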
Verified according to comment 5.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758