Seen in 3.10 (openshift v3.10.0-alpha.0+db7939c-1104) with `oc cluster up`, but I am fairly certain it will be an issue in OCP as well.

Every 15 seconds in the controller logs:

I0516 14:16:29.464865 1 controller.go:294] error getting the cluster info configmap: "configmaps \"cluster-info\" is forbidden: User \"system:serviceaccount:kube-service-catalog:service-catalog-controller\" cannot get configmaps in the namespace \"default\": User \"system:serviceaccount:kube-service-catalog:service-catalog-controller\" cannot get configmaps in project \"default\""

This happens because the namespace is wrong on the role and rolebinding for cluster-info-configmap and cluster-info-configmap-binding: it is currently set to catalog, but the namespace must be "default". I changed it manually, but now I am seeing:

W0516 14:19:14.512255 1 controller.go:277] due to error "configmaps is forbidden: User \"system:serviceaccount:kube-service-catalog:service-catalog-controller\" cannot create configmaps in the namespace \"default\": User \"system:serviceaccount:kube-service-catalog:service-catalog-controller\" cannot create configmaps in project \"default\"", could not set clusterid configmap to &v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cluster-info", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Data:map[string]string{"id":"0188e887-5914-11e8-a5f1-0242ac110006"}, BinaryData:map[string][]uint8(nil)}
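For reference, the "get" and later "create" failures above mean the controller's service account needs a Role and RoleBinding in the namespace where the cluster-info configmap actually lives. A minimal sketch of what that RBAC pair could look like (object names are taken from this report; the exact verb list and the target namespace are assumptions based on the log messages, not the shipped manifests):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-info-configmap
  namespace: kube-service-catalog   # must match the namespace the controller uses
rules:
# "get"/"update" can be scoped to the one configmap by name
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["cluster-info"]
  verbs: ["get", "update"]
# "create" cannot be restricted by resourceNames, so it needs its own rule
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-info-configmap-binding
  namespace: kube-service-catalog
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-info-configmap
subjects:
- kind: ServiceAccount
  name: service-catalog-controller
  namespace: kube-service-catalog
```

The key point is that a Role only grants access within its own namespace, so placing these objects in any other namespace than the one the controller reads and writes leaves the service account forbidden.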
This was also an issue upstream, where it was fixed by https://github.com/kubernetes-incubator/service-catalog/pull/2042. We need to fix this in OpenShift by passing the kube-service-catalog namespace to the controller via its --cluster-id-configmap-namespace argument.
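A sketch of what that looks like in the controller-manager pod spec (the flag name comes from the upstream fix; the surrounding container spec fields are illustrative assumptions, not the actual shipped manifest):

```yaml
spec:
  containers:
  - name: controller-manager
    args:
    # Point the clusterid configmap at the namespace where the
    # RBAC role/rolebinding grant the controller access, instead
    # of defaulting to "default".
    - --cluster-id-configmap-namespace=kube-service-catalog
```

With this in place the configmap, the Role, and the RoleBinding all live in kube-service-catalog, so no grants in the "default" namespace are needed.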
Fixed for cluster up in origin: https://github.com/openshift/origin/pull/19757
Fixed in OCP: https://github.com/openshift/openshift-ansible/pull/8441
For the OCP side, I installed with the openshift-ansible master branch and it works well. Details:

[root@qe-jiazha-test310master-etcd-1 ~]# oc get role cluster-info-configmap -o yaml | grep namespace
  namespace: kube-service-catalog
  selfLink: /apis/authorization.openshift.io/v1/namespaces/kube-service-catalog/roles/cluster-info-configmap

[root@qe-jiazha-test310master-etcd-1 ~]# oc get daemonset controller-manager -o yaml | grep namespace
...
        - --cluster-id-configmap-namespace=kube-service-catalog
              fieldPath: metadata.namespace

[root@qe-jiazha-test310master-etcd-1 ~]# oc version
oc v3.10.25
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://qe-jiazha-test310master-etcd-1:8443
openshift v3.10.25
kubernetes v1.10.0+b81c8f8

For the origin side, start the cluster with the command below:

# oc cluster up --enable='*,service-catalog,template-service-broker' --image='brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-${component}:${version}' --public-hostname=10.8.241.46 --base-dir=jian1

[root@preserved-cluster-up-ui-long-term-use ~]$ oc version
oc v3.10.25
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://10.8.241.46:8443
openshift v3.10.27
kubernetes v1.10.0+b81c8f8

[root@preserved-cluster-up-ui-long-term-use ~]$ oc get role cluster-info-configmap -o yaml | grep namespace
  namespace: kube-service-catalog
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/kube-service-catalog/roles/cluster-info-configmap

[root@preserved-cluster-up-ui-long-term-use ~]$ oc get deployment controller-manager -o yaml | grep "cluster-id-configmap-namespace=kube-service-catalog"
...
        - --cluster-id-configmap-namespace=kube-service-catalog

And I did not see the reported errors. LGTM, marking this verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2509