The kube-controller-manager log contains:

E0606 19:42:53.690619       1 leaderelection.go:310] error initially creating leader election record: configmaps is forbidden: User "system:kube-controller-manager" cannot create resource "configmaps" in API group "" in the namespace "kube-system"

We need to fix the RBAC rules to allow this so that the cluster can auto-recover when a user deletes the configmap.
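For anyone hitting this before a fix lands, here is a rough way to confirm the missing permission and grant it by hand. This is only a sketch: the Role/RoleBinding name "leader-election-workaround" and the exact verb list are assumptions for illustration, not the rules the operator actually manages.

# Check whether the controller-manager identity can create the configmap (via impersonation)
oc auth can-i create configmaps -n kube-system --as=system:kube-controller-manager

# See who currently has that permission, for comparison
oc adm policy who-can create configmaps -n kube-system

# Temporary manual grant; names and verbs are placeholders for this sketch
cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-election-workaround
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-election-workaround
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-election-workaround
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
EOF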
Trying to reproduce this now. I'm not entirely clear on the details (which configmap this relates to and where this log message comes from), and I can't see this problem anymore in either the KCM or KCM-O logs:

$ oc delete configmaps/kube-controller-manager -n kube-system   ## (is this the right configmap?)
$ oc logs pod/openshift-kube-scheduler-operator-66b8c9947b-8qd6r -n openshift-kube-scheduler-operator | grep "configmaps is forbidden"   ## no output
$ oc logs pod/openshift-kube-scheduler-ip-10-0-155-71.us-west-2.compute.internal -n openshift-kube-scheduler | grep "configmaps is forbidden"   ## no output

Given that, I'm going to put this ON_QA to verify; if it doesn't verify, please point out any details I missed.
Even though the error isn't there, the configmap doesn't get recreated... taking this off QA.
Mike, is this something we need to track for 4.2?
We need this fixed. I recommend adding a test that deletes the configmap and then deletes the pods, then creates a ReplicaSet and verifies that its pods get created.
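A rough sketch of what that test could do manually; the namespace and label selector used for the KCM pods, the ReplicaSet name, and the image are assumptions for illustration only:

# Delete the leader-election configmap and then the controller-manager pods
oc delete configmap kube-controller-manager -n kube-system
oc delete pods -n openshift-kube-controller-manager -l app=kube-controller-manager   ## label selector assumed

# Create a throwaway ReplicaSet and verify the controller manager still creates its pods
cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rbac-recovery-check
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbac-recovery-check
  template:
    metadata:
      labels:
        app: rbac-recovery-check
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2   # placeholder image
EOF

# Expect a pod to appear once the controller manager has recovered
oc get pods -n default -l app=rbac-recovery-check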
This has been fixed in https://github.com/openshift/cluster-kube-controller-manager-operator/pull/311 and will be backported once merged.
Confirmed with the latest version; the issue can't be reproduced:

[root@dhcp-140-138 roottest]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2020-01-13-060909   True        False         10m     Cluster version is 4.2.0-0.nightly-2020-01-13-060909

[root@dhcp-140-138 roottest]# oc get cm -n kube-system
NAME                                 DATA   AGE
bootstrap                            1      23m
cluster-config-v1                    1      29m
extension-apiserver-authentication   6      29m
kube-controller-manager              0      29m
root-ca                              1      29m

[root@dhcp-140-138 roottest]# oc delete cm/kube-controller-manager -n kube-system
configmap "kube-controller-manager" deleted

[root@dhcp-140-138 roottest]# oc get cm -n kube-system
NAME                                 DATA   AGE
bootstrap                            1      24m
cluster-config-v1                    1      29m
extension-apiserver-authentication   6      29m
kube-controller-manager              0      1s
root-ca                              1      29m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0107