Description of problem:
The CLO can't create resources for fluentd, kibana and curator after creating the clusterlogging instance. There are many error messages like the following in the CLO pod log (the same error repeats continuously):

{"level":"error","ts":1582781019.9700615,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update visualization for \"instance\": Failure creating Kibana route shared config: Failure creating configmap: configmaps is forbidden: User \"system:serviceaccount:openshift-logging:cluster-logging-operator\" cannot create resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-managed\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

$ oc get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/cluster-logging-operator-7f97b94698-sqh48      1/1     Running   0          9m17s
pod/elasticsearch-cdm-xqivtja3-1-b7b55c744-qkhxl   2/2     Running   0          7m44s

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/elasticsearch           ClusterIP   172.30.190.52   <none>        9200/TCP    7m46s
service/elasticsearch-cluster   ClusterIP   172.30.27.93    <none>        9300/TCP    7m46s
service/elasticsearch-metrics   ClusterIP   172.30.241.54   <none>        60000/TCP   7m46s
service/kibana                  ClusterIP   172.30.34.86    <none>        443/TCP     7m46s

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-logging-operator       1/1     1            1           9m18s
deployment.apps/elasticsearch-cdm-xqivtja3-1   1/1     1            1           7m46s

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/cluster-logging-operator-7f97b94698      1         1         1       9m18s
replicaset.apps/elasticsearch-cdm-xqivtja3-1-b7b55c744   1         1         1       7m46s

NAME                              HOST/PORT                                                               PATH   SERVICES   PORT    TERMINATION          WILDCARD
route.route.openshift.io/kibana   kibana-openshift-logging.apps.qitang.qe.gcp.devcluster.openshift.com          kibana     <all>   reencrypt/Redirect   None

$ oc get clusterlogging instance -oyaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  creationTimestamp: "2020-02-27T05:23:00Z"
  generation: 1
  name: instance
  namespace: openshift-logging
  resourceVersion: "69889"
  selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
  uid: ccbde860-2219-4a69-848a-de649665a1c8
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  curation:
    curator:
      schedule: '*/10 * * * *'
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: ZeroRedundancy
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      replicas: 1
    type: kibana
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: ""
        nodes: null
        pods: null
  curation: {}
  logStore:
    elasticsearchStatus:
    - cluster:
        activePrimaryShards: 1
        activeShards: 1
        initializingShards: 0
        numDataNodes: 1
        numNodes: 1
        pendingTasks: 0
        relocatingShards: 0
        status: green
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-xqivtja3-1: []
      nodeCount: 1
      pods:
        client:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-xqivtja3-1-b7b55c744-qkhxl
        data:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-xqivtja3-1-b7b55c744-qkhxl
        master:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-xqivtja3-1-b7b55c744-qkhxl
      shardAllocationEnabled: all
  visualization: {}

$ oc get cm
NAME                            DATA   AGE
cluster-logging-operator-lock   0      15m
elasticsearch                   3      14m

$ oc get sa
NAME                       SECRETS   AGE
builder                    2         16m
cluster-logging-operator   2         16m
default                    2         16m
deployer                   2         16m
elasticsearch              2         14m
kibana                     2         14m
registry                   2         16m

Version-Release number of selected component (if applicable):
ose-cluster-logging-operator-v4.4.0-202002262131

How reproducible:
Always

Steps to Reproduce:
1. Deploy CLO and EO.
2. Create a clusterlogging instance with https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/logging/clusterlogging/example.yaml

Actual results:
The CLO is forbidden from creating the Kibana shared-config configmap in the openshift-config-managed namespace, and the fluentd, kibana and curator resources are not created.

Expected results:
All cluster logging resources are created successfully.

Additional info:
$ oc get role clusterlogging-shared-config -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2020-02-28T02:26:07Z"
  name: clusterlogging-shared-config
  namespace: openshift-logging
  resourceVersion: "350366"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/openshift-logging/roles/clusterlogging-shared-config
  uid: d88cfcc0-2a6e-4655-b718-52185a432481
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - create
  - update
  - delete

$ oc get rolebinding clusterlogging-shared-config -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2020-02-28T02:26:07Z"
  name: clusterlogging-shared-config
  namespace: openshift-logging
  resourceVersion: "350367"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/openshift-logging/rolebindings/clusterlogging-shared-config
  uid: 5a2b90cc-a4dd-4e99-ac0a-223e73ddba47
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: clusterlogging-shared-config
subjects:
- kind: ServiceAccount
  name: cluster-logging-operator
  namespace: openshift-logging
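The manifests above show the mismatch: the Role and RoleBinding are namespaced to openshift-logging, but the operator is trying to create the configmap in openshift-config-managed, where it has no grant. A quick way to confirm this (a diagnostic sketch, not from the original report) is to impersonate the service account with `oc auth can-i`:

```shell
# Expect "yes": the Role/RoleBinding above grant configmap access
# in openshift-logging.
oc auth can-i create configmaps \
  --as=system:serviceaccount:openshift-logging:cluster-logging-operator \
  -n openshift-logging

# Expect "no": nothing grants the service account access in
# openshift-config-managed, which matches the reconciler error.
oc auth can-i create configmaps \
  --as=system:serviceaccount:openshift-logging:cluster-logging-operator \
  -n openshift-config-managed
```

These commands require a running cluster with the operator deployed, so treat them as an illustration of the scoping problem rather than a reproducible test.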
Workaround: add a clusterrole to the sa cluster-logging-operator:

$ oc get clusterrole clusterlogging-shared-config -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2020-02-28T02:51:00Z"
  name: clusterlogging-shared-config
  resourceVersion: "363077"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/clusterlogging-shared-config
  uid: c2340365-3a54-450f-99aa-ce19fe7e5983
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - create
  - update
  - delete

$ oc get clusterrolebinding clusterlogging-shared-config -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2020-02-28T02:51:10Z"
  name: clusterlogging-shared-config
  resourceVersion: "363137"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/clusterlogging-shared-config
  uid: 7ed592b7-776f-4a64-9892-ecaac1e699f7
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusterlogging-shared-config
subjects:
- kind: ServiceAccount
  name: cluster-logging-operator
  namespace: openshift-logging
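For anyone applying the workaround by hand, the same ClusterRole and ClusterRoleBinding can be created imperatively instead of via YAML. This is an equivalent sketch (not from the original report; the names simply mirror the manifests above):

```shell
# Cluster-scoped role with the same rules as the workaround manifest:
# get/create/update/delete on configmaps in the core ("") API group.
oc create clusterrole clusterlogging-shared-config \
  --verb=get,create,update,delete --resource=configmaps

# Bind it to the operator's service account; -z names a service account
# in the namespace given by -n.
oc adm policy add-cluster-role-to-user clusterlogging-shared-config \
  -z cluster-logging-operator -n openshift-logging
```

Note this grants configmap access in every namespace, which is broader than strictly needed; the errata fix linked at the end of this bug is the proper resolution.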
Closing as a duplicate because if there are issues with the implementation, they should be fixed against the original issue.

*** This bug has been marked as a duplicate of bug 1806651 ***
I am reopening this so we can use it to revert the regression that was caused by https://bugzilla.redhat.com/show_bug.cgi?id=1806651

https://bugzilla.redhat.com/show_bug.cgi?id=1806651 will remain open because we will still need a solution for it, someday.
To accelerate the PR merge in 4.4.

Verified using quay.io/openshift/origin-cluster-logging-operator@sha256:9057825a57c65b098132257add099cbca2e5f2e5032f3a370c9329025f60462b

$ oc get configmap
NAME                            DATA   AGE
clo-olm                         3      11m
cluster-logging-operator-lock   0      10m
curator                         3      10m
elasticsearch                   3      10m
fluentd                         3      10m
fluentd-trusted-ca-bundle       1      10m
indexmanagement-scripts         2      8m34s
kibana-trusted-ca-bundle        1      10m
sharing-config                  2      10m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409