Description of problem:

The CLO can't process the clusterlogging instance; the log is full of errors like:

E0821 01:13:39.313091 1 reflector.go:178] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.ClusterLogging: clusterloggings.logging.openshift.io is forbidden: User "system:serviceaccount:openshift-logging:cluster-logging-operator" cannot list resource "clusterloggings" in API group "logging.openshift.io" at the cluster scope
E0821 01:14:28.580112 1 reflector.go:178] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.ClusterLogging: clusterloggings.logging.openshift.io is forbidden: User "system:serviceaccount:openshift-logging:cluster-logging-operator" cannot list resource "clusterloggings" in API group "logging.openshift.io" at the cluster scope

$ oc get role clusterlogging.4.6.0-202008202200.p0-cluster-logging-6594f469b4 -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2020-08-21T00:52:55Z"
  labels:
    olm.owner: clusterlogging.4.6.0-202008202200.p0
    olm.owner.kind: ClusterServiceVersion
    olm.owner.namespace: openshift-logging
    operators.coreos.com/cluster-logging.openshift-logging: ""
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:olm.owner: {}
          f:olm.owner.kind: {}
          f:olm.owner.namespace: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"c5a2e1d8-40e2-47e2-b163-a9e559fc0d6c"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:rules: {}
    manager: catalog
    operation: Update
    time: "2020-08-21T00:52:55Z"
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:operators.coreos.com/cluster-logging.openshift-logging: {}
    manager: olm
    operation: Update
    time: "2020-08-21T00:52:58Z"
  name: clusterlogging.4.6.0-202008202200.p0-cluster-logging-6594f469b4
  namespace: openshift-logging
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: ClusterServiceVersion
    name: clusterlogging.4.6.0-202008202200.p0
    uid: c5a2e1d8-40e2-47e2-b163-a9e559fc0d6c
  resourceVersion: "48718"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/openshift-logging/roles/clusterlogging.4.6.0-202008202200.p0-cluster-logging-6594f469b4
  uid: d558d96e-c378-4321-9e5d-6168b79ee65d
rules:
- apiGroups:
  - logging.openshift.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - serviceaccounts
  - serviceaccounts/finalizers
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - route.openshift.io
  resources:
  - routes
  - routes/custom-host
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - cronjobs
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - '*'
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  - prometheusrules
  verbs:
  - '*'
- apiGroups:
  - apps
  resourceNames:
  - cluster-logging-operator
  resources:
  - deployments/finalizers
  verbs:
  - update

Version-Release number of selected component (if applicable):
clusterlogging.4.6.0-202008202200.p0

How reproducible:
100%

Steps to Reproduce:
1. Deploy CLO and EO.
2. Create a clusterlogging CR instance in the openshift-logging namespace.
3. Check the pods: the EFK pods are not created. Check the CLO pod log.

Actual results:
The EFK pods are not created, and the CLO pod logs the "forbidden" errors shown above.

Expected results:
The CLO reconciles the ClusterLogging instance and the EFK pods are deployed.

Additional info:
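Note: the "cannot list resource "clusterloggings" ... at the cluster scope" message means the operator's informer cache is listing ClusterLogging objects cluster-wide, and a namespaced Role like the one above can never satisfy a cluster-scoped list. Roughly speaking, a cluster-scoped grant along the following lines would be needed. This is only a minimal illustrative sketch (the object names are made up here), not the manifest shipped with the operator:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-clusterlogging-reader    # illustrative name, not from the product manifests
rules:
- apiGroups:
  - logging.openshift.io
  resources:
  - clusterloggings
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-clusterlogging-reader    # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-clusterlogging-reader
subjects:
- kind: ServiceAccount
  name: cluster-logging-operator
  namespace: openshift-logging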
Can you please test with the latest code? I am unable to reproduce. I tested by:

* Deploying the latest 4.6 images
* Deploying specifically the operator image referenced in the BZ

If you still see the issue, maybe it's related to the operator bundle image associated with the BZ?
I tried to build the manifest image myself using the latest code from https://github.com/openshift/cluster-logging-operator/tree/master/manifests and https://github.com/openshift/elasticsearch-operator/tree/master/manifests, and I still hit the same issue.

CLO image: quay.io/openshift/origin-cluster-logging-operator@sha256:9889a9d6cf44145bfb611cad79835636a7fa55b55b1f97ea12946f6a6f71433e (ose-cluster-logging-operator-v4.6.0-202008231041.p0)

$ oc get role -oyaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    creationTimestamp: "2020-08-24T01:27:40Z"
    labels:
      olm.owner: clusterlogging.v4.6.0
      olm.owner.kind: ClusterServiceVersion
      olm.owner.namespace: openshift-logging
      operators.coreos.com/cluster-logging.openshift-logging: ""
    managedFields:
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:olm.owner: {}
            f:olm.owner.kind: {}
            f:olm.owner.namespace: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"c637bd52-9b91-4c65-ab26-bda909073eb2"}:
              .: {}
              f:apiVersion: {}
              f:blockOwnerDeletion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:rules: {}
      manager: catalog
      operation: Update
      time: "2020-08-24T01:27:40Z"
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            f:operators.coreos.com/cluster-logging.openshift-logging: {}
      manager: olm
      operation: Update
      time: "2020-08-24T01:27:50Z"
    name: clusterlogging.v4.6.0-cluster-logging-operator-b69f5b6c7
    namespace: openshift-logging
    ownerReferences:
    - apiVersion: operators.coreos.com/v1alpha1
      blockOwnerDeletion: false
      controller: false
      kind: ClusterServiceVersion
      name: clusterlogging.v4.6.0
      uid: c637bd52-9b91-4c65-ab26-bda909073eb2
    resourceVersion: "76880"
    selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/openshift-logging/roles/clusterlogging.v4.6.0-cluster-logging-operator-b69f5b6c7
    uid: 805c5509-e4c1-4995-b945-d5ed36d21500
  rules:
  - apiGroups:
    - logging.openshift.io
    resources:
    - '*'
    verbs:
    - '*'
  - apiGroups:
    - ""
    resources:
    - pods
    - services
    - endpoints
    - persistentvolumeclaims
    - events
    - configmaps
    - secrets
    - serviceaccounts
    - serviceaccounts/finalizers
    - services/finalizers
    verbs:
    - '*'
  - apiGroups:
    - apps
    resources:
    - deployments
    - daemonsets
    - replicasets
    - statefulsets
    verbs:
    - '*'
  - apiGroups:
    - route.openshift.io
    resources:
    - routes
    - routes/custom-host
    verbs:
    - '*'
  - apiGroups:
    - batch
    resources:
    - cronjobs
    verbs:
    - '*'
  - apiGroups:
    - rbac.authorization.k8s.io
    resources:
    - roles
    - rolebindings
    verbs:
    - '*'
  - apiGroups:
    - security.openshift.io
    resourceNames:
    - privileged
    resources:
    - securitycontextconstraints
    verbs:
    - use
  - apiGroups:
    - monitoring.coreos.com
    resources:
    - servicemonitors
    - prometheusrules
    verbs:
    - '*'
  - apiGroups:
    - apps
    resourceNames:
    - cluster-logging-operator
    resources:
    - deployments/finalizers
    verbs:
    - update
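As a side note (this check is not part of the original report), the missing permission can be confirmed with a standard impersonated authorization check, which should report "no" on an affected cluster:

$ oc auth can-i list clusterloggings.logging.openshift.io --all-namespaces --as=system:serviceaccount:openshift-logging:cluster-logging-operator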
After adding the cluster-admin role to the user "system:serviceaccount:openshift-logging:cluster-logging-operator" and deleting the CLO pod so that a new CLO pod starts, the logging EFK pods can be deployed.
Workaround: oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:openshift-logging:cluster-logging-operator
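Granting cluster-admin is broader than strictly necessary. If a narrower workaround is preferred, a cluster-scoped role limited to the logging.openshift.io resources (such as the illustrative ClusterRole sketched in the description above) could be bound to the service account instead, for example:

oc create clusterrolebinding example-clusterlogging-reader --clusterrole=example-clusterlogging-reader --serviceaccount=openshift-logging:cluster-logging-operator

The name example-clusterlogging-reader is only the placeholder used in that sketch, not an object that ships with the operator.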
Verified with clusterlogging.4.6.0-202008252031.p0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4198