Bug 1870898 - User "system:serviceaccount:openshift-logging:cluster-logging-operator" cannot list resource "clusterloggings" in API group "logging.openshift.io" at the cluster scope
Summary: User "system:serviceaccount:openshift-logging:cluster-logging-operator" cannot list resource "clusterloggings" in API group "logging.openshift.io" at the cluster scope
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.6.0
Assignee: Jeff Cantrill
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-21 01:19 UTC by Qiaoling Tang
Modified: 2020-10-27 15:12 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 15:10:23 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links:
- GitHub: openshift/cluster-logging-operator pull 673 (closed) - Bug 1870898: Revert WATCH_NAMESPACE to unbreak clusterlogging (last updated 2020-09-11 08:38:38 UTC)
- Red Hat Product Errata: RHBA-2020:4198 (last updated 2020-10-27 15:12:33 UTC)

Description Qiaoling Tang 2020-08-21 01:19:35 UTC
Description of problem:
The CLO can't process the clusterlogging instance; the log contains many errors like:

E0821 01:13:39.313091       1 reflector.go:178] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.ClusterLogging: clusterloggings.logging.openshift.io is forbidden: User "system:serviceaccount:openshift-logging:cluster-logging-operator" cannot list resource "clusterloggings" in API group "logging.openshift.io" at the cluster scope
E0821 01:14:28.580112       1 reflector.go:178] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:224: Failed to list *v1.ClusterLogging: clusterloggings.logging.openshift.io is forbidden: User "system:serviceaccount:openshift-logging:cluster-logging-operator" cannot list resource "clusterloggings" in API group "logging.openshift.io" at the cluster scope
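
A quick way to reproduce just the RBAC failure from a cluster-admin session is to impersonate the operator's service account; while the bug is present this is expected to print "no":

$ oc auth can-i list clusterloggings.logging.openshift.io --all-namespaces \
    --as=system:serviceaccount:openshift-logging:cluster-logging-operator
no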

$ oc get role clusterlogging.4.6.0-202008202200.p0-cluster-logging-6594f469b4  -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2020-08-21T00:52:55Z"
  labels:
    olm.owner: clusterlogging.4.6.0-202008202200.p0
    olm.owner.kind: ClusterServiceVersion
    olm.owner.namespace: openshift-logging
    operators.coreos.com/cluster-logging.openshift-logging: ""
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:olm.owner: {}
          f:olm.owner.kind: {}
          f:olm.owner.namespace: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"c5a2e1d8-40e2-47e2-b163-a9e559fc0d6c"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:rules: {}
    manager: catalog
    operation: Update
    time: "2020-08-21T00:52:55Z"
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:operators.coreos.com/cluster-logging.openshift-logging: {}
    manager: olm
    operation: Update
    time: "2020-08-21T00:52:58Z"
  name: clusterlogging.4.6.0-202008202200.p0-cluster-logging-6594f469b4
  namespace: openshift-logging
  ownerReferences:
  - apiVersion: operators.coreos.com/v1alpha1
    blockOwnerDeletion: false
    controller: false
    kind: ClusterServiceVersion
    name: clusterlogging.4.6.0-202008202200.p0
    uid: c5a2e1d8-40e2-47e2-b163-a9e559fc0d6c
  resourceVersion: "48718"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/openshift-logging/roles/clusterlogging.4.6.0-202008202200.p0-cluster-logging-6594f469b4
  uid: d558d96e-c378-4321-9e5d-6168b79ee65d
rules:
- apiGroups:
  - logging.openshift.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - serviceaccounts
  - serviceaccounts/finalizers
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - route.openshift.io
  resources:
  - routes
  - routes/custom-host
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - cronjobs
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - '*'
- apiGroups:
  - security.openshift.io
  resourceNames:
  - privileged
  resources:
  - securitycontextconstraints
  verbs:
  - use
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  - prometheusrules
  verbs:
  - '*'
- apiGroups:
  - apps
  resourceNames:
  - cluster-logging-operator
  resources:
  - deployments/finalizers
  verbs:
  - update
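
Note the Role above is namespace-scoped: it grants '*' on logging.openshift.io resources, but only inside openshift-logging. The informer, however, is listing clusterloggings at the cluster scope, which no namespaced Role can authorize. With operator-sdk/controller-runtime, an empty WATCH_NAMESPACE typically makes the manager's cache watch all namespaces, which would explain the cluster-scope list; the env var can be inspected with:

$ oc set env deployment/cluster-logging-operator -n openshift-logging --list | grep WATCH_NAMESPACE

For illustration only: granting the cluster-scope read instead would take a ClusterRole bound to the service account, roughly like the sketch below (names hypothetical; per the linked PR title, the shipped fix instead reverts WATCH_NAMESPACE so the operator keeps watching only its own namespace):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogging-reader   # hypothetical name, for illustration only
rules:
- apiGroups:
  - logging.openshift.io
  resources:
  - clusterloggings
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterlogging-reader   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusterlogging-reader
subjects:
- kind: ServiceAccount
  name: cluster-logging-operator
  namespace: openshift-logging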


Version-Release number of selected component (if applicable):
clusterlogging.4.6.0-202008202200.p0 

How reproducible:
100%

Steps to Reproduce:
1. Deploy CLO and EO.
2. Create a clusterlogging CR instance in the openshift-logging namespace.
3. Check the pods: the EFK pods are not created. Check the CLO pod log (see the example after this list).
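
For step 3, the CLO log can be pulled like this (a sketch assuming the default deployment name, which matches the Role above):

$ oc logs deployment/cluster-logging-operator -n openshift-logging | grep -i forbidden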

Actual results:
The EFK pods are not created, and the CLO log repeats the "forbidden" errors above when listing clusterloggings at the cluster scope.

Expected results:
The CLO processes the clusterlogging instance and the EFK pods are deployed.

Additional info:

Comment 1 Jeff Cantrill 2020-08-21 20:34:40 UTC
Can you please test with the latest code, as I am unable to reproduce? I tested by:

* Deploying the latest 4.6 images
* Deploying specifically the operator image referenced in the BZ

If you still see the issue, maybe it's related to the operator bundle image associated with the BZ?

Comment 2 Qiaoling Tang 2020-08-24 02:39:22 UTC
I built the manifest image myself using the latest code from https://github.com/openshift/cluster-logging-operator/tree/master/manifests and https://github.com/openshift/elasticsearch-operator/tree/master/manifests, and still hit the same issue.

CLO image: 
quay.io/openshift/origin-cluster-logging-operator@sha256:9889a9d6cf44145bfb611cad79835636a7fa55b55b1f97ea12946f6a6f71433e
ose-cluster-logging-operator-v4.6.0-202008231041.p0


$ oc get role -oyaml
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    creationTimestamp: "2020-08-24T01:27:40Z"
    labels:
      olm.owner: clusterlogging.v4.6.0
      olm.owner.kind: ClusterServiceVersion
      olm.owner.namespace: openshift-logging
      operators.coreos.com/cluster-logging.openshift-logging: ""
    managedFields:
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:olm.owner: {}
            f:olm.owner.kind: {}
            f:olm.owner.namespace: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"c637bd52-9b91-4c65-ab26-bda909073eb2"}:
              .: {}
              f:apiVersion: {}
              f:blockOwnerDeletion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:rules: {}
      manager: catalog
      operation: Update
      time: "2020-08-24T01:27:40Z"
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            f:operators.coreos.com/cluster-logging.openshift-logging: {}
      manager: olm
      operation: Update
      time: "2020-08-24T01:27:50Z"
    name: clusterlogging.v4.6.0-cluster-logging-operator-b69f5b6c7
    namespace: openshift-logging
    ownerReferences:
    - apiVersion: operators.coreos.com/v1alpha1
      blockOwnerDeletion: false
      controller: false
      kind: ClusterServiceVersion
      name: clusterlogging.v4.6.0
      uid: c637bd52-9b91-4c65-ab26-bda909073eb2
    resourceVersion: "76880"
    selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/openshift-logging/roles/clusterlogging.v4.6.0-cluster-logging-operator-b69f5b6c7
    uid: 805c5509-e4c1-4995-b945-d5ed36d21500
  rules:
  - apiGroups:
    - logging.openshift.io
    resources:
    - '*'
    verbs:
    - '*'
  - apiGroups:
    - ""
    resources:
    - pods
    - services
    - endpoints
    - persistentvolumeclaims
    - events
    - configmaps
    - secrets
    - serviceaccounts
    - serviceaccounts/finalizers
    - services/finalizers
    verbs:
    - '*'
  - apiGroups:
    - apps
    resources:
    - deployments
    - daemonsets
    - replicasets
    - statefulsets
    verbs:
    - '*'
  - apiGroups:
    - route.openshift.io
    resources:
    - routes
    - routes/custom-host
    verbs:
    - '*'
  - apiGroups:
    - batch
    resources:
    - cronjobs
    verbs:
    - '*'
  - apiGroups:
    - rbac.authorization.k8s.io
    resources:
    - roles
    - rolebindings
    verbs:
    - '*'
  - apiGroups:
    - security.openshift.io
    resourceNames:
    - privileged
    resources:
    - securitycontextconstraints
    verbs:
    - use
  - apiGroups:
    - monitoring.coreos.com
    resources:
    - servicemonitors
    - prometheusrules
    verbs:
    - '*'
  - apiGroups:
    - apps
    resourceNames:
    - cluster-logging-operator
    resources:
    - deployments/finalizers
    verbs:
    - update

Comment 3 Qiaoling Tang 2020-08-24 03:02:48 UTC
After adding the cluster-admin role to the user "system:serviceaccount:openshift-logging:cluster-logging-operator" and deleting the CLO pod so that a new CLO pod starts, the logging EFK pods are deployed.

Comment 4 Qiaoling Tang 2020-08-24 03:03:22 UTC
Workaround: oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:openshift-logging:cluster-logging-operator
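
Note that this workaround grants the operator full cluster-admin, which is far broader than it needs; once a fixed build is installed it can be undone with:

$ oc adm policy remove-cluster-role-from-user cluster-admin system:serviceaccount:openshift-logging:cluster-logging-operator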

Comment 8 Qiaoling Tang 2020-08-26 02:22:49 UTC
Verified with clusterlogging.4.6.0-202008252031.p0
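
A quick spot check on a fixed build (the fix makes the operator watch only its own namespace, so the cluster-scope probe from the description is still expected to print "no"): the EFK pods should be running and the CLO log free of the forbidden errors.

$ oc get pods -n openshift-logging
$ oc logs deployment/cluster-logging-operator -n openshift-logging | grep -ci forbidden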

Comment 10 errata-xmlrpc 2020-10-27 15:10:23 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4198

