Bug 1836352 - OLM does not create ClusterRoles with reports/export subresource
Summary: OLM does not create ClusterRoles with reports/export subresource
Keywords:
Status: CLOSED EOL
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Metering Operator
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: tflannag
QA Contact: Peter Ruan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-15 16:50 UTC by Paul Weil
Modified: 2022-08-25 21:03 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-25 21:03:12 UTC
Target Upstream Version:
Embargoed:



Description Paul Weil 2020-05-15 16:50:50 UTC
On OSD we limit what a customer can do on a cluster. We use the ClusterRoles created by OLM for each CRD installed for an operator to aggregate access into roles for those customer permissions. We hit a problem with the metering-operator: installation and setup require granting `reports/export`, but this permission is not granted through an OLM-generated ClusterRole. This means the customer doesn't have the permission and we don't have an automated, clean way to provide the RBAC.

From a conversation with Evan Cordell on Slack [1], I think the export extension needs to be added to the reports CRD so that OLM picks it up and generates a ClusterRole that we can use.
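
For illustration only, the kind of ClusterRole we would want OLM to generate might look like the sketch below; the name, labels, and verbs here are assumptions for this example, not what OLM actually emits today:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # hypothetical name; OLM chooses its own generated names
  name: reports-export-metering
  labels:
    # labels assumed so the aggregation rule in [2] would match
    olm.owner.kind: OperatorGroup
    olm.owner.namespace: customer-metering-namespace
rules:
- apiGroups:
  - metering.openshift.io
  resources:
  - reports/export
  verbs:
  # verbs are illustrative
  - get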

The net result we need is that the following aggregation rule [2] pulls in the permissions granted by the `report-exporter` Role:

  - matchExpressions:
    - key: olm.owner.kind
      operator: In
      values:
      - OperatorGroup
    - key: olm.owner.namespace
      operator: NotIn
      values:
      - openshift-cloud-ingress-operator
      - openshift-monitoring
      - openshift-operator-lifecycle-manager
      - openshift-rbac-permissions
      - openshift-splunk-forwarder-operator
      - openshift-velero
[1] https://coreos.slack.com/archives/C3VS0LV41/p1587672410173900
[2] https://github.com/openshift/managed-cluster-config/blob/master/deploy/rbac-permissions-operator-config/03-dedicated-admins-project.ClusterRole.yaml

cc Patrick Strick, Narayanan Raghavan, Matt Woodson

Comment 1 Naveen Malik 2020-05-18 19:15:59 UTC
Context from slack conversation:

A customer installed the metering operator into a namespace they administer.
We don't know how it was set up; it was all done by the customer.
We set up RBAC for any CRD shipped with an operator that a customer installs via OLM, using role aggregation, so we can support 3rd-party and custom operator installations.

Therefore we rely on RBAC for all CRD resources being represented in ClusterRoles created by OLM.

Specifically for this BZ, if the metering operator is installed we need `reports/export` to be represented in the ClusterRoles created by OLM. Based on customer feedback, this is the only permission missing that we're aware of.

The following ClusterRole can be used to reproduce the role aggregation we use for OSD. After creating it, `oc get` the ClusterRole; its rules will be populated with whatever matches the aggregationRule.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dedicated-admins-project
aggregationRule:
  clusterRoleSelectors:
  # aggregate all customer installed operator rbac from OLM
  - matchExpressions:
    - key: olm.owner.kind
      operator: In
      values:
      - OperatorGroup
    - key: olm.owner.namespace
      operator: NotIn
      values:
      - openshift-cloud-ingress-operator
      - openshift-monitoring
      - openshift-operator-lifecycle-manager
      - openshift-rbac-permissions
      - openshift-splunk-forwarder-operator
      - openshift-velero
rules: []
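
Once an OLM-generated ClusterRole granting `reports/export` matches the selectors above, the aggregation controller should copy its rules into this ClusterRole; `oc get clusterrole dedicated-admins-project -o yaml` should then show something like the excerpt below (a sketch only, assuming the hypothetical ClusterRole from the description, with illustrative verbs):

rules:
- apiGroups:
  - metering.openshift.io
  resources:
  - reports/export
  verbs:
  - get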

Comment 16 Brett Tofel 2020-09-11 18:47:26 UTC
I agree with Tim on severity based on the docs and available workarounds, so I'm lowering it.

Marking UpcomingSprint, as we worked on more critical bugs and feature work this sprint.

