Bug 1668853 - Programmatically aggregated cluster roles block admin rolebinding
Summary: Programmatically aggregated cluster roles block admin rolebinding
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 3.11.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Evan Cordell
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-01-23 17:32 UTC by chris.liles
Modified: 2023-09-18 00:15 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-14 05:31:27 UTC
Target Upstream Version:
Embargoed:


Links
Github openshift/openshift-ansible pull 11921 (closed): Bug 1668853: Fix rbac aggregation for OLM (last updated 2020-09-18 14:12:46 UTC)
Red Hat Product Errata RHBA-2020:0017 (last updated 2020-01-14 05:31:39 UTC)

Description chris.liles 2019-01-23 17:32:55 UTC
Description of problem:
OLM programmatically creates aggregated cluster roles for view/edit for every CRD it installs and manages. The view verbs (get, list, watch) are not aggregated into the admin cluster role. This breaks the ability of a user with an admin rolebinding to add further view rolebindings: since the admin no longer holds get, list, and watch on those resources, RBAC privilege escalation prevention blocks the grant.
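
For context, Kubernetes assembles aggregated roles from label selectors, so the generated view role needs the admin aggregation label for its verbs to reach admin. A minimal sketch of what a fixed role could look like (the role name is illustrative; the resource is taken from the error below):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-couchbase-view   # illustrative name
  labels:
    # Without aggregate-to-admin, admins never receive these verbs,
    # so escalation prevention stops them from granting view.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["couchbase.com"]
  resources: ["couchbaseclusters"]
  verbs: ["get", "list", "watch"]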

Version-Release number of selected component (if applicable):
3.11.16

Steps to Reproduce:
1. Install OLM.
2. Install an operator through OLM (tested with couchbase).
3. Become a project admin (non-cluster-admin).
4. Add a view rolebinding to another user (equivalent oc commands are sketched below).
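
For reference, the equivalent oc commands, with placeholder user and project names:

$ oc adm policy add-role-to-user admin <project-admin> -n <project>
$ oc login -u <project-admin>
$ oc adm policy add-role-to-user view <other-user> -n <project>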

Actual results:
Error from server (Forbidden): rolebindings "view" is forbidden: attempt to grant extra privileges: [{[get] [couchbase.com] [couchbaseclusters] [] []} {[list] [couchbase.com] [couchbaseclusters] [] []} {[watch] [couchbase.com] [couchbaseclusters] [] []}] user=&{system:serviceaccount:cicd:my-project-admin-service-account
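
The grant fails because RBAC escalation prevention only lets users grant permissions they already hold. A minimal check of the missing verbs (service account name taken from the error above):

$ oc auth can-i list couchbaseclusters.couchbase.com -n cicd \
    --as=system:serviceaccount:cicd:my-project-admin-service-account
no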

Expected results:
Admin is able to add rolebinding.

Additional info:
PR for 3.11 branch is open to fix this: https://github.com/operator-framework/operator-lifecycle-manager/pull/673
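
Until the fix is deployed, a possible workaround (untested here; the role name is a placeholder for whichever view cluster role OLM generated) would be to add the missing aggregation label by hand:

$ oc label clusterrole <generated-view-clusterrole> \
    rbac.authorization.k8s.io/aggregate-to-admin=true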

Comment 5 Evan Cordell 2019-03-11 13:29:00 UTC
Hi - this has been fixed and merged here: https://github.com/operator-framework/operator-lifecycle-manager/pull/671

But we need to update openshift-ansible to get those changes.

Comment 13 Jian Zhang 2019-10-09 06:11:45 UTC
1. Install the OLM component via openshift-ansible.
mac:openshift-ansible jianzhang$ git branch
  master
  release-3.10
* release-3.11

mac:openshift-ansible jianzhang$ ansible-playbook -i qe-inventory-host-file playbooks/olm/config.yml 
...
INSTALLER STATUS ********************************************************************************************************************************************
Initialization  : Complete (0:00:32)
OLM Install     : Complete (0:03:24)
Wednesday 09 October 2019  10:59:03 +0800 (0:00:00.075)       0:03:56.172 ***** 
=============================================================================== 

2. Check the `aggregate-olm-view` clusterrole; it looks good.
[root@qe-xiuwang-311merrn-1 ~]# oc get clusterrole aggregate-olm-view -o yaml
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: 2019-10-09T02:59:02Z
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: aggregate-olm-view
  resourceVersion: "9292"
  selfLink: /apis/authorization.openshift.io/v1/clusterroles/aggregate-olm-view
  uid: bf89f53c-ea40-11e9-8de1-fa163e7f17a2
rules:
- apiGroups:
  - operators.coreos.com
  attributeRestrictions: null
  resources:
  - catalogsources
  - clusterserviceversions
  - installplans
  - subscriptions
  verbs:
  - get
  - list
  - watch
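
To double-check that these rules actually landed in the admin role via aggregation, one could also run (a sketch, not part of the original verification):

$ oc get clusterrole admin -o yaml | grep -A 10 operators.coreos.com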


3. Install etcd-operator in the default namespace.
[root@qe-xiuwang-311merrn-1 ~]# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
docker-registry-1-6rcvw          1/1       Running   1          3h
etcd-operator-7b49974f5b-gf8nx   3/3       Running   3          7m
registry-console-1-rjc2v         1/1       Running   1          3h
router-1-xhwq5                   1/1       Running   2          3h

4. Create two users: jiazha1 and jiazha2.
[root@qe-xiuwang-311merrn-1 ~]# oc adm policy add-role-to-user admin jiazha1 -n default
role "admin" added: "jiazha1"

Log in as jiazha1 and grant the view role to the `jiazha2` user; LGTM.
[root@qe-xiuwang-311merrn-1 ~]# oc login -u jiazha1 -p redhat https://qe-xiuwang-311merrn-1:8443
Login successful.
You have one project on this server: "default"
Using project "default".
[root@qe-xiuwang-311merrn-1 ~]# oc whoami
jiazha1
[root@qe-xiuwang-311merrn-1 ~]# oc adm policy add-role-to-user view jiazha2 -n default
role "view" added: "jiazha2"

But the fix PR above (https://github.com/operator-framework/operator-lifecycle-manager/pull/671) was not merged into the release-3.11 branch (https://github.com/operator-framework/operator-lifecycle-manager/blob/release-3.11/deploy/chart/templates/21-aggregated-view.clusterrole.yaml).
I suspect upstream deployments from that branch would still be broken, so I am changing the status to ASSIGNED first. @evan What do you think?

Comment 16 Evan Cordell 2019-11-04 19:24:02 UTC
Jian, if this is working as intended in 3.11, can we close the bug? The PR to fix this is against openshift-ansible.

Comment 17 Jian Zhang 2019-11-13 02:35:36 UTC
Evan,

> Jian, if this is working as intended in 3.11, can we close the bug? The PR to fix this is against openshift-ansible.

Sorry for the late reply, but I don't think so. It failed in 3.11; details are in comment 13.

Comment 18 Evan Cordell 2019-12-10 13:25:36 UTC
According to comment #13, everything was working fine via openshift-ansible; the concern was that the commits weren't in the release-3.11 branch of OLM. I ask that we verify this issue, since the bug is fixed in openshift-ansible (there is no supported install path from the release-3.11 branch of OLM; files from those manifests are instead vendored into openshift-ansible).

To keep things in sync, I opened this PR: https://github.com/operator-framework/operator-lifecycle-manager/pull/1183. But I would still like to verify this bug, since the installer for OLM in 3.11 is openshift-ansible, and the issue is fixed there.

Comment 21 Jian Zhang 2019-12-24 09:12:52 UTC
> To keep things in sync, I opened this PR: https://github.com/operator-framework/operator-lifecycle-manager/pull/1183 but I would still like to verify this bug,

Yes, @evan, thanks for your PR. Verifying it; details in comment 13.

Comment 23 errata-xmlrpc 2020-01-14 05:31:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0017

Comment 24 Red Hat Bugzilla 2023-09-18 00:15:21 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

