Description of problem:
OLM programmatically creates and aggregates clusterroles for view/edit for all CRDs that it installs/manages. The view verbs (get, watch, list) are not aggregated into the admin cluster role. This breaks the ability of a user with an admin rolebinding to add view rolebindings for others, because the admin user now lacks get, watch, and list on those resources, and privilege escalation prevention blocks the assignment.

Version-Release number of selected component (if applicable):
3.11.16

How reproducible:
1. Install OLM.
2. Install an operator through OLM (tested with Couchbase).
3. Become a project admin (not a cluster admin).
4. Add a view rolebinding for another user.

Actual results:
Error from server (Forbidden): rolebindings "view" is forbidden: attempt to grant extra privileges: [{[get] [couchbase.com] [couchbaseclusters] [] []} {[list] [couchbase.com] [couchbaseclusters] [] []} {[watch] [couchbase.com] [couchbaseclusters] [] []}] user=&{system:serviceaccount:cicd:my-project-admin-service-account

Expected results:
The admin is able to add the rolebinding.

Additional info:
A PR against the 3.11 branch is open to fix this: https://github.com/operator-framework/operator-lifecycle-manager/pull/673
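For reference, the fix amounts to labeling OLM's aggregated view clusterrole so its read rules roll up into the built-in admin and edit roles as well as view. A minimal sketch of such a manifest, based on the aggregate-olm-view clusterrole shown in the verification in comment 13 (the apiVersion is written against the standard RBAC group here; the exact resource list is whatever OLM ships):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-olm-view
  labels:
    # these labels tell the RBAC aggregation controller to fold the rules
    # below into the built-in view, edit, and admin clusterroles, so a
    # project admin keeps get/list/watch and can grant view to others
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups:
  - operators.coreos.com
  resources:
  - catalogsources
  - clusterserviceversions
  - installplans
  - subscriptions
  verbs:
  - get
  - list
  - watch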
Hi - this has been fixed and merged here: https://github.com/operator-framework/operator-lifecycle-manager/pull/671 But we need to update openshift-ansible to get those changes.
1. Install the OLM component via openshift-ansible.

mac:openshift-ansible jianzhang$ git branch
  master
  release-3.10
* release-3.11
mac:openshift-ansible jianzhang$ ansible-playbook -i qe-inventory-host-file playbooks/olm/config.yml
...
INSTALLER STATUS ********************************************************************************************************************************************
Initialization  : Complete (0:00:32)
OLM Install     : Complete (0:03:24)
Wednesday 09 October 2019  10:59:03 +0800 (0:00:00.075)       0:03:56.172 *****
===============================================================================

2. Check the `aggregate-olm-view` clusterrole; it looks good.

[root@qe-xiuwang-311merrn-1 ~]# oc get clusterrole aggregate-olm-view -o yaml
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: 2019-10-09T02:59:02Z
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: aggregate-olm-view
  resourceVersion: "9292"
  selfLink: /apis/authorization.openshift.io/v1/clusterroles/aggregate-olm-view
  uid: bf89f53c-ea40-11e9-8de1-fa163e7f17a2
rules:
- apiGroups:
  - operators.coreos.com
  attributeRestrictions: null
  resources:
  - catalogsources
  - clusterserviceversions
  - installplans
  - subscriptions
  verbs:
  - get
  - list
  - watch

3. Install etcd-operator in the default namespace.

[root@qe-xiuwang-311merrn-1 ~]# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
docker-registry-1-6rcvw          1/1       Running   1          3h
etcd-operator-7b49974f5b-gf8nx   3/3       Running   3          7m
registry-console-1-rjc2v         1/1       Running   1          3h
router-1-xhwq5                   1/1       Running   2          3h

4. Create two users: jiazha1, jiazha2.

[root@qe-xiuwang-311merrn-1 ~]# oc adm policy add-role-to-user admin jiazha1 -n default
role "admin" added: "jiazha1"

Log in as jiazha1 and grant the view role to the `jiazha2` user; LGTM.

[root@qe-xiuwang-311merrn-1 ~]# oc login -u jiazha1 -p redhat https://qe-xiuwang-311merrn-1:8443
Login successful.

You have one project on this server: "default"

Using project "default".
[root@qe-xiuwang-311merrn-1 ~]# oc whoami
jiazha1
[root@qe-xiuwang-311merrn-1 ~]# oc adm policy add-role-to-user view jiazha2 -n default
role "view" added: "jiazha2"

However, the fix from the PR above (https://github.com/operator-framework/operator-lifecycle-manager/pull/671) has not been merged into the release-3.11 branch (https://github.com/operator-framework/operator-lifecycle-manager/blob/release-3.11/deploy/chart/templates/21-aggregated-view.clusterrole.yaml), so I suspect deployments from that upstream branch would still have the problem. Changing the status to ASSIGNED for now. @evan, what do you think?
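As a side note, a quick way to spot-check the aggregation beyond creating the rolebinding itself. This is only a sketch, run from a cluster-admin context, and it reuses the example user names from the steps above:

# confirm the built-in admin clusterrole picked up the aggregated OLM rules
oc get clusterrole admin -o yaml | grep -B 2 -A 10 operators.coreos.com

# confirm the project admin and the view user can actually read the OLM resources
oc auth can-i list subscriptions.operators.coreos.com --as=jiazha1 -n default
oc auth can-i get clusterserviceversions.operators.coreos.com --as=jiazha2 -n default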
Jian, if this is working as intended in 3.11, can we close the bug? The PR to fix this is against openshift-ansible.
Evan,

> Jian, if this is working as intended in 3.11, can we close the bug? The PR to fix this is against openshift-ansible.

Sorry for the late reply. I don't think so. It failed to run in 3.11; details are in comment 13.
According to comment #13, everything was working fine via openshift-ansible; the concern was that the commits weren't in the release-3.11 branch of OLM.

I ask that we verify this issue, since the bug is fixed in openshift-ansible (there is no supported install path from the release-3.11 branch of OLM - the files from those manifests are instead vendored into openshift-ansible).

To keep things in sync, I opened this PR: https://github.com/operator-framework/operator-lifecycle-manager/pull/1183 but I would still like this bug verified, since the installer for OLM in 3.11 is openshift-ansible, and the issue is fixed there.
> To keep things in sync, I opened this PR: https://github.com/operator-framework/operator-lifecycle-manager/pull/1183 but I would still like to verify this bug,

Yes, @evan, thanks for your PR. Verifying it. Details in comment 13.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0017
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.