Description of problem:
We are using namespaces to scope admin rights per user: User A should be admin of Namespace A and able to create and manage clusters within that namespace, but should not be able to affect clusters within User B's Namespace B. This is not working as expected.

Version-Release number of selected component (if applicable): 2.1

How reproducible: always

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
I tried to add the following RBAC roles to user1 (for managed cluster cluster-a), hoping to reach my goal:
```
$ oc policy add-role-to-user admin user1 -n cluster-a
$ oc adm policy add-cluster-role-to-user self-provisioner user1
$ oc adm policy add-cluster-role-to-user open-cluster-management:cluster-manager-admin user1
```
These roles allow user1 to create a cluster named "cluster-a", but they also allow user1 to manage all of the other clusters created in RHACM (for example, user1 can see cluster-b).

I tried to harden the RBAC rules by creating a custom role which, in theory, would allow user1 to create and manage only cluster-a. (The custom role file is attached to the case as acm-rbac.yaml.) The commands I ran:
```
$ oc policy add-role-to-user admin user1 -n cluster-a
$ oc create -f acm-rbac.yaml
$ oc adm policy add-cluster-role-to-user open-cluster-management:admin:cluster-a user1
```
However, creating/importing the cluster as user1 then failed with the following error:
```
Failed creating cluster resource for import: managedclusters.cluster.open-cluster-management.io is forbidden: User "user1" cannot create resource "managedclusters" in API group "cluster.open-cluster-management.io" at the cluster scope
```
I also tried to add more apiGroups rules from "open-cluster-management:cluster-manager-admin" to the custom RBAC role in acm-rbac.yaml, but then user1 could see all of the clusters managed by ACM, which does not match my goal (I also started getting admission controller errors when trying to create a cluster with that user).
Hi Jian Qiu, any follow up on this? Could you please also confirm if Server Foundation is the right component here and not GRC/Policy? Thanks.
I am not quite sure I understand the issue. The ManagedCluster resource is cluster-scoped, so you will need an RBAC rule specifically for managedclusters. What clusterrole are you creating?
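As an illustration of what "cluster-scoped" implies here: the rule has to live in a ClusterRole rather than a namespaced Role. A minimal sketch, assuming a placeholder role name and the cluster-a name from the report (not a role RHACM creates for you):
```
# Minimal sketch of a cluster-scoped rule limited to a single ManagedCluster.
# The role name and the resourceName "cluster-a" are illustrative placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: managedcluster-a-manager
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  resourceNames:
  - cluster-a
  verbs:
  - get
  - update
  - patch
  - delete
```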
The clusterrole we're trying to create is one that allows a user, UserA, to create and manage only a particular cluster (or set of clusters) that cannot be modified by UserB. UserB would likewise be able to create and manage a particular cluster or subset of clusters that UserA could not affect.
Is there anything we can pass along to the customer at this time or further clarification we can work to clear up? Thank you
Comment from qiujian16 (Thu, 03 Dec 2020 10:12:41 UTC): I took a look at the clusterrole YAML; I think the missing part is here: https://github.com/open-cluster-management/multicloudhub-operator/blob/master/templates/multiclusterhub/base/rbac/clusterrole-clustermanageradmin-aggregate.yaml#L23-L25. It is needed for a user to create a managedcluster with `hubAcceptsClient: true`.
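The exact contents of the linked lines are not reproduced in this thread; judging from the rule discussed later in the thread, they presumably grant update on the managedclusters/accept subresource in the register.open-cluster-management.io group, roughly:
```
# Presumed shape of the rule referenced above (not a verbatim copy of the linked file);
# it grants the permission checked before a user may set hubAcceptsClient: true.
- apiGroups:
  - register.open-cluster-management.io
  resources:
  - managedclusters/accept
  verbs:
  - update
```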
Comment from ckandag (Thu, 03 Dec 2020 19:58:49 UTC): The following statement in the problem description is confusing: "User A would be admin of Namespace A and be able to **create and manage clusters within that namespace**." Each namespace is associated with only one managed cluster; you cannot create multiple clusters within a namespace, so this is confusing.

A couple of notes about the roles: open-cluster-management:cluster-manager-admin is a super-user role. If you give a user this role, they have access to all namespaces and managedclusters on the hub. If the intent is to give only user-1 access to cluster-a, and only user-2 access to cluster-b, here are the commands to achieve that.

For user-1 admin access to cluster-a:
```
oc policy add-role-to-user admin user1 -n cluster-a
oc adm policy add-cluster-role-to-user open-cluster-management:admin:cluster-a user1
```
For user-2 admin access to cluster-b:
```
oc policy add-role-to-user admin user2 -n cluster-b
oc adm policy add-cluster-role-to-user open-cluster-management:admin:cluster-b user2
```
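For reference, the first command grants the built-in `admin` clusterrole inside the cluster-a namespace via a namespaced RoleBinding, roughly equivalent to the sketch below; the second command creates an analogous cluster-scoped ClusterRoleBinding for the open-cluster-management:admin:cluster-a role. The binding name here is illustrative; `oc` generates its own.
```
# Rough equivalent of `oc policy add-role-to-user admin user1 -n cluster-a`.
# The binding name is illustrative; oc generates its own.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-user1
  namespace: cluster-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1
```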
The customer's objective is to allow users *without* the open-cluster-management:cluster-manager-admin role to be able to *create* and manage clusters for their own departments. As described in the issue, user-1 should be able to *create* and manage a certain cluster set that user-2 does not have access to. The main idea is to provide RHACM "self service" for provisioning and managing clusters per department in the organization.

If I understand correctly, if I run the commands provided in the last comment while cluster-a has *not* yet been provisioned by RHACM, user-1 will not be able to create/import it, and cluster-admin privileges are still required to do so:
```
for user-1 admin access to cluster-a
oc policy add-role-to-user admin user1 -n cluster-a
oc adm policy add-cluster-role-to-user open-cluster-management:admin:cluster-a user1
```
Is there any way for us to reach the desired state? A successful scenario for the customer would be:
* cluster-a does not exist.
* user-1 provisions the cluster.
* user-1 manages the cluster.
* user-2 is not able to manage the cluster.
In your clusterrole created for user-1, would you also add the following rule?
```
- apiGroups: ["cluster.open-cluster-management.io"]
  resources: ["managedclusters", "managedclusters/accept", "managedclusters/status"]
  verbs: ["create", "get", "list", "watch", "update", "delete", "deletecollection", "patch"]
  resourceNames: ["cluster1"]
```
Ran the following commands:
```
oc new-project demo-openshift-cluster
oc adm policy add-cluster-role-to-user open-cluster-management:admin:demo-openshift-cluster user1
```
Created the following clusterrole:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: open-cluster-management:admin:demo-openshift-cluster
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resourceNames:
  - demo-openshift-cluster
  resources:
  - managedclusters
  - managedclusters/accept
  - managedclusters/status
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
  - deletecollection
```
And applied it:
```
oc apply -f role.yaml
```
Afterwards, I tried to import a cluster as user1 and failed with the following error:
```
Failed creating cluster resource for import: namespaces "demo-openshift-cluster" is forbidden: User "user1" cannot patch resource "namespaces" in API group "" in the namespace "demo-openshift-cluster"
```
I added the `patch` verb on the `namespaces` resource for user1, making the clusterrole look like this:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: open-cluster-management:admin:demo-openshift-cluster
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resourceNames:
  - demo-openshift-cluster
  resources:
  - managedclusters
  - managedclusters/accept
  - managedclusters/status
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
  - deletecollection
- apiGroups:
  - ""
  resources:
  - namespaces
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - get
  - list
  - watch
  - patch
```
After applying the role, I get the following error when importing a cluster in RHACM:
```
Failed creating cluster resource for import: managedclusters.cluster.open-cluster-management.io is forbidden: User "user1" cannot create resource "managedclusters" in API group "cluster.open-cluster-management.io" at the cluster scope
```
I also added the following rule to the role:
```
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  - managedclusters/accept
  - managedclusters/status
  verbs:
  - create
```
Now I'm getting the following error:
```
Failed creating cluster resource for import: admission webhook "managedclustervalidators.admission.cluster.open-cluster-management.io" denied the request: user "user1" cannot update the HubAcceptsClient field
```
I was able to import the cluster only after I removed the `CREATE` operation from the managedclustervalidators.admission.cluster.open-cluster-management.io ValidatingWebhookConfiguration resource. Even though I achieved the goal, the solution is not good in my opinion, for the following reasons:

- If user1 deletes the cluster, the "modified" open-cluster-management:admin:demo-openshift-cluster clusterrole is deleted as well, making user1 unable to provision "demo-openshift-cluster" again without the intervention of the admin user, who has to re-assign the role to the user. A workaround is to create a clusterrole with a different name, e.g. "open-cluster-management:admin:demo-openshift-cluster-role", but that role will not be managed by RHACM, which will create its own "open-cluster-management:admin:demo-openshift-cluster" clusterrole after the managedcluster object is created. That means there would be two almost identical roles with slightly different names, which is not the most aesthetic solution in my opinion.
- RHACM does not allow cluster creation without a "global" `create` rule on all `managedclusters` resources. That is not ideal, since it basically allows user1 to create as many clusters as he wants, as long as he has admin rights on the corresponding namespaces on the hub cluster. For example, the rule:
  ```
  - apiGroups:
    - cluster.open-cluster-management.io
    resources:
    - managedclusters
    - managedclusters/accept
    - managedclusters/status
    verbs:
    - create
  ```
- This workaround required editing a ValidatingWebhookConfiguration resource and removing an operation from it.
Sorry, let me clarify this. The user should be able to create any ManagedCluster, but the user should not be able to create a ManagedCluster with hubAcceptsClient: true unless a certain permission is given to the user. The RBAC rule that I posted here has a typo; the correct clusterrole in full should be:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: open-cluster-management:admin:demo-openshift-cluster
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters/accept
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - namespaces
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - get
  - list
  - watch
  - patch
```
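For context, the field in question is `spec.hubAcceptsClient` on the ManagedCluster resource itself. A minimal sketch of such an object, using the demo-openshift-cluster name from this thread:
```
# Minimal ManagedCluster with hubAcceptsClient set. Setting this field is what the
# admission webhook discussed above guards, and it is why the user also needs
# update on the managedclusters/accept subresource.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: demo-openshift-cluster
spec:
  hubAcceptsClient: true
```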
Comment from skeeey (Fri, 11 Dec 2020 02:56:39 UTC): The API group of
```
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters/accept
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - update
```
is wrong; it should be
```
- apiGroups:
  - register.open-cluster-management.io
  resources:
  - managedclusters/accept
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - update
```
With the correct API group, I think you don't need to change the webhook.
Using the suggested rule does not work:
```
- apiGroups:
  - register.open-cluster-management.io
  resources:
  - managedclusters/accept
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - update
```
Only when I remove `resourceNames` do I stop receiving the webhook error. The final ClusterRole that works:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: open-cluster-management:cluster-manager-test
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  - managedclusters/status
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - get
  - list
  - watch
  - update
  - delete
  - deletecollection
  - patch
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  - managedclusters/accept
  - managedclusters/status
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete
  - deletecollection
  - patch
- apiGroups:
  - ""
  resources:
  - namespaces
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete
  - deletecollection
  - patch
- apiGroups:
  - register.open-cluster-management.io
  resources:
  - managedclusters/accept
  verbs:
  - update
```
**While testing I was using the `import cluster` feature. Will I need any other permissions to `create cluster`?**
I tested with the rules below, and it works:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: open-cluster-management:cluster-manager-test
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - get
  - list
  - watch
  - update
  - delete
  - deletecollection
  - patch
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - namespaces
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete
  - deletecollection
  - patch
- apiGroups:
  - register.open-cluster-management.io
  resources:
  - managedclusters/accept
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - update
```
Correction to the above clusterrole; it should be:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: open-cluster-management:cluster-manager-test
rules:
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - get
  - list
  - watch
  - update
  - delete
  - deletecollection
  - patch
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - managedclusters
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - namespaces
  resourceNames:
  - demo-openshift-cluster
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - delete
  - deletecollection
  - patch
- apiGroups:
  - register.open-cluster-management.io
  resources:
  - managedclusters/accept
  verbs:
  - update
```
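Since this clusterrole has a custom name rather than one RHACM creates and manages, it still needs to be bound to the user explicitly, either with `oc adm policy add-cluster-role-to-user open-cluster-management:cluster-manager-test user1` as done earlier in the thread, or with a binding along these lines (the binding name is illustrative):
```
# Illustrative binding of the clusterrole above to user1; the binding name is arbitrary.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: open-cluster-management:cluster-manager-test:user1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: open-cluster-management:cluster-manager-test
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1
```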
Hi, cluster-manager-admin is different since it is supposed to manage all managedclusters. I do not think we ship clusterroles that cover this use case yet. The reason is that such a clusterrole needs the resourceName in it, which the system cannot know in advance since the managedcluster is not created yet. We will improve the documentation to explain how this can be achieved.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Advanced Cluster Management for Kubernetes version 2.2 images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2021:0729