Bug 1898988

Summary: [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: rook
Reporter: daniel parkes <dparkes>
Assignee: Parth Arora <paarora>
QA Contact: Neha Berry <nberry>
Status: CLOSED ERRATA
Severity: high
Priority: high
Version: 4.6
Target Release: ODF 4.10.0
Fixed In Version: 4.10.0-113
Hardware: All
OS: All
Keywords: AutomationBackLog, FutureFeature
Doc Type: No Doc Update
Type: Bug
Bug Blocks: 2069319
Last Closed: 2022-04-13 18:49:40 UTC
CC: amohan, bniver, ddomingu, dparkes, eduen, eharney, etamir, gfidente, gmeno, hchiramm, madam, maugarci, mrajanna, muagarwa, nberry, ocs-bugs, odf-bz-bot, owasserm, paarora, pcfe, pdonnell, pgrist, sabose, shan, shilpsha, sostapov, srai, tnielsen, vavuthu

Comment 3 Michael Adam 2020-11-20 09:49:07 UTC
RFE ==> moving to 4.7, as we're in RC stage for 4.6

Comment 4 Michael Adam 2020-11-20 09:50:16 UTC
@Travis, is this something that we would do in rook? (+ maybe exposing through ocs-operator?)

Comment 5 Sébastien Han 2020-11-26 13:54:52 UTC
Daniel, just to clarify: the block pools are not per OCP cluster.
The ceph-csi keys can consume any pool, so effectively each ceph-csi key can access another OCP cluster's pool.

We do not enforce a particular pool access per OCP cluster.
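
For illustration (key and pool names here are made up, not what the export script actually generates): the difference boils down to whether the OSD cap carries a pool= restriction.

  # unrestricted: this key can reach any RBD pool in the cluster
  ceph auth get-or-create client.csi-rbd-shared mon 'profile rbd' osd 'profile rbd'
  # restricted: this key can only reach the pool assigned to one OCP cluster
  ceph auth get-or-create client.csi-rbd-cluster-a mon 'profile rbd' osd 'profile rbd pool=ocp-cluster-a'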

For CephFS, the "tenant" as you mentioned can **NOT** access the CephFS mounts of another tenant, simply because the tenant (user application) only sees a mountpoint (which lives in the container's own namespace).
Maybe I'm missing something, so can you clarify how a tenant would access another tenant's mount?

Thanks.

If we decide to do something, we would need to rework the python script that does all of that.
Assigning to Arun, but no devel_ack for now. Let's keep the discussion going.

Comment 6 daniel parkes 2020-11-27 06:46:47 UTC
Thanks Sebastien,

I see what you mean by 'the tenant (user application) only sees a mountpoint (which lives in the container's own namespace)'. I was thinking of a tenant as a full OCP installation, not just a deployed application. Maybe I'm going overboard here, but let me try to explain what I mean with an example.

Let's say we have an OSP deployment with 2 OSP projects/tenants, where each tenant is a different team in the org. Each of these tenants/teams installs an OCP cluster in their project, and they also deploy OCS using external mode against a single RHCS cluster (operated by a storage team).

Each OCP team asks the RHCS storage team to run the python script to generate the secrets for their individual OCS deployment, and the storage team runs the script and returns the keys/secrets to the OCP team. Because the keys for the RBD and CephFS provisioners are the same for both teams, team A, with its keys and access to the Ceph public network, is able to mount and access data from an RBD volume or CephFS subvolume belonging to team B.

Going into the CephFS case: if we used a dedicated key and subvolumegroup per OCP cluster, we could set caps on the path of the dedicated subvolumegroup (for example path=/cephfs/CSI-cluster), so that the CephFS provisioner would only have access to the subvolumes in its own subvolumegroup and, with that key, wouldn't be able to access other tenants' data.
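
As a rough sketch of what that could look like on the RHCS side (the filesystem name "cephfs" and the group/client names are hypothetical): create a per-cluster subvolumegroup and a key whose MDS caps are pinned to its path.

  ceph fs subvolumegroup create cephfs csi-cluster-a
  ceph fs authorize cephfs client.csi-cephfs-cluster-a /volumes/csi-cluster-a rw

The generated key then carries an MDS cap restricted to path=/volumes/csi-cluster-a, so through CephFS it cannot mount or traverse subvolumes created under another cluster's group.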

As you mentioned, the RBD provisioner keys are also the same. I hadn't noticed, because with RBD we can create a dedicated pool, but it's true that the provisioner has the same key, so with RBD we are in a similar situation: if we had the option to use a dedicated key and RBD pool per OCP cluster, we could use the pool= cap so the tenant only has access to the volumes in the RBD pool assigned to its tenant/OCP cluster. Maybe a pool per cluster is overkill, and in the long run it would make sense to use RBD namespaces to segregate between different projects/OCP clusters.
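
Again just as an illustrative sketch (pool, namespace and client names are made up): the same idea with a pool= cap, optionally narrowed further with an RBD namespace.

  ceph osd pool create ocp-cluster-a
  rbd pool init ocp-cluster-a
  rbd namespace create --pool ocp-cluster-a --namespace team-a
  ceph auth get-or-create client.csi-rbd-cluster-a mon 'profile rbd' osd 'profile rbd pool=ocp-cluster-a namespace=team-a'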

Thanks.

Comment 8 Sébastien Han 2021-02-03 09:20:15 UTC
Thanks, Daniel, for elaborating more on that issue.
Indeed, this sounds good as an improvement for OpenShift Dedicated, so 4.8 is a reasonable target.

Comment 11 Travis Nielsen 2021-05-11 15:06:06 UTC
Arun can you take a look at this?

Comment 12 Travis Nielsen 2021-05-17 15:51:43 UTC
Arun any update?

Comment 14 Travis Nielsen 2021-06-07 16:00:52 UTC
Moving to 4.9 since it still needs more discussion about the rook changes

Comment 15 Humble Chirammal 2021-06-23 13:56:21 UTC
>Each OCP team asks the RHCS storage team to run the python script to generate the secrets for their individual OCS deployment, and the storage team runs the script and returns the keys/secrets to the OCP team. Because the keys for the RBD and CephFS provisioners are the same for both teams, team A, with its keys and access to the Ceph public network, is able to mount and access data from an RBD volume or CephFS subvolume belonging to team B.

Hmmm, this is equivalent to asking for 'admin' access, and if the cluster admin is fine with giving that access, CSI is kept aside here, i.e. CSI cannot really control the client's access to the volumes, backend cluster pools, etc. So this may not be the right concern.

However, having different subvolumegroup access wrt multi-tenancy is a good idea. At a later stage this could also help us take a (volume) 'group'-level snapshot (an upcoming feature in kube) and really work with multi-tenancy in place. So we need that functionality anyway, for better segregation of access and for proper/restricted isolation of the storage chunk in the backend.
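
As a rough sketch of how that segregation looks on the Ceph side (the filesystem, group and subvolume names here are just examples): each tenant gets its own group, and the subvolumes the provisioner creates land under that group's path.

  ceph fs subvolumegroup create cephfs team-a
  ceph fs subvolume create cephfs pvc-vol-1 --group_name team-a
  ceph fs subvolume getpath cephfs pvc-vol-1 --group_name team-a
  # -> /volumes/team-a/pvc-vol-1/<uuid>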

Seb, one side question I have here: do we have a specific capability available for CephFS to restrict access to a 'subvolumegroup' alone?

Comment 17 arun kumar mohan 2021-06-25 07:28:44 UTC
Will take this once we have the necessary details.

Comment 18 Travis Nielsen 2021-07-26 15:56:47 UTC
Arun, what details do we need?

Comment 19 Travis Nielsen 2021-09-13 15:42:51 UTC
Arun?

Comment 20 Travis Nielsen 2021-09-20 15:17:06 UTC
Moving the RFE to 4.10

Comment 21 Travis Nielsen 2021-09-27 15:19:14 UTC
Subham can you take a look at this one? Thanks!

Comment 22 arun kumar mohan 2021-10-07 03:42:25 UTC
Sorry Travis. Lost track of this BZ. Will help Subham regarding the fix.

Comment 23 Sébastien Han 2021-12-22 15:43:00 UTC
Relates to https://issues.redhat.com/browse/RHSTOR-2317

Comment 24 Subham Rai 2022-01-07 06:23:04 UTC
Assigned it to Parth as he has already done the changes upstream.

Comment 35 Mudit Agarwal 2022-03-03 09:57:12 UTC
Please add doc text

Comment 43 errata-xmlrpc 2022-04-13 18:49:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372

Comment 44 Red Hat Bugzilla 2023-12-08 04:25:04 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days