Bug 1898988
| Summary: | [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster. | |||
|---|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | daniel parkes <dparkes> | |
| Component: | rook | Assignee: | Parth Arora <paarora> | |
| Status: | CLOSED ERRATA | QA Contact: | Neha Berry <nberry> | |
| Severity: | high | Docs Contact: | ||
| Priority: | high | |||
| Version: | 4.6 | CC: | amohan, bniver, ddomingu, dparkes, eduen, eharney, etamir, gfidente, gmeno, hchiramm, madam, maugarci, mrajanna, muagarwa, nberry, ocs-bugs, odf-bz-bot, owasserm, paarora, pcfe, pdonnell, pgrist, sabose, shan, shilpsha, sostapov, srai, tnielsen, vavuthu | |
| Target Milestone: | --- | Keywords: | AutomationBackLog, FutureFeature | |
| Target Release: | ODF 4.10.0 | |||
| Hardware: | All | |||
| OS: | All | |||
| Whiteboard: | ||||
| Fixed In Version: | 4.10.0-113 | Doc Type: | No Doc Update | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 2069319 (view as bug list) | Environment: | ||
| Last Closed: | 2022-04-13 18:49:40 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 2069319 | |||
Comment 3
Michael Adam
2020-11-20 09:49:07 UTC
@Travis, is this something that we would do in rook? (+ maybe exposing it through ocs-operator?)

Daniel, just to clarify: the block pools are not per OCP cluster. The ceph-csi keys can consume any pool, so effectively each ceph-csi key can access another OCP cluster's pool; we do not enforce a particular pool access per OCP cluster. For CephFS, the "tenant" as you mentioned can **NOT** access the cephfs mounts of another tenant, simply because the tenant (user application) only sees a mountpoint (which lives in the container's own namespace). Maybe I'm missing something, so can you clarify how a tenant would access another tenant's mount? Thanks. If we decided to do something, we would need to rework the python script that does all of that. Assigning to Arun, but no devel_ack for now. Let's keep the discussion going.

Thanks Sebastien, I see what you mean by "the tenant (user application) only sees a mountpoint (which lives in the container's own namespace)". I was thinking of a tenant as a full OCP installation, not just a deployed application. Maybe I'm going overboard here, but let me try to explain what I mean with an example.

Let's say we have an OSP deployment with 2 OSP projects/tenants, each tenant being a different team in the org. Each of these tenants/teams installs an OCP cluster in their project, and they also deploy OCS using external mode against a single RHCS cluster (operated by a storage team). Each OCP team asks the RHCS storage team to run the python script to generate the secrets for their individual OCS deployment, and the storage team runs the script and returns the keys/secrets to the OCP team. Because the keys for the RBD and CephFS provisioners are the same for both teams, team A, with its keys and access to the ceph public network, is able to mount and access data from an RBD volume or CephFS subvolume of team B.

Going into the CephFS case: if we used a dedicated key and subvolumegroup per OCP cluster, we could set caps on the path of the dedicated subvolumegroup (for example path=/cephfs/CSI-cluster) so that the CephFS provisioner would only have access to the subvolumes in its own subvolumegroup and, with that key, wouldn't be able to access other tenants' data.

As you mentioned, the RBD provisioner keys are also the same. I hadn't noticed, because with RBD we can create a dedicated pool, but it's true that the provisioner has the same key, so with RBD we are in a similar situation. If we could have the option to use a dedicated key and RBD pool per OCP cluster, we could use the pool= cap so the tenant only has access to the volumes in the RBD pool assigned to its tenant/OCP cluster. Maybe a pool per cluster is overkill, and in the long run it would make sense to use rbd namespaces to segregate the different projects/OCP clusters. Thanks.

Thanks, Daniel, for elaborating more on that issue. Indeed, this sounds good as an improvement for OpenShift Dedicated, so 4.8 is a reasonable target. Arun, can you take a look at this?

Arun, any update? Moving to 4.9 since it still needs more discussion about the rook changes.

> Each OCP team asks the RHCS storage team to run the python script to generate the secrets for their individual OCS deployment, and the storage team runs the script and returns the keys/secrets to the OCP team. Because the keys for the RBD and CephFS provisioners are the same for both teams, team A, with its keys and access to the ceph public network, is able to mount and access data from an RBD volume or CephFS subvolume of team B.
Hmmm, this is equivalent to asking for 'admin' access, and if the cluster admin is fine with giving that access, CSI is kept aside here, i.e. CSI cannot really control the client's access to the volumes, backend cluster pools, etc. So this may not be the right concern.

However, having different subvolume group access with respect to multi-tenancy is a good idea. At a later stage this could also help us take a (volume) 'group' level snapshot (which is an upcoming feature in kube) and really work with multi-tenancy in place. So we need that functionality anyway, both for better segregation of access and for proper/restricted isolation of the storage chunk in the backend.

Seb, one side question I have here: do we have a specific capability available for CephFS to restrict access to a 'subvolumegroup' alone?
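For reference, the kind of per-cluster restriction discussed above can be expressed with CephFS path-based MDS caps and RBD pool-scoped OSD caps. The sketch below is only illustrative and is not the actual external-cluster python script; the client names, the `csi-<cluster>` subvolumegroup path under `/volumes`, and the per-cluster pool name are assumptions made for the example.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of per-OCP-cluster restricted Ceph users.

Assumes the default CephFS subvolumegroup layout under /volumes/<group>
and a dedicated RBD pool per OCP cluster; names are illustrative only.
"""
import subprocess


def create_restricted_cephfs_user(cluster_name: str, fs_name: str = "cephfs") -> str:
    """Create a CephFS CSI user whose MDS cap is limited to one subvolumegroup path."""
    group_path = f"/volumes/csi-{cluster_name}"  # assumed subvolumegroup path
    out = subprocess.run(
        [
            "ceph", "auth", "get-or-create", f"client.csi-cephfs-{cluster_name}",
            "mon", "allow r",
            "mgr", "allow rw",
            "mds", f"allow rw path={group_path}",
            "osd", f"allow rw tag cephfs data={fs_name}",
        ],
        check=True, capture_output=True, text=True,
    )
    return out.stdout


def create_restricted_rbd_user(cluster_name: str, pool: str) -> str:
    """Create an RBD CSI user limited to one pool.

    If rbd namespaces were used instead of a pool per cluster, the OSD cap
    could be narrowed further (e.g. 'profile rbd pool=<pool> namespace=<ns>').
    """
    out = subprocess.run(
        [
            "ceph", "auth", "get-or-create", f"client.csi-rbd-{cluster_name}",
            "mon", "profile rbd",
            "osd", f"profile rbd pool={pool}",
        ],
        check=True, capture_output=True, text=True,
    )
    return out.stdout


if __name__ == "__main__":
    print(create_restricted_cephfs_user("ocp-team-a"))
    print(create_restricted_rbd_user("ocp-team-a", pool="ocp-team-a-rbd"))
```

With caps scoped like this, a key handed to one OCP cluster would only reach its own subvolumegroup path and its own RBD pool, which is essentially the isolation requested in this RFE.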
Will take this once we have the necessary details.

Arun, what details do we need? Arun? Moving the RFE to 4.10.

Subham, can you take a look at this one? Thanks!

Sorry Travis, lost track of this BZ. Will help Subham with the fix.

Relates to https://issues.redhat.com/browse/RHSTOR-2317

Assigned it to Parth as he has already done the changes upstream.

Please add doc text.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1372

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.