Bug 2268996 - CAPS for mds changes as per cmount path in NFS export
Summary: CAPS for mds changes as per cmount path in NFS export
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 7.0
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 7.1
Assignee: avan
QA Contact: Manisha Saini
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-03-11 12:38 UTC by Sachin Punadikar
Modified: 2024-06-13 14:29 UTC (History)
13 users

Fixed In Version: ceph-18.2.1-171.el9cp
Doc Type: Bug Fix
Doc Text:
Cause: The export user_id is generated from only the cluster_id and file system name, whereas the cephx key generated for it also takes the cmount_path into account. If the cmount_path value changes, this causes a CAPS issue.
Consequence: While creating an NFS export, the MDS CAPS path changes based on the cmount_path. The CAPS are for a file system and should not change because an export is created using sub-volumes.
Fix: Restrict the user from changing cmount_path and keep it defaulted to '/' for all exports.
Result: There are no CAPS issues for the user_id and cephx key pair generated for exports.
Clone Of:
Environment:
Last Closed: 2024-06-13 14:29:14 UTC
Embargoed:
gfarnum: needinfo-


Attachments


Links
Ceph Project Bug Tracker 63377 (last updated 2024-04-01 05:50:10 UTC)
Red Hat Issue Tracker RHCEPH-8486 (last updated 2024-03-11 12:38:59 UTC)
Red Hat Product Errata RHSA-2024:3925 (last updated 2024-06-13 14:29:17 UTC)

Description Sachin Punadikar 2024-03-11 12:38:27 UTC
Description of problem:
While creating an NFS export, the MDS CAPS path for the export's cephx user changes based on the cmount path. The CAPS are granted for a file system and should not change because an export is created using sub-volumes.

Version-Release number of selected component (if applicable):


How reproducible: Always


Steps to Reproduce:
1. Create an NFS export without specifying a cmount path

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph auth ls | grep fs3
mds.fs3.ceph-spunadikar-rbuham-node1-installer.lnyqem
mds.fs3.ceph-spunadikar-rbuham-node3.weyyyj

Created an export using a sub-volume from fs3:
[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph nfs export create cephfs cganesha "/fs3/share1" fs3 --path /volumes/_nogroup/fs3svol1/286f20a5-d2a3-483f-9753-4db5dfb23731
{
  "bind": "/fs3/share1",
  "cluster": "cganesha",
  "fs": "fs3",
  "mode": "RW",
  "path": "/volumes/_nogroup/fs3svol1/286f20a5-d2a3-483f-9753-4db5dfb23731"
}

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph auth ls | grep fs3
mds.fs3.ceph-spunadikar-rbuham-node1-installer.lnyqem
mds.fs3.ceph-spunadikar-rbuham-node3.weyyyj
client.nfs.cganesha.fs3
	caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3

client.nfs.cganesha.fs3
	key: AQDTpu5le/B8FxAAnswuYv2EJYzuoYjI2DRwmA==
	caps: [mds] allow rw path=/
	caps: [mon] allow r
	caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3

Note : See the caps for mds "caps: [mds] allow rw path=/"
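
For a more targeted check than grepping the full "ceph auth ls" output, the caps of the export's backing cephx user and the stored export definition can be inspected directly. This is a minimal sketch using the names from this transcript (cluster cganesha, file system fs3, pseudo path /fs3/share1); output is not reproduced here.

# Show the key and caps of the cephx user backing the fs3 exports
ceph auth get client.nfs.cganesha.fs3

# List the exports in the cganesha NFS cluster and show one export's definition
ceph nfs export ls cganesha
ceph nfs export info cganesha /fs3/share1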

2. Create another export with a different cmount path

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44 --cmount_path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
Error EPERM: Failed to update caps for nfs.cganesha.fs3: updated caps for client.nfs.cganesha.fs3
[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44 --cmount_path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
{
  "bind": "/fs3/share2",
  "cluster": "cganesha",
  "fs": "fs3",
  "mode": "RW",
  "path": "/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44"
}

Change in auth 
[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph auth ls | grep fs3
mds.fs3.ceph-spunadikar-rbuham-node1-installer.lnyqem
mds.fs3.ceph-spunadikar-rbuham-node3.weyyyj
client.nfs.cganesha.fs3
	caps: [mds] allow rw path=/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
	caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3
client.nfs.cganesha.fs3
	key: AQDTpu5le/B8FxAAnswuYv2EJYzuoYjI2DRwmA==
	caps: [mds] allow rw path=/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
	caps: [mon] allow r
	caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3

Note : See the changed caps for mds "caps: [mds] allow rw path=/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44"

This results in the previously defined export becoming unavailable.
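
To confirm the mismatch, compare the path stored in the first export against the MDS caps now granted to the shared cephx user; the commands below are a sketch using the names from this transcript. Presumably (not verified in this report), recreating the exports without --cmount_path, so that it defaults to "/", brings the MDS caps back to path=/.

# Path the first export still expects to serve
ceph nfs export info cganesha /fs3/share1

# MDS caps actually granted to the shared user (now limited to the second sub-volume)
ceph auth get client.nfs.cganesha.fs3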


Expected results: The MDS caps should not change, irrespective of the cmount path specified while creating an NFS export.

Comment 28 Frank Filz 2024-04-10 14:49:50 UTC
cmount_path = / should always work.

I agree with Avan that perhaps we should just leave the code alone for now, but emphasize that cmount_path must always be /.

Does it work to have cmount_path == path if you only ever have one export?

Comment 29 Sachin Punadikar 2024-04-10 14:53:44 UTC
If you have only 1 export for a CephFS, then any cmount path will work.
For multiple exports on a CephFS, the cmount path must be "/". An example of that pattern follows.
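
To illustrate the recommended pattern, the sketch below creates multiple exports on the same CephFS without passing --cmount_path, so it stays at its default of "/"; the exports then share one cephx user whose MDS caps remain at path=/. Cluster, file system, and sub-volume paths are taken from this report.

# Both exports on fs3, cmount_path left at its default of "/"
ceph nfs export create cephfs cganesha "/fs3/share1" fs3 --path /volumes/_nogroup/fs3svol1/286f20a5-d2a3-483f-9753-4db5dfb23731
ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44

# The shared user should keep: caps [mds] allow rw path=/
ceph auth get client.nfs.cganesha.fs3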

Comment 34 Manisha Saini 2024-04-15 05:50:50 UTC
Undoing the cmount adjustments implemented in version 7.0 might reintroduce the memory consumption issue resolved in 7.0; see https://bugzilla.redhat.com/show_bug.cgi?id=2236325#c20.

Shouldn't we proceed with comment #29 and keep cmount set to the default "/"?

Comment 36 Frank Filz 2024-04-15 14:55:23 UTC
I don't think we're proposing removing the introduction of cmount_path to allow sharing of cephfs clients. What I understand we are talking about removing is the cephadm changes that allow specifying cmount_path when creating an export, which had the purpose of allowing creation of exports that DID NOT share the cephfs client.

So long as the end result is we still default to cmount_path = "/" for all exports and thus share the cephfs client for all exports, we preserve the reduction in memory use that was critical for 7.0.

If my understanding of the proposal is wrong, please describe in more detail what patches are proposed to revert.
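
For reference, one way to check that exports share a single cephfs client is to compare the fsal section of each export definition; with cmount_path defaulted to "/", both should reference the same user_id (nfs.cganesha.fs3 in this transcript). This is a sketch using the names from this report; exact output fields are not reproduced here.

# Each export's fsal block should carry the same user_id
ceph nfs export info cganesha /fs3/share1
ceph nfs export info cganesha /fs3/share2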

Comment 40 Frank Filz 2024-04-16 21:23:29 UTC
OK, I'm not sure anything else is necessary from me for now.

Comment 48 Manisha Saini 2024-04-22 09:32:22 UTC
If we transition this BZ to ON_QA, it suggests that the problem outlined in this BZ has been resolved.
However, this is not the situation.
Considering Comment #47, could we consider marking this issue as NOTABUG, given that cmount_path = "/" is the default recommended path? With that, the issue reported in this BZ would not be observed.


Transitioning this BZ to ON_QA without addressing the problem reported in this BZ creates a misleading representation of the issue.

Comment 49 Sachin Punadikar 2024-04-22 09:46:03 UTC
If the admin/end user is not restricted in code from passing a cmount_path other than "/", this issue may lead to inaccessible NFS exports.
So in my opinion, this should not be marked as NOTABUG. This bug needs to be fixed.

A quicker fix would be to error out when someone provides a cmount_path other than "/", so the end user will not be able to create any NFS export with a cmount_path other than "/".
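
With such a restriction in place, a create call that passes a non-default cmount_path would be rejected; the sketch below shows the kind of invocation that should fail, with the exact error text depending on the implementation. Assuming the check only rejects values other than "/", explicitly passing "/" would still be accepted.

# Expected to be rejected once cmount_path is restricted to "/"
ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44 --cmount_path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44

# Explicitly passing the default would still be accepted
ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44 --cmount_path /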

Comment 52 Frank Filz 2024-04-29 19:33:31 UTC
Yea, release notes work. Actually, to what extent do we even document the parameter in the product? It is documented in the Ganesha man pages.

Comment 54 Frank Filz 2024-05-02 16:55:59 UTC
Yea, that looks perfect. And yea, changing any path that wasn't "/" to "/" sounds like a good safety measure.

Thanks for buttoning this up.

Comment 59 avan 2024-06-05 07:45:43 UTC
(In reply to Akash Raj from comment #58)
> Hi Avan.
> 
> Please provide the doc type and doc text. This is being requested to be
> added in the 7.1 release notes.
> 
> Thanks.

yes, done!

Comment 60 errata-xmlrpc 2024-06-13 14:29:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

