Description of problem:
While creating an NFS export, the MDS caps path for the NFS client gets changed based on the cmount path. The caps are for a file system and should not get changed based on export creation using subvolumes.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create an NFS export without specifying a cmount path.

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph auth ls | grep fs3
mds.fs3.ceph-spunadikar-rbuham-node1-installer.lnyqem
mds.fs3.ceph-spunadikar-rbuham-node3.weyyyj

Created an export using a subvolume from fs3:

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph nfs export create cephfs cganesha "/fs3/share1" fs3 --path /volumes/_nogroup/fs3svol1/286f20a5-d2a3-483f-9753-4db5dfb23731
{
    "bind": "/fs3/share1",
    "cluster": "cganesha",
    "fs": "fs3",
    "mode": "RW",
    "path": "/volumes/_nogroup/fs3svol1/286f20a5-d2a3-483f-9753-4db5dfb23731"
}

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph auth ls | grep fs3
mds.fs3.ceph-spunadikar-rbuham-node1-installer.lnyqem
mds.fs3.ceph-spunadikar-rbuham-node3.weyyyj
client.nfs.cganesha.fs3
        caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3
client.nfs.cganesha.fs3
        key: AQDTpu5le/B8FxAAnswuYv2EJYzuoYjI2DRwmA==
        caps: [mds] allow rw path=/
        caps: [mon] allow r
        caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3

Note: See the caps for mds: "caps: [mds] allow rw path=/"

2. Create another export with a different cmount path.

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44 --cmount_path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
Error EPERM: Failed to update caps for nfs.cganesha.fs3: updated caps for client.nfs.cganesha.fs3

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44 --cmount_path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
{
    "bind": "/fs3/share2",
    "cluster": "cganesha",
    "fs": "fs3",
    "mode": "RW",
    "path": "/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44"
}

Change in auth:

[ceph: root@ceph-spunadikar-rbuham-node1-installer /]# ceph auth ls | grep fs3
mds.fs3.ceph-spunadikar-rbuham-node1-installer.lnyqem
mds.fs3.ceph-spunadikar-rbuham-node3.weyyyj
client.nfs.cganesha.fs3
        caps: [mds] allow rw path=/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
        caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3
client.nfs.cganesha.fs3
        key: AQDTpu5le/B8FxAAnswuYv2EJYzuoYjI2DRwmA==
        caps: [mds] allow rw path=/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44
        caps: [mon] allow r
        caps: [osd] allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3

Note: See the changed caps for mds: "caps: [mds] allow rw path=/volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44"

This results in the earlier-defined export becoming unavailable.

Expected results:
The MDS caps should not get changed, irrespective of the cmount path given while creating the NFS export.
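For reference, the original caps could presumably be restored manually with "ceph auth caps", reusing the cap strings captured above (illustrative recovery step only, not verified as part of this report):

ceph auth caps client.nfs.cganesha.fs3 mds 'allow rw path=/' mon 'allow r' osd 'allow rw pool=.nfs namespace=cganesha, allow rw tag cephfs data=fs3'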
cmount_path = / should always work. I agree with Avan that perhaps we should just leave the code alone for now, but emphasize that cmount_path must always be "/". If you only ever have one export, does it work to have cmount_path == path?
If you have only one export for a CephFS, then any cmount path will work. For multiple exports on a CephFS, the cmount path must be "/".
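For example, sticking with the commands already shown in the description (illustration only, same subvolume paths as above), two exports on fs3 coexist fine as long as neither overrides the default cmount_path of "/":

ceph nfs export create cephfs cganesha "/fs3/share1" fs3 --path /volumes/_nogroup/fs3svol1/286f20a5-d2a3-483f-9753-4db5dfb23731
ceph nfs export create cephfs cganesha "/fs3/share2" fs3 --path /volumes/_nogroup/fs3svol2/a96f73b5-8752-437e-ba6a-fbb7e2f36c44

Both exports then share the single client.nfs.cganesha.fs3 client, and its MDS cap stays "allow rw path=/".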
Undoing the cmount adjustments implemented in 7.0 might reintroduce the memory consumption issue resolved in 7.0 (https://bugzilla.redhat.com/show_bug.cgi?id=2236325#c20). Shouldn't we proceed with comment #29 and keep cmount_path set to the default "/"?
I don't think we're proposing to remove the introduction of cmount_path to allow sharing of CephFS clients. What I understand we're talking about removing is the ceph admin command changes that allow specifying cmount_path when creating an export, whose purpose was to allow creating exports that did NOT share the CephFS client. As long as the end result is that we still default to cmount_path = "/" for all exports, and thus share the CephFS client across all exports, we preserve the reduction in memory use that was critical for 7.0. If my understanding of the proposal is wrong, please describe in more detail which patches are proposed for revert.
OK, I'm not sure anything else is necessary from me for now.
If we transition this BZ to ON_QA, it suggests that the problem outlined in this BZ has been resolved; however, that is not the case. Considering comment #47, could we instead mark this issue as NOTABUG, given that cmount_path = "/" is the default recommended path, and with that the issue reported in this BZ will not be observed? Transitioning this BZ to ON_QA without addressing the reported problem creates a misleading representation of the issue.
If the admin/end user is not restricted by code from passing a cmount_path other than "/", this issue may lead to inaccessible NFS exports. So in my opinion, this should not be marked as NOTABUG; this bug needs to be fixed. A quicker fix would be to error out when someone provides a cmount_path other than "/", so that the end user cannot create any NFS export with a cmount_path other than "/".
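Something along these lines would be enough; this is purely an illustrative sketch of the guard I mean, not the actual mgr/nfs code (the helper name and error text are made up):

# Illustrative sketch only: reject any non-default cmount_path at export creation.
def validate_cmount_path(cmount_path):
    """Allow only the default "/" until per-export cmount paths are supported safely."""
    if cmount_path not in (None, "/"):
        raise ValueError(
            "cmount_path other than '/' is not supported; it would rewrite the "
            "MDS caps shared by every export of this file system")
    return "/"

With a guard like that, "ceph nfs export create ... --cmount_path <anything but '/'>" fails up front instead of silently rewriting the caps used by existing exports.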
Yea, release notes work. Actually, to what extent do we even document the parameter in the product? It is documented in the Ganesha man pages.
Yea, that looks perfect. And yea, changing any path that wasn't "/" to "/" sounds like a good safety measure. Thanks for buttoning this up.
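On the export-create path that could look something like this (again only a sketch with a hypothetical helper name, not the actual change):

# Illustrative sketch only: coerce any user-supplied cmount_path back to the default "/".
def normalize_cmount_path(cmount_path, log=None):
    if cmount_path not in (None, "/") and log:
        log.warning("cmount_path %r is not supported; falling back to '/'", cmount_path)
    return "/"

Coercing instead of erroring keeps export creation working even if a stray cmount_path slips through.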
(In reply to Akash Raj from comment #58)
> Hi Avan.
>
> Please provide the doc type and doc text. This is being requested to be
> added in the 7.1 release notes.
>
> Thanks.

Yes, done!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:3925