Description of problem:
==============
Currently, if the user deploys NFS-Ganesha on a Ceph cluster and creates an export from it, mounting that export on a Linux client with version v3 fails by default. Additional changes must be made manually in the ganesha.conf file and the volume export file to enable the v3 mount. Given that we're extending support for protocol v3 in the upcoming IBM 7.1 release, it would be logical to include these prerequisites in the cephadm codebase. This would eliminate the need for customers to perform these manual steps (a sketch of the current manual workaround is included after this report).

1. Default ganesha.conf --> To enable the v3 mount, the user has to manually fetch the ganesha config, add additional params (Protocols = 3, 4; and mount_path_pseudo = true;), and apply the updated config through the Ceph orchestrator
====================
cat ganesha.conf
# {{ cephadm_managed }}
NFS_CORE_PARAM {
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 3, 4;             --------> Added "3"
        mount_path_pseudo = true;     --------> Added additional param to enable pseudo mount
        NFS_Port = {{ port }};
{% if bind_addr %}
        Bind_addr = {{ bind_addr }};
{% endif %}
{% if haproxy_hosts %}
        HAProxy_Hosts = {{ haproxy_hosts|join(", ") }};
{% endif %}
}

NFSv4 {
        Delegations = false;
        RecoveryBackend = 'rados_cluster';
        Minor_Versions = 1, 2;
}

RADOS_KV {
        UserId = "{{ user }}";
        nodeid = "{{ nodeid }}";
        pool = "{{ pool }}";
        namespace = "{{ namespace }}";
}

RADOS_URLS {
        UserId = "{{ user }}";
        watch_url = "{{ url }}";
}

RGW {
        cluster = "ceph";
        name = "client.{{ rgw_user }}";
}

LOG {
        COMPONENTS {
                ALL = FULL_DEBUG;
        }
}

%url    {{ url }}

----------------------------------------------

2. Export file --> The user has to fetch the export file, edit it, and reapply the conf file to achieve the v3 mount

cat /var/lib/ceph/export.conf
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.cephfs-nfs.cephfs"
  },
  "path": "/",
  "protocols": [
    3,     -----> Added "3"
    4
  ],
  "pseudo": "/export_0",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}

Version-Release number of selected component (if applicable):
=============
RHCS 7.1

How reproducible:
=========
1/1

Steps to Reproduce:
============
1. Deploy a Ganesha cluster on Ceph
2. Create an export
3. Mount the export on a RHEL client with vers=3

Actual results:
========
The mount fails unless the ganesha.conf and export file changes are made manually to enable the v3 mount.

Expected results:
==========
The mount should succeed with vers=3 without any manual prerequisites.

Additional info:
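For reference, a rough sketch of the manual workaround described above, assuming the cluster is named "cephfs-nfs" and the export pseudo path is /export_0 as in the example; the exact config contents would need to match the deployed template:

    # 1. Supply a user-defined Ganesha config that enables NFSv3 and pseudo mounts
    #    (mirrors the Protocols / mount_path_pseudo template changes shown above).
    cat > /tmp/nfsv3.conf <<'EOF'
    NFS_CORE_PARAM {
        Protocols = 3, 4;
        mount_path_pseudo = true;
    }
    EOF
    ceph nfs cluster config set cephfs-nfs -i /tmp/nfsv3.conf

    # 2. Fetch the export, add protocol 3 to "protocols", and reapply it.
    ceph nfs export info cephfs-nfs /export_0 > /tmp/export_0.json
    # ... edit /tmp/export_0.json so "protocols" reads [3, 4] ...
    ceph nfs export apply cephfs-nfs -i /tmp/export_0.json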
ganesha.conf and export.conf are created by cephadm and/or the Ceph Dashboard. Will clone this BZ to the Ceph Dashboard as necessary.
Hi Adam,

Given that RHCS 7.1 is going to support the NFS v3 protocol, when can we expect the fix for this BZ? This BZ is important for usability. Otherwise, a documentation BZ will be required to capture all the changes needed to enable v3 support in the documentation guides.
Verified BZ with

# ceph --version
ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)

# ceph versions
{
    "mon": {
        "ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)": 2
    },
    "osd": {
        "ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)": 18
    },
    "mds": {
        "ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)": 2
    },
    "rgw": {
        "ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)": 2
    },
    "overall": {
        "ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)": 27
    }
}

1. Created a Ganesha cluster

[ceph: root@ceph-mani-l5tjk3-node1-installer /]# ceph nfs cluster ls
[]
[ceph: root@ceph-mani-l5tjk3-node1-installer /]# ceph nfs cluster create nfsganesha "ceph-mani-l5tjk3-node1-installer ceph-mani-l5tjk3-node2"
[ceph: root@ceph-mani-l5tjk3-node1-installer /]# ceph nfs cluster ls
[
  "nfsganesha"
]

2. Validate the default ganesha.conf file ===> It contains the v3 protocol enable params by default

[ceph: root@ceph-mani-l5tjk3-node1-installer /]# cat /usr/share/ceph/mgr/cephadm/templates/services/nfs/ganesha.conf.j2
# {{ cephadm_managed }}
NFS_CORE_PARAM {
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 3, 4;             ====> Added by default in ganesha.conf file
        mount_path_pseudo = true;
        NFS_Port = {{ port }};
{% if bind_addr %}
        Bind_addr = {{ bind_addr }};
{% endif %}
{% if haproxy_hosts %}
        HAProxy_Hosts = {{ haproxy_hosts|join(", ") }};
{% endif %}
}

NFSv4 {
        Delegations = false;
        RecoveryBackend = 'rados_cluster';
        Minor_Versions = 1, 2;
}

RADOS_KV {
        UserId = "{{ user }}";
        nodeid = "{{ nodeid }}";
        pool = "{{ pool }}";
        namespace = "{{ namespace }}";
}

RADOS_URLS {
        UserId = "{{ user }}";
        watch_url = "{{ url }}";
}

RGW {
        cluster = "ceph";
        name = "client.{{ rgw_user }}";
}

%url    {{ url }}

3. Create a Ganesha export

[ceph: root@ceph-mani-l5tjk3-node1-installer /]# ceph nfs export create cephfs nfsganesha /ganesha1 cephfs --path=/
{
    "bind": "/ganesha1",
    "cluster": "nfsganesha",
    "fs": "cephfs",
    "mode": "RW",
    "path": "/"
}
[ceph: root@ceph-mani-l5tjk3-node1-installer /]# ceph nfs export info nfsganesha /ganesha1
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "nfsganesha",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.nfsganesha.cephfs"
  },
  "path": "/",
  "protocols": [
    3,     =========> v3 protocol added by default at the time of export creation
    4
  ],
  "pseudo": "/ganesha1",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}

4. Mount the export on a client via the v3 protocol

[root@ceph-mani-l5tjk3-node7 ~]# cd /mnt/
[root@ceph-mani-l5tjk3-node7 mnt]# ls
[root@ceph-mani-l5tjk3-node7 mnt]# mkdir ganesha
[root@ceph-mani-l5tjk3-node7 mnt]# mount -t nfs -o vers=3 10.0.206.62:/ganesha1 /mnt/ganesha/
Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.
[root@ceph-mani-l5tjk3-node7 mnt]# cd /mnt/ganesha/
[root@ceph-mani-l5tjk3-node7 ganesha]# touch f1
10.0.206.62:/ganesha1 on /mnt/ganesha type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.206.62,mountvers=3,mountport=46348,mountproto=udp,local_lock=none,addr=10.0.206.62)

The v3 mount works as expected without modifying the ganesha.conf file or the export file. Moving this BZ to the verified state.
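As an optional client-side sanity check (not part of the verification run above; the server IP is the one used in step 4), the NFSv3 and MOUNT services can be confirmed before attempting the vers=3 mount:

    # Confirm the Ganesha server registers NFS v3 and mountd with rpcbind,
    # and lists the export over the MOUNT protocol.
    rpcinfo -p 10.0.206.62      # should list nfs version 3 and mountd
    showmount -e 10.0.206.62    # should list /ganesha1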
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:3925