Bug 2220891
| Summary: | [NFS-Ganesha] NFS mount with vers=4.0 is failing on client | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Manisha Saini <msaini> |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED WORKSFORME | QA Contact: | Mohit Bisht <mobisht> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.0 | CC: | adking, cephqe-warriors, ffilz, kkeithle, mbenjamin, prprakas, vdas |
| Target Milestone: | --- | | |
| Target Release: | 7.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-10-06 17:22:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
What is the complete config?
In particular, I'm interested in what NFSv4 { Minor_Versions } is set to.

The default config created when the share is exported via NFS is:
----------------
[ceph: root@ceph-msaini-7plchm-node1-installer /]# ceph nfs export get cephfs-nfs /export
{
"access_type": "RW",
"clients": [],
"cluster_id": "cephfs-nfs",
"export_id": 1,
"fsal": {
"fs_name": "cephfs",
"name": "CEPH",
"user_id": "nfs.cephfs-nfs.1"
},
"path": "/",
"protocols": [
4
],
"pseudo": "/export",
"security_label": true,
"squash": "none",
"transports": [
"TCP"
]
}
[root@ceph-mani-6v5mr7-node5 mnt]# mount -t nfs -o vers=4.1,port=2049 10.0.209.39:/export /mnt/test/
[root@ceph-mani-6v5mr7-node5 mnt]# umount /mnt/test/
[root@ceph-mani-6v5mr7-node5 mnt]# mount -t nfs -o vers=4.0,port=2049 10.0.209.39:/export /mnt/test/
mount.nfs: Protocol not supported
[root@ceph-mani-6v5mr7-node5 mnt]#
--------------------------
Edited the export file to explicitly specify "4.0" in the protocols list, as below:
---------------------------
# ceph nfs export apply cephfs-nfs -i export1.conf
[
{
"pseudo": "/export",
"state": "updated"
}
]
# ceph nfs export get cephfs-nfs /export
{
"access_type": "RW",
"clients": [],
"cluster_id": "cephfs-nfs",
"export_id": 1,
"fsal": {
"fs_name": "cephfs",
"name": "CEPH",
"user_id": "nfs.cephfs-nfs.1"
},
"path": "/",
"protocols": [
4.0
],
"pseudo": "/export",
"security_label": true,
"squash": "none",
"transports": [
"TCP"
]
}
[root@ceph-mani-6v5mr7-node5 mnt]# mount -t nfs -o vers=4.1,port=2049 10.0.209.39:/export /mnt/test/
[root@ceph-mani-6v5mr7-node5 mnt]# umount /mnt/test/
[root@ceph-mani-6v5mr7-node5 mnt]# mount -t nfs -o vers=4.0,port=2049 10.0.209.39:/export /mnt/test/
mount.nfs: Protocol not supported
[root@ceph-mani-6v5mr7-node5 mnt]#
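As the following comments explain, editing the export's "protocols" field cannot enable 4.0, because that list carries only the major NFS version; the allowed minor versions come from the NFSv4 block of the cephadm-generated ganesha.conf on the daemon host. A minimal sketch of the line to check, using a hypothetical stand-in file for the real /var/lib/ceph/<fsid>/<nfs-daemon-name>/etc/ganesha/ganesha.conf:

```shell
# Hypothetical stand-in for the cephadm-generated ganesha.conf NFSv4 block.
cat > sample-ganesha.conf <<'EOF'
NFSv4 {
    Delegations = false;
    RecoveryBackend = 'rados_cluster';
    Minor_Versions = 1, 2;
}
EOF

# If Minor_Versions does not include 0, a vers=4.0 mount fails with
# "mount.nfs: Protocol not supported" regardless of the export's "protocols".
grep Minor_Versions sample-ganesha.conf
```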
Thanks for the EXPORT config, but there is more to the Ganesha config. I'm not sure how/where we set up the rest of the config. Of particular interest in this case is the Minor_Versions in the NFSv4 config block. I suspect it may not include 0. For NFSv4.0 to work, that would either have to be absent (so the default of 0,1,2 is applied), or 0 would have to be explicitly called out. Please make sure minor protocols 0, 1, and 2, i.e. 4.0, 4.1, and 4.2, are configured for each export. While only 4.1 (and later) is supported in the product, QE needs 4.0 enabled to run the pynfs test suite.

Also, how do we get the config other than exports from cephadm?

(In reply to Frank Filz from comment #5)
> Also, how do we get the config other than exports from ceph adm?

We write a ganesha conf to the host for the daemon. You'd have to just go read it off the host where the nfs daemon was deployed, from /var/lib/ceph/<fsid>/<nfs-daemon-name>/ e.g.

[root@vm-00 ~]# cat /var/lib/ceph/87a5129a-5284-11ee-a89e-525400398e54/nfs.foo.0.0.vm-00.tfsjep/etc/ganesha/ganesha.conf
# This file is generated by cephadm.
NFS_CORE_PARAM {
    Enable_NLM = false;
    Enable_RQUOTA = false;
    Protocols = 4;
    NFS_Port = 2049;
}

NFSv4 {
    Delegations = false;
    RecoveryBackend = 'rados_cluster';
    Minor_Versions = 1, 2;
}

RADOS_KV {
    UserId = "nfs.foo.0.0.vm-00.tfsjep";
    nodeid = "nfs.foo.0";
    pool = ".nfs";
    namespace = "foo";
}

RADOS_URLS {
    UserId = "nfs.foo.0.0.vm-00.tfsjep";
    watch_url = "rados://.nfs/foo/conf-nfs.foo";
}

RGW {
    cluster = "ceph";
    name = "client.nfs.foo.0.0.vm-00.tfsjep-rgw";
}

is that what you mean?

(In reply to Adam King from comment #6)
> is that what you mean?

This ganesha conf is generated by cephadm based off of this template https://github.com/ceph/ceph/blob/main/src/pybind/mgr/cephadm/templates/services/nfs/ganesha.conf.j2 so those minor versions are hardcoded in (unless going through this process https://docs.ceph.com/en/octopus/cephadm/monitoring/#using-custom-configuration-files but that's a bit more work). Do we need to add support for 4.0 to this ganesha conf?

Yes, for some testing we need to enable 4.0 via NFSv4 { Minor_Versions = 0,1,2; }
(In reply to Frank Filz from comment #8)
> Yes, for some testing we need to enable 4.0 via NFSv4 { Minor_Versions = 0,1,2; }

Is it safe to just have it enabled in general? If so, I can just adjust the template line to allow minor version 0 as well. If not, we need to make this configurable somehow.

(In reply to Adam King from comment #9)
> Is it safe to just have it enabled in general?

I'm pretty sure we still want to limit the product to 4.1+, we just need 4.0 for some testing, particularly pynfs. So some sort of configuration would be best. Of course that means documentation... which will require some word crafting.

(In reply to Frank Filz from comment #10)
> So some sort of configuration would be best.

Okay, since this is just for testing, we do have a way, not necessarily super user friendly but also not difficult, for QE to modify this template:

1) Grab the existing template from the container:

cephadm shell -- cat /usr/share/ceph/mgr/cephadm/templates/services/nfs/ganesha.conf.j2 > ganesha.conf.j2

2) Modify the minor versions in the template to get a modified template that allows minor version 0:

sed -e 's/Minor_Versions = 1, 2;/Minor_Versions = 0, 1, 2;/' ganesha.conf.j2 > modified-ganesha.conf.j2

3) Set the modified template to be what cephadm uses for the ganesha conf template:

cephadm shell --mount modified-ganesha.conf.j2 -- ceph config-key set mgr/cephadm/services/nfs/ganesha.conf -i /mnt/modified-ganesha.conf.j2

4) Redeploy the nfs daemons. In my case, the service was "nfs.foo", but you can replace it with your nfs service name:

cephadm shell -- ceph orch redeploy nfs.foo

5) Verify the nfs daemon has the modified conf set up (you'll have to insert your cluster's fsid and nfs daemon name into this):

[root@vm-00 ~]# cat /var/lib/ceph/96e59b16-5ed3-11ee-ac62-525400501f93/nfs.foo.0.0.vm-00.ijgheu/etc/ganesha/ganesha.conf | grep Minor
    Minor_Versions = 0, 1, 2;

---

At this point, "mount -t nfs -o vers=4.0,port=2049 192.168.122.33:/test /mnt/test" worked for me (the pseudo-path for my export was "/test"; just insert your own host ip/fqdn and mount points).

---

@msaini can you try that procedure out and see if you can get the 4.0 mount to work?

Hi Adam,

Tested the steps provided in comment #11.
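Step 2's sed substitution is the crux of the procedure above; here is a self-contained sketch of what it does, using a hypothetical two-line stand-in for the real ganesha.conf.j2 template:

```shell
# Hypothetical stand-in for the NFSv4 block of the cephadm template.
cat > ganesha.conf.j2 <<'EOF'
NFSv4 {
    Minor_Versions = 1, 2;
}
EOF

# Same substitution as step 2: add minor version 0 (NFSv4.0) to the list.
sed -e 's/Minor_Versions = 1, 2;/Minor_Versions = 0, 1, 2;/' ganesha.conf.j2 > modified-ganesha.conf.j2

# Prints the updated line containing "Minor_Versions = 0, 1, 2;".
grep Minor_Versions modified-ganesha.conf.j2
```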
With these steps, mount is successful with vers=4.0.

The ganesha.conf template was edited as below:
----
# ceph config-key get mgr/cephadm/services/nfs/ganesha.conf
# {{ cephadm_managed }}
NFS_CORE_PARAM {
    Enable_NLM = false;
    Enable_RQUOTA = false;
    Protocols = 4;
    NFS_Port = {{ port }};
{% if bind_addr %}
    Bind_addr = {{ bind_addr }};
{% endif %}
{% if haproxy_hosts %}
    HAProxy_Hosts = {{ haproxy_hosts|join(", ") }};
{% endif %}
}

NFSv4 {
    Delegations = false;
    RecoveryBackend = 'rados_cluster';
    Minor_Versions = 0, 1, 2;
}

NFS_KRB5 {
    PrincipalName = nfs;
    KeytabPath = /etc/krb5.keytab;
    Active_krb5 = true;
}

RADOS_KV {
    UserId = "{{ user }}";
    nodeid = "{{ nodeid }}";
    pool = "{{ pool }}";
    namespace = "{{ namespace }}";
}

RADOS_URLS {
    UserId = "{{ user }}";
    watch_url = "{{ url }}";
}

RGW {
    cluster = "ceph";
    name = "client.{{ rgw_user }}";
}

%url {{ url }}
----

[root@ceph-mani-b04gdn-node6 mnt]# mount -t nfs -o vers=4.0 ceph-mani-b04gdn-node1-installer.redhat.com:/ceph1 /mnt/ganesha/
[root@ceph-mani-b04gdn-node6 mnt]# cd /mnt/ganesha/
[root@ceph-mani-b04gdn-node6 ganesha]# touch f1

ceph-mani-b04gdn-node1-installer.redhat.com:/ceph1 on /mnt/ganesha type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.210.99,local_lock=none,addr=10.0.208.32)

(In reply to Manisha Saini from comment #12)
> With these steps, mount is successful with vers=4.0.

Great, is that adequate for you for doing the testing you need with 4.0? If so, I think we can close this. From https://bugzilla.redhat.com/show_bug.cgi?id=2220891#c10 we probably don't want to actually add 4.0 as a real supported version, so nothing left to do here.

(In reply to Adam King from comment #13)
> Great, is that adequate for you for doing the testing you need with 4.0? If so, I think we can close this.

Hi Adam, yes, mounting with 4.0 will be enough to run the pynfs test suite with a vers=4.0 mount.
Since 4.0 will not be a supported version in production, we can close this BZ.

Closing as per https://bugzilla.redhat.com/show_bug.cgi?id=2220891#c14
Description of problem:
==========
Mounting of an NFS share on the client with v4.0 is failing.

# mount -t nfs -o vers=4.0,port=2049 ceph-mani-30erzz-node6:/ganesha2 /mnt/ganesha/
mount.nfs: Protocol not supported

Export block
=====
{
    "export_id": 2,
    "path": "/volumes/_nogroup/ganesha2/d616fcd5-752f-489a-8ebe-a6e5d966323f",
    "cluster_id": "nfsganesha",
    "pseudo": "/ganesha2",
    "access_type": "RO",
    "squash": "none",
    "security_label": true,
    "protocols": [
        4
    ],
    "transports": [
        "TCP"
    ],
    "fsal": {
        "name": "CEPH",
        "user_id": "nfs.nfsganesha.2",
        "fs_name": "cephfs"
    },
    "clients": []
}

The same is passing with v4.1:

# mount -t nfs -o vers=4.1,port=2049 ceph-mani-30erzz-node6:/ganesha2 /mnt/ganesha/
#
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=374800k,nr_inodes=93700,mode=700,inode64)
ceph-mani-30erzz-node6:/ganesha2 on /mnt/ganesha type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.208.62,local_lock=none,addr=10.0.209.47)

Version-Release number of selected component (if applicable):
===========================================
# rpm -qa | grep nfs
libnfsidmap-2.5.4-18.el9.x86_64
nfs-utils-2.5.4-18.el9.x86_64
nfs-ganesha-selinux-5.3-1.el9cp.noarch
nfs-ganesha-5.3-1.el9cp.x86_64
nfs-ganesha-ceph-5.3-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.3-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.3-1.el9cp.x86_64
nfs-ganesha-rgw-5.3-1.el9cp.x86_64

How reproducible:
============
2/2

Steps to Reproduce:
=========
1. Deploy a Ceph cluster with NFS.
2. Create a CephFS volume and mount the volume on the client via v4.0:
# mount -t nfs -o vers=4.0,port=2049 ceph-mani-30erzz-node6:/ganesha2 /mnt/ganesha/

Actual results:
=========
Mount failed - mount.nfs: Protocol not supported

Expected results:
==========
Mount should pass

Additional info:
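For illustration, a small sketch (with a stand-in JSON file in place of real `ceph nfs export get` output, and a hypothetical file name) showing that the export's "protocols" list carries only the major NFS version, with no 4.0-vs-4.1 granularity:

```shell
# Trimmed, hypothetical stand-in for `ceph nfs export get` output.
cat > export.json <<'EOF'
{"export_id": 2, "pseudo": "/ganesha2", "protocols": [4]}
EOF

# The protocols list names only the major version; the allowed minor
# versions (4.0 / 4.1 / 4.2) come from the ganesha.conf NFSv4 block instead.
python3 -c 'import json; print(json.load(open("export.json"))["protocols"])'
# prints: [4]
```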