Bug 2259461
Summary: | [NFS-Ganesha] Updating any entry in the volume export file (Changing RW to RO or Adding vers=3 in volume export file) resulting in mount failure on client side. | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Manisha Saini <msaini> |
Component: | Cephadm | Assignee: | Adam King <adking> |
Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
Severity: | urgent | Docs Contact: | Akash Raj <akraj> |
Priority: | unspecified | ||
Version: | 7.1 | CC: | adking, akraj, cephqe-warriors, gouthamr, kkeithle, spunadik, tserlin, vdas |
Target Milestone: | --- | Keywords: | Automation, Regression |
Target Release: | 7.1 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Fixed In Version: | ceph-18.2.1-36.el9cp | Doc Type: | No Doc Update |
Last Closed: | 2024-06-13 14:24:52 UTC | Type: | Bug |
Bug Blocks: | 2267614, 2298578, 2298579 |
## Description

**Manisha Saini** (2024-01-21 23:43:25 UTC)
After modifying the export definition, its entry (and possibly the related rados object) gets removed. This needs to be addressed by the cephadm component.

(In reply to Sachin Punadikar from comment #5)
> After modifying the export definition, its entry (and may be the related
> rados object) gets removed. Need to be addressed by cephadm component.

The export-related rados object is missing after the modification:

```
[ceph: root@ceph-win-test-xt7w9v-node1-installer /]# rados get -N cephfs-nfs -p .nfs export-0 /export_conf
error getting .nfs/export-0: (2) No such file or directory
```

Hi Adam,

Could you please provide the fix for this, as it is a blocker for v3 testing on the RHCS 7.1 build? Meanwhile, it would be helpful if you could let us know whether a workaround exists, to unblock our testing.

Thanks

Hi Adam,

With the steps shared in comment #9, I am able to reproduce the issue with the RHCS 7.1 build:

```
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph --version
ceph version 18.2.1-10.el9cp (ccf42acecc9e7ec19c8994e4d2ca0180b612ad1e) reef (stable)
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# rpm -qa | grep nfs
libnfsidmap-2.5.4-20.el9.x86_64
nfs-utils-2.5.4-20.el9.x86_64
nfs-ganesha-selinux-5.6-4.el9cp.noarch
nfs-ganesha-5.6-4.el9cp.x86_64
nfs-ganesha-rgw-5.6-4.el9cp.x86_64
nfs-ganesha-ceph-5.6-4.el9cp.x86_64
nfs-ganesha-rados-grace-5.6-4.el9cp.x86_64
nfs-ganesha-rados-urls-5.6-4.el9cp.x86_64
```

1. Create the Ganesha cluster:

```
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph nfs cluster ls
[]
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph nfs cluster create cephfs-nfs 'ceph-win-nfs-ubxis7-node2'
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph nfs cluster info cephfs-nfs
{
  "cephfs-nfs": {
    "backend": [
      {
        "hostname": "ceph-win-nfs-ubxis7-node2",
        "ip": "10.0.208.67",
        "port": 2049
      }
    ],
    "virtual_ip": null
  }
}
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph fs volume ls
[
  {
    "name": "cephfs"
  }
]
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph orch ps --daemon-type nfs
NAME                                                 HOST                       PORTS   STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
nfs.cephfs-nfs.0.0.ceph-win-nfs-ubxis7-node2.uxrgby  ceph-win-nfs-ubxis7-node2  *:2049  running (3m)  3m ago     3m   21.7M    -        5.6      18a49f4e73b3  1345ae01b290
```

2. Edit the ganesha.conf file to include protocol 3:

```
[root@ceph-win-nfs-ubxis7-node2 ~]# cat /var/lib/ceph/2fcec538-c503-11ee-aaca-fa163e048975/nfs.cephfs-nfs.0.0.ceph-win-nfs-ubxis7-node2.uxrgby/etc/ganesha/ganesha.conf
# This file is generated by cephadm.
NFS_CORE_PARAM {
    Enable_NLM = false;
    Enable_RQUOTA = false;
    Protocols = 3, 4;
    NFS_Port = 2049;
}

NFSv4 {
    Delegations = false;
    RecoveryBackend = 'rados_cluster';
    Minor_Versions = 1, 2;
}

RADOS_KV {
    UserId = "nfs.cephfs-nfs.0.0.ceph-win-nfs-ubxis7-node2.uxrgby";
    nodeid = "nfs.cephfs-nfs.0";
    pool = ".nfs";
    namespace = "cephfs-nfs";
}

RADOS_URLS {
    UserId = "nfs.cephfs-nfs.0.0.ceph-win-nfs-ubxis7-node2.uxrgby";
    watch_url = "rados://.nfs/cephfs-nfs/conf-nfs.cephfs-nfs";
}

RGW {
    cluster = "ceph";
    name = "client.nfs.cephfs-nfs.0.0.ceph-win-nfs-ubxis7-node2.uxrgby-rgw";
}

%url    rados://.nfs/cephfs-nfs/conf-nfs.cephfs-nfs
```

3. Create the ganesha export:

```
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph nfs export create cephfs cephfs-nfs /export_0 cephfs
{
  "bind": "/export_0",
  "cluster": "cephfs-nfs",
  "fs": "cephfs",
  "mode": "RW",
  "path": "/"
}
```

4. Mount the export on the client via the 4.1 protocol:

```
[root@ceph-win-nfs-ubxis7-node5 mnt]# mount -t nfs -o vers=4.1 -o port=2049 10.0.208.67:/export_0 /mnt/ganesha/
[root@ceph-win-nfs-ubxis7-node5 mnt]# cd /mnt/ganesha/
[root@ceph-win-nfs-ubxis7-node5 ganesha]# ls
[root@ceph-win-nfs-ubxis7-node5 ganesha]# touch f1
[root@ceph-win-nfs-ubxis7-node5 mnt]# umount /mnt/ganesha/
```

5. Edit the export file to include protocol 3:

```
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph nfs export info cephfs-nfs /export_0
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.cephfs-nfs.cephfs"
  },
  "path": "/",
  "protocols": [
    4
  ],
  "pseudo": "/export_0",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# rados get -N cephfs-nfs -p ".nfs" conf-nfs.cephfs-nfs /nfs_conf
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# cat nfs_conf
%url "rados://.nfs/cephfs-nfs/export-1"
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# rados get -N cephfs-nfs -p ".nfs" export-1 /export_conf
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# cat export_conf
EXPORT {
    FSAL {
        name = "CEPH";
        user_id = "nfs.cephfs-nfs.cephfs";
        filesystem = "cephfs";
        secret_access_key = "AQA0UMJlwzFGCRAAqTLoy+DrjpSRrdAU9RHwJw==";
        cmount_path = "/";
    }
    export_id = 1;
    path = "/";
    pseudo = "/export_0";
    access_type = "RW";
    squash = "none";
    attr_expiration_time = 0;
    security_label = true;
    protocols = 4;
    transports = "TCP";
}
[root@ceph-win-nfs-ubxis7-node1-installer ~]# vi export.conf
[root@ceph-win-nfs-ubxis7-node1-installer ~]# cephadm shell --mount export.conf:/var/lib/ceph/export.conf
Inferring fsid 2fcec538-c503-11ee-aaca-fa163e048975
Inferring config /var/lib/ceph/2fcec538-c503-11ee-aaca-fa163e048975/mon.ceph-win-nfs-ubxis7-node1-installer/config
Using ceph image with id '18a49f4e73b3' and tag 'ceph-7.1-rhel-9-containers-candidate-64751-20240131002053' created on 2024-01-31 00:23:56 +0000 UTC
registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:5e19702546ffe42b24b5c05936fae05045083a2103a54fb9400a37fabdcd2e50
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph nfs export apply cephfs-nfs -i /var/lib/ceph/export.conf
[
  {
    "pseudo": "/export_0",
    "state": "updated"
  }
]
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# ceph nfs export info cephfs-nfs /export_0
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.cephfs-nfs.cephfs"
  },
  "path": "/",
  "protocols": [
    3,
    4
  ],
  "pseudo": "/export_0",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# rados get -N cephfs-nfs -p ".nfs" conf-nfs.cephfs-nfs /nfs_conf
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# cat /nfs_conf
%url "rados://.nfs/cephfs-nfs/export-1"
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# rados get -N cephfs-nfs -p ".nfs" export-1 /export_conf
[ceph: root@ceph-win-nfs-ubxis7-node1-installer /]# cat export_conf
EXPORT {
    FSAL {
        name = "CEPH";
        user_id = "nfs.cephfs-nfs.cephfs";
        filesystem = "cephfs";
        cmount_path = "/";
    }
    export_id = 1;
    path = "/";
    pseudo = "/export_0";
    access_type = "RW";
    squash = "none";
    attr_expiration_time = 0;
    security_label = true;
    protocols = 3, 4;
    transports = "TCP";
}
```

6. Mount the export on the client. It failed:

```
[root@ceph-win-nfs-ubxis7-node5 mnt]# mount -t nfs -o vers=4.1 -o port=2049 10.0.208.67:/export_0 /mnt/ganesha/
mount.nfs: mounting 10.0.208.67:/export_0 failed, reason given by server: No such file or directory
```
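The symptom above, where the export's rados object disappears after `ceph nfs export apply`, is consistent with a non-atomic delete-then-rewrite update of the object. The sketch below is an illustrative simulation only, not cephadm's actual code: a plain dict stands in for the `.nfs` pool namespace, and `apply_non_atomic`/`apply_atomic` are hypothetical helpers contrasting the two update strategies.

```python
# A dict stands in for the rados objects in the .nfs pool, cephfs-nfs namespace.
pool = {"export-1": "EXPORT { ... protocols = 4; ... }"}

def apply_non_atomic(ns, key, new_conf, fail_on_write=False):
    """Delete-then-write update: the object is lost if the rewrite step fails."""
    ns.pop(key, None)          # old export object removed first
    if fail_on_write:          # e.g. a validation error mid-apply
        return
    ns[key] = new_conf

def apply_atomic(ns, key, new_conf, fail_on_write=False):
    """Single overwrite: a failed update leaves the old object in place."""
    if fail_on_write:
        return
    ns[key] = new_conf

new_conf = "EXPORT { ... protocols = 3, 4; ... }"

broken = dict(pool)
apply_non_atomic(broken, "export-1", new_conf, fail_on_write=True)
print("export-1" in broken)    # False: object gone, later reads fail with ENOENT

safe = dict(pool)
apply_atomic(safe, "export-1", new_conf, fail_on_write=True)
print("export-1" in safe)      # True: stale but still present
```

In the non-atomic case the missing object matches the observed `error getting .nfs/export-0: (2) No such file or directory`, and ganesha then starts with no usable export.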
Ganesha daemon logs (fsid 2fcec538-c503-11ee-aaca-fa163e048975):

```
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.6
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
-node2-uxrgby[140958]: 06/02/2024 20:02:56 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] create_export :FSAL :CRIT :Unable to init Ceph handle.
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] mdcache_fsal_create_export :FSAL :MAJ :Failed to call create_export on underlying FSAL Ceph
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] fsal_cfg_commit :CONFIG :CRIT :Could not create export for (/export_0) to (/)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] export_commit_common :CONFIG :CRIT :fsal_export is NULL
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :CRIT :Config File ("rados://.nfs/cephfs-nfs/export-1":2): 1 validation errors in block FSAL
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :CRIT :Config File ("rados://.nfs/cephfs-nfs/export-1":2): Errors processing block (FSAL)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :CRIT :Config File ("rados://.nfs/cephfs-nfs/export-1":1): 1 validation errors in block EXPORT
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :CRIT :Config File ("rados://.nfs/cephfs-nfs/export-1":1): Errors processing block (EXPORT)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:22): Unknown block (RADOS_URLS)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:27): Unknown block (RGW)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setg>
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keyt>
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:2)
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
-node2-uxrgby[140958]: 06/02/2024 20:02:59 : epoch 65c29070 : ceph-win-nfs-ubxis7-node2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
```

I tested the workaround
suggested by Sachin in comment #13. With the workaround, I am able to mount the NFS share on v3 clients.

@adking, when can we expect the fix for this BZ?

Verified this BZ with:

```
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph --version
ceph version 18.2.1-36.el9cp (6a98aedd49fa0fc9b8fb383cd9642169971f54d0) reef (stable)
```

1. Create the ganesha cluster:

```
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph nfs cluster info cephfs-nfs
{
  "cephfs-nfs": {
    "backend": [
      {
        "hostname": "ceph-mani-xq8a1i-node2",
        "ip": "10.0.208.75",
        "port": 2049
      }
    ],
    "virtual_ip": null
  }
}
```

2. Create the export:

```
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph nfs export create cephfs cephfs-nfs /export_1 cephfs
{
  "bind": "/export_1",
  "cluster": "cephfs-nfs",
  "fs": "cephfs",
  "mode": "RW",
  "path": "/"
}
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph nfs export info cephfs-nfs /export_1
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.cephfs-nfs.cephfs"
  },
  "path": "/",
  "protocols": [
    4
  ],
  "pseudo": "/export_1",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}
```

3. Enable v3 support by editing the ganesha.conf file:

```
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph config-key get mgr/cephadm/services/nfs/ganesha.conf
# {{ cephadm_managed }}
NFS_CORE_PARAM {
    Enable_NLM = false;
    Enable_RQUOTA = false;
    Protocols = 3, 4;
    mount_path_pseudo = true;
    NFS_Port = {{ port }};
{% if bind_addr %}
    Bind_addr = {{ bind_addr }};
{% endif %}
{% if haproxy_hosts %}
    HAProxy_Hosts = {{ haproxy_hosts|join(", ") }};
{% endif %}
}

NFSv4 {
    Delegations = false;
    RecoveryBackend = 'rados_cluster';
    Minor_Versions = 1, 2;
}

RADOS_KV {
    UserId = "{{ user }}";
    nodeid = "{{ nodeid }}";
    pool = "{{ pool }}";
    namespace = "{{ namespace }}";
}

RADOS_URLS {
    UserId = "{{ user }}";
    watch_url = "{{ url }}";
}

RGW {
    cluster = "ceph";
    name = "client.{{ rgw_user }}";
}

%url    {{ url }}
```
4. Edit the export file to add v3:

```
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph nfs export info cephfs-nfs /export_1 > export_1.conf
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# sed -i '/\"protocols\": /a \ 3,' export_1.conf
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# cat export_1.conf
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.cephfs-nfs.cephfs"
  },
  "path": "/",
  "protocols": [
    3,
    4
  ],
  "pseudo": "/export_1",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}
```

5. Apply the changes:

```
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph nfs export apply cephfs-nfs -i export_1.conf
[
  {
    "pseudo": "/export_1",
    "state": "updated"
  }
]
[ceph: root@ceph-mani-xq8a1i-node1-installer /]# ceph nfs export info cephfs-nfs /export_1
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.cephfs-nfs.cephfs"
  },
  "path": "/",
  "protocols": [
    3,
    4
  ],
  "pseudo": "/export_1",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}
```

6. Mount the export on the client via vers=3:

```
[root@ceph-mani-xq8a1i-node5 mnt]# mount -t nfs -o vers=3 10.0.208.75:/export_1 /mnt/nfs/
Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.
```
```
[root@ceph-mani-xq8a1i-node5 mnt]# cd /mnt/nfs/
[root@ceph-mani-xq8a1i-node5 nfs]# touch f1
[root@ceph-mani-xq8a1i-node5 nfs]# df
Filesystem             1K-blocks    Used Available Use% Mounted on
devtmpfs                    4096       0      4096   0% /dev
tmpfs                    1867708      84   1867624   1% /dev/shm
tmpfs                     747084    8756    738328   2% /run
/dev/vda4               41056236 2336824  38719412   6% /
/dev/vda3                 548864  296916    251948  55% /boot
/dev/vda2                 204580    7132    197448   4% /boot/efi
tmpfs                     373540       0    373540   0% /run/user/0
10.0.208.75:/export_1   87842816       0  87842816   0% /mnt/nfs
10.0.208.75:/export_1 on /mnt/nfs type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.208.75,mountvers=3,mountport=49688,mountproto=udp,local_lock=none,addr=10.0.208.75)
[root@ceph-mani-xq8a1i-node5 nfs]#
```

The mount was successful. Moving this BZ to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925
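As a side note, the `sed` command used in step 4 inserts the literal text ` 3,` after the `"protocols"` line, which depends on the exact formatting of the `ceph nfs export info` output. A more robust way to make the same edit is to go through a JSON parser; the sketch below uses a hypothetical helper `add_nfsv3` and assumes the export spec has the shape shown in this report.

```python
import json

def add_nfsv3(spec_text: str) -> str:
    """Return the export spec with NFS protocol 3 added (idempotent)."""
    export = json.loads(spec_text)
    if 3 not in export["protocols"]:
        export["protocols"] = sorted(export["protocols"] + [3])
    return json.dumps(export, indent=2)

# Example with a few of the fields shown in the verification run above:
spec = '{"pseudo": "/export_1", "protocols": [4], "access_type": "RW"}'
print(add_nfsv3(spec))
```

The result can then be written back to `export_1.conf` and fed to `ceph nfs export apply` as in step 5; unlike the `sed` edit, it produces valid JSON regardless of field order or indentation.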