Bug 2308647 - [CephFS][NFS] NFS daemon fails during mount, error in messages 'status=2/INVALIDARGUMENT'
Summary: [CephFS][NFS] NFS daemon fails during mount, error in messages 'status=2/INVALIDARGUMENT'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NFS-Ganesha
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.1
Assignee: Sachin Punadikar
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-08-30 07:22 UTC by sumr
Modified: 2025-06-26 12:15 UTC
CC: 6 users

Fixed In Version: nfs-ganesha-6.5-11.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-06-26 12:15:22 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-9629 (last updated 2024-08-30 07:23:51 UTC)
Red Hat Product Errata RHSA-2025:9775 (last updated 2025-06-26 12:15:24 UTC)

Description sumr 2024-08-30 07:22:08 UTC
Description of problem:
The NFS mount exits with an error and the NFS daemon goes into an error state on Ceph build 19.1.0-61.

NFS configuration on the node succeeds and the export for the CephFS subvolume is also created successfully, but the mount exits with the error below:

2024-08-29 16:50:43,808 (cephci.snapshot_clone.cg_snap_test) [INFO] - cephci.RH.8.0.rhel-9.Regression.19.1.0-61.cephfs.15.cephci.ceph.ceph.py:1596 - Running command mount -t nfs -o port=2049 ceph-regression-lcmn0e-vybdku-node6:/export_572_sv_def_1 /mnt/cephfs_nfs_q9z/ on 10.0.195.26 timeout 600

2024-08-29 16:52:49,536 (cephci.snapshot_clone.cg_snap_test) [ERROR] - cephci.RH.8.0.rhel-9.Regression.19.1.0-61.cephfs.15.cephci.ceph.ceph.py:1632 - Error 32 during cmd, timeout 600

2024-08-29 16:52:49,537 (cephci.snapshot_clone.cg_snap_test) [ERROR] - cephci.RH.8.0.rhel-9.Regression.19.1.0-61.cephfs.15.cephci.ceph.ceph.py:1633 - Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.

mount.nfs: Connection refused

The NFS daemon is in an error state, so nothing is listening on port 2049 and the mount is refused:

nfs.cephfs-nfs.0.0.ceph-regression-lcmn0e-vybdku-node6.eslxgj  ceph-regression-lcmn0e-vybdku-node6            *:2049       error           13s ago   9h        -        -  <unknown>        <unknown>     <unknown>  

/var/log/messages has the relevant messages below:

Aug 29 16:49:50 ceph-regression-lcmn0e-vybdku-node6 systemd[1]: Starting Ceph nfs.cephfs-nfs.0.0.ceph-regression-lcmn0e-vybdku-node6.eslxgj for c89bcfc2-6643-11ef-8abd-fa163e19bea8...

Aug 29 16:49:50 ceph-regression-lcmn0e-vybdku-node6 ceph-c89bcfc2-6643-11ef-8abd-fa163e19bea8-nfs-cephfs-nfs-0-0-ceph-regression-lcmn0e-vybdku-node6-eslxgj[33310]: 29/08/2024 20:49:50 : epoch 66d0deee : ceph-regression-lcmn0e-vybdku-node6 : ganesha.nfsd-2[main] main :MAIN :FATAL :Error setting prctl PR_SET_IO_FLUSHER flag: Operation not permitted
Aug 29 16:49:50 ceph-regression-lcmn0e-vybdku-node6 systemd[1]: Started Ceph nfs.cephfs-nfs.0.0.ceph-regression-lcmn0e-vybdku-node6.eslxgj for c89bcfc2-6643-11ef-8abd-fa163e19bea8.

Aug 29 16:49:50 ceph-regression-lcmn0e-vybdku-node6 systemd[1]: ceph-c89bcfc2-6643-11ef-8abd-fa163e19bea8.0.0.ceph-regression-lcmn0e-vybdku-node6.eslxgj.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 29 16:49:50 ceph-regression-lcmn0e-vybdku-node6 systemd[1]: ceph-c89bcfc2-6643-11ef-8abd-fa163e19bea8.0.0.ceph-regression-lcmn0e-vybdku-node6.eslxgj.service: Failed with result 'exit-code'.
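
The FATAL line above is the prctl(2) call failing. PR_SET_IO_FLUSHER (available since Linux 5.6) requires CAP_SYS_RESOURCE, so when the containerized ganesha.nfsd is started without that capability the call returns EPERM, the daemon exits at startup, systemd records status=2, and the later mount gets "Connection refused". A minimal C sketch of the failing call (not the actual Ganesha source; it only assumes, as the log suggests, that the failure is treated as fatal):

    /* Minimal sketch of the call that fails in the log above.
     * PR_SET_IO_FLUSHER needs CAP_SYS_RESOURCE; without it prctl()
     * returns -1 with errno == EPERM, and a caller that treats this
     * as fatal exits, leaving nothing listening on port 2049. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_IO_FLUSHER
    #define PR_SET_IO_FLUSHER 57   /* value from linux/prctl.h, kernel >= 5.6 */
    #endif

    int main(void)
    {
        if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) == -1) {
            fprintf(stderr,
                    "Error setting prctl PR_SET_IO_FLUSHER flag: %s\n",
                    strerror(errno));   /* "Operation not permitted" without CAP_SYS_RESOURCE */
            return 2;                   /* mirrors the status=2 seen in systemd */
        }
        return 0;
    }

The fix shipped in nfs-ganesha-6.5-11.el9cp presumably either downgrades this failure to a non-fatal warning or ensures the daemon's container retains CAP_SYS_RESOURCE; the errata does not state which.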

Version-Release number of selected component (if applicable): 19.1.0-61


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
System logs copied to magna002 server @ http://magna002.ceph.redhat.com/ceph-qe-logs/suma/issues/nfs_error_quiesce/node6/

Test logs : http://magna002.ceph.redhat.com/cephci-jenkins/results/openstack/RH/8.0/rhel-9/Regression/19.1.0-61/cephfs/15/tier-1_cephfs_cg_quiesce/cg_snap_interop_workflow_2_0.log

Comment 15 errata-xmlrpc 2025-06-26 12:15:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

