
Bug 2374369

Summary: [NFS-Ganesha] NFS export created with same CephFS subvolume path on two different NFS clusters leads to inconsistent visibility and synchronization issues across clients
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manisha Saini <msaini>
Component: Cephadm
Assignee: Ujjawal Anand <uanand>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: high
Docs Contact:
Priority: unspecified
Version: 8.1
CC: akane, cephqe-warriors, ffilz, jcaratza, kkeithle
Target Milestone: ---
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-20.1.0-66
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 2389738
Environment:
Last Closed: 2026-01-29 06:50:30 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2389738

Description Manisha Saini 2025-06-23 18:57:48 UTC
Description of problem:
======================

When NFS exports are created from the same CephFS subvolume path but assigned to two different NFS-Ganesha clusters, changes made through one export (e.g. file and directory creation) are not reflected through the other. This leads to inconsistent and misleading file system views across clients.

In this scenario, Client 1 mounted the export from Cluster 1 and created files and directories. Client 2, which mounted the same subvolume via the export from Cluster 2, sees only a partial and inconsistent listing of that data.

This points to a possible cache inconsistency or export-level isolation that breaks the expected CephFS semantics when the same subvolume is accessed through NFS.
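
For context, the overlapping export configuration described above can be inspected directly; "ceph nfs export ls <cluster_id> --detailed" prints the full export specifications, including the backing CephFS path, for each cluster:

# ceph nfs export ls nfs1 --detailed
# ceph nfs export ls nfsganesha --detailed

Both listings should show an export whose "path" field is the same subvolume directory, which is the configuration that triggers the behavior reported here.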

Version-Release number of selected component (if applicable):
===========================
# ceph --version
ceph version 19.2.1-224.el9cp (7a698d1865dee2d91ba1430045db051c4def6957) squid (stable)

# rpm -qa | grep nfs
libnfsidmap-2.5.4-34.el9.x86_64
nfs-utils-2.5.4-34.el9.x86_64
nfs-ganesha-selinux-6.5-20.el9cp.noarch
nfs-ganesha-6.5-20.el9cp.x86_64
nfs-ganesha-ceph-6.5-20.el9cp.x86_64
nfs-ganesha-rados-grace-6.5-20.el9cp.x86_64
nfs-ganesha-rados-urls-6.5-20.el9cp.x86_64
nfs-ganesha-rgw-6.5-20.el9cp.x86_64


How reproducible:
==============
2/2


Steps to Reproduce:
===============
1. Create 2 NFS clusters using different hosts

# ceph nfs cluster ls
[
  "nfsganesha",
  "nfs1"
]

# ceph nfs cluster info nfs1
{
  "nfs1": {
    "backend": [
      {
        "hostname": "ceph-hotfix-nfs-test-w3yafi-node3",
        "ip": "10.0.64.130",
        "port": 2049
      }
    ],
    "virtual_ip": null
  }
}

# ceph nfs cluster info nfsganesha
{
  "nfsganesha": {
    "backend": [
      {
        "hostname": "ceph-hotfix-nfs-test-w3yafi-node2",
        "ip": "10.0.67.226",
        "port": 2049
      }
    ],
    "virtual_ip": null
  }
}

2. Create a subvolume and get the path to create the NFS export

# ceph fs subvolume create cephfs ganesha1 --group_name ganeshagroup --namespace-isolated

# ceph fs subvolume getpath cephfs ganesha1 --group_name ganeshagroup
/volumes/ganeshagroup/ganesha1/c1d2cc81-b087-40c9-8a4b-9fc1b055e2d0
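
(Optional sanity check, not part of the original steps.) The subvolume's existence and properties can be confirmed before creating the exports; this command prints the subvolume metadata (path, mode, quota, etc.) as JSON:

# ceph fs subvolume info cephfs ganesha1 --group_name ganeshagroup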

3. Create 2 NFS exports, one on each of the 2 NFS clusters, both using the same subvolume path


# ceph nfs export create cephfs nfs1 /nfs1 cephfs --path=/volumes/ganeshagroup/ganesha1/c1d2cc81-b087-40c9-8a4b-9fc1b055e2d0
{
  "bind": "/nfs1",
  "cluster": "nfs1",
  "fs": "cephfs",
  "mode": "RW",
  "path": "/volumes/ganeshagroup/ganesha1/c1d2cc81-b087-40c9-8a4b-9fc1b055e2d0"
}

# ceph nfs export create cephfs nfsganesha /ganesha1 cephfs --path=/volumes/ganeshagroup/ganesha1/c1d2cc81-b087-40c9-8a4b-9fc1b055e2d0
{
  "bind": "/ganesha1",
  "cluster": "nfsganesha",
  "fs": "cephfs",
  "mode": "RW",
  "path": "/volumes/ganeshagroup/ganesha1/c1d2cc81-b087-40c9-8a4b-9fc1b055e2d0"


4. Mount the 2 exports on 2 different clients and create files and directories

Client 1: 
-------
[root@ceph-hotfix-nfs-test-w3yafi-node7 mnt]# mount -t nfs 10.0.64.130:/nfs1 /mnt/nfs1/
[root@ceph-hotfix-nfs-test-w3yafi-node7 mnt]# cd /mnt/nfs1/
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# ls
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# touch g1
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# touch g2
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# touch g3
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# mkdir dir1
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# mkdir dir2
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# mkdir dir3

Client 2:
---------

[root@ceph-hotfix-nfs-test-w3yafi-node6 mnt]# mount -t nfs 10.0.67.226:/ganesha1 /mnt/ganesha1/
[root@ceph-hotfix-nfs-test-w3yafi-node6 mnt]# cd /mnt/ganesha1/
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
g1  g2  g3
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
g1  g2  g3
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# mkdir dir1
mkdir: cannot create directory ‘dir1’: File exists
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
g1  g2  g3
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# mkdir dir2
mkdir: cannot create directory ‘dir2’: File exists
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
g1  g2  g3
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]#
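
(Diagnostic aside, not part of the original reproduction.) To separate client-side attribute/lookup caching from staleness served by NFS-Ganesha itself, Client 2 can be remounted with client caching effectively disabled; lookupcache=none and actimeo=0 are standard nfs(5) mount options:

# umount /mnt/ganesha1
# mount -t nfs -o lookupcache=none,actimeo=0 10.0.67.226:/ganesha1 /mnt/ganesha1/

If the listing is still inconsistent with these options, the stale view is coming from the Ganesha server side (e.g. its metadata cache) rather than from the client's cache.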

5. Now try to delete the files/dirs from client 2:

Client 2:
--------
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
dir1  dir2  dir3
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# rm -rf dir3
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
dir1  dir2
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# rm -rf dir2
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]# ls
dir1
[root@ceph-hotfix-nfs-test-w3yafi-node6 ganesha1]#

Client 1:
------

[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# ls
dir1  dir2  dir3
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# ls
dir1  dir2  dir3
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# ls
dir1  dir2  dir3
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# cd dir2/
[root@ceph-hotfix-nfs-test-w3yafi-node7 dir2]# ls
ls: cannot open directory '.': Stale file handle
[root@ceph-hotfix-nfs-test-w3yafi-node7 dir2]# cd ..
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# cd dir3/
[root@ceph-hotfix-nfs-test-w3yafi-node7 dir3]# ls
ls: cannot open directory '.': Stale file handle
[root@ceph-hotfix-nfs-test-w3yafi-node7 dir3]# cd ..
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# ls
dir1  dir2  dir3
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# mkdir dir3
mkdir: cannot create directory ‘dir3’: File exists
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]# ls
dir1  dir2  dir3
[root@ceph-hotfix-nfs-test-w3yafi-node7 nfs1]#



Actual results:
=============
Attempting to recreate dir3 on Client 1 (after it was removed from Client 2) fails with "mkdir: cannot create directory 'dir3': File exists"; Client 2 hits the same error in step 4 for directories it cannot even see.
cd into a directory that was deleted through the other export fails with "Stale file handle".
ls does not reflect the directories although they exist on the backend CephFS, and it keeps listing directories that have already been removed.
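
(Verification aside, not part of the original report.) The backend state can be checked directly, bypassing NFS, by mounting the subvolume path with ceph-fuse on a node that has ceph-fuse installed and a client keyring/ceph.conf with access to the file system in the default locations:

# mkdir -p /mnt/cephfs
# ceph-fuse -r /volumes/ganeshagroup/ganesha1/c1d2cc81-b087-40c9-8a4b-9fc1b055e2d0 /mnt/cephfs
# ls /mnt/cephfs

The listing from this direct CephFS mount is the authoritative view to compare against what each NFS client reports.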




Expected results:
=============
Both clients, regardless of the NFS cluster/export they use, should have a consistent and synchronized view of the backend CephFS subvolume.


Additional info:

Comment 1 Storage PM bot 2025-06-23 18:58:01 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 13 errata-xmlrpc 2026-01-29 06:50:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536