
Bug 2135573

Summary: NFS client unable to see newly created files when listing directory contents in a FS subvolume clone
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ram Raja <rraja>
Component: CephFS
Assignee: Venky Shankar <vshankar>
Status: CLOSED ERRATA
QA Contact: Hemanth Kumar <hyelloji>
Severity: high
Docs Contact: Eliska <ekristov>
Priority: unspecified    
Version: 6.0
CC: ceph-eng-bugs, cephqe-warriors, ekristov, gfarnum, gouthamr, hyelloji, jdurgin, lhh, lkuchlan, mhicks, pasik, vdas, vereddy, vhariria, vshankar
Target Milestone: ---   
Target Release: 6.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-17.2.3-55.el9cp
Doc Type: Bug Fix
Doc Text:
.Directory listing from an NFS client now works as expected for NFS-Ganesha exports backed by CephFS
Previously, the Ceph File System (CephFS) Metadata Server (MDS) did not increment the change attribute (`change_attr`) of a directory inode during CephFS operations that only changed the directory inode's `ctime`. Because of this, the NFS-Ganesha server backed by CephFS would sometimes report a stale change attribute value for the directory inode, so an NFS kernel client would not invalidate its `readdir` cache when it should have, and the client would list stale directory contents for NFS-Ganesha exports backed by CephFS. With this fix, the CephFS MDS increments the change attribute of the directory inode during such operations, and directory listing from the NFS client works as expected for NFS-Ganesha exports backed by CephFS.
Story Points: ---
Clone Of: 2118263
Environment:
Last Closed: 2023-03-20 18:58:58 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2118263    
Bug Blocks: 2126050    

Comment 1 Ram Raja 2022-10-18 02:52:24 UTC
The fix is merged in the Ceph main branch. I've created a quincy backport PR (https://github.com/ceph/ceph/pull/48520) for the corresponding quincy backport tracker: https://tracker.ceph.com/issues/57879

Comment 8 Ram Raja 2022-10-21 01:17:51 UTC
Hemanth, I'm copying over the steps from https://tracker.ceph.com/issues/57210 that I used to reproduce this issue in a Ceph cluster without needing OpenStack Manila. I used a vstart cluster, but the steps should be the same in a QE test cluster.
```
$ ./bin/ceph fs volume create a
$ ./bin/ceph fs subvolume create a subvol01
$ ./bin/ceph fs subvolume getpath a subvol01

$ ./bin/ceph nfs cluster create nfs-ganesha
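$ # export the subvolume over NFS: arguments are <cluster_id> <pseudo path> <fs name> <path>;
$ # the backtick substitution supplies the subvolume's path inside volume 'a'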
$ ./bin/ceph nfs export create cephfs nfs-ganesha /cephfs3 a `./bin/ceph fs subvolume getpath a subvol01`
$ sudo mount.nfs4 127.0.0.1:/cephfs3 /mnt/nfs1/
$ pushd /mnt/nfs1/
$ sudo touch file00
$ # can see newly created file when listing directory contents
$ ls
file00
$ popd

$ ./bin/ceph fs subvolume snapshot create a subvol01 snap01
$ ./bin/ceph fs subvolume snapshot clone a subvol01 snap01 clone01
$ ./bin/ceph nfs export create cephfs nfs-ganesha /cephfs4 a `./bin/ceph fs subvolume getpath a clone01`
$ sudo mount.nfs4 127.0.0.1:/cephfs4 /mnt/nfs2/
$ pushd /mnt/nfs2/
$ ls
file00
$ sudo touch file01
$ # can see cloned 'file00' but cannot see the newly created file 'file01' when reading the directory contents within the clone
$ ls
file00
```
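The missing 'file01' here is a client-side caching artifact rather than missing data on the server. As a rough sketch, not part of the original steps, this is how that can be confirmed on the NFS client (assumes root access on the client; still inside /mnt/nfs2/):
```
$ # force the client to discard cached dentries/inodes/pages and re-read the directory
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ ls
file00  file01
$ # if both entries show up after the cache is dropped (or after a remount), the stale
$ # listing is the client's readdir cache not being invalidated, which matches the
$ # change_attr behaviour described in the Doc Text above
```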

With this fix, you should also be able to see the newly created 'file01' in the FS subvolume clone when listing the directory contents from the NFS client.
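For reference, a short sketch of the expected listing on the clone's export once the fixed build is running (output approximated):
```
$ ls /mnt/nfs2/
file00  file01
```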

Comment 21 errata-xmlrpc 2023-03-20 18:58:58 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360