Please share logs and steps to reproduce it.
Created attachment 1741590 [details] nfs-server logs
1. Create multiple nfs exports.
2. Mount nfs on 4 clients, each with a different export.
3. Create 3 directories (d1, d2, d3) and run IOs on each directory and on the root directory of each mount.
4. Create 4 new directories (d4, d5, d6, d7) and run IOs on each directory on each mount.
5. Delete these 4 directories simultaneously. Deletion takes time; stop the deletion partway through, after roughly 1 minute.
6. Try to access the directories: a hang occurs.
7. Log in to the client again and try to access the mount: a hang occurs.

Step 3) Directory and command run in that directory:
root= for n in {1..7000}; do dd if=/dev/urandom of=uile$( printf %03d "$n" ) bs=4k count=1; done
d1= for n in {1..1000}; do dd if=/dev/urandom of=file$( printf %03d "$n" ) bs=10M count=100; done
d2= for n in {1..30}; do dd if=/dev/urandom of=mile$( printf %03d "$n" ) bs=30M count=1000; done
d3= for n in {1..100000}; do dd if=/dev/urandom of=tile$( printf %03d "$n" ) bs=10M count=10; done

Step 4) Commands used to run IOs on the different mounts:
d4= for n in {1..2000000}; do dd if=/dev/urandom of=uile$( printf %03d "$n" ) bs=4k count=1; done
d5= for n in {1..1000}; do dd if=/dev/urandom of=file$( printf %03d "$n" ) bs=10M count=100; done
d6= for n in {1..30}; do dd if=/dev/urandom of=mile$( printf %03d "$n" ) bs=30M count=1000; done
d7= for n in {1..100000}; do dd if=/dev/urandom of=tile$( printf %03d "$n" ) bs=10M count=10; done

Step 5) Start deleting the directories (d4, d5, d6, d7 simultaneously):
rm -rf d4
rm -rf d5
rm -rf d6
rm -rf d7
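One way to launch the four deletions in step 5 concurrently from a single shell is to background each rm and wait for all of them. This is a minimal sketch, not the reporter's exact method; the mkdir line only makes the snippet self-contained.

```shell
# Placeholder directories so the sketch is runnable on its own;
# in the reproduction these hold the files written in step 4.
mkdir -p d4 d5 d6 d7

# Start all four deletions in parallel, then wait for them to finish.
rm -rf d4 & rm -rf d5 & rm -rf d6 & rm -rf d7 &
wait
```

In the actual reproduction the deletions were interrupted after about a minute rather than allowed to complete.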
One correction in step 3:
root= for n in {1..2000000}; do dd if=/dev/urandom of=uile$( printf %03d "$n" ) bs=4k count=1; done
Also, for step 3 the IOs were stopped when the root directory held around 7000 files, and for step 4 the IOs were stopped when the ceph storage "SIZE" was around 333G and "RAW USED" was around 1000G.
Currently, the ganesha logs are no longer saved to syslog. Please share the mgr and nfs-ganesha container logs: podman logs <container_id>.
Created attachment 1742902 [details] mgr_logs
These are the mgr logs. I have provided the nfs logs in comment #6.
This is most likely due to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1851102. Using Ganesha version 3.3-3 or above should resolve it.
Looks good
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294