So, as a point of context, I tested on VFS. I obviously can't test with 24 clients, but I got:
2G: 334113.55 kB/sec
4G: 343316.70 kB/sec
6G: 360236.56 kB/sec
8G: 358361.70 kB/sec
This is not a decline with increasing size. This suggests to me it may not be an issue in Ganesha proper, but in GFAPI and/or Gluster. It's possible it's a client scaling issue, but I'm not sure how that would interact with file size to produce a slowdown.
Ack to defer to batch update
The degradation here is caused by the number of fsync calls increasing sharply as file size grows. These fsync calls are not requested from the client or nfs-ganesha side, but are observed on the Gluster side.
Number of COMMIT calls on the client side (used nfsstat to get these numbers):
2GB: 2-3 calls
8GB: 7-8 calls
Number of fsync calls on the Gluster side (used gluster profile for it):
2GB: 7 calls
8GB: 20178 calls
Also, the issue is not seen on a distribute-type volume, and the increased number of fsync calls is coming from the AFR layer.
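For reference, counts like the ones above can be gathered with standard tooling; a sketch, assuming a replicated volume named VOLNAME (placeholder) and the workload being driven from an NFS client:

```shell
# On the NFS client: per-operation NFSv3 call counts, including COMMIT.
nfsstat -c -3

# On a Gluster server: enable per-FOP profiling, run the workload,
# then read back the FSYNC call counts per brick.
gluster volume profile VOLNAME start
# ... run the large-file write test from the client here ...
gluster volume profile VOLNAME info | grep -i fsync
gluster volume profile VOLNAME stop
```

Resetting/re-reading the counters around a single run keeps the numbers attributable to that run rather than to background traffic.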
In the AFR layer, lock->release was set to true whenever multiple fds were open, which caused the high frequency of fsync calls. This issue was fixed with the patch - https://review.gluster.org/#/c/glusterfs/+/21210/
Since the AFR BZ is fixed in 3.4.1, this use case can now be tested via the nfs-ganesha protocol.
I'm not able to reproduce it on my setup. Could you share your setup and volume configuration details?
Hi Girjesh, As my setup is busy with other testing, I cannot provide it for this testing right now. But looking at my analysis it seems there is still a performance drop. So, until we debug this issue I am assigning this bug back to you.
(In reply to Sachin P Mali from comment #54) > Hi Girjesh, > As my setup is busy with other testing, I cannot provide it for this > testing right now. But looking at my analysis it seems there is still a > performance drop. So, until we debug this issue I am assigning this bug > back to you. Could you share the nfsstat data from the clients and the gluster profile data from the servers? Please also share details of the volume you're testing on and any other tunables you're setting while testing. Data for RCA is not available, hence moving the bug back to QA.
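For the record, the volume configuration being requested can be captured with the standard gluster CLI; a sketch, with VOLNAME as a placeholder for the volume under test:

```shell
# Volume type, brick layout, and any reconfigured options.
gluster volume info VOLNAME

# Full list of effective option values (defaults plus overrides),
# which covers any tunables set for the test.
gluster volume get VOLNAME all
```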
Hi Girjesh, I will try to provide the requested data by tomorrow.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0260