Bug 1577822 - Sequential Writes and Reads throughput is degrading on NFS-ganesha with increasing the file size
Summary: Sequential Writes and Reads throughput is degrading on NFS-ganesha with increasing the file size
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.4
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 3
Assignee: Girjesh Rajoria
QA Contact: Sachin P Mali
URL:
Whiteboard:
Depends On: 1630688
Blocks:
TreeView+ depends on / blocked
 
Reported: 2018-05-14 08:23 UTC by Karan Sandha
Modified: 2019-02-04 07:34 UTC
CC List: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-04 07:34:12 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2019:0260 (last updated 2019-02-04 07:34:15 UTC)

Comment 17 Daniel Gryniewicz 2018-07-27 13:30:12 UTC
So, as a point of context, I tested on VFS.  I obviously can't test with 24 clients, but I got:

2G: 334113.55 kB/sec
4G: 343316.70 kB/sec
6G: 360236.56 kB/sec
8G: 358361.70 kB/sec


This is not a decline with increasing size.  This indicates to me that it may not be an issue in Ganesha proper, but in GFAPI and/or Gluster.  It's possible it's a client scaling issue, but I'm not sure how that would interact with file size to generate a slowdown.
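
The exact benchmark invocation isn't recorded here; as a rough sketch (the mount point, file names and sizes are placeholders, not values from this bug), sequential write throughput at increasing file sizes can be sampled from a single client with dd:

# Sketch only: write 2/4/6/8 GB files sequentially over the NFS mount and
# keep the throughput line dd prints last; conv=fsync forces the data to
# stable storage before the rate is reported.
for size_gb in 2 4 6 8; do
    dd if=/dev/zero of=/mnt/nfs/seqwrite.${size_gb}g bs=1M \
       count=$((size_gb * 1024)) conv=fsync 2>&1 | tail -n 1
done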

Comment 37 Anand Paladugu 2018-08-24 12:14:29 UTC
Ack to defer to batch update

Comment 39 Girjesh Rajoria 2018-10-10 15:06:33 UTC
The degradation here is caused by the number of fsync calls increasing sharply as the file size increases. These fsync calls are not requested from the client or nfs-ganesha side, but are observed on the gluster side.

Number of COMMIT calls on the client side (collected with nfsstat):
2GB: 2-3 calls
8GB: 7-8 calls

Number of fsync calls on the gluster side (collected with gluster volume profile):
2GB: 7 calls
8GB: 20178 calls

Also, the issue is not seen with distribute-type volumes, and the increased number of fsync calls originates in the AFR layer.
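
For reference, these counters can be collected roughly as follows (a sketch; <VOLNAME> is a placeholder, not a value from this bug):

# On an NFS client: per-operation call counts, including COMMIT
nfsstat -c

# On a gluster server: enable profiling, run the workload, then dump
# per-brick FOP statistics (the fsync counts show up here)
gluster volume profile <VOLNAME> start
gluster volume profile <VOLNAME> info
gluster volume profile <VOLNAME> stop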

Comment 40 Girjesh Rajoria 2018-10-15 12:06:50 UTC
In the AFR layer, lock->release was set to true when multiple fds were open, which in turn caused the high frequency of fsync calls. This issue was fixed by the patch https://review.gluster.org/#/c/glusterfs/+/21210/

Comment 41 Girjesh Rajoria 2018-10-15 13:28:57 UTC
Since the AFR BZ is fixed in 3.4.1, this use case can be tested over the nfs-ganesha protocol.
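
A rough example of mounting the volume over nfs-ganesha for such a run (server address, NFS version and mount point are placeholders, not taken from this bug):

# Sketch: mount the ganesha export, then rerun the sequential write/read test
mount -t nfs -o vers=4.1 <ganesha-node>:/<volname> /mnt/ganesha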

Comment 52 Girjesh Rajoria 2019-01-04 11:19:07 UTC
I'm not able to reproduce it on my setup. Could you share your setup and volume configuration details?

Comment 54 Sachin P Mali 2019-01-08 07:05:56 UTC
Hi Girjesh,
As my setup is busy with other testing, I cannot provide a setup for this testing right now. However, looking at my analysis, it seems there is still a performance drop.
Until we debug this issue, I am assigning this bug back to you.

Comment 55 Girjesh Rajoria 2019-01-08 09:27:46 UTC
(In reply to Sachin P Mali from comment #54)
> Hi Girjesh,
> As my setup is busy with other testing, I cannot provide a setup for this
> testing right now. However, looking at my analysis, it seems there is still
> a performance drop.
> Until we debug this issue, I am assigning this bug back to you.

Could you share the nfsstat data from the clients and the gluster profile data from the servers? Also, please share details of the volume you're testing on and any other tunables you're setting while testing.
Data for RCA is not available, hence moving the bug back to QA.
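
For reference, the volume details and tunables asked for above can be dumped roughly like this (a sketch; <VOLNAME> is a placeholder):

# Volume type, brick layout and explicitly set options
gluster volume info <VOLNAME>
# All option values, including defaults
gluster volume get <VOLNAME> all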

Comment 56 Sachin P Mali 2019-01-08 10:26:38 UTC
Hi Girjesh,
I will try to provide the requested data by tomorrow.

Comment 66 errata-xmlrpc 2019-02-04 07:34:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0260

