Bug 1382091

Summary: [Perf] : Poor small file performance on Ganesha v3 mounts compared to Gluster NFS
Summary: [Perf] : Poor small file performance on Ganesha v3 mounts compared to Gluster NFS
Product: Red Hat Gluster Storage
Component: nfs-ganesha
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: high
Priority: high
Reporter: Ambarish <asoman>
Assignee: Girjesh Rajoria <grajoria>
QA Contact: Manisha Saini <msaini>
CC: ffilz, jthottan, kkeithle, pasik, rcyriac, rhs-bugs, skoduri, storage-qa-internal
Keywords: Performance, Triaged, ZStream
Type: Bug
Last Closed: 2020-06-15 13:10:25 UTC

Description Ambarish 2016-10-05 17:25:05 UTC
Description of problem:
-----------------------

**The intent of opening this BZ is to compare small-file IOPS on gNFS and Ganesha v3 mounts, and to reduce the magnitude of the difference between the two.**

Small-file I/O (creates, reads, appends, etc.) is noticeably slower on Ganesha v3 mounts than on gNFS under the exact same workload and environment.

Details in comments.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

nfs-ganesha-2.4.0-2.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64

How reproducible:
-----------------

100%

Steps to Reproduce:
-------------------

Run smallfile in a distributed, multithreaded fashion from 1 to 4 clients:

python /small-files/smallfile/smallfile_cli.py --operation create  --threads 8  --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`"

Actual results:
---------------

There is a difference in throughput (almost 60% in some cases) between gNFS and Ganesha v3 under the same workload.

Expected results:
-----------------

The difference between the two should not be as pronounced as it is at the moment.

Additional info:
----------------

Vol Type: 2x2 Distributed-Replicate
Client and Server OS: RHEL 7

Server Profiles will be updated once https://bugzilla.redhat.com/show_bug.cgi?id=1381353 is fixed.

Comment 3 Jiffin 2016-10-26 13:27:54 UTC
So I tried to reproduce this in my setup with a smaller load (2500 files) and a single client
(the multiple-client test fails spuriously due to an ssh issue).
I can still see performance degradation on Ganesha v3 mounts compared to Gluster NFS.
Here I will concentrate on file operations, including create, read, append, rename, and remove. Going by the numbers, directory operations like mkdir and rmdir are still comparable with gNFS.

This is my initial analysis based on packet traces and profiling.

1.) For operations like read/append/create, Ganesha issues an additional open call that gNFS does not. gNFS performs these operations with the help of an anonymous fd, so no explicit open call is sent to the server. For small files, the time spent in the open call is almost as large as that of the read itself. (See the sketch at the end of this comment.)

2.) For renames, Ganesha performs two additional lookups compared to gNFS.

3.) A delete in Ganesha is equivalent to flush+release+unlink, whereas in gNFS it is just an unlink call.

I will try to figure out the reasons for 2 and 3 in the coming days.
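
To make point 1 concrete, here is a minimal sketch of the two read paths against libgfapi, assuming a glusterfs 3.8-era gfapi and an already-resolved file handle ("obj"); error handling is trimmed, and this is illustrative rather than Ganesha's actual FSAL code:

    #include <fcntl.h>
    #include <glusterfs/api/glfs.h>
    #include <glusterfs/api/glfs-handles.h>

    /* Open-based path: an explicit OPEN (and later RELEASE) goes to the
       bricks before the read, which is the extra fop noted in point 1. */
    ssize_t read_via_open(struct glfs *fs, struct glfs_object *obj,
                          void *buf, size_t len, off_t off)
    {
            glfs_fd_t *fd = glfs_h_open(fs, obj, O_RDONLY);  /* extra fop */
            if (!fd)
                    return -1;
            ssize_t ret = glfs_pread(fd, buf, len, off, 0);
            glfs_close(fd);                                  /* extra fop */
            return ret;
    }

    /* Anonymous-fd path: what gNFS effectively does. The read is served
       on an anonymous fd, so no OPEN/RELEASE ever hits the wire. */
    ssize_t read_via_anon(struct glfs *fs, struct glfs_object *obj,
                          void *buf, size_t len, off_t off)
    {
            return glfs_h_anonymous_read(fs, obj, buf, len, off);
    }

For a small file served in a single read, the open-based path roughly doubles the round trips per file, which is consistent with the gap measured above.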

Comment 4 Jiffin 2016-11-02 13:37:21 UTC
For read/create/append, I have used glfs_h_anonymous_read and glfs_h_anonymous_write, and I can see a performance boost on my workload.
Workload: single client, 4 threads, 2500 files of 64 KB each.
Setup: 2 servers, 1x2 volume.

(average of two runs)

operation      gNFS    Ganesha v3    Ganesha v3 (no open)
create           65            48                      67
read            418           210                     277
append          150           119                     144
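
For reference, the write-side counterpart behind the "Ganesha v3 (no open)" column would look roughly like the following; again a hedged sketch against gfapi assuming a resolved handle, not the actual patch:

    /* One anonymous write per request: no OPEN/RELEASE pair, mirroring
       glfs_h_anonymous_read on the read side. */
    ssize_t write_via_anon(struct glfs *fs, struct glfs_object *obj,
                           const void *buf, size_t len, off_t off)
    {
            return glfs_h_anonymous_write(fs, obj, buf, len, off);
    }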

Comment 10 surabhi 2016-11-29 10:06:17 UTC
Per triage, we all agree that this BZ has to be fixed in rhgs-3.2.0. Providing qa_ack.

Comment 20 Kaleb KEITHLEY 2020-02-13 14:34:17 UTC
Fixed in the current build, pending QE verification.