Description of problem:
-----------------------
The intent of opening this BZ is to compare small-file IOPS on gNFS and Ganesha v3 mounts, and to reduce the magnitude of the difference between the two.

Small-file I/O (creates, reads, appends, etc.) is noticeably slower on Ganesha v3 mounts than on gNFS under the exact same workload and environment. Details in comments.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
nfs-ganesha-2.4.0-2.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64

How reproducible:
-----------------
100%

Steps to Reproduce:
-------------------
Run smallfile in a distributed, multithreaded way from 1/4 clients:

python /small-files/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`"

Actual results:
---------------
There is a difference in throughput (almost 60% in some cases) between gNFS and Ganesha v3 under the same workload.

Expected results:
-----------------
The difference between the two should not be as pronounced as it is at the moment.

Additional info:
----------------
Vol Type: 2x2 Distributed-Replicate
Client and Server OS: RHEL 7
Server profiles will be updated once https://bugzilla.redhat.com/show_bug.cgi?id=1381353 is fixed.
I tried to reproduce something similar on my setup with a smaller load (2500 files) and a single client (the multiple-client test fails spuriously due to an ssh issue). I can still see performance degradation on Ganesha v3 mounts compared with Gluster NFS. Here I will concentrate on file operations, including create, read, append, rename and remove; going by the numbers, directory operations such as mkdir and rmdir are still comparable with gNFS.

This is my initial analysis, based on packet traces and profiling:

1.) For operations like read/append/create, Ganesha issues an additional open call compared to gNFS. gNFS performs these operations with the help of an anonymous fd, so no explicit open call is sent to the server. For small files, the time spent on the open call is almost the same as that of the read itself (see the sketch below).

2.) For renames, Ganesha performs two additional lookups compared to gNFS.

3.) A delete in Ganesha is equivalent to flush + release + unlink, whereas in gNFS it is just an unlink call.

I will try to figure out the reasons for 2.) and 3.) in the coming days.
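To make point 1.) concrete, here is a minimal gfapi sketch of the open-per-I/O call pattern. This is not Ganesha's actual FSAL_GLUSTER code, only an illustration; the volume name "testvol", host "server1" and file path are placeholders.

#include <fcntl.h>
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    char buf[64 * 1024];

    /* "testvol" and "server1" are placeholders for the test volume/host. */
    glfs_t *fs = glfs_new("testvol");
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
    if (glfs_init(fs) != 0)
            return 1;

    /* Open-per-I/O pattern: the open/release round trips are pure overhead
     * for a file this small -- the open costs roughly as much as the read
     * it enables.                                                          */
    glfs_fd_t *fd = glfs_open(fs, "/dir/file_00001", O_RDONLY);
    if (fd) {
            ssize_t n = glfs_read(fd, buf, sizeof(buf), 0);
            printf("read %zd bytes\n", n);
            glfs_close(fd);
    }

    glfs_fini(fs);
    return 0;
}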
For read/create/append, I have used glfs_h_anonymous_read and glfs_h_anonymous_write, and I can see a performance boost on my workload.

Workload: single client, 4 threads, 2500 files of 64 KB each
Setup: 2 servers, 1x2 volume

Throughput (average of two runs):

operation   gNFS   Ganesha v3   Ganesha v3 without open
create        65           48                        67
read         418          210                       277
append       150          119                       144
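As a contrast to the open-per-I/O sketch above, here is a minimal sketch of the anonymous-fd path through the gfapi handle API, assuming the glfs_h_lookupat/glfs_h_anonymous_read declarations in glfs-handles.h for this release; the volume name, host and path are again placeholders. The I/O goes directly against the object handle, so no open/release pair is sent per file, which is the behaviour gNFS already gets for NFSv3 READ/WRITE.

#include <stdio.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int main(void)
{
    char buf[64 * 1024];
    struct stat st;

    /* "testvol" and "server1" are placeholders for the test volume/host. */
    glfs_t *fs = glfs_new("testvol");
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
    if (glfs_init(fs) != 0)
            return 1;

    /* Resolve the file to an object handle once (NULL parent resolves the
     * path relative to the volume root).                                  */
    struct glfs_object *obj =
            glfs_h_lookupat(fs, NULL, "dir/file_00001", &st, 0);
    if (obj) {
            /* Anonymous-fd I/O: read against the handle directly, with no
             * explicit open/close round trips per file.                   */
            ssize_t n = glfs_h_anonymous_read(fs, obj, buf, sizeof(buf), 0);
            printf("read %zd bytes\n", n);
            glfs_h_close(obj);
    }

    glfs_fini(fs);
    return 0;
}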
As per triaging, we all agree that this BZ has to be fixed in rhgs-3.2.0. Providing qa_ack.
Fixed in the current build; pending QE verification.