| Summary: | [Perf] : Poor small file performance on Ganesha v3 mounts compared to Gluster NFS | ||
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Ambarish <asoman> |
| Component: | nfs-ganesha | Assignee: | Girjesh Rajoria <grajoria> |
| Status: | CLOSED WONTFIX | QA Contact: | Manisha Saini <msaini> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | rhgs-3.2 | CC: | ffilz, jthottan, kkeithle, pasik, rcyriac, rhs-bugs, skoduri, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Performance, Triaged, ZStream |
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-06-15 13:10:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Ambarish
2016-10-05 17:25:05 UTC
I tried to reproduce this on my setup with a smaller load (2500 files) and a single client (the multiple-client test fails spuriously because of an ssh issue). I can still see a performance degradation for Ganesha v3 mounts compared to Gluster NFS. Here I will concentrate on the file operations: create, read, append, rename, and remove. Going by the numbers, directory operations such as mkdir and rmdir are still comparable with gNFS.

This is my initial analysis, based on packet traces and profiling:

1. For operations like read/append/create, Ganesha issues an additional open call that gNFS does not. gNFS performs these operations with the help of an anonymous fd, so no explicit open call is sent to the server. For small files, the time spent on the open call is almost the same as on the read itself.
2. For renames, Ganesha performs two additional lookups compared to gNFS.
3. The delete call in Ganesha is equivalent to flush + release + unlink, whereas for gNFS it is just an unlink call.

I will try to figure out the reasons for (2) and (3) in the coming days.

For read/create/append, I used glfs_h_anonymous_read and glfs_h_anonymous_write (a sketch of these calls follows below), and I can see a performance boost on my workload.

Workload: single client, 4 threads, 2500 files of 64k each.
Setup: 2 servers, 1x2 volume. Numbers are the average of two runs.

| operation | gNFS | ganesha v3 | ganesha v3 without open |
|---|---|---|---|
| create | 65 | 48 | 67 |
| read | 418 | 210 | 277 |
| append | 150 | 119 | 144 |

As per the triaging, we all agree that this BZ has to be fixed in rhgs-3.2.0. Providing qa_ack. Fixed in the current build, pending QE verification.
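For context on point (1), here is a minimal, self-contained sketch of the anonymous-fd gfapi path mentioned above. It is not the actual FSAL_GLUSTER change under test; the volume name ("testvol"), server ("server1"), and file path are hypothetical placeholders, and the calls shown are the handle-based gfapi functions as shipped in glusterfs 3.7.4 and later.

```c
/*
 * Minimal sketch of the anonymous-fd gfapi path referred to in point (1).
 * NOT the actual FSAL_GLUSTER patch; volume name, server and file path are
 * placeholders, and error handling is kept to a minimum.
 * Build with:  gcc -o anonwrite anonwrite.c -lgfapi
 */
#include <stdio.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int main(void)
{
    struct glfs *fs = glfs_new("testvol");                  /* placeholder volume */
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "server1", 24007);   /* placeholder host */
    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return 1;
    }

    /* Resolve the file to a handle, roughly what an NFS filehandle maps to
     * inside the FSAL (5-argument form as in glusterfs >= 3.7.4). */
    struct stat st;
    struct glfs_object *obj =
        glfs_h_lookupat(fs, NULL, "/dir/smallfile.0", &st, 1);

    if (obj) {
        const char buf[] = "payload";

        /* Anonymous-fd write on the handle: no explicit open/flush/release
         * round trips, the offset is passed on every call.  This mirrors
         * what gNFS does with anonymous fds for small I/O. */
        if (glfs_h_anonymous_write(fs, obj, buf, sizeof(buf), 0) < 0)
            fprintf(stderr, "anonymous write failed\n");

        glfs_h_close(obj);   /* drops the handle, not a file descriptor */
    }

    glfs_fini(fs);
    return 0;
}
```

glfs_h_anonymous_read() takes the same arguments and covers the read side; the "ganesha v3 without open" column above is from routing the small-file read/create/append paths through these calls instead of an explicit open.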