Bug 1727287 - [Perf] Rename regressing by over 17% in replica3 volume over NFS-Ganesha v4.1
Summary: [Perf] Rename regressing by over 17% in replica3 volume over NFS-Ganesha v4.1
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.5
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Girjesh Rajoria
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On: 1713890
Blocks:
TreeView+ depends on / blocked
 
Reported: 2019-07-05 11:56 UTC by Soumya Koduri
Modified: 2019-10-14 15:57 UTC (History)
15 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-18 08:39:55 UTC
Embargoed:



Description Soumya Koduri 2019-07-05 11:56:07 UTC
Description of problem:

As per the latest perf runs (on glusterfs-6.0.6 and nfs-ganesha-2.7.3-5), there is a regression of 17.05% in the RENAME workload over the NFSv4.1 protocol:

https://docs.google.com/spreadsheets/d/1r-peEBfHjds27ISwuikWgPJ4djAGQqqZu397wGrWtE4/edit#gid=525127835

Note: on FUSE, the same workload regressed by ~5-6% (bug 1713890).

Version-Release number of selected component (if applicable):
glusterfs-6.0.6
nfs-ganesha-2.7.3-5

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 21 Girjesh Rajoria 2019-08-14 06:28:23 UTC
In gdb I saw client4_0_lookup hit 9-10 times, server4_0_lookup hit 20254 times, and posix_lookup hit 20250 times on the gluster side, while the gluster profile showed 20255 lookup calls.

Soumya, what's your take on the difference between the number of calls to client4_0_lookup and server4_0_lookup?
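For reference, one way to collect per-function hit counts like the ones above is to set gdb breakpoints with large ignore counts, so gdb tallies each crossing without stopping. This is a sketch, not the exact session used here; the breakpoint numbers and ignore counts are illustrative, and since client4_0_lookup runs in the client process while server4_0_lookup runs in the brick process, separate gdb sessions would be attached to each:

    # Attach gdb to the relevant gluster process, then:
    (gdb) break client4_0_lookup
    (gdb) ignore 1 1000000     # count the next 1000000 hits of bp 1 without stopping
    (gdb) continue
    # ... run the rename workload, then interrupt (Ctrl-C) and inspect:
    (gdb) info breakpoints     # each entry reports "breakpoint already hit N times"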

