Bug 1334858

Summary: [Perf] : ls-l is not as performant as it used to be on older RHGS builds
Product: Red Hat Gluster Storage
Reporter: Ambarish <asoman>
Component: core
Assignee: Pranith Kumar K <pkarampu>
Status: CLOSED ERRATA
QA Contact: Ambarish <asoman>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: amukherj, pkarampu, rcyriac, rhinduja, rhs-bugs
Target Milestone: ---
Target Release: RHGS 3.2.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-23 05:30:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Bug Depends On: 1369187
Bug Blocks: 1351522

Description Ambarish 2016-05-10 16:29:19 UTC
Description of problem:
-----------------------

ls-l gave 25876.938824 files/sec (measured on the hotfix build delivered for https://bugzilla.redhat.com/show_bug.cgi?id=1287531), compared to 17305.069449 files/sec on the latest RHGS downstream build, 3.7.9-3.
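The size of the regression can be quantified directly from the two figures above; a quick sketch:

```python
# Throughput figures reported in this bug (files/sec)
hotfix_iops = 25876.938824   # hotfix build (from BZ 1287531)
latest_iops = 17305.069449   # glusterfs-3.7.9-3

# Drop relative to the hotfix build
drop_pct = (hotfix_iops - latest_iops) / hotfix_iops * 100
print(f"drop: {drop_pct:.1f}%")
```

This works out to roughly a 33% drop relative to the hotfix build, consistent with the ">30%" figure cited below.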

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

glusterfs-3.7.9-3.el6rhs.x86_64


How reproducible:
-----------------

Every which way I try.


Steps to Reproduce:
------------------

1. Install the older/hotfix gluster RPMs. Run the ls-l workload via smallfile.

2. Upgrade to the 3.7.9-3 build. Run the same workload again.

3. The older version shows roughly 30% better performance.

Actual results:
--------------

~30% difference in throughput (files/sec) between the older and the latest RHGS builds

Expected results:
-----------------

Newer builds should be at least as performant as the older builds.

Additional info:
---------------
10GbE network
4 nodes, 4 clients, 1 mount per server.
2x2 distributed-replicate volume
OS: RHEL 6.x
The volume is "performance tuned", i.e. cluster.lookup-optimize is on, and server.event-threads and client.event-threads are set to 4 each.
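For reference, the tuning described above can be applied with the gluster CLI. The volume name below is a placeholder, not taken from this report:

```shell
# Hypothetical volume name; substitute the actual volume.
VOL=testvol

# Tuning as described in "Additional info"
gluster volume set $VOL cluster.lookup-optimize on
gluster volume set $VOL server.event-threads 4
gluster volume set $VOL client.event-threads 4
```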

Comment 3 Ambarish 2016-05-10 16:32:39 UTC
*EXACT WORKLOAD* :

Ran this twice:

*DROP CACHE*;python /small-files/smallfile/smallfile_cli.py --operation ls-l --threads 8  --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`" --pause 500
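The *DROP CACHE* step is not spelled out above; one common way to do it on Linux (an assumption, not confirmed by the reporter) is to flush the kernel caches before each run:

```shell
# Assumed expansion of the *DROP CACHE* placeholder: flush dirty pages,
# then free pagecache, dentries and inodes (requires root).
sync
echo 3 > /proc/sys/vm/drop_caches

# Then the workload exactly as reported ($CLIENT and paths as in the comment):
python /small-files/smallfile/smallfile_cli.py --operation ls-l --threads 8 \
    --file-size 64 --files 10000 --top /gluster-mount \
    --host-set "`echo $CLIENT | tr ' ' ','`" --pause 500
```

On a multi-node setup the cache drop would normally be run on every server and client, so cold-cache numbers are comparable between runs.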

Comment 9 Atin Mukherjee 2016-08-30 03:52:45 UTC
Fix http://review.gluster.org/15237 has already made it into the upstream release-3.8 branch and hence should be available in rhgs-3.2.0 as part of the rebase.

Comment 12 Atin Mukherjee 2016-09-21 08:15:05 UTC
Given that the fix for this BZ is available in rhgs-3.2.0 as part of the rebase, moving it to MODIFIED. Errata should be able to move it to ON_QA shortly.

Comment 14 Ambarish 2016-10-03 17:45:04 UTC
Tested on 3.8.4-1 and 3.8.4-2.

*On RHEL 7* - 25612.371320 files/sec
*On RHEL 6* - 25547.426860 files/sec
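Comparing these verified numbers against the hotfix baseline from the description shows the fix essentially closes the gap; a quick check:

```python
# Figures from this bug report (files/sec)
hotfix_baseline = 25876.938824   # hotfix build, from the description
rhel7_verified  = 25612.371320   # 3.8.4 build on RHEL 7
rhel6_verified  = 25547.426860   # 3.8.4 build on RHEL 6

for name, v in [("RHEL 7", rhel7_verified), ("RHEL 6", rhel6_verified)]:
    print(f"{name}: {v / hotfix_baseline:.1%} of the hotfix baseline")
```

Both platforms land within ~1.3% of the hotfix throughput, versus the ~33% deficit originally reported.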

Happily moving this to Verified :)

Comment 16 errata-xmlrpc 2017-03-23 05:30:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html