Bug 1334858 - [Perf] : ls-l is not as performant as it used to be on older RHGS builds
Summary: [Perf] : ls-l is not as performant as it used to be on older RHGS builds
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Pranith Kumar K
QA Contact: Ambarish
URL:
Whiteboard:
Depends On: 1369187
Blocks: 1351522
 
Reported: 2016-05-10 16:29 UTC by Ambarish
Modified: 2017-03-23 05:30 UTC
CC: 5 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 05:30:35 UTC
Embargoed:


Attachments: None


Links:
System ID: Red Hat Product Errata RHSA-2017:0486
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update
Last Updated: 2017-03-23 09:18:45 UTC

Description Ambarish 2016-05-10 16:29:19 UTC
Description of problem:
-----------------------

ls-l gave an IOPS of 25876.938824 files/sec (measured on the hotfix provided as part of https://bugzilla.redhat.com/show_bug.cgi?id=1287531), compared to 17305.069449 files/sec on the latest RHGS downstream build, 3.7.9-3.
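
Going by those figures, the drop is (25876.938824 - 17305.069449) / 25876.938824 ≈ 0.33, i.e. roughly a 33% regression relative to the hotfix build.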

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

glusterfs-3.7.9-3.el6rhs.x86_64


How reproducible:
-----------------

Every which way I try.


Steps to Reproduce:
------------------

1. Install the older/hotfix gluster RPMs and run the ls-l workload via smallfile.

2. Upgrade to the 3.7.9-3 build and run the same workload again (a version-check sketch follows this list).

3. Compare the results: the older build shows roughly 30% better performance.
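
A minimal sketch of the version check around the upgrade in step 2; the yum invocation is an assumption, not taken from this report:

rpm -q glusterfs                 # confirm the installed build before and after the upgrade
yum update 'glusterfs*'          # move to the 3.7.9-3 build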

Actual results:
--------------

Roughly 30% drop in IOPS between the older and the latest RHGS builds.

Expected results:
-----------------

Newer builds should be at least as performant as the older builds.

Additional info:
---------------
10GbE network
4 nodes, 4 clients, 1 mount per server.
2x2 distributed-replicate volume
OS: RHEL 6.x
The volume is "performance tuned", i.e. cluster.lookup-optimize is on and server.event-threads and client.event-threads are set to 4 each.
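
For reference, a minimal sketch of how that tuning is typically applied through the gluster CLI; "testvol" is a placeholder volume name, not taken from this report:

gluster volume set testvol cluster.lookup-optimize on    # skip negative lookups on non-hashed subvols
gluster volume set testvol server.event-threads 4        # brick-side event threads
gluster volume set testvol client.event-threads 4        # client-side event threads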

Comment 3 Ambarish 2016-05-10 16:32:39 UTC
*EXACT WORKLOAD* :

Ran this twice :

*DROP CACHE*;python /small-files/smallfile/smallfile_cli.py --operation ls-l --threads 8  --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`" --pause 500
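
The *DROP CACHE* step is not spelled out here; a common way to do it (an assumption about what was run, not taken from this report) is to drop the kernel page, dentry, and inode caches on every client and server before each run:

sync; echo 3 > /proc/sys/vm/drop_caches    # run as root on each client and server

The `echo $CLIENT | tr ' ' ','` part turns a space-separated host list into the comma-separated form --host-set expects; e.g. with CLIENT="client1 client2" (hypothetical hostnames), --host-set becomes "client1,client2".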

Comment 9 Atin Mukherjee 2016-08-30 03:52:45 UTC
Fix http://review.gluster.org/15237 has already made it into the release-3.8 branch upstream and hence should be available in rhgs-3.2.0 as part of a rebase.

Comment 12 Atin Mukherjee 2016-09-21 08:15:05 UTC
Given the fix for this BZ is available in rhgs-3.2.0 as part of the rebase, moving it to MODIFIED. Errata should be able to move it to ON_QA shortly.

Comment 14 Ambarish 2016-10-03 17:45:04 UTC
Tested on 3.8.4-1 and 3.8.4-2.

*On RHEL 7* - 25612.371320 files/sec
*On RHEL 6* - 25547.426860 files/sec

Happily moving this to Verified :)

Comment 16 errata-xmlrpc 2017-03-23 05:30:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

