Bug 1324604 - [Perf] : 14-53% regression in metadata performance with RHGS 3.1.3 on FUSE mounts
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Ashish Pandey
QA Contact: Ambarish
URL:
Whiteboard:
Depends On:
Blocks: 1311817
 
Reported: 2016-04-06 18:12 UTC by Ambarish
Modified: 2016-09-17 12:17 UTC
CC: 9 users

Fixed In Version: glusterfs-3.7.9-3
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-23 05:15:52 UTC
Embargoed:


Attachments


Links:
Red Hat Product Errata RHBA-2016:1240 (normal, SHIPPED_LIVE): Red Hat Gluster Storage 3.1 Update 3, last updated 2016-06-23 08:51:28 UTC

Description Ambarish 2016-04-06 18:12:39 UTC
Created attachment 1144309
Console logs with baseline (3.1.2) as well as RHGS 3.1.3

Description of problem:

It looks like we have regressed with the latest RHGS build while running metadata operations (ls -l, chmod, setxattr, and stat) on FUSE mounts. The deviation from the baseline (3.1.2 final build) ranges from 14% to 53%.
getxattr looks OK, though.

Version-Release number of selected component (if applicable):

glusterfs-3.7.5-19.el6rhs.x86_64

How reproducible:

3/3

Steps to Reproduce:

1. Run metadata FOPs three times using the smallfile benchmark on 3.1.2 and establish a baseline (an invocation sketch follows this list)
2. Upgrade to 3.1.3 and run the same tests three times
3. Acceptance criteria: the percentage change should be within 10%
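
For context, a rough sketch of the kind of smallfile invocation this implies, one pass per FOP; the thread count, file count, file size, and mount path here are illustrative assumptions, not the recorded test parameters:

# create the file set once, then exercise each metadata FOP against it
for op in create ls-l chmod setxattr getxattr stat ; do
    python smallfile_cli.py --operation $op --threads 8 \
        --files 10000 --file-size 64 --top /mnt/testvol
done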


Actual results:

ls -l, chmod, setxattr, and stat have regressed more than 10%. Details in comments.


Expected results:

The regression threshold is ±10%.


Additional info:

OS : RHEL 6.7

The volume was "performance tuned"; the options highlighted with asterisks below were tuned:

[root@gqas001 rpm]# gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 6410a358-53ff-409e-94ee-566937e9ab2d
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
*client.event-threads: 4*
*server.event-threads: 4*
*cluster.lookup-optimize: on*
server.allow-insecure: on
performance.stat-prefetch: off
performance.readdir-ahead: on
[root@gqas001 rpm]# 
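
For reference, options like the ones above are applied with "gluster volume set"; a minimal sketch using the volume name from this output:

# apply the event-thread and lookup-optimize tuning shown above
gluster volume set testvol client.event-threads 4
gluster volume set testvol server.event-threads 4
gluster volume set testvol cluster.lookup-optimize on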

The smallfile benchmark was run in a distributed, multithreaded manner.

The testbed consisted of 4 servers and 4 clients (one mount per server) on a 10GbE network.
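
Each client used a native FUSE mount; a typical mount command for this layout would be something like the following (the local mount point is an assumed example):

# FUSE-mount the volume from one of the servers
mount -t glusterfs gqas001.sbu.lab.eng.bos.redhat.com:/testvol /mnt/testvol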

Console logs attached.

Comment 3 Ambarish 2016-04-06 18:24:02 UTC
******
ls -l
******

3.1.2 : 14230 files/sec
3.1.3 : 12340 files/sec

Regression : -14%

With older releases, we used to see close to 25201.4 files/sec.


*****
stat
*****

3.1.2 : 5693 files/sec
3.1.3 : 2627 files/sec

Regression : -53%

********
setxattr
********

3.1.2 : 5591.6 files/sec
3.1.3 : 2609 files/sec

Regression : -53%

*****
chmod
*****

3.1.2 : 4543 files/sec
3.1.3 : 2412 files/sec

Regression : -46%

********
getxattr
********

3.1.2 : 21531.3 files/sec
3.1.3 : 21530 files/sec

Regression : 0%   ---------> PASS
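
For clarity, the regression figures above are plain percent change against the 3.1.2 baseline, i.e. (new - old) * 100 / old. A quick check for stat:

echo "scale=2; (2627 - 5693) * 100 / 5693" | bc
-53.85

which is consistent with the -53% reported above.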

Comment 4 Ambarish 2016-04-06 18:26:33 UTC
A similar issue was tracked via https://bugzilla.redhat.com/show_bug.cgi?id=1287531.
Is it possible to get a 3.1.3 build with the patch from glusterfs-3.7.5-13.2.git84d7c27.el6rhs.x86_64.rpm, which was provided as a fix for the above BZ?

Comment 5 Ambarish 2016-04-07 13:31:49 UTC
Ugggh!
I meant the version number is glusterfs-3.7.9-1.el6rhs.x86_64.

Comment 13 Ambarish 2016-04-28 09:46:38 UTC
Hitting the reported issue with the 3.7.9-2 build as well.

Comment 21 Ambarish 2016-05-05 06:16:15 UTC
Ugghh....
Please ignore the last comment.
Wrong Bug!

Comment 25 errata-xmlrpc 2016-06-23 05:15:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

