Bug 1382303 - [Perf] : ~30% regression on smallfile creates on Ganesha v4 mounts
Summary: [Perf] : ~30% regression on smallfile creates on Ganesha v4 mounts
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Kaleb KEITHLEY
QA Contact: Ambarish
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-06 09:49 UTC by Ambarish
Modified: 2016-11-17 13:49 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-17 13:49:12 UTC
Target Upstream Version:



Description Ambarish 2016-10-06 09:49:12 UTC
Description of problem:
----------------------

There is a 30% regression on smallfile creates on Ganesha v4 mounts from 3.1.1 to 3.2

Creates on 3.1.1 : 724 files/sec
Creates on 3.1.3 : 524 files/sec
Creates on 3.2   : 513 files/sec

On v3 mounts, in fact, there was a ~15% increase under the same workload between 3.1.3 and 3.2.
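
The percentages above can be checked against the raw files/sec figures with a short calculation (a sketch; `regression_pct` is a hypothetical helper, not part of smallfile):

```python
def regression_pct(baseline, current):
    """Percentage drop in throughput (files/sec) relative to a baseline run."""
    return (baseline - current) / baseline * 100.0

# Figures from this report (smallfile creates, v4 mounts):
print(round(regression_pct(724, 513), 1))  # 3.1.1 -> 3.2:   29.1
print(round(regression_pct(724, 524), 1))  # 3.1.1 -> 3.1.3: 27.6
```

Both drops are well above the 10% regression threshold noted under "Expected results".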

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

nfs-ganesha-2.4.0-2.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64


How reproducible:
----------------

100%

Steps to Reproduce:
-------------------

1. Establish baseline on 3.1.1.
2. Run the same workload on 3.1.3. Check whether regressions are within 10%.
3. Exact workload: python /small-files/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`"
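
For scripted runs, the step-3 invocation can be assembled per client set as follows (a sketch; `build_smallfile_cmd` is a hypothetical helper that mirrors the `tr ' ' ','` host-set conversion in the command above):

```python
# Sketch: build the exact step-3 smallfile command for a given list of
# client hosts. The comma-joined host list reproduces what
# `echo $CLIENT | tr ' ' ','` produces in the shell version.
def build_smallfile_cmd(clients, top="/gluster-mount"):
    host_set = ",".join(clients)  # space-separated $CLIENT list -> comma-separated
    return ("python /small-files/smallfile/smallfile_cli.py "
            "--operation create --threads 8 --file-size 64 "
            "--files 10000 --top {} --host-set {}".format(top, host_set))

print(build_smallfile_cmd(["gqac001", "gqac002"]))
```

This keeps the thread count, file size, and file count identical across the 3.1.1 baseline and the later runs, so throughput numbers stay comparable.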

Actual results:
---------------

~30% regression on smallfile creates on v4.

Expected results:
-----------------

Regression Threshold is 10%.

Additional info:
----------------

* Client Server OS : RHEL 7.2

* Server Profiles to be attached after https://bugzilla.redhat.com/show_bug.cgi?id=1381353 is fixed.

* Vol Info :


Volume Name: testvol
Type: Distributed-Replicate
Volume ID: b93b99bd-d1d2-4236-98bc-08311f94e7dc
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas013.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas011.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
cluster.lookup-optimize: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
ganesha.enable: on
server.event-threads: 4
client.event-threads: 4
features.cache-invalidation: off
nfs.disable: on
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable

