Bug 1429230 - [Perf] : Random writes have regressed by 36% on plain distribute volumes mounted via FUSE
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: io-threads
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Ravishankar N
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-03-05 18:41 UTC by Ambarish
Modified: 2018-11-09 03:53 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-09 03:53:31 UTC
Embargoed:


Attachments


Links
Red Hat Bugzilla 1395204 (public, priority low, status CLOSED): "34% drop in Random Writes from 3.1.3 to 3.2 on replica 2 FUSE mounts" (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1395204

Description Ambarish 2017-03-05 18:41:23 UTC
Description of problem:
-----------------------

A regression appears to have been introduced for random-write workloads on plain distribute volumes mounted via FUSE.

3.1.3 : 518085 kB/sec

3.2 : 328057 kB/sec

Regression : -36%
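
For reference, the figure follows from the two throughput numbers above:
(328057 - 518085) / 518085 ≈ -36.7%, i.e. roughly the reported 36% drop.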

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

3.8.4-15


How reproducible:
-----------------

Every time.
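
The benchmark tool and its parameters are not named in this report; the
following is only an illustrative sketch of a random-write run over a FUSE
mount (mount point, block size, file size and job count are assumptions):

# Mount the volume via FUSE (server and volume name from the info below):
mount -t glusterfs gqas014.sbu.lab.eng.bos.redhat.com:/testvol /mnt/testvol

# Hypothetical random-write workload using fio; not the original test:
fio --name=randwrite --directory=/mnt/testvol --rw=randwrite \
    --bs=64k --size=4g --numjobs=4 --ioengine=libaio --direct=1 \
    --group_reporting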


Actual results:
---------------

36% regression on 3.2 bits with io-threads enabled.


Expected results:
-----------------

Regression Threshold : ±10% (throughput should stay within 10% of the 3.1.3 baseline)


Additional info:
----------------

Volume Name: testvol
Type: Distribute
Volume ID: 35b73a47-bdc7-48b2-81a1-9b66624ae57c
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
client.event-threads: 2
server.event-threads: 2
cluster.lookup-optimize: off
performance.client-io-threads: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: off
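
(As a reproduction aid, a plain distribute volume with this brick layout can
be created along these lines; the commands below are a sketch reconstructed
from the info above, not taken from the report:)

gluster volume create testvol \
    gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0 \
    gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1 \
    gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2 \
    gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
gluster volume start testvol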

Comment 3 Ambarish 2017-03-05 18:49:46 UTC
3.1.3 : 518085 kB/sec

3.2 Defaults : 328057 kB/sec

3.2 io-threads off : 527510 kB/sec

Switching off io-threads recovers the lost throughput.
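
(For anyone re-running the A/B comparison: the client-side io-threads
translator is toggled with the standard volume-set option. The volume name is
taken from this report; remounting clients afterwards is a precaution, not a
step the report specifies.)

gluster volume set testvol performance.client-io-threads off
# re-run the workload, then restore the original setting:
gluster volume set testvol performance.client-io-threads on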

Comment 6 Ambarish 2017-03-06 06:18:16 UTC
Hi Nithya,

There was no md-cache in 3.1.3. Also, io-threads were disabled for my tests.

I'll attach the server profiles in a while.
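
(The profiling commands are not shown in the report; server-side profiles
like these are typically captured with gluster's built-in profiler:)

gluster volume profile testvol start
# run the workload, then dump the stats and stop profiling:
gluster volume profile testvol info
gluster volume profile testvol stop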

Comment 19 Atin Mukherjee 2018-11-09 03:34:08 UTC
Is this still an issue? If not, can we close this bug?

Comment 20 Ravishankar N 2018-11-09 03:53:31 UTC
I don't think we will be working on fixing rhgs-3.2 any more. I'm taking the liberty of closing the BZ even though it's not in the replicate component. Please re-open if needed, or if perf issues are seen on the latest rhgs version.

