Bug 1324608 - [Perf] : Large file random writes regressed on FUSE mounts by 11-24% on RHGS 3.1.3
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: write-behind
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Csaba Henk
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-06 19:02 UTC by Ambarish
Modified: 2018-04-16 18:17 UTC
CC List: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-16 18:17:40 UTC
Target Upstream Version:


Attachments (Terms of Use)
Console logs showing Iozone's Random R/W throughput (20.98 KB, application/vnd.oasis.opendocument.text)
2016-04-06 19:02 UTC, Ambarish


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1324612 0 high CLOSED [Perf] : Large File Random Write performance is off target by 14% on gNFS mounts 2021-02-22 00:41:40 UTC

Internal Links: 1324612

Description Ambarish 2016-04-06 19:02:11 UTC
Created attachment 1144313 [details]
Console logs showing Iozone's Random R/W throughput

Description of problem:

I see a regression on large file random writes with FUSE mounted volumes.

This is from one of the automated runs :

With 3.1.2 (baseline):  mean rand write throughput = 389985.975000 KB/s

With 3.1.3 : mean rand write throughput = 293772.400000 KB/s

Regression : -24.67 percent
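The regression figure can be recomputed from the two means above. A minimal check (the awk one-liner is illustrative, not part of the test harness):

```shell
# Recompute the regression percentage from the two reported means.
baseline=389985.975000   # 3.1.2 mean random-write throughput, KB/s
current=293772.400000    # 3.1.3 mean random-write throughput, KB/s

awk -v b="$baseline" -v c="$current" \
    'BEGIN { printf "Regression: %.2f percent\n", (c - b) / b * 100 }'
# prints "Regression: -24.67 percent"
```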


Version-Release number of selected component (if applicable):

glusterfs-3.7.5-19.el6rhs.x86_64

How reproducible:

2/2

Steps to Reproduce:

1. Run the iozone random R/W test (-i 2) on FUSE mounts with 3.1.2, three times.
2. Run the same test three times after upgrading to RHGS 3.1.3.
3. Compare the mean throughputs; they should not vary by more than 10%.
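The steps above can be sketched as a script. The client hostnames, mount path, and iozone binary path below are assumptions; only the workload parameters (-i 2, 2G files, 64K records, 16 threads across 4 clients) come from this report. The final line only prints the iozone command, since a live run needs the mounted volume:

```shell
#!/bin/sh
# Sketch of the distributed iozone run; hostnames and paths are hypothetical.
MOUNT=/mnt/testvol
CLIENTS="client1 client2 client3 client4"

# iozone's -+m cluster file: one "host workdir iozone-path" line per thread;
# 16 threads spread evenly across the 4 clients (4 lines per client).
: > clients.ioz
for h in $CLIENTS; do
    for t in 1 2 3 4; do
        echo "$h $MOUNT /usr/bin/iozone" >> clients.ioz
    done
done

# -i 0 seeds the files, -i 2 runs the random read/write test.
echo "iozone -+m clients.ioz -i 0 -i 2 -s 2g -r 64k -t 16"
```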

Actual results:

Random write performance is off target by ~24%.

Expected results:

Regression Threshold is 10%.

Additional info:

OS : RHEL 6.7

Iozone was used in a distributed, multithreaded manner with a 2G file size, a record size of 64K, and a total of 16 threads.


Setup consisted of 4 servers and 4 clients (1X mount per server) on a 10GbE network.

Volume Settings :

[root@gqas001 ~]# gluster v info

 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 2a668beb-7f26-48f9-8550-157108fe1a55
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
[root@gqas001 ~]#
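For reference, the volume layout and options above could be recreated roughly as follows. The server hostnames and brick paths are taken from the gluster v info output; the client mount point and mount server choice are assumptions:

```shell
# Recreate the 2x2 distributed-replicate volume from this report.
gluster volume create testvol replica 2 \
    gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0 \
    gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1 \
    gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2 \
    gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3

# Apply the reconfigured options listed above.
gluster volume set testvol performance.readdir-ahead on
gluster volume set testvol performance.stat-prefetch off
gluster volume set testvol server.allow-insecure on
gluster volume start testvol

# FUSE-mount on a client (mount point is hypothetical).
mount -t glusterfs gqas001.sbu.lab.eng.bos.redhat.com:/testvol /mnt/testvol
```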

Console logs attached for two tests (machines were reimaged in between).

Comment 3 Ambarish 2016-04-07 13:32:22 UTC
Ugggh!
I meant Version number =  glusterfs-3.7.9-1.el6rhs.x86_64

Comment 9 Ambarish 2016-04-28 09:47:07 UTC
Hitting the reported issue with 3.7.9-2 build as well.

Comment 10 Ambarish 2016-04-28 11:22:01 UTC
Reproduced on 3.7.9-2. The issue is intermittent, though.

