Created attachment 1144313[details]
Console logs showing Iozone's Random R/W throughput
Description of problem:
I see a significant regression (roughly 25%) on large-file random writes with FUSE-mounted volumes.
This is from one of the automated runs:
With 3.1.2 (baseline): mean random write throughput = 389985.975000 KB/s
With 3.1.3: mean random write throughput = 293772.400000 KB/s
Regression: -24.67 percent
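The regression figure follows directly from the two means above; a minimal check:

```python
# Mean random-write throughputs (KB/s) from the automated runs above.
baseline_3_1_2 = 389985.975
candidate_3_1_3 = 293772.400

# Percentage change relative to the 3.1.2 baseline.
regression_pct = (candidate_3_1_3 - baseline_3_1_2) / baseline_3_1_2 * 100
print(f"Regression: {regression_pct:.2f} percent")  # -24.67, well past the 10% threshold
```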
Version-Release number of selected component (if applicable):
glusterfs-3.7.5-19.el6rhs.x86_64
How reproducible:
2/2
Steps to Reproduce:
1. Run the iozone random R/W test (-i 2) on FUSE mounts with RHGS 3.1.2, thrice
2. Run the same test thrice after upgrading to RHGS 3.1.3
3. Compare the mean throughputs; they should not vary by more than 10%
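The steps above boil down to an iozone throughput run in distributed (cluster) mode. A sketch of the invocation, echoed rather than executed since it needs the real clients and mounts; the client-list file path is an assumption, not taken from the actual harness:

```shell
# -+m: cluster mode with a machine file (hostname, workdir, iozone path per line)
# -i 0 -i 2: sequential write (creates the files) then random read/write
# -s 2g -r 64k: 2G file per thread, 64K record size (parameters from this report)
# -t 16: 16 threads total across the clients
CMD="iozone -+m /root/clients.ioz -i 0 -i 2 -s 2g -r 64k -t 16"
echo "$CMD"
```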
Actual results:
Random write throughput regressed by ~24.67% relative to the 3.1.2 baseline.
Expected results:
Mean throughput should stay within the 10% regression threshold.
Additional info:
OS: RHEL 6.7
Iozone was run in distributed multithreaded mode with a 2G file size, a 64K record size, and a total of 16 threads.
Setup consisted of 4 servers and 4 clients (1 mount per server) on a 10GbE network.
Volume Settings :
[root@gqas001 ~]# gluster v info
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 2a668beb-7f26-48f9-8550-157108fe1a55
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
[root@gqas001 ~]#
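To reapply the non-default options above on a fresh volume when reproducing, the standard CLI can be used (volume name taken from the `gluster v info` output above; this is a config fragment, not run here):

```shell
gluster volume set testvol performance.readdir-ahead on
gluster volume set testvol performance.stat-prefetch off
gluster volume set testvol server.allow-insecure on
```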
Console logs attached for two tests (machines were reimaged in between).