Bug 1001850 - Regression in large file sequential writes
Summary: Regression in large file sequential writes
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-28 00:58 UTC by Anush Shetty
Modified: 2015-12-03 17:11 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:11:59 UTC
Embargoed:



Description Anush Shetty 2013-08-28 00:58:10 UTC
Description of problem: We see a large drop in performance (roughly 34% lower write throughput) when running large-file sequential writes with iozone, compared with RHS 2.0.


Version-Release number of selected component (if applicable): 3.4.0.20rhs-2.el6rhs


How reproducible: Always


Steps to Reproduce:
1. Create a 2x2 Distributed-Replicate volume
2. Mount 2 fuse clients
3. Run iozone in clustered mode with the options: -w -c -e -i 0 -+n -r 64k -s 1g -t 8 (a command sketch follows these steps)

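For reference, a minimal command sketch of the steps above. The volume name, host names, brick paths, mount point, and iozone client file are illustrative placeholders, not taken from this report; the file size uses the 10g value from the correction in comment 2:

  # Create and start a 2x2 distributed-replicate volume (4 bricks, replica 2)
  gluster volume create testvol replica 2 \
      server1:/bricks/b1 server2:/bricks/b2 \
      server3:/bricks/b3 server4:/bricks/b4
  gluster volume start testvol

  # Mount the volume with the native FUSE client (repeat on both clients)
  mkdir -p /mnt/testvol
  mount -t glusterfs server1:/testvol /mnt/testvol

  # iozone cluster file (passed with -+m): one line per worker thread, giving
  #   <client hostname> <test directory> <path to iozone binary>
  # with -t 8, list 8 entries (e.g. 4 per client):
  #   client1 /mnt/testvol /usr/bin/iozone
  #   client2 /mnt/testvol /usr/bin/iozone
  #   ...

  # Clustered sequential-write run with the options from this report
  iozone -+m clients.ioz -w -c -e -i 0 -+n -r 64k -s 10g -t 8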
Actual results:

Average write throughput over 3 runs, in Kbytes/sec:

3.3.0.7rhs-1.el6rhs : 138077
3.4.0.20rhs-2.el6rhs : 90974


Expected results:

Write throughput should be comparable to RHS 2.0.

Additional info:

Log files here: http://rhs-client2.lab.eng.blr.redhat.com/iozone/run21/

Comment 2 Anush Shetty 2013-08-28 02:10:55 UTC
Small correction: the file size was 10G, not 1G as mentioned under 'Steps to Reproduce'. Apologies for the typo.

Comment 3 Anush Shetty 2013-08-28 03:05:18 UTC
We see a large improvement in write throughput with 3.4.0.23rhs-1.el6rhs:

With 3.4.0.20rhs-2.el6rhs: 90974 Kbytes/sec
With 3.4.0.23rhs-1.el6rhs: 117405 Kbytes/sec

Comment 4 Anush Shetty 2013-08-28 05:53:29 UTC
run6 - glusterfs - 3.3.0.7rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run21 - glusterfs - 3.4.0.20rhs-2.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run23 - glusterfs - 3.4.0.23rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run25 - glusterfs - 3.4.0.24rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)

Operations (Kbytes/sec)         RUN6    RUN21   RUN23   RUN25
-------------------------       ------- ------- ------- -------
write                           138077  90974   117405  98672
read                            194193  168711  165774  170840

Comment 5 Vivek Agarwal 2015-12-03 17:11:59 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

