Bug 1001850 - Regression in large file sequential writes
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified   OS: Unspecified
Priority: unspecified   Severity: high
Assigned To: Bug Updates Notification Mailing List (storage-qa-internal@redhat.com)
Depends On:
Blocks:
 
Reported: 2013-08-27 20:58 EDT by Anush Shetty
Modified: 2015-12-03 12:11 EST
CC List: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:11:59 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anush Shetty 2013-08-27 20:58:10 EDT
Description of problem: We see a large drop in performance when running large-file sequential writes with iozone, compared with RHS 2.0.


Version-Release number of selected component (if applicable): 3.4.0.20rhs-2.el6rhs


How reproducible: Always


Steps to Reproduce:
1. Create a 2x2 Distributed-Replicate volume
2. Mount 2 fuse clients
3. Run iozone in clustered mode with the options: -w -c -e -i 0 -+n -r 64k -s 1g -t 8 (see the command sketch after this list)
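
A minimal reproduction sketch of the steps above (the volume name, hostnames, brick paths, mount point, and clients.txt file are placeholders, not taken from this report; clustered iozone additionally needs a -+m client list, which is assumed here):

    # On one server: create and start a 2x2 distributed-replicate volume
    gluster volume create testvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1
    gluster volume start testvol

    # On each of the two clients: mount the volume over FUSE
    mount -t glusterfs server1:/testvol /mnt/testvol

    # Drive the clustered run from one node; each line of clients.txt is
    # "<client hostname> <mount directory> <path to iozone binary>"
    iozone -+m clients.txt -w -c -e -i 0 -+n -r 64k -s 1g -t 8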

Actual results:

Average write throughput from 3 runs, in Kbytes/sec:

3.3.0.7rhs-1.el6rhs : 138077
3.4.0.20rhs-2.el6rhs : 90974


Expected results:

Write throughput should be comparable to RHS 2.0.

Additional info:

Log files here: http://rhs-client2.lab.eng.blr.redhat.com/iozone/run21/
Comment 2 Anush Shetty 2013-08-27 22:10:55 EDT
Small correction: the file size was 10G, not 1G as mentioned under 'Steps to Reproduce'. Apologies for the typo.
Comment 3 Anush Shetty 2013-08-27 23:05:18 EDT
We see a huge improvement in write throughput with 3.4.0.23rhs-1.el6rhs

With 3.4.0.20rhs-2.el6rhs: 90974 Kbytes/sec
With 3.4.0.23rhs-1.el6rhs: 117405 Kbytes/sec
Comment 4 Anush Shetty 2013-08-28 01:53:29 EDT
run6 - glusterfs - 3.3.0.7rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run21 - glusterfs - 3.4.0.20rhs-2.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run23 - glusterfs - 3.4.0.23rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run25 - glusterfs - 3.4.0.24rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)

Throughput in Kbytes/sec:

Operations                      RUN6    RUN21   RUN23   RUN25
-------------------------       ------- ------- ------- -------
write                           138077  90974   117405  98672
read                            194193  168711  165774  170840
Comment 5 Vivek Agarwal 2015-12-03 12:11:59 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
