Description of problem:
We see a huge drop in performance while running large-file sequential writes with iozone, in comparison with RHS 2.0.

Version-Release number of selected component (if applicable):
3.4.0.20rhs-2.el6rhs

How reproducible:
Always

Steps to Reproduce:
1. Create a 2x2 Distributed-Replicate volume
2. Mount 2 fuse clients
3. Run iozone in clustered mode with the options: -w -c -e -i 0 -+n -r 64k -s 1g -t 8

Actual results:
Average write throughput from 3 runs, in Kbytes/sec:
3.3.0.7rhs-1.el6rhs  : 138077
3.4.0.20rhs-2.el6rhs :  90974

Expected results:
Results should be comparable to RHS 2.0.

Additional info:
Log files here: http://rhs-client2.lab.eng.blr.redhat.com/iozone/run21/
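The steps above can be sketched as a shell script. Hostnames (server1..server4), brick paths, the volume name "testvol", and the client-list file path are assumptions, not taken from this report; the iozone options are the reported ones (with the 10g file size from the later correction). The commands are built and echoed rather than executed, since they need a live Gluster cluster.

```shell
#!/bin/sh
# Reproduction sketch for the iozone sequential-write regression.
# All hostnames, brick paths, and file paths below are placeholders.

VOL=testvol

# 1. A 2x2 distributed-replicate volume: 4 bricks, replica count 2.
CREATE="gluster volume create $VOL replica 2 \
server1:/bricks/b1 server2:/bricks/b1 \
server3:/bricks/b1 server4:/bricks/b1"
START="gluster volume start $VOL"

# 2. FUSE mount of the volume (run on each of the two clients).
MOUNT="mount -t glusterfs server1:/$VOL /mnt/$VOL"

# 3. Clustered iozone: -+m points at a client-list file; -i 0 selects
#    the write test, -c/-e include close and fsync in the timing,
#    -+n skips retests; 8 threads, 64k records, 10g files.
IOZONE="iozone -+m /root/iozone_clients -w -c -e -i 0 -+n -r 64k -s 10g -t 8"

for cmd in "$CREATE" "$START" "$MOUNT" "$IOZONE"; do
    echo "$cmd"
done
```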
Small correction: the size of the file was 10G, not 1G as stated under 'Steps to Reproduce'. Apologies for the typo.
We see a huge improvement in write throughput with 3.4.0.23rhs-1.el6rhs:
With 3.4.0.20rhs-2.el6rhs: 90974 Kbytes/sec
With 3.4.0.23rhs-1.el6rhs: 117405 Kbytes/sec
run6  - glusterfs - 3.3.0.7rhs-1.el6rhs  - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run21 - glusterfs - 3.4.0.20rhs-2.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run23 - glusterfs - 3.4.0.23rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)
run25 - glusterfs - 3.4.0.24rhs-1.el6rhs - IOZONE - [-w -c -e -i 0 -+n -r 64k -s 10g -t 8] - distrep - (quota off, gsync off)

Operations    RUN6     RUN21    RUN23    RUN25
----------    -------  -------  -------  -------
write         138077    90974   117405    98672
read          194193   168711   165774   170840
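For reference, the size of the regression relative to the 3.3.0.7rhs-1 baseline can be computed from the write row above; a quick sketch (run labels match the listing):

```python
# Average write throughput (Kbytes/sec) from the runs above.
runs = {
    "3.3.0.7rhs-1.el6rhs  (run6)":  138077,
    "3.4.0.20rhs-2.el6rhs (run21)":  90974,
    "3.4.0.23rhs-1.el6rhs (run23)": 117405,
    "3.4.0.24rhs-1.el6rhs (run25)":  98672,
}

baseline = runs["3.3.0.7rhs-1.el6rhs  (run6)"]
for build, kbps in runs.items():
    # Percentage change of this build's write throughput vs the baseline.
    delta = (kbps - baseline) / baseline * 100
    print(f"{build}: {kbps} KB/s ({delta:+.1f}% vs baseline)")
```

This puts the originally reported build (run21) about 34% below the 2.0 baseline, with run23 recovering to roughly 15% below it.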
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which it was reported is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.