Bug 1054133 - Slow write speed when using small blocksize
Summary: Slow write speed when using small blocksize
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.4.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-16 10:18 UTC by Johan Huysmans
Modified: 2015-10-07 13:50 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-07 13:49:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Johan Huysmans 2014-01-16 10:18:13 UTC
Description of problem:
Write speed is very slow when writing with small block sizes

Version-Release number of selected component (if applicable):
glusterfs 3.4.2 built on Jan  3 2014 12:38:26


How reproducible:
When I perform writes to the /gluster directory (this is the local xfs partition where the brick is located) I get these speeds:
# dd if=/dev/zero of=/gluster/test bs=8k count=128k conv=fsync
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB) copied, 6.61025 s, 162 MB/s

# dd if=/dev/zero of=/gluster/test bs=128k count=8k conv=fsync
8192+0 records in
8192+0 records out
1073741824 bytes (1.1 GB) copied, 6.43005 s, 167 MB/s

When I perform the same test on the glusterfs mountpoint I get the following results:
# dd if=/dev/zero of=/mnt/sharedfs/test bs=8k count=128k conv=fsync
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB) copied, 22.7804 s, 47.1 MB/s

# dd if=/dev/zero of=/mnt/sharedfs/test bs=128k count=8k conv=fsync
8192+0 records in
8192+0 records out
1073741824 bytes (1.1 GB) copied, 9.43556 s, 114 MB/s 


Additional info:

# gluster volume info
Volume Name: testvolume
Type: Distribute
Volume ID: 9476ee38-33d3-4e98-9649-2e150b92d26e
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: SRV-1:/gluster/brick1
Options Reconfigured:
network.ping-timeout: 5
performance.stat-prefetch: off


# mount | grep gluster
/dev/mapper/systemvg-gluster on /gluster type xfs (rw)
localhost:/testvolume on /mnt/sharedfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Comment 1 Jeff Darcy 2014-01-16 15:24:48 UTC
I don't mean to sound dismissive, but this is very much NOTABUG.  It's an expected characteristic of single-threaded I/O on any distributed filesystem that tries to protect against data loss.  Each write requires a network round trip.  Even if the networking is local, that's a lot more overhead than local disk I/O.  With 8 KB writes, 47.1 MB/s corresponds to 5887 IOPS, or an average round-trip latency of 0.17 ms.  You don't specify whether these tests were run on a single machine or a real network, but it's not clear why you'd expect better latency than that.
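
(As a quick sanity check on that arithmetic, the same per-write latency can be derived from dd's reported elapsed time of 22.7804 s rather than the rounded MB/s figure; the one-liner below is only illustrative:)

# awk 'BEGIN { iops = 131072 / 22.7804; printf "%.0f IOPS, %.3f ms per write\n", iops, 1000 / iops }'
5754 IOPS, 0.174 ms per write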

If you want to get a better picture of what your GlusterFS installation can do, try testing with multiple threads - and ideally multiple clients.  If you want to buffer writes locally, you can use the performance.write-behind-window-size volume option.  It is set to a very conservative 1MB by default, because setting it larger increases vulnerability to data loss if a client fails.  There are other performance-related options you can also try, but that seems like the key one you're looking for.
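
(For illustration only, a minimal sketch of both suggestions; the 4MB window size and the four parallel writers are arbitrary example values, not tuned recommendations:)

# gluster volume set testvolume performance.write-behind-window-size 4MB

# for i in 1 2 3 4; do
>   dd if=/dev/zero of=/mnt/sharedfs/test.$i bs=8k count=32k conv=fsync &
> done; wait

(Four writers at 32k records of 8k each add up to the same 1 GiB aggregate as the single-threaded test above, so the throughput numbers stay comparable.)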

Comment 2 Niels de Vos 2015-05-17 21:57:48 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information" flag below the comment box to "bugs@gluster.org".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 3 Kaleb KEITHLEY 2015-10-07 13:49:43 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version or open a new bug.

Comment 4 Kaleb KEITHLEY 2015-10-07 13:50:53 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version or open a new bug.

