Bug 822253 - Poor disk performance
Status: CLOSED INSUFFICIENT_DATA
Product: GlusterFS
Classification: Community
Component: core
Version: 3.2.6
Hardware: x86_64 Linux
Priority: low
Severity: medium
Assigned To: Amar Tumballi
Depends On:
Blocks:
Reported: 2012-05-16 16:28 EDT by Philip
Modified: 2013-12-18 19:08 EST
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-27 03:48:13 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Philip 2012-05-16 16:28:28 EDT
Hi,

We are currently running GlusterFS 3.2.6 on a two-node cluster that is hosting a replicated volume. Both storage servers have high-end hardware: 22 disks in a hardware RAID and dual Sandy Bridge quad-core CPUs. There is 16 TB of data stored on the volume, and the average file size is 4 MB. The default configuration was not changed. We are using TCP transport and XFS.

There are seven dedicated servers that mount the volume and make it available through the nginx HTTP server. There is a lot of concurrency, and Gluster seems to have problems with this: it is only able to deliver 5-10% of the actual hardware resources due to very inefficient disk usage.

Doing highly concurrent reads directly on the underlying FS gives 10-15 times as much throughput as doing a localhost mount and running the same reads through the Gluster mount. Here is a quickly hacked PHP script that should make it easy to reproduce this behaviour: http://pastebin.com/6bPj3GRt
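
If the pastebin is unreachable, the comparison can be approximated with a plain shell sketch along these lines (this is not the original PHP script; the brick path /data/brick1, the mount point /mnt/gluster, the 200-file sample and the 64-way concurrency are illustrative assumptions):

  # drop the page cache so both runs actually hit the disks
  sync; echo 3 > /proc/sys/vm/drop_caches

  # pick ~200 random files from the brick and read them with 64 concurrent readers
  find /data/brick1 -type f | shuf -n 200 > /tmp/filelist
  time xargs -a /tmp/filelist -P 64 -I{} dd if={} of=/dev/null bs=1M 2>/dev/null

  # repeat the same reads through the local Gluster mount and compare the elapsed times
  sync; echo 3 > /proc/sys/vm/drop_caches
  sed 's|/data/brick1|/mnt/gluster|' /tmp/filelist > /tmp/filelist.gluster
  time xargs -a /tmp/filelist.gluster -P 64 -I{} dd if={} of=/dev/null bs=1M 2>/dev/null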

The outcome is high-end hardware that is close to being overloaded at a few hundred Mbit/s of throughput: http://pastebin.com/4BQk3CKZ (sdb is the 22-disk hardware RAID, which is *only* hosting a single brick).

Such poor disk efficiency makes Gluster nearly unusable and, cost-wise, clearly a very bad choice.

If there is any additional info you need, please let me know.
Comment 1 krishnan parthasarathi 2012-06-13 07:42:38 EDT
Philip,
Gather some 'profile' information using the following approach:

- # gluster volume profile VOLNAME start
- Run workload
- # gluster volume profile VOLNAME info (Repeat this a few times while the workload is present on the volume)
- # gluster volume profile VOLNAME stop (once the workload is complete)

This should shed more light on 'what' is happening.
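
For convenience, the steps above can be wrapped in a small shell loop that samples the profile output while the workload is running (a minimal sketch; the volume name MYVOL, the number of samples and the interval are placeholders, not from this report):

  gluster volume profile MYVOL start
  # ...start the workload, then take periodic snapshots while it is active
  for i in $(seq 1 10); do
      gluster volume profile MYVOL info > /tmp/profile-$i.txt   # per-brick FOP counts and latencies
      sleep 60
  done
  gluster volume profile MYVOL stop   # once the workload is complete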
Comment 2 Amar Tumballi 2012-07-11 02:19:37 EDT
Please also try with the 3.3.0 version.
Comment 3 Amar Tumballi 2012-09-18 01:42:49 EDT
Philip, any update on this issue? Did you happen to try the newer version?
Comment 4 Amar Tumballi 2012-12-27 03:48:13 EST
Philip, we will close this bug as INSUFFICIENT_DATA. Please re-open with more data; it would also be great if you could give the new version a try before that.
