Description of problem:
Dispersed volumes are much slower than replicated volumes.
Version-Release number of selected component (if applicable): 3.6
Steps to Reproduce:
Run the same read/write workload on a dispersed volume and on a replicated volume; the dispersed volume is much slower.

Additional info:
The slowdown is caused by a sequential inodelk/entrylk being issued for each operation.
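For illustration, here is a minimal C sketch of the pre-fix pattern (all names are hypothetical stand-ins, not the actual ec translator code): every operation pays for its own lock/unlock cycle and version/size update, each of which costs a network round trip to the bricks.

    #include <stdio.h>

    /* Hypothetical stand-ins for the brick calls; in the real translator
     * each of these is a network round trip to every brick. */
    static void inodelk_acquire(int inode)     { printf("LOCK   inode %d\n", inode); }
    static void inodelk_release(int inode)     { printf("UNLOCK inode %d\n", inode); }
    static void update_version_size(int inode) { printf("XATTR  inode %d\n", inode); }
    static void perform_io(int inode)          { printf("WRITE  inode %d\n", inode); }

    /* Pre-fix behaviour: every operation takes and drops its own lock and
     * updates the inode version/size, even when consecutive operations
     * target the same inode. */
    static void ec_fop_dispatch(int inode)
    {
        inodelk_acquire(inode);
        perform_io(inode);
        update_version_size(inode);
        inodelk_release(inode);
    }

    int main(void)
    {
        /* Three sequential writes to one file: 3 locks, 3 unlocks and
         * 3 version/size updates, where one of each would have sufficed. */
        for (int i = 0; i < 3; i++)
            ec_fop_dispatch(42);
        return 0;
    }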
REVIEW: http://review.gluster.org/8369 (ec: Optimize read/write performance) posted (#1) for review on master by Xavier Hernandez (firstname.lastname@example.org)
REVIEW: http://review.gluster.org/8369 (ec: Optimize read/write performance) posted (#2) for review on master by Xavier Hernandez (email@example.com)
COMMIT: http://review.gluster.org/8369 committed in master by Vijay Bellur (firstname.lastname@example.org)
Author: Xavier Hernandez <email@example.com>
Date: Mon Jul 14 17:34:04 2014 +0200
ec: Optimize read/write performance
This patch significantly improves performance of read/write
operations on a dispersed volume by reusing previous inodelk/
entrylk operations on the same inode/entry. This reduces the
latency of each individual operation considerably.
Inode version and size are also updated when needed instead
of on each request. This gives an additional boost.
Signed-off-by: Xavier Hernandez <firstname.lastname@example.org>
Tested-by: Gluster Build System <email@example.com>
Reviewed-by: Jeff Darcy <firstname.lastname@example.org>
Reviewed-by: Dan Lambright <email@example.com>
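To make the optimization concrete, here is a minimal sketch of the reuse scheme the commit describes, under simplifying assumptions: all names are hypothetical, and a single cached lock stands in for the per-inode/entry tracking the real ec translator does. The lock taken by the first operation is kept and reused by subsequent operations on the same inode, and the version/size update is deferred until the lock is finally released.

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical stand-ins for the brick round trips, as above. */
    static void inodelk_acquire(int inode)     { printf("LOCK   inode %d\n", inode); }
    static void inodelk_release(int inode)     { printf("UNLOCK inode %d\n", inode); }
    static void update_version_size(int inode) { printf("XATTR  inode %d\n", inode); }
    static void perform_io(int inode)          { printf("WRITE  inode %d\n", inode); }

    /* One cached lock for the whole sketch; the real translator tracks
     * locks per inode/entry. */
    static int  held_inode = -1;
    static bool lock_held  = false;

    static void ec_fop_dispatch(int inode)
    {
        if (lock_held && held_inode != inode) {
            /* Different target: flush the deferred version/size update
             * and drop the old lock before switching. */
            update_version_size(held_inode);
            inodelk_release(held_inode);
            lock_held = false;
        }
        if (!lock_held) {
            inodelk_acquire(inode);   /* only the first fop on this inode pays */
            held_inode = inode;
            lock_held  = true;
        }
        perform_io(inode);            /* version/size update is deferred */
    }

    static void ec_flush(void)
    {
        if (lock_held) {
            update_version_size(held_inode);
            inodelk_release(held_inode);
            lock_held = false;
        }
    }

    int main(void)
    {
        /* The same three writes now cost one lock, one unlock and one
         * version/size update in total. */
        for (int i = 0; i < 3; i++)
            ec_fop_dispatch(42);
        ec_flush();
        return 0;
    }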
This bug is being closed because a release that should address the reported issue has been made available. If the problem persists with glusterfs-3.7.0, please open a new bug report.
glusterfs-3.7.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.