Description of problem:
Dispersed volumes are much slower than replicate volumes.

Version-Release number of selected component (if applicable):
3.6

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
This is caused by a sequential inodelk/entrylk for each operation.
REVIEW: http://review.gluster.org/8369 (ec: Optimize read/write performance) posted (#1) for review on master by Xavier Hernandez (xhernandez)
REVIEW: http://review.gluster.org/8369 (ec: Optimize read/write performance) posted (#2) for review on master by Xavier Hernandez (xhernandez)
COMMIT: http://review.gluster.org/8369 committed in master by Vijay Bellur (vbellur)
------
commit d97863562bb0d2f685df3d2e3aa4bef1299c8307
Author: Xavier Hernandez <xhernandez>
Date:   Mon Jul 14 17:34:04 2014 +0200

    ec: Optimize read/write performance

    This patch significantly improves performance of read/write
    operations on a dispersed volume by reusing previous inodelk/
    entrylk operations on the same inode/entry. This reduces the
    latency of each individual operation considerably.

    Inode version and size are also updated when needed instead of
    on each request. This gives an additional boost.

    Change-Id: I4b98d5508c86b53032e16e295f72a3f83fd8fcac
    BUG: 1122586
    Signed-off-by: Xavier Hernandez <xhernandez>
    Reviewed-on: http://review.gluster.org/8369
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
    Reviewed-by: Dan Lambright <dlambrig>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user