+++ This bug was initially created as a clone of Bug #1122586 +++

Description of problem:
Dispersed volumes are much slower than replicate.

Version-Release number of selected component (if applicable):
3.6

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
This is caused by a sequential inodelk/entrylk for each operation.

--- Additional comment from Anand Avati on 2014-07-23 16:37:49 CEST ---

REVIEW: http://review.gluster.org/8369 (ec: Optimize read/write performance) posted (#1) for review on master by Xavier Hernandez (xhernandez)

--- Additional comment from Anand Avati on 2014-09-10 09:41:11 CEST ---

REVIEW: http://review.gluster.org/8369 (ec: Optimize read/write performance) posted (#2) for review on master by Xavier Hernandez (xhernandez)
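To make the root cause in the original description more concrete: on a dispersed volume, every read/write used to pay its own inodelk/entrylk round trip, so network latency rather than disk bandwidth dominates single-threaded throughput. The following is a minimal, self-contained sketch, not GlusterFS code; ROUND_TRIP_US, simulated_inodelk() and the other names are hypothetical stand-ins for the lock round trip. It only shows why locking once per burst instead of once per operation changes the picture.

/* Illustrative analogy only -- not GlusterFS code. The simulated_* helpers
 * stand in for the network round trip that an inodelk/entrylk request costs
 * on a dispersed volume.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ROUND_TRIP_US 500   /* assumed network round trip for a lock request */
#define N_WRITES      200

static void simulated_inodelk(void) { usleep(ROUND_TRIP_US); }
static void simulated_unlock(void)  { usleep(ROUND_TRIP_US); }
static void simulated_write(void)   { usleep(50); }  /* the actual work */

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec t0, t1;

    /* Lock and unlock around every single operation. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N_WRITES; i++) {
        simulated_inodelk();
        simulated_write();
        simulated_unlock();
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("lock per operation : %.1f ms\n", elapsed_ms(t0, t1));

    /* Take the lock once and reuse it for the whole burst of operations. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    simulated_inodelk();
    for (int i = 0; i < N_WRITES; i++)
        simulated_write();
    simulated_unlock();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("lock reused        : %.1f ms\n", elapsed_ms(t0, t1));

    return 0;
}

Built with a plain "gcc lockcost.c", the per-operation loop spends roughly 20x longer than the lock-reuse loop with these assumed latencies, which is the effect the patch below targets.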
REVIEW: http://review.gluster.org/8746 (ec: Optimize read/write performance) posted (#1) for review on release-3.6 by Xavier Hernandez (xhernandez)
COMMIT: http://review.gluster.org/8746 committed in release-3.6 by Vijay Bellur (vbellur)

------

commit b224dd14b75fb993eec4f44ecf11edce8a6fc42f
Author: Xavier Hernandez <xhernandez>
Date:   Mon Jul 14 17:34:04 2014 +0200

    ec: Optimize read/write performance

    This patch significantly improves performance of read/write
    operations on a dispersed volume by reusing previous inodelk/
    entrylk operations on the same inode/entry. This reduces the
    latency of each individual operation considerably.

    Inode version and size are also updated when needed instead of
    on each request. This gives an additional boost.

    This is a backport of http://review.gluster.org/8369/

    Change-Id: I4b98d5508c86b53032e16e295f72a3f83fd8fcac
    BUG: 1140844
    Signed-off-by: Xavier Hernandez <xhernandez>
    Reviewed-on: http://review.gluster.org/8746
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
    Reviewed-by: Dan Lambright <dlambrig>
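As a rough illustration of the second part of the commit message (updating inode version and size "when needed instead of on each request"), here is a minimal sketch. It is not the actual ec xlator code; pending_ctx_t, flush_version_and_size() and the other names are hypothetical, and the real implementation tracks this state per inode/per lock. The idea shown is simply that each write only records the pending update, and a single update is sent when the reused lock is finally released.

/* Minimal sketch, assuming a per-inode context kept while the lock is reused.
 * All names are hypothetical; this is not GlusterFS code.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     lock_held;       /* inodelk currently held and being reused */
    uint64_t version_delta;   /* writes since the last flushed update    */
    uint64_t size;            /* latest known file size                  */
    bool     dirty;
} pending_ctx_t;

/* Stand-in for the single update that pushes version/size to the bricks. */
static void flush_version_and_size(pending_ctx_t *ctx)
{
    if (!ctx->dirty)
        return;
    printf("update: version += %llu, size = %llu\n",
           (unsigned long long)ctx->version_delta,
           (unsigned long long)ctx->size);
    ctx->version_delta = 0;
    ctx->dirty = false;
}

static void on_write_complete(pending_ctx_t *ctx, uint64_t new_size)
{
    ctx->version_delta++;        /* remember the update...      */
    ctx->size = new_size;
    ctx->dirty = true;           /* ...but do not send it yet   */
}

static void on_lock_release(pending_ctx_t *ctx)
{
    flush_version_and_size(ctx); /* one update for the whole burst */
    ctx->lock_held = false;
    printf("unlock\n");
}

int main(void)
{
    pending_ctx_t ctx = { .lock_held = true };

    for (uint64_t i = 1; i <= 5; i++)
        on_write_complete(&ctx, i * 4096);   /* five writes, no updates sent */

    on_lock_release(&ctx);                   /* single update + unlock */
    return 0;
}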
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release solves this bug report for you. If the glusterfs-3.6.0beta1 release does not resolve the issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users
What are the steps of your read/write test on a disperse volume? Do many threads operate on a single file? Is it reads only, writes only, or mixed reads and writes? Thanks.
This bug was opened because a single read/write thread was very slow compared to replicate.
(In reply to Xavier Hernandez from comment #6)
> This bug was opened because a single read/write thread was very slow
> compared to replicate.

Oh, I got it. Thanks.