Description of problem:
With performance.parallel-readdir enabled, a readdir-ahead translator is loaded as a subvolume of DHT. Since readdir-ahead starts pre-fetching dentries in opendir, it is not aware of whether cluster.readdir-optimize is enabled in DHT (DHT consults that option only in readdir). As a result, dentries are fetched from all subvolumes regardless of whether cluster.readdir-optimize is enabled. The fix retains the existing logic of filtering out dentries that point to directories from all but one subvolume, even when performance.parallel-readdir is enabled.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: https://review.gluster.org/16461 (Readdir-ahead : Honour readdir-optimise option of dht) posted (#2) for review on release-3.10 by Raghavendra G (rgowdapp)
COMMIT: https://review.gluster.org/16461 committed in release-3.10 by Shyamsundar Ranganathan (srangana)

------

commit c1cafe6e314f01d3f07229c0972af5f1017c62cf
Author: Poornima G <pgurusid>
Date: Thu Dec 8 16:08:40 2016 +0530

    Readdir-ahead : Honour readdir-optimise option of dht

    >Change-Id: I9c5e65b32e316e6a2fc7e1f5c79fce79386b78e2
    >BUG: 1401812
    >Signed-off-by: Poornima G <pgurusid>
    >Reviewed-on: https://review.gluster.org/16071
    >Smoke: Gluster Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >Reviewed-by: Raghavendra G <rgowdapp>

    Change-Id: I9c5e65b32e316e6a2fc7e1f5c79fce79386b78e2
    BUG: 1417027
    Signed-off-by: Poornima G <pgurusid>
    (cherry picked from commit 7c6538f6c8f9a015663b4fc57c640a7c451c87f7)
    Reviewed-on: https://review.gluster.org/16461
    Tested-by: Raghavendra G <rgowdapp>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/