Description of problem:
With performance.parallel-readdir enabled, a readdir-ahead xlator is loaded on each subvolume of DHT. Though each individual readdir-ahead instance has a limit on the amount of memory it uses for caching, there is no limit on the number of subvolumes of DHT, and hence the number of readdir-ahead instances in the graph can be large. In such a scenario the cumulative cache size of all readdir-ahead xlators can be huge. So, there should be a way to limit the total cache consumed by all readdir-ahead xlators in a large volume.
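To make the scaling concrete, here is a minimal standalone sketch (not code from the tracker; the 10 MB per-instance limit is an assumed figure) of how the worst-case cumulative cache grows with the DHT subvolume count:

```c
#include <stdio.h>

/* Hypothetical numbers, purely for illustration: each readdir-ahead
 * instance may cache up to per_instance_limit bytes, and DHT loads one
 * instance per subvolume when parallel-readdir is on. */
int
main (void)
{
        unsigned long long per_instance_limit = 10ULL * 1024 * 1024; /* 10 MB */
        int                subvol_counts[]    = { 2, 16, 128 };

        for (int i = 0; i < 3; i++) {
                int n = subvol_counts[i];
                /* Without a global cap, the worst-case cache footprint
                 * grows linearly with the number of DHT subvolumes. */
                printf ("%4d subvols -> up to %llu MB cached\n",
                        n, n * per_instance_limit / (1024 * 1024));
        }
        return 0;
}
```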
REVIEW: https://review.gluster.org/16459 (glusterd, rda: If parallel readdir is enabled, split the cache limit) posted (#2) for review on release-3.10 by Raghavendra G (rgowdapp)
COMMIT: https://review.gluster.org/16459 committed in release-3.10 by Shyamsundar Ranganathan (srangana)
------
commit 6ebbfb67315ae9abc4058775d3b48d5abfe306d5
Author: Poornima G <pgurusid>
Date: Tue Jan 17 17:45:59 2017 +0530

    glusterd, rda: If parallel readdir is enabled, split the cache limit

    With patch http://review.gluster.org/#/c/16072/ readdir-ahead can be
    loaded as a child of dht, i.e. there can be more than one instance of
    readdir-ahead in the client process. In this case the rda-cache-size
    should be split among all the readdir-ahead instances. Also, the value
    of rda-request-size is considered as the minimum cache size of any
    readdir-ahead instance.

    >Change-Id: Iea2fe6d4c46adc09dd2e9a252332a0fe3005f2b9
    >BUG: 1401812
    >Signed-off-by: Poornima G <pgurusid>
    >Reviewed-on: https://review.gluster.org/16424
    >Smoke: Gluster Build System <jenkins.org>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: Kaushal M <kaushal>
    >Reviewed-by: Raghavendra G <rgowdapp>

    Change-Id: Iea2fe6d4c46adc09dd2e9a252332a0fe3005f2b9
    BUG: 1417028
    Signed-off-by: Poornima G <pgurusid>
    (cherry picked from commit f245dc568e3c22882e22ddd3e26a4207f5704e3b)
    Reviewed-on: https://review.gluster.org/16459
    Tested-by: Raghavendra G <rgowdapp>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Raghavendra G <rgowdapp>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
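For reference, a minimal sketch of the splitting rule the commit message describes; the function name and signature here are hypothetical, not the actual glusterd/readdir-ahead code:

```c
#include <stdint.h>

/* Illustrative only: compute the cache limit handed to each
 * readdir-ahead instance, following the rule in the commit message.
 * cache_limit is the volume-wide rda-cache-size, request_size is
 * rda-request-size, subvol_count is the number of DHT subvolumes
 * (one readdir-ahead instance each). */
static uint64_t
rda_split_cache_limit (uint64_t cache_limit, uint64_t request_size,
                       int subvol_count)
{
        uint64_t per_instance = cache_limit;

        if (subvol_count > 1)
                per_instance = cache_limit / subvol_count;

        /* rda-request-size acts as the floor: each instance must be
         * able to cache at least one readdirp reply. */
        if (per_instance < request_size)
                per_instance = request_size;

        return per_instance;
}
```

The design keeps the user-visible option a single volume-wide budget, while the floor at rda-request-size guarantees every instance can still buffer at least one readdirp response regardless of how many subvolumes the volume has.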
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/