In high-scale tests (hundreds of volumes with four bricks each) it became apparent that a frightening percentage of our memory in a brick process was being consumed by xlator_t structures, in numbers exactly proportional to the *square* of how many volumes we had. This happens because every volume creation would result in a FETCHSPEC request being sent and acted upon for every previously existing volume. Since the teardown of the graph used to process each such request wasn't freeing the xlator_t, this would leak every xlator_t in every temporary graph on every iteration. The fix, quite simply, is to take advantage of the painstaking work already done and *actually free* the structure at the right point.
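For illustration only, here is a minimal C sketch of the leak pattern and the fix. The types and function names (xlator_destroy_sketch, graph_destroy_sketch) are hypothetical simplifications, not the actual libglusterfs code; the real xlator_t and graph-teardown paths carry far more state, locking, and memory accounting.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for the real xlator_t, which has many
     * more members (options, mem accounting, fops tables, ...). */
    typedef struct xlator {
        char          *name;   /* heap-allocated translator name */
        struct xlator *next;   /* singly linked list of graph nodes */
    } xlator_t;

    static xlator_t *
    xlator_new (const char *name)
    {
        xlator_t *xl = calloc (1, sizeof (*xl));
        if (!xl)
            return NULL;
        xl->name = strdup (name);
        return xl;
    }

    /* Free a translator's owned members, then the struct itself. */
    static void
    xlator_destroy_sketch (xlator_t *xl)
    {
        if (!xl)
            return;
        free (xl->name);   /* members were always released ...       */
        free (xl);         /* ... but this final free was missing,
                              leaking one xlator_t per translator
                              in every temporary graph              */
    }

    /* Tear down every node in a (temporary) graph, e.g. one built
     * to serve a single FETCHSPEC request. */
    static void
    graph_destroy_sketch (xlator_t *head)
    {
        while (head) {
            xlator_t *next = head->next;
            xlator_destroy_sketch (head);
            head = next;
        }
    }

    int
    main (void)
    {
        xlator_t *graph = xlator_new ("vol0-server");
        graph->next = xlator_new ("vol0-posix");
        graph_destroy_sketch (graph);
        return 0;
    }

The essential point is the final free(xl): the per-member cleanup was already in place, so without it each FETCHSPEC-driven temporary graph left all of its xlator_t structures behind, and since every volume creation triggers one such graph per pre-existing volume, the leaked memory grew with the square of the volume count.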
REVIEW: https://review.gluster.org/16570 (libglusterfs: fix serious leak of xlator_t structures) posted (#1) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/16570 committed in master by Shyamsundar Ranganathan (srangana)
------
commit 2199c688b73dfe90868f9469f92e21b0e0795e57
Author: Jeff Darcy <jdarcy>
Date:   Wed Feb 8 19:45:46 2017 -0500

    libglusterfs: fix serious leak of xlator_t structures

    There's a lot of logic (and some long comments) around how to free
    these structures safely, but then we didn't do it. Now we do.

    Change-Id: I9731ae75c60e99cc43d33d0813a86912db97fd96
    BUG: 1420571
    Signed-off-by: Jeff Darcy <jdarcy>
    Reviewed-on: https://review.gluster.org/16570
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Poornima G <pgurusid>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
This bug is being closed because a release that should address the reported issue has been made available. If the problem persists with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/