In high-scale tests (hundreds of volumes with four bricks each) it became apparent that a frightening percentage of the memory in a brick process was being consumed by xlator_t structures, in numbers proportional to the *square* of how many volumes we had. This happened because every volume creation resulted in a FETCHSPEC request being sent, and acted upon, for every previously existing volume. Since the teardown of the temporary graph used to process each such request wasn't freeing its xlator_t structures, every xlator_t in every temporary graph leaked on every iteration.
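To make the quadratic growth concrete, here is a back-of-the-envelope sketch (standalone C, not GlusterFS code; the per-graph xlator count is a made-up placeholder): creating volume number v leaks one temporary graph for each of the v-1 pre-existing volumes, so after V creations the total is roughly X * V*(V-1)/2 leaked xlator_t structures.

/* leak_estimate.c: illustrates why the leaked xlator_t count grows
 * with the square of the volume count. xlators_per_graph is an
 * assumed placeholder; only the shape of the growth matters. */
#include <stdio.h>

int main(void)
{
    const long xlators_per_graph = 10;  /* assumed, for illustration */
    for (long v = 100; v <= 400; v += 100) {
        /* creation number i rebuilds a temporary graph for each of the
         * i-1 existing volumes, so the grand total is the triangular
         * number v*(v-1)/2 times the per-graph xlator count */
        long leaked = xlators_per_graph * v * (v - 1) / 2;
        printf("%4ld volumes -> ~%ld leaked xlator_t structs\n",
               v, leaked);
    }
    return 0;
}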
The fix, quite simply, is to take advantage of the painstaking work already done and *actually free* the structure at the right point.
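The shape of that fix can be sketched in a few lines of C. This is a simplified stand-in, not the actual patch: the struct layout and the names xlator_members_free() and graph_destroy() are illustrative assumptions, and the real code in libglusterfs uses GlusterFS's own allocation wrappers rather than plain malloc/free.

/* teardown_sketch.c: frees each node's members (which the existing
 * teardown logic already handled) and then the xlator_t itself
 * (which is the part that was missing). */
#include <stdlib.h>
#include <string.h>

typedef struct xlator {
    char          *name;
    char          *type;
    void          *dlhandle;   /* handle from dlopen(), if any */
    struct xlator *next;       /* next node in the graph's list */
} xlator_t;

/* Free the fields owned by an xlator (illustrative helper). */
static void
xlator_members_free(xlator_t *xl)
{
    free(xl->name);
    free(xl->type);
    /* dlclose(xl->dlhandle) would go here in the real teardown path */
}

/* Tear down a whole graph. Before the fix, the moral equivalent of
 * the final free(head) below was never called, so one xlator_t leaked
 * per node of every temporary graph. */
void
graph_destroy(xlator_t *head)
{
    while (head != NULL) {
        xlator_t *next = head->next;
        xlator_members_free(head);  /* already done before the fix */
        free(head);                 /* the fix: actually free the node */
        head = next;
    }
}

int main(void)
{
    /* build a toy two-node graph, then tear it down leak-free */
    xlator_t *top = calloc(1, sizeof(*top));
    xlator_t *bottom = calloc(1, sizeof(*bottom));
    top->name = strdup("test-volume");
    top->type = strdup("debug/io-stats");
    bottom->name = strdup("test-volume-posix");
    bottom->type = strdup("storage/posix");
    top->next = bottom;
    graph_destroy(top);
    return 0;
}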
REVIEW: https://review.gluster.org/16570 (libglusterfs: fix serious leak of xlator_t structures) posted (#1) for review on master by Jeff Darcy (firstname.lastname@example.org)
COMMIT: https://review.gluster.org/16570 committed in master by Shyamsundar Ranganathan (email@example.com)
Author: Jeff Darcy <firstname.lastname@example.org>
Date: Wed Feb 8 19:45:46 2017 -0500
libglusterfs: fix serious leak of xlator_t structures
There's a lot of logic (and some long comments) around how to free
these structures safely, but then we didn't do it. Now we do.
Signed-off-by: Jeff Darcy <email@example.com>
Smoke: Gluster Build System <firstname.lastname@example.org>
NetBSD-regression: NetBSD Build System <email@example.com>
CentOS-regression: Gluster Build System <firstname.lastname@example.org>
Reviewed-by: Poornima G <email@example.com>
Reviewed-by: Shyamsundar Ranganathan <firstname.lastname@example.org>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.
glusterfs-3.11.0 has been announced on the Gluster mailing lists, and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.