Some functions were allocating 64K booleans, which are (crazily) mapped to 4-byte ints, for a total of 256KB per call. Besides being generally wasteful, this means any code that creates worker threads - e.g. syncops, the io-threads translator - must allocate much bigger stacks for each thread *just in case* it calls into portmap. With brick multiplexing, this limits the number of threads we can have, and therefore the number of bricks we can support in one process.
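To make the numbers concrete, here is a minimal, self-contained sketch (hypothetical names, not the actual portmap code) of the kind of declaration that produces a ~256KB stack frame, assuming gf_boolean_t is an enum and therefore int-sized:

#include <stdio.h>

/* Stand-in for libglusterfs' gf_boolean_t, which is an enum and thus
 * occupies a full 4-byte int per element on typical ABIs. */
typedef enum { _gf_false = 0, _gf_true = 1 } gf_boolean_t;

/* Hypothetical example: 65536 entries * 4 bytes = ~256KB of automatic
 * storage on the stack of whichever thread calls this. */
static int
count_free_ports(void)
{
        gf_boolean_t used_ports[65536];   /* ~256KB stack frame */
        int          free_count = 0;

        for (int i = 0; i < 65536; i++)
                used_ports[i] = _gf_false;   /* illustration: all free */
        for (int i = 0; i < 65536; i++)
                if (used_ports[i] == _gf_false)
                        free_count++;
        return free_count;
}

int
main(void)
{
        printf("free ports: %d\n", count_free_ports());
        return 0;
}

Any thread that might reach a frame like this has to be created with a stack large enough to absorb it, which is why the worker-thread pools end up over-allocating.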
REVIEW: https://review.gluster.org/15745 (libglusterfs+transport+io-threads: fix 256KB stack abuse) posted (#5) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/15745 committed in master by Shyamsundar Ranganathan (srangana)
------
commit c8a23cc6cd289dd28deb136bf2550f28e2761ef3
Author: Jeff Darcy <jdarcy>
Date:   Thu Oct 27 11:51:47 2016 -0400

    libglusterfs+transport+io-threads: fix 256KB stack abuse

    Some functions were allocating 64K booleans, which are (crazily)
    mapped to 4-byte ints, for a total of 256KB per call. Changed to
    use bitfields instead, so usage is now only 8KB per call. This was
    the impediment to changing the io-threads stack size, so that has
    been adjusted too.

    Change-Id: I8781c4f2c8f2b830f4535e366995fac8dd0a8653
    BUG: 1418095
    Signed-off-by: Jeff Darcy <jdarcy>
    Reviewed-on: https://review.gluster.org/15745
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
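The committed fix uses bitfields; the sketch below only illustrates the same space saving with a plain bitmap (the struct and helper names are invented here, not taken from the patch): one bit per port gives 65536 / 8 = 8KB instead of 256KB.

#include <limits.h>
#include <stdint.h>
#include <string.h>

#define PORT_MAX  65536
#define WORD_BITS (sizeof(unsigned long) * CHAR_BIT)

/* One bit per port: 65536 bits / 8 = 8KB, matching the "8KB per call"
 * figure in the commit message. */
typedef struct {
        unsigned long bits[PORT_MAX / WORD_BITS];
} port_bitmap_t;

static inline void
bitmap_set(port_bitmap_t *map, uint16_t port)
{
        map->bits[port / WORD_BITS] |= 1UL << (port % WORD_BITS);
}

static inline int
bitmap_is_set(const port_bitmap_t *map, uint16_t port)
{
        return (map->bits[port / WORD_BITS] >> (port % WORD_BITS)) & 1UL;
}

int
main(void)
{
        port_bitmap_t used;

        memset(&used, 0, sizeof(used));   /* all ports marked free */
        bitmap_set(&used, 24007);         /* e.g. glusterd's well-known port */
        return bitmap_is_set(&used, 24007) ? 0 : 1;
}

With the per-call footprint down to a few KB, the blocker on adjusting the io-threads stack size mentioned in the commit goes away.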
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/