Description of problem:
In a scaled container-storage setup hosting ~1000 1x3 volumes, if a single brick process ends up hosting the remaining 999 brick instances, the overall memory footprint of that brick process can still be on the higher side. We already have the option cluster.max-bricks-per-process which, when set to a value n, caps the number of brick instances that can be attached to a single brick process. In a few scale deployments we have seen benefits from capping this at 250 bricks per process: glusterd no longer takes a long time to process disconnect events, and brick processes stay well clear of an OOM-killer situation.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
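For reference, a minimal sketch of how this cap can be applied by hand, assuming a deployment with brick multiplexing (cluster.brick-multiplex) enabled; cluster-wide options are set against the special "all" volume name:

    # cap each brick process at 250 attached brick instances
    gluster volume set all cluster.max-bricks-per-process 250

    # verify the currently effective value
    gluster volume get all cluster.max-bricks-per-process

With the cap in place, attach requests beyond 250 bricks should lead glusterd to spawn an additional brick process instead of multiplexing into the existing one.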
REVIEW: https://review.gluster.org/21701 (glusterd: make max-bricks-per-process default value to 250) posted (#1) for review on master by Atin Mukherjee
REVIEW: https://review.gluster.org/21701 (glusterd: make max-bricks-per-process default value to 250) posted (#2) for review on master by Atin Mukherjee
REVIEW: https://review.gluster.org/21797 (glusterd: set cluster.max-bricks-per-process to 250) posted (#1) for review on master by Atin Mukherjee
REVIEW: https://review.gluster.org/21797 (glusterd: set cluster.max-bricks-per-process to 250) posted (#2) for review on master by Atin Mukherjee
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/