+++ This bug was initially created as a clone of Bug #1652118 +++

Description of problem:

In a scaled container storage setup hosting ~1000 1 x 3 volumes, it is seen that if a single brick process ends up hosting all of the remaining 999 brick instances, the overall memory footprint of that brick process can still be quite high. We already have the option cluster.max-bricks-per-process which, when set to a value n, caps the number of brick instances that can be attached to one brick process. In a few scale deployments, capping bricks per process at 250 has shown clear benefits: glusterd no longer spends excessive time processing disconnect events, and brick processes stay well below the point of triggering the OOM killer.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2018-11-21 10:50:14 EST ---

REVIEW: https://review.gluster.org/21701 (glusterd: make max-bricks-per-process default value to 250) posted (#1) for review on master by Atin Mukherjee

--- Additional comment from Worker Ant on 2018-11-25 09:34:31 EST ---

REVIEW: https://review.gluster.org/21701 (glusterd: make max-bricks-per-process default value to 250) posted (#2) for review on master by Atin Mukherjee
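For reference, a minimal sketch of how an operator could apply the cap described above on an existing trusted storage pool. This assumes brick multiplexing is in use (the cap only matters when bricks are multiplexed into shared processes), and the value 250 simply mirrors the default proposed in the review linked above; adjust it for your own deployment.

    # Enable brick multiplexing cluster-wide (no effect if already on).
    gluster volume set all cluster.brick-multiplex on

    # Cap each glusterfsd process at 250 attached brick instances.
    gluster volume set all cluster.max-bricks-per-process 250

Bricks attached before the change keep their current placement; the cap is honored as bricks are (re)started, so a rolling restart of bricks may be needed for the new limit to take full effect.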
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3827