Description of problem:
Data center setups with a large number of bricks and replication cause a flood of connections to glusterd, and connections are dropped because the listener socket's backlog queue is too short.

Version-Release number of selected component (if applicable):

How reproducible:
Frequently

Steps to Reproduce:
1.
2.
3.

Actual results:
Connections to glusterd are dropped or refused under the connection flood.

Expected results:
Connections to glusterd should not be dropped or refused.

Additional info:
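A quick way to confirm the listener backlog is being exhausted is to watch the accept queue and the kernel's overflow counters. This is only a diagnostic sketch; it assumes glusterd is listening on its default management port 24007, and the exact counter wording varies by kernel version.

    # For a LISTEN socket, ss reports the configured backlog in the Send-Q
    # column and the current accept-queue depth in Recv-Q.
    ss -ltn 'sport = :24007'

    # Kernel-wide counters for accept-queue overflows and dropped SYNs;
    # these keep increasing while the backlog stays full.
    netstat -s | grep -i -E 'listen|overflow'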
REVIEW: https://review.gluster.org/21482 (glusterd: raise default transport.listen-backlog) posted (#1) for review on master by Milind Changire
COMMIT: https://review.gluster.org/21482 committed in master by "Atin Mukherjee" <amukherj> with a commit message-

glusterd: raise default transport.listen-backlog

Problem:
data center setups with large number of bricks with replication
causes a flood of connections from bricks and self-heal daemons
to glusterd causing connections to be dropped due to insufficient
listener socket backlog queue length

Solution:
raise default value of transport.listen-backlog to 1024

Change-Id: I879e4161a88f1e30875046dff232499a8e2e6c51
fixes: bz#1642850
Signed-off-by: Milind Changire <mchangir>
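On builds that do not yet carry this fix, the backlog can be raised by hand. The snippet below is a sketch, assuming the stock /etc/glusterfs/glusterd.vol layout and working directory; adjust to your installation and restart glusterd afterwards.

    # /etc/glusterfs/glusterd.vol -- add the transport.listen-backlog option
    # inside the management volume block (sketch; other options elided).
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.listen-backlog 1024
    end-volume

Note that the kernel silently caps listen() backlogs at net.core.somaxconn, so raising that sysctl to at least the same value (e.g. sysctl -w net.core.somaxconn=1024) may also be needed for the larger backlog to take effect.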
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/