+++ This bug was initially created as a clone of Bug #1683880 +++

Description of problem:
Multiple shd processes are running after creating 100 volumes in a brick_mux environment.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a 1x3 volume
2. Enable brick_mux
3. Run the commands below:

n1=<ip>
n2=<ip>
n3=<ip>
for i in {1..10}; do
  for h in {1..20}; do
    gluster v create vol-$i-$h rep 3 $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h $n3:/home/dist/brick$h/vol-$i-$h force
    gluster v start vol-$i-$h
    sleep 1
  done
done

for k in $(gluster v list | grep -v heketi); do
  gluster v stop $k --mode=script; sleep 2
  gluster v delete $k --mode=script; sleep 2
done

Actual results:
Multiple shd processes are running and consuming system resources.

Expected results:
Only one shd process should be running.

Additional info:
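One quick way to check the symptom after running the reproducer is to count the running self-heal daemon processes on each node. This is a minimal sketch, not part of the original report; it assumes glustershd appears in the process command line, as it does on a standard glusterfs install.

```shell
#!/bin/sh
# Count running self-heal daemon (glustershd) processes on this node.
# The bracketed pattern '[g]lustershd' keeps grep from matching itself.
# Before the fix, this can report many processes; exactly 1 is expected.
shd_count=$(ps -e -o args= | grep -c '[g]lustershd')
echo "glustershd processes: ${shd_count}"
```

Running this on each of the three nodes while the create/delete loop is in flight shows whether extra shd processes are being spawned.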
REVIEW: https://review.gluster.org/22290 (glusterd: Multiple shd processes are spawned on brick_mux environment) posted (#1) for review on master by MOHIT AGRAWAL
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.1.9, please open a new bug report.

glusterfs-4.1.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/gluster-users/2019-June/036679.html
[2] https://www.gluster.org/pipermail/gluster-users/