Description of problem:
Multiple shd processes are running after creating 100 volumes in a brick_mux environment.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a 1x3 volume.
2. Enable brick_mux (see the sketch after this report for the relevant volume option).
3. Run the commands below:

n1=<ip>
n2=<ip>
n3=<ip>

for i in {1..10}; do
  for h in {1..20}; do
    gluster v create vol-$i-$h rep 3 \
      $n1:/home/dist/brick$h/vol-$i-$h \
      $n2:/home/dist/brick$h/vol-$i-$h \
      $n3:/home/dist/brick$h/vol-$i-$h force
    gluster v start vol-$i-$h
    sleep 1
  done
done

for k in $(gluster v list | grep -v heketi); do
  gluster v stop $k --mode=script
  sleep 2
  gluster v delete $k --mode=script
  sleep 2
done

Actual results:
Multiple shd processes are running and consuming system resources.

Expected results:
Only one shd process should be running.

Additional info:
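For reference, a minimal sketch of step 2 and of checking for the symptom. The cluster.brick-multiplex volume option is the standard glusterfs setting for brick multiplexing; the process name glustershd is assumed here to match the self-heal daemon as spawned by glusterd:

# Enable brick multiplexing cluster-wide (step 2 above)
gluster volume set all cluster.brick-multiplex on

# Count running self-heal daemon (shd) processes on a node;
# expected: 1, but with this bug several are reported
ps -ef | grep '[g]lustershd' | wc -l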
An upstream patch has been posted to resolve this:
https://review.gluster.org/#/c/glusterfs/+/22290/
(In reply to Mohit Agrawal from comment #1)
> An upstream patch has been posted to resolve this:
> https://review.gluster.org/#/c/glusterfs/+/22290/

This is an upstream bug only :-) Once the mainline patch is merged and we backport it to the release-6 branch, the bug status will be corrected.
REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#2) for review on release-6 by MOHIT AGRAWAL
REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#3) on release-6 by Shyamsundar Ranganathan
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/