Bug 1684404

Summary: Multiple shd processes are running in brick_mux environment
Product: [Community] GlusterFS Reporter: Milind Changire <mchangir>
Component: glusterd Assignee: Mohit Agrawal <moagrawa>
Status: CLOSED NEXTRELEASE QA Contact:
Severity: high Docs Contact:
Priority: high    
Version: mainline CC: amukherj, bugs, hgowtham, moagrawa, pasik
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1683880 Environment:
Last Closed: 2019-04-01 12:55:50 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1683880, 1696147, 1696513    
Bug Blocks:    

Description Milind Changire 2019-03-01 08:00:34 UTC
+++ This bug was initially created as a clone of Bug #1683880 +++

Description of problem:
Multiple shd processes are running after creating 100 volumes in a brick_mux environment

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a 1x3 volume
2. Enable brick_mux (a sketch of the command is shown after the script below)
3. Run the commands below:
# Peer node IP addresses (placeholders)
n1=<ip>
n2=<ip>
n3=<ip>

# Create and start 200 replica-3 volumes (10 x 20), one per brick directory
for i in {1..10}; do
    for h in {1..20}; do
        gluster v create vol-$i-$h rep 3 $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h $n3:/home/dist/brick$h/vol-$i-$h force
        gluster v start vol-$i-$h
        sleep 1
    done
done

# Stop and delete all volumes (skipping any heketi-managed ones)
for k in $(gluster v list | grep -v heketi); do
    gluster v stop $k --mode=script
    sleep 2
    gluster v delete $k --mode=script
    sleep 2
done
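For step 2, brick multiplexing is a cluster-wide volume option; a minimal sketch of enabling it is below (the cluster.brick-multiplex option name is taken from the upstream GlusterFS documentation, so verify it against the version under test):

# Enable brick multiplexing for all volumes (step 2 above)
gluster volume set all cluster.brick-multiplex on

# Confirm the option is active
gluster volume get all cluster.brick-multiplex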

Actual results:
Multiple shd processes are running and consuming system resources

Expected results:
Only one shd process should be running
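A quick way to compare the actual and expected behaviour, assuming the self-heal daemon appears in the process list under its usual glustershd name, is to count the daemon processes on each node:

# Count self-heal daemon processes on this node; expected count is 1
ps -ef | grep "[g]lustershd" | wc -l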

Additional info:

Comment 1 Worker Ant 2019-03-01 08:22:32 UTC
REVIEW: https://review.gluster.org/22290 (glusterd: Multiple shd processes are spawned on brick_mux environment) posted (#1) for review on master by MOHIT AGRAWAL

Comment 2 hari gowtham 2019-07-11 09:00:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.9, please open a new bug report.

glusterfs-4.1.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/gluster-users/2019-June/036679.html
[2] https://www.gluster.org/pipermail/gluster-users/