Bug 1696513 - Multiple shd processes are running on brick_mux environment
Summary: Multiple shd processes are running on brick_mux environment
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 4.1
Hardware: x86_64
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On: 1683880
Blocks: glusterfs-6.0 1684404 1696147 1732875
 
Reported: 2019-04-05 03:56 UTC by Mohit Agrawal
Modified: 2019-07-24 15:04 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1683880
Environment:
Last Closed: 2019-04-08 14:02:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System: Gluster.org Gerrit | ID: 22511 | Private: 0 | Priority: None | Status: Merged | Summary: glusterfsd: Multiple shd processes are spawned on brick_mux environment | Last Updated: 2019-04-08 14:02:55 UTC

Description Mohit Agrawal 2019-04-05 03:56:38 UTC
+++ This bug was initially created as a clone of Bug #1683880 +++

Description of problem:
Multiple shd processes are running while creating 100 volumes in a brick_mux environment.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a 1x3 volume.
2. Enable brick_mux (a sketch of the relevant command follows the script below).
3. Run the commands below:
n1=<ip>
n2=<ip>
n3=<ip>

for i in {1..10}; do
    for h in {1..20}; do
        gluster v create vol-$i-$h rep 3 $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h $n3:/home/dist/brick$h/vol-$i-$h force
        gluster v start vol-$i-$h
        sleep 1
    done
done

for k in $(gluster v list | grep -v heketi); do
    gluster v stop $k --mode=script; sleep 2
    gluster v delete $k --mode=script; sleep 2
done
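
For step 2, a minimal sketch of enabling brick multiplexing cluster-wide before running the loop above, assuming the standard cluster.brick-multiplex volume option is available in this release:

# Enable brick multiplexing for all volumes on the cluster (step 2 above).
gluster volume set all cluster.brick-multiplex on

# Optionally confirm the current value of the option.
gluster volume get all cluster.brick-multiplex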

Actual results:
Multiple shd processes are running and consuming system resources

Expected results:
Only one shd process should be running.
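
One way to verify the actual shd count on a node is to count the processes whose command line references glustershd (a hedged sketch; the exact glusterfs arguments may differ between releases):

# Count self-heal daemon processes on this node; the expected output is 1.
ps -ef | grep -c '[g]lustershd'

# Equivalent check with pgrep.
pgrep -f glustershd | wc -l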

Additional info:

--- Additional comment from Mohit Agrawal on 2019-03-01 08:23:03 UTC ---

An upstream patch has been posted to resolve this:
https://review.gluster.org/#/c/glusterfs/+/22290/

--- Additional comment from Atin Mukherjee on 2019-03-06 15:30:41 UTC ---

(In reply to Mohit Agrawal from comment #1)
> An upstream patch has been posted to resolve this:
> https://review.gluster.org/#/c/glusterfs/+/22290/

This is an upstream bug only :-) Once the mainline patch is merged and backported to the release-6 branch, the bug status will be corrected.

--- Additional comment from Worker Ant on 2019-03-12 11:21:18 UTC ---

REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#2) for review on release-6 by MOHIT AGRAWAL

--- Additional comment from Worker Ant on 2019-03-12 20:53:28 UTC ---

REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#3) on release-6 by Shyamsundar Ranganathan

--- Additional comment from Shyamsundar on 2019-03-25 16:33:26 UTC ---

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 1 Worker Ant 2019-04-05 04:21:26 UTC
REVIEW: https://review.gluster.org/22511 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#2) for review on release-4.1 by MOHIT AGRAWAL

Comment 2 Worker Ant 2019-04-08 14:02:56 UTC
REVIEW: https://review.gluster.org/22511 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#3) on release-4.1 by Shyamsundar Ranganathan

Comment 3 hari gowtham 2019-07-11 09:05:32 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.9, please open a new bug report.

glusterfs-4.1.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/gluster-users/2019-June/036679.html
[2] https://www.gluster.org/pipermail/gluster-users/

