Bug 1656951

Summary: cluster.max-bricks-per-process 250 not working as expected
Product: [Community] GlusterFS
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Reporter: Atin Mukherjee <amukherj>
Assignee: bugs <bugs>
CC: bmekala, bugs, kiyer, rhs-bugs, sankarshan, storage-qa-internal, vbellur
Keywords: ZStream
Fixed In Version: glusterfs-6.0
Type: Bug
Clone Of: 1656924
Last Closed: 2019-03-25 16:32:33 UTC
Bug Blocks: 1656924    

Comment 1 Atin Mukherjee 2018-12-06 17:48:06 UTC
Description of problem:
On a 3-node Gluster cluster with 1000+ volumes, `pidof glusterfsd` returns only a single pid instead of the expected 4, even though cluster.max-bricks-per-process is at its default value of 250.

# gluster v get all all
Option                                  Value                                  
------                                  -----                                  
cluster.server-quorum-ratio             51                                     
cluster.enable-shared-storage           disable                                
cluster.op-version                      31304                                  
cluster.max-op-version                  31304                                  
cluster.brick-multiplex                 enable                                 
cluster.max-bricks-per-process          250                                    
cluster.daemon-log-level                INFO                        

# pidof glusterfsd
31746

Version-Release number of selected component (if applicable):
glusterfs-3.12.2-31

How reproducible:
1/1

Steps to Reproduce:
1. Set cluster.brick-multiplex to enable.
# gluster v get all all
Option                                  Value                                  
------                                  -----                                  
cluster.server-quorum-ratio             51                                     
cluster.enable-shared-storage           disable                                
cluster.op-version                      31304                                  
cluster.max-op-version                  31304                                  
cluster.brick-multiplex                 enable                                 
cluster.max-bricks-per-process          250                                    
cluster.daemon-log-level                INFO                                

2. Create 1000+ replicate 1x3 volumes.
3. Execute `pidof glusterfsd` on any node. (It returns only a single pid.)
# pidof glusterfsd
31746

Actual results:
`pidof glusterfsd` returns only a single pid. (Observed across all nodes.)
# pidof glusterfsd
31746

Expected results:
`pidof glusterfsd` should return 4 pids: each 1x3 volume places one brick on every node, so 1000 volumes give roughly 1000 bricks per node, which at 250 bricks per process should be spread across 4 glusterfsd processes.
# pidof glusterfsd
31746 <xxxxx> <xxxxx> <xxxxx>
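
The expected count is just the brick arithmetic: bricks hosted on the node divided by the per-process limit, rounded up. A minimal illustrative calculation in plain C (not glusterd code; the variable names are made up for this example):

#include <stdio.h>

/* Illustrative arithmetic only: expected glusterfsd process count =
 * ceil(bricks on the node / cluster.max-bricks-per-process). */
int
main(void)
{
    int bricks_per_node = 1000;       /* 1000 x 1x3 volumes -> one brick per volume on each node */
    int max_bricks_per_process = 250; /* default cluster.max-bricks-per-process */
    int expected_processes = (bricks_per_node + max_bricks_per_process - 1) /
                             max_bricks_per_process;

    printf("expected glusterfsd processes per node: %d\n", expected_processes);
    return 0; /* prints 4 */
}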

Additional info:
Even after stopping and starting all volumes, `pidof glusterfsd` still returned only one pid instead of four.



Root Cause:

In get_mux_limit_per_process(), glusterd looks up the cluster.max-bricks-per-process value in the global option dictionary but never falls back to the default from the global option table when the dictionary does not contain it, which is the usual case unless the option has been explicitly reconfigured.
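
For illustration, here is a minimal, self-contained C sketch of that lookup pattern. It is not the actual glusterd source; the names (opts_dict_get, option_table, mux_limit_buggy, mux_limit_fixed) are hypothetical stand-ins for glusterd's global option dictionary and global option table. It shows how treating a missing dictionary entry as "no limit" collapses every brick into one glusterfsd process, and how falling back to the table default restores the 250-brick limit:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a global option table entry. */
struct option_entry {
    const char *key;
    const char *default_value;
};

static const struct option_entry option_table[] = {
    { "cluster.max-bricks-per-process", "250" },
    { NULL, NULL },
};

/* Stand-in for the global option dictionary: it only holds options that
 * were explicitly reconfigured, so a never-touched option is absent. */
static const char *
opts_dict_get(const char *key)
{
    (void)key;
    return NULL; /* cluster.max-bricks-per-process was never reconfigured */
}

static const char *
option_table_default(const char *key)
{
    const struct option_entry *e;

    for (e = option_table; e->key != NULL; e++)
        if (strcmp(e->key, key) == 0)
            return e->default_value;
    return NULL;
}

/* Buggy pattern: a missing dictionary entry is treated as "no limit" (0),
 * so brick multiplexing packs every brick into a single glusterfsd. */
static int
mux_limit_buggy(void)
{
    const char *value = opts_dict_get("cluster.max-bricks-per-process");

    return value ? atoi(value) : 0;
}

/* Fixed pattern: fall back to the default from the option table when the
 * dictionary has no entry for the key. */
static int
mux_limit_fixed(void)
{
    const char *key = "cluster.max-bricks-per-process";
    const char *value = opts_dict_get(key);

    if (value == NULL)
        value = option_table_default(key);
    return value ? atoi(value) : 0;
}

int
main(void)
{
    printf("buggy limit: %d (0 means unlimited -> single glusterfsd)\n",
           mux_limit_buggy());
    printf("fixed limit: %d\n", mux_limit_fixed());
    return 0;
}

The review posted in comment 2 ("glusterd: fix get_mux_limit_per_process to read default value") applies the equivalent fallback inside get_mux_limit_per_process() itself.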

Comment 2 Worker Ant 2018-12-06 17:50:15 UTC
REVIEW: https://review.gluster.org/21819 (glusterd: fix get_mux_limit_per_process to read default value) posted (#1) for review on master by Atin Mukherjee

Comment 3 Worker Ant 2018-12-07 07:10:41 UTC
REVIEW: https://review.gluster.org/21819 (glusterd: fix get_mux_limit_per_process to read default value) posted (#3) for review on master by Atin Mukherjee

Comment 4 Shyamsundar 2019-03-25 16:32:33 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/