Bug 1468950

Summary: [RFE] Have a global option to set per node limit to the number of multiplexed brick processes
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Atin Mukherjee <amukherj>
Component: core
Assignee: Samikshan Bairagya <sbairagy>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.3
CC: nchilaka, rcyriac, rhs-bugs, storage-qa-internal
Target Milestone: ---
Keywords: FutureFeature
Target Release: RHGS 3.3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: brick-multiplexing
Fixed In Version: glusterfs-3.8.4-33
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1468962 (view as bug list)
Environment:
Last Closed: 2017-09-21 05:02:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1417138

Description Atin Mukherjee 2017-07-10 04:36:51 UTC
Description of problem:

The main purpose of the brick multiplexing feature is to reduce the number of brick processes and thereby reuse resources. Currently there is no cap on the number of bricks that can be multiplexed into a single brick process on a node when brick multiplexing is enabled. The proposal is to add a global option, cluster.max-bricks-per-process, which controls the number of brick instances that can be attached to a single brick process.
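As a sketch of the intended usage (the command form follows the standard gluster CLI for cluster-wide options; the value 3 is a hypothetical example, and the behavior of the default is as described in this RFE, where no limit means all bricks on a node share one glusterfsd):

```shell
# Brick multiplexing must be enabled cluster-wide for the new option to matter.
gluster volume set all cluster.brick-multiplex on

# Proposed option: cap the number of bricks attached to a single brick
# process at 3 (example value). Without a cap, every brick on a node is
# multiplexed into the same glusterfsd process.
gluster volume set all cluster.max-bricks-per-process 3
```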

Comment 2 Atin Mukherjee 2017-07-10 04:37:57 UTC
upstream patch : https://review.gluster.org/17469

Comment 7 Nag Pavan Chilakam 2017-07-24 14:08:03 UTC
QATP functional validation
Note:
b = number of bricks existing per node
n = value of cluster.max-bricks-per-process

1) By default, all bricks should attach to the same brick process, i.e. one glusterfsd per node irrespective of the number of bricks hosted --> PASS
2) Setting the option to a value n must be effective from then on --> PASS, but it takes effect only for new volume creates; existing volumes have to be restarted for the change to apply
3) If the number of bricks on a glusterfsd process exceeds n (i.e. they were created before the value was set to n), new bricks must still be created on a new glusterfsd --> PASS
4) If already b >> n, and one of the b bricks is taken offline, a newly created brick must still use a new pid (as long as b does not drop below n)
5) If b >= n, add-brick must spawn a new process
6) If b = n, a remove-brick followed by a new brick/volume create must use the existing process, since b < n after the remove-brick
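A minimal way to check the cases above on a node is to compare brick PIDs: multiplexed bricks report the same PID, and a brick placed on a new glusterfsd reports a different one (commands below are standard gluster/ps usage; no volume names are assumed):

```shell
# The PID column shows which process serves each brick; bricks multiplexed
# into the same brick process share a PID.
gluster volume status

# Count the distinct glusterfsd processes running on this node, to compare
# against ceil(b / n).
pgrep -c glusterfsd
```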

Comment 8 Nag Pavan Chilakam 2017-07-25 10:09:28 UTC
validation

All the cases mentioned in comment#7 have passed.
Test version: 3.8.4-35
Hence moving this BZ to verified.
Note: Memory consumption and profiling should be checked by the perf team, and any discrepancies there can be tracked separately.
Functionally, the RFE is healthy, and any further issues can be tracked with new bugs.
E.g., the following cosmetic bugs were raised:
1474344 - Allow only the maximum number of bricks supported for cluster.max-bricks-per-process
1474342 - reset cluster.max-bricks-per-process value when brick mux is disabled

Comment 10 errata-xmlrpc 2017-09-21 05:02:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774