Bug 1468950 - [RFE] Have a global option to set per node limit to the number of multiplexed brick processes
Status: VERIFIED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assigned To: Samikshan Bairagya
QA Contact: nchilaka
Whiteboard: brick-multiplexing
Keywords: FutureFeature
Depends On:
Blocks: 1417138
Reported: 2017-07-10 00:36 EDT by Atin Mukherjee
Modified: 2017-07-25 06:09 EDT (History)
CC List: 4 users

See Also:
Fixed In Version: glusterfs-3.8.4-33
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1468962
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Atin Mukherjee 2017-07-10 00:36:51 EDT
Description of problem:

The main purpose of the brick multiplexing feature is to reduce the number of brick processes and thereby reuse resources. Currently there is no cap on how many bricks can be multiplexed into the brick processes running on a node when brick multiplexing is enabled. The proposal is to add a global option, cluster.max-bricks-per-process, which controls the number of brick instances that can be attached to a single brick process.
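
For context, a minimal usage sketch. This assumes the option is set the same way as other cluster-wide options, i.e. via "gluster volume set all"; the value 3 below is an arbitrary example, not a recommended setting.

# Brick multiplexing must be enabled for the cap to apply (cluster-wide option)
gluster volume set all cluster.brick-multiplex on

# Cap the number of bricks that a single brick process may host
gluster volume set all cluster.max-bricks-per-process 3

# Observe the effect: count the glusterfsd processes running on this node
pgrep -c glusterfsd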
Comment 2 Atin Mukherjee 2017-07-10 00:37:57 EDT
upstream patch : https://review.gluster.org/17469
Comment 7 nchilaka 2017-07-24 10:08:03 EDT
QATP: functional validation
Note:
b = number of bricks existing per node
n = cluster.max-bricks-per-process
(A rough sketch of how bricks per process can be counted follows this list.)

1) Default should allow all bricks to attach to the same brick process, i.e. one glusterfsd per node, irrespective of the number of bricks hosted --> PASS
2) Setting the option to a value n must be effective from then on --> PASS, but only for new volume creates; existing volumes have to be restarted for it to take effect
3) If the number of bricks on a glusterfsd is already > n (i.e. they were created before setting the value to n), new bricks must still be created on a new glusterfsd --> PASS
4) If already b >> n and one of the b bricks is taken offline, a new brick create must still use a new pid (as long as b does not go below n)
5) If b >= n, an add-brick must spawn a new process
6) If b = n, a remove-brick followed by a new brick/volume create must use the existing process, since b < n after the remove-brick
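
A rough verification sketch for the cases above. Assumptions: standard gluster CLI output where the Pid is the last column of each "Brick" line in "gluster volume status" (the column layout may vary between versions), and VOLNAME is a placeholder volume name.

# List the PID hosting each brick and count how many bricks share each PID;
# with multiplexing, several Brick lines should report the same PID
gluster volume status all | awk '/^Brick/ {print $NF}' | sort | uniq -c

# Cross-check against the glusterfsd processes actually running on the node
ps -C glusterfsd -o pid=,args=

# Per case 2, restart an existing volume for the new cap to take effect on it
gluster volume stop VOLNAME
gluster volume start VOLNAME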
Comment 8 nchilaka 2017-07-25 06:09:28 EDT
Validation

All the cases mentioned in comment#7 have passed.
Test version: 3.8.4-35
Hence moving this BZ to verified.
Note: The memory consumption and profiling should be checked by the perf team; any discrepancies found there can be tracked separately.
Functionally, the RFE is healthy and any further issues can be tracked with new bugs.
E.g., some cosmetic bugs were raised, as below:
1474344 - Allow only the maximum number of bricks supported for cluster.max-bricks-per-process
1474342 - reset cluster.max-bricks-per-process value when brick mux is disabled
