Bug 1472417 - No clear method to multiplex all bricks to one process (glusterfsd) with cluster.max-bricks-per-process option
Summary: No clear method to multiplex all bricks to one process (glusterfsd) with cluster.max-bricks-per-process option
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Samikshan Bairagya
QA Contact:
URL:
Whiteboard: brick-multiplexing
Depends On:
Blocks: 1472289
 
Reported: 2017-07-18 16:35 UTC by Samikshan Bairagya
Modified: 2017-09-05 17:37 UTC
CC: 7 users

Fixed In Version: glusterfs-3.12.0
Clone Of: 1472289
Environment:
Last Closed: 2017-09-05 17:37:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Samikshan Bairagya 2017-07-18 16:35:41 UTC
+++ This bug was initially created as a clone of Bug #1472289 +++

Description of problem:
========================
With the new option "cluster.max-bricks-per-process" we can now set a limit on the number of bricks multiplexed into one glusterfsd pid.


However, once I set the value to some n (where n > 1), every n bricks multiplex to one pid, the next n to the next pid, and so on.

Later, if for some reason (for example, I have scaled down my number of volumes) I want all the bricks to be muxed to just one glusterfsd again, there is no straightforward way to do it.

Following are the problems:
1) By default the value is 1, but in effect it means max (i.e. all bricks run on only one glusterfsd).
2) Once set to some value n where n > 1, we cannot later revert to a setting where all bricks mux to only one glusterfsd, because:
 a) setting cluster.max-bricks-per-process=1 results in all bricks spawning new glusterfsd processes (breaking brick mux);
 b) setting it to 0 has the same effect as 1.


Version-Release number of selected component (if applicable):
====================
3.8.4-34



Steps to Reproduce:
1. Create 10 volumes; don't start them.
2. Enable brick multiplexing.
3. Start all 10 volumes.
4. All bricks attach to the same glusterfsd process.
5. Now set cluster.max-bricks-per-process to 5.
6. Create another 10 volumes and start them.
7. The first 5 new volumes take a new glusterfsd, and the remaining 5 take the next one.
8. Now if I want to make all bricks run on the same glusterfsd, I cannot revert, as setting cluster.max-bricks-per-process to 1 or 0 breaks the brick mux feature.
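The cap semantics described in the steps above can be sketched as a small assignment model (a hypothetical Python helper, not glusterd code; it assumes bricks are packed greedily into processes up to the configured cap, and that a cap of 0 means no limit):

```python
def assign_bricks(num_bricks, max_per_process):
    """Greedy sketch of brick-to-process packing.

    A cap of 0 (the proposed default) means no limit: every brick
    is multiplexed into a single glusterfsd process.  This is an
    illustrative model only, not glusterd's actual placement code.
    """
    if max_per_process == 0:
        # no limit: all bricks share process 0
        return [0] * num_bricks
    # otherwise fill each process up to the cap before starting a new one
    return [i // max_per_process for i in range(num_bricks)]

# 10 single-brick volumes with mux on and no cap: one process
print(len(set(assign_bricks(10, 0))))
# a cap of 5 over 20 bricks: four processes of five bricks each
print(len(set(assign_bricks(20, 5))))
```

This also illustrates the complaint above: with a cap of 1, every brick lands in its own process, which is indistinguishable from brick multiplexing being off.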

Actual results:
If I want to make all bricks run on the same glusterfsd, I cannot revert, as setting cluster.max-bricks-per-process to 1 or 0 breaks the brick mux feature.


Expected results:
==================
Define an integer value (it should be 0 or 1) that makes all bricks run on the same glusterfsd.

Additional info:

--- Additional comment from Atin Mukherjee on 2017-07-18 07:56:10 EDT ---

We have a plan to make the default 0 instead of 1, which would ensure that once we fall back to the default with brick mux enabled, all the bricks get attached to a single process. However, we'd need to ensure that volumes are restarted for this to take effect.

@Samikshan - can you please send an upstream patch?

--- Additional comment from nchilaka on 2017-07-18 08:00:24 EDT ---

(In reply to Atin Mukherjee from comment #1)
> We have a plan to make the default to 0 instead of 1 which would ensure that
> once we fall back to default with brick mux enabled all the bricks get
> attached to a single process. However we'd need to ensure that volumes are
> restarted to have this into effect.
> 
> @Samikshan - can you please send an upstream patch?

Completely fine with the restart requirement.
One more question: if we make 0 mean the default brick mux behaviour, then what about 1?
It has no importance; it more or less breaks the brick mux feature.
It may be better to have both 0 and 1 mean the default brick mux behaviour.

--- Additional comment from Atin Mukherjee on 2017-07-18 09:33:39 EDT ---

Having both 0 and 1 as the default value doesn't make any sense to me. What we could do at best is have 0 as the default and have the CLI disallow configuring this option with the value 1. Does that make sense?
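The approach agreed here could be sketched as a validation step (hypothetical function and constant names; glusterd's actual validation lives in its C option-handling code):

```python
DEFAULT_MAX_BRICKS_PER_PROCESS = 0  # 0 = no limit: all bricks in one process

def validate_max_bricks_per_process(value, brick_mux_enabled=True):
    """Reject the confusing value 1 when brick multiplexing is on.

    Sketch of the behaviour discussed above: 0 becomes the default
    (unlimited multiplexing), and 1 is refused because it would
    effectively disable multiplexing entirely.
    """
    if value < 0:
        raise ValueError("value must be a non-negative integer")
    if brick_mux_enabled and value == 1:
        raise ValueError(
            "cluster.max-bricks-per-process cannot be set to 1 "
            "when cluster.brick-multiplex is enabled")
    return value
```

Any non-negative value other than 1 passes through unchanged, so existing configurations with n > 1 keep working.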

--- Additional comment from nchilaka on 2017-07-18 09:38:26 EDT ---

(In reply to Atin Mukherjee from comment #3)
> Having both 0 and 1 as default value doesn't make any sense to me. What we
> could do at best is have 0 as default and CLI doesn't allow this option to
> be configured with value as 1. Does it make sense?

makes sense

Comment 1 Worker Ant 2017-07-18 16:38:16 UTC
REVIEW: https://review.gluster.org/17819 (glusterd: When brick-mux is enabled, set default mux limit to 0) posted (#1) for review on master by Samikshan Bairagya (samikshan)

Comment 2 Worker Ant 2017-07-19 05:30:58 UTC
REVIEW: https://review.gluster.org/17819 (glusterd: Set default value for cluster.max-bricks-per-process to 0) posted (#2) for review on master by Samikshan Bairagya (samikshan)

Comment 3 Worker Ant 2017-07-19 05:36:23 UTC
REVIEW: https://review.gluster.org/17819 (glusterd: Set default value for cluster.max-bricks-per-process to 0) posted (#3) for review on master by Samikshan Bairagya (samikshan)

Comment 4 Worker Ant 2017-07-19 08:52:50 UTC
REVIEW: https://review.gluster.org/17819 (glusterd: Set default value for cluster.max-bricks-per-process to 0) posted (#4) for review on master by Samikshan Bairagya (samikshan)

Comment 5 Worker Ant 2017-07-19 20:17:31 UTC
COMMIT: https://review.gluster.org/17819 committed in master by Jeff Darcy (jeff.us) 
------
commit acdbdaeba222e9ffeae077485681e5101c48d107
Author: Samikshan Bairagya <samikshan>
Date:   Tue Jul 18 21:33:45 2017 +0530

    glusterd: Set default value for cluster.max-bricks-per-process to 0
    
    When brick-multiplexing is enabled, and
    "cluster.max-bricks-per-process" isn't explicitly set, multiplexing
    happens without any limit set. But the default value set for that
    tunable is 1, which is confusing. This commit sets the default
    value to 0, and prevents the user from being able to set this value
    to 1 when brick-multiplexing is enabled. The default value of 0
    denotes that brick-multiplexing can happen without any limit on the
    number of bricks per process.
    
    Change-Id: I4647f7bf5837d520075dc5c19a6e75bc1bba258b
    BUG: 1472417
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: https://review.gluster.org/17819
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    CentOS-regression: Gluster Build System <jenkins.org>

Comment 6 Shyamsundar 2017-09-05 17:37:29 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/

