Description of problem:
========================
With the new option "cluster.max-bricks-per-process" we can now set a limit on the number of bricks multiplexed into one glusterfsd pid. Once I set the value to some n (where n > 1), every n bricks multiplex to one pid, the next n to the next pid, and so on.

However, at a later point in time (for example because I have scaled down my number of volumes) I may want all bricks to be muxed to a single glusterfsd again, and there is no straightforward way to do that.

The problems are:
1) By default the value is 1, but in effect it means "max" (i.e. all bricks run on only one glusterfsd).
2) Once set to some value n > 1, we cannot later revert to a setting where all bricks mux to only one glusterfsd, because:
   a) Setting cluster.max-bricks-per-process=1 results in every brick spawning a new glusterfsd (breaking brick mux).
   b) Setting it to 0 has the same effect as 1.

Version-Release number of selected component (if applicable):
====================
3.8.4-34

Steps to Reproduce:
1. Create 10 volumes, don't start them.
2. Enable brick mux.
3. Start all 10 volumes.
4. All bricks attach to the same glusterfsd.
5. Now set cluster.max-bricks-per-process to 5.
6. Create another 10 volumes and start them.
7. The bricks of the first 5 new volumes attach to a new glusterfsd and the remaining 5 to the next one.
8. Now, if I want all bricks to run on the same glusterfsd again, I cannot revert: setting cluster.max-bricks-per-process to 1 or 0 breaks the brick mux feature.

Actual results:
If I want to make all bricks run on the same glusterfsd, I cannot revert, as setting cluster.max-bricks-per-process to 1 or 0 breaks the brick mux feature.

Expected results:
==================
Define an integer value (it should be 0 or 1) that makes all bricks run on the same glusterfsd.

Additional info:
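For reference, a minimal reproduction sketch of the above steps. The volume names (vol1..vol20) and brick paths (server1:/bricks/brickN) are hypothetical; the PID column of "gluster volume status" is what shows which glusterfsd each brick is attached to:

# enable brick mux, create and start 10 single-brick volumes (paths are examples only)
gluster volume set all cluster.brick-multiplex on
for i in $(seq 1 10); do
    gluster volume create vol$i server1:/bricks/brick$i force
    gluster volume start vol$i
done
# all bricks should report the same glusterfsd PID
gluster volume status

# limit to 5 bricks per process, then add 10 more volumes
gluster volume set all cluster.max-bricks-per-process 5
for i in $(seq 11 20); do
    gluster volume create vol$i server1:/bricks/brick$i force
    gluster volume start vol$i
done
# the new bricks are now spread across additional glusterfsd processes, 5 per process
gluster volume status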
We have a plan to make the default 0 instead of 1, which would ensure that once we fall back to the default with brick mux enabled, all the bricks get attached to a single process. However, we'd need to ensure that volumes are restarted for this to take effect. @Samikshan - can you please send an upstream patch?
(In reply to Atin Mukherjee from comment #1) > We have a plan to make the default 0 instead of 1, which would ensure that > once we fall back to the default with brick mux enabled, all the bricks get > attached to a single process. However, we'd need to ensure that volumes are > restarted for this to take effect. > > @Samikshan - can you please send an upstream patch? Completely fine with the restart requirement. One more question: if we make 0 the default for the brick mux feature, then what about 1? It has no importance; it more or less breaks the brick mux feature. It may be better to have both 0 and 1 map to the default brick mux behaviour.
Having both 0 and 1 as the default value doesn't make any sense to me. What we could do at best is have 0 as the default and have the CLI disallow configuring this option with a value of 1. Does that make sense?
(In reply to Atin Mukherjee from comment #3) > Having both 0 and 1 as the default value doesn't make any sense to me. What we > could do at best is have 0 as the default and have the CLI disallow this option > to be configured with a value of 1. Does that make sense? makes sense
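Regarding the restart requirement discussed above, a minimal sketch of what falling back to the default would look like once the patch lands (the volume name vol1 is only an example; the stop/start is needed so the bricks re-attach under the new limit):

gluster volume set all cluster.max-bricks-per-process 0
# restart each affected volume so its bricks re-attach to a single glusterfsd
gluster volume stop vol1
gluster volume start vol1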
upstream patch : https://review.gluster.org/#/c/17819/
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/112934/
on_qa validation:

With brick mux on:
1) Setting the value to 1 should not be allowed, as it creates confusion.
2) The default must be zero.
3) Setting any value from {0, 2..n} must be allowed, where n is a positive whole number (other than 1).
All of the above must also reflect in behaviour, i.e. if the value is set to 4, then at most 4 bricks per node must be associated with one glusterfsd process.

Validation:

[root@dhcp35-192 ~]# gluster v get all all
Option                                  Value
------                                  -----
cluster.server-quorum-ratio             51
cluster.enable-shared-storage           disable
cluster.op-version                      31101
cluster.brick-multiplex                 on
cluster.max-bricks-per-process          0
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 0
volume set: success
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 1
volume set: failed: Brick-multiplexing is enabled. Please set this option to a value other than 1 to make use of the brick-multiplexing feature.
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 2
volume set: success
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 3
volume set: success
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 0
volume set: success
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 1
volume set: failed: Brick-multiplexing is enabled. Please set this option to a value other than 1 to make use of the brick-multiplexing feature.
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 3
volume set: success
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 199
volume set: success

As all the above cases passed, moving to verified. Test version: 3.8.4-35

With brick mux off:
1) The default should show as 0.
2) Changing this value should not be allowed.
3) Brick mux should not be in effect.

Validation: if we try to set the value when brick mux is off, we get the below error as expected:

[root@dhcp35-192 ~]# gluster v get all all
Option                                  Value
------                                  -----
cluster.server-quorum-ratio             51
cluster.enable-shared-storage           disable
cluster.op-version                      31101
cluster.brick-multiplex                 disable
cluster.max-bricks-per-process          0
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 10
volume set: failed: Brick-multiplexing is not enabled. Please enable brick multiplexing before trying to set this option.
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 0
volume set: failed: Brick-multiplexing is not enabled. Please enable brick multiplexing before trying to set this option.
[root@dhcp35-192 ~]# gluster v set all cluster.max-bricks-per-process 1
volume set: failed: Brick-multiplexing is not enabled. Please enable brick multiplexing before trying to set this option.
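For the behavioural part (e.g. at most 4 bricks per glusterfsd when the option is set to 4), a sketch of how the brick-to-process mapping can be checked; this assumes brick mux is on, some volumes are started, and that the PID is the last column of the brick lines in "gluster volume status" (column layout may differ across versions):

gluster volume set all cluster.max-bricks-per-process 4
# count how many bricks are attached to each glusterfsd PID; no count should exceed 4
gluster volume status | awk '/^Brick/ {print $NF}' | sort | uniq -c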
raised cosmetic bug 1474342 - reset cluster.max-bricks-per-process value when brick mux is disabled
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2774