Description of problem:
When we set cluster.quorum-count to the number of bricks that must be up, the command should validate the maximum number of bricks it can accept.

Version-Release number of selected component (if applicable):
[root@node1 b1]# rpm -qa | grep glusterfs
glusterfs-api-3.6.0.42-1.el6rhs.x86_64
glusterfs-geo-replication-3.6.0.42-1.el6rhs.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-debuginfo-3.6.0.41-1.el6rhs.x86_64
glusterfs-3.6.0.42-1.el6rhs.x86_64
glusterfs-fuse-3.6.0.42-1.el6rhs.x86_64
glusterfs-server-3.6.0.42-1.el6rhs.x86_64
glusterfs-rdma-3.6.0.42-1.el6rhs.x86_64
glusterfs-libs-3.6.0.42-1.el6rhs.x86_64
glusterfs-cli-3.6.0.42-1.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x3 distributed-replicate volume
2. Execute: gluster volume set <volname> cluster.quorum-type fixed
3. Execute: gluster volume set <volname> cluster.quorum-count <no. of bricks to be up>

Actual results:
gluster volume set <volname> cluster.quorum-count accepts any value between [1 - 2147483647].

Expected results:
gluster volume set <volname> cluster.quorum-count should validate the maximum brick count it accepts.

Additional info:
[root@node1 b1]# gluster v info

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: ad77882e-9000-4aef-9c97-4a9dadd85ac8
Status: Started
Snap Volume: no
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Brick5: 10.70.47.143:/rhs/brick1/b5
Brick6: 10.70.47.145:/rhs/brick1/b6
Options Reconfigured:
cluster.quorum-count: 1
cluster.quorum-type: fixed
performance.readdir-ahead: on
Upstream patch: https://review.gluster.org/#/c/19104/
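The verified behaviour below amounts to bounding cluster.quorum-count by the volume's replica count instead of INT_MAX. A minimal Python sketch of that range check, using the error wording seen in the verification output; the function name and signature are illustrative, not the actual glusterd symbols:

```python
def validate_quorum_count(value: int, replica_count: int) -> None:
    """Reject quorum-count values outside [1, replica_count].

    Post-patch, the upper bound is the volume's replica count
    rather than 2147483647. Names here are hypothetical.
    """
    if not 1 <= value <= replica_count:
        raise ValueError(
            f"{value} in cluster.quorum-count {value} is out of range "
            f"[1 - {replica_count}]"
        )
```

For a 2 x 3 distributed-replicate volume (replica count 3), values 1-3 pass and 4 is rejected, matching the "out of range [1 - 3]" failure shown in scenario 1 below.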
Update:
=========
Verified with build: glusterfs-3.12.2-7.el7rhgs.x86_64

Scenario 1:
1. Create a 2 x 3 distribute-replicate volume and start it
2. Set cluster.quorum-type to fixed
3. Try setting cluster.quorum-count to 4; it fails, which is expected.

# gluster vol set 23 cluster.quorum-count 4
volume set: failed: 4 in cluster.quorum-count 4 is out of range [1 - 3]
#
# gluster vol info 23

Volume Name: 23
Type: Distributed-Replicate
Volume ID: d3e0d371-3c00-49ef-a7eb-5ae12cd80388
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.35.61:/bricks/brick0/testvol_distributed-replicated_brick0
Brick2: 10.70.35.174:/bricks/brick0/testvol_distributed-replicated_brick1
Brick3: 10.70.35.17:/bricks/brick0/testvol_distributed-replicated_brick2
Brick4: 10.70.35.163:/bricks/brick0/testvol_distributed-replicated_brick3
Brick5: 10.70.35.136:/bricks/brick0/testvol_distributed-replicated_brick4
Brick6: 10.70.35.214:/bricks/brick0/testvol_distributed-replicated_brick5
Options Reconfigured:
cluster.quorum-count: 3
cluster.quorum-type: fixed
transport.address-family: inet
nfs.disable: on
cluster.localtime-logging: disable
#

Scenario 2:
1. Create a 1 x 2 replicate volume and start it
2. Set cluster.quorum-type to fixed
3. Try setting cluster.quorum-count to 3; it fails, which is expected.

# gluster vol set 12 cluster.quorum-count 3
volume set: failed: 3 in cluster.quorum-count 3 is out of range [1 - 2]
#
# gluster vol info 12

Volume Name: 12
Type: Replicate
Volume ID: c95239e9-d6f9-4d2b-9f85-6ef847542b18
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.35.61:/bricks/brick1/b0
Brick2: 10.70.35.174:/bricks/brick1/b1
Options Reconfigured:
cluster.quorum-count: 2
cluster.quorum-type: fixed
transport.address-family: inet
nfs.disable: on
cluster.localtime-logging: disable
#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607