Description of problem:
=======================
An EC volume is getting created without any redundant bricks. With disperse-count 4 and disperse-data 4, the redundancy count is 0. Ideally this should fail, and it used to fail.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.12.2-12.el7rhgs.x86_64

I am seeing this issue even on the latest 3.3.1-async, i.e. 3.8.4-54.13. However, this was not the case previously, so I don't know when the regression was introduced.

How reproducible:
=================
3/3, always reproducible

Steps to Reproduce:
===================
1. Create an EC volume with disperse-count 4, disperse-data 4, and redundancy count 0:

[root@dhcp42-53 ~]# gluster volume create test-dispersed disperse 4 disperse-data 4 10.70.42.53:/bricks/brick2/dispersed 10.70.42.160:/bricks/brick2/dispersed 10.70.42.138:/bricks/brick2/dispersed 10.70.42.164:/bricks/brick2/dispersed 10.70.42.40:/bricks/brick2/dispersed 10.70.42.159:/bricks/brick2/dispersed 10.70.42.53:/bricks/brick1/dispersed 10.70.42.160:/bricks/brick1/dispersed 10.70.42.138:/bricks/brick1/dispersed 10.70.42.164:/bricks/brick1/dispersed 10.70.42.40:/bricks/brick1/dispersed 10.70.42.159:/bricks/brick1/dispersed
volume create: test-dispersed: success: please start the volume to access data
[root@dhcp42-53 ~]#

Actual results:
===============
The volume create is successful.

Expected results:
=================
The volume create should fail.
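The expected behaviour can be sketched as a simple check: the redundancy count is disperse-count minus disperse-data, and a disperse volume with redundancy 0 tolerates no brick failures, so the create should be rejected. The sketch below is a hypothetical illustration of that validation, not the actual glusterd source:

```shell
# Hypothetical validation sketch (not glusterd code): derive the
# redundancy count from the options given on the command line.
disperse_count=4      # "disperse 4"
disperse_data=4       # "disperse-data 4"
redundancy=$((disperse_count - disperse_data))

# With redundancy 0 the volume cannot survive any brick failure,
# so the create should fail instead of succeeding.
if [ "$redundancy" -lt 1 ]; then
    echo "volume create: failed: redundancy count ($redundancy) must be at least 1"
fi
```

With the values from this bug (4 data bricks out of 4 total), the check fires, which is the behaviour the reporter expects from `gluster volume create`.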
Additional info:
================
Volume Name: test-dispersed
Type: Distributed-Disperse
Volume ID: 8d85f62b-7122-4b7f-8fb2-06fe66ad29e5
Status: Created
Snapshot Count: 0
Number of Bricks: 3 x (4 + 0) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.53:/bricks/brick2/dispersed
Brick2: 10.70.42.160:/bricks/brick2/dispersed
Brick3: 10.70.42.138:/bricks/brick2/dispersed
Brick4: 10.70.42.164:/bricks/brick2/dispersed
Brick5: 10.70.42.40:/bricks/brick2/dispersed
Brick6: 10.70.42.159:/bricks/brick2/dispersed
Brick7: 10.70.42.53:/bricks/brick1/dispersed
Brick8: 10.70.42.160:/bricks/brick1/dispersed
Brick9: 10.70.42.138:/bricks/brick1/dispersed
Brick10: 10.70.42.164:/bricks/brick1/dispersed
Brick11: 10.70.42.40:/bricks/brick1/dispersed
Brick12: 10.70.42.159:/bricks/brick1/dispersed
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

The above example is for a distributed-dispersed volume. Adding logs for a plain dispersed volume:

[root@dhcp42-53 ~]# gluster volume create test-dispersed1 disperse 4 disperse-data 4 10.70.42.53:/bricks/brick2/dispersed1 10.70.42.160:/bricks/brick2/dispersed1 10.70.42.138:/bricks/brick2/dispersed1 10.70.42.164:/bricks/brick2/dispersed1
volume create: test-dispersed1: success: please start the volume to access data

Volume Name: test-dispersed1
Type: Disperse
Volume ID: ab9251ba-02c8-49c7-9b3c-2ac0409ff104
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (4 + 0) = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.53:/bricks/brick2/dispersed1
Brick2: 10.70.42.160:/bricks/brick2/dispersed1
Brick3: 10.70.42.138:/bricks/brick2/dispersed1
Brick4: 10.70.42.164:/bricks/brick2/dispersed1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable
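The "Number of Bricks: 3 x (4 + 0) = 12" line reads as distribute-count x (data + redundancy) = total bricks. A minimal sketch of that arithmetic for the 12-brick case above (illustrative only, not gluster code):

```shell
# Illustrative arithmetic behind "Number of Bricks: 3 x (4 + 0) = 12".
total_bricks=12
disperse_count=4                                  # bricks per disperse subvolume
disperse_data=4                                   # data bricks per subvolume
redundancy=$((disperse_count - disperse_data))    # 0 here: no fault tolerance
subvols=$((total_bricks / disperse_count))        # distribute count
echo "${subvols} x (${disperse_data} + ${redundancy}) = ${total_bricks}"
```

This prints `3 x (4 + 0) = 12`, matching the volume info, and makes the "+ 0" visible as the missing redundancy that this bug is about.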
Yes, this is a regression. Tried this again on my setup:

[root@dhcp35-18 proc]# gluster volume create test-dispersed disperse 4 disperse-data 4 10.70.35.18:/gluster/brick1/distdispersed 10.70.35.57:/gluster/brick1/distdispersed 10.70.35.131:/gluster/brick1/distdispersed 10.70.35.66:/gluster/brick1/distdispersed 10.70.35.94:/gluster/brick1/distdispersed 10.70.35.122:/gluster/brick1/distdispersed 10.70.35.18:/gluster/brick2/distdispersed 10.70.35.57:/gluster/brick2/distdispersed
volume create: test-dispersed: success: please start the volume to access data

[root@dhcp35-18 proc]# gluster vol info
Volume Name: test-dispersed
Type: Distributed-Disperse
Volume ID: 7e2bb094-d78b-4f94-add4-5cb60c95c90d
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (4 + 0) = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.35.18:/gluster/brick1/distdispersed
Brick2: 10.70.35.57:/gluster/brick1/distdispersed
Brick3: 10.70.35.131:/gluster/brick1/distdispersed
Brick4: 10.70.35.66:/gluster/brick1/distdispersed
Brick5: 10.70.35.94:/gluster/brick1/distdispersed
Brick6: 10.70.35.122:/gluster/brick1/distdispersed
Brick7: 10.70.35.18:/gluster/brick2/distdispersed
Brick8: 10.70.35.57:/gluster/brick2/distdispersed
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Uploaded the sosreports to http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/ubansal/1597252/
Upstream patch: https://review.gluster.org/#/c/glusterfs/+/21478/
*** Bug 1613687 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0263