Bug 1642448 - EC volume getting created without any redundant brick
Summary: EC volume getting created without any redundant brick
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Sunil Kumar Acharya
QA Contact:
URL:
Whiteboard:
Depends On: 1597252
Blocks:
Reported: 2018-10-24 12:39 UTC by Sunil Kumar Acharya
Modified: 2019-03-25 16:31 UTC (History)
CC: 8 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1597252
Environment:
Last Closed: 2018-11-20 09:30:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Sunil Kumar Acharya 2018-10-24 12:39:35 UTC
Description of problem:
========================
An EC volume gets created without any redundant brick.
With disperse-count 4 and disperse-data 4, the redundancy count is 0.
Such a create request should fail, and it used to fail in earlier releases.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.12.2-12.el7rhgs.x86_64

I am seeing this issue even on the latest 3.3.1-async, i.e. 3.8.4-54.13.

However, this was not the case previously, so it is unclear when the regression was introduced.

How reproducible: 
================
Always reproducible (3/3)


Steps to Reproduce:
==================
1. Create an EC volume with disperse-count 4 and disperse-data 4, so the redundancy count is 0:

[root@dhcp42-53 ~]# gluster volume create test-dispersed disperse 4 disperse-data 4 10.70.42.53:/bricks/brick2/dispersed 10.70.42.160:/bricks/brick2/dispersed 10.70.42.138:/bricks/brick2/dispersed 10.70.42.164:/bricks/brick2/dispersed 10.70.42.40:/bricks/brick2/dispersed 10.70.42.159:/bricks/brick2/dispersed 10.70.42.53:/bricks/brick1/dispersed 10.70.42.160:/bricks/brick1/dispersed 10.70.42.138:/bricks/brick1/dispersed 10.70.42.164:/bricks/brick1/dispersed 10.70.42.40:/bricks/brick1/dispersed 10.70.42.159:/bricks/brick1/dispersed
volume create: test-dispersed: success: please start the volume to access data
[root@dhcp42-53 ~]# 


Actual results:
===============
Volume creation succeeds.


Expected results:
===============
Volume creation should fail, because the volume has no redundant brick.



Additional info:
================

Volume Name: test-dispersed
Type: Distributed-Disperse
Volume ID: 8d85f62b-7122-4b7f-8fb2-06fe66ad29e5
Status: Created
Snapshot Count: 0
Number of Bricks: 3 x (4 + 0) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.53:/bricks/brick2/dispersed
Brick2: 10.70.42.160:/bricks/brick2/dispersed
Brick3: 10.70.42.138:/bricks/brick2/dispersed
Brick4: 10.70.42.164:/bricks/brick2/dispersed
Brick5: 10.70.42.40:/bricks/brick2/dispersed
Brick6: 10.70.42.159:/bricks/brick2/dispersed
Brick7: 10.70.42.53:/bricks/brick1/dispersed
Brick8: 10.70.42.160:/bricks/brick1/dispersed
Brick9: 10.70.42.138:/bricks/brick1/dispersed
Brick10: 10.70.42.164:/bricks/brick1/dispersed
Brick11: 10.70.42.40:/bricks/brick1/dispersed
Brick12: 10.70.42.159:/bricks/brick1/dispersed
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable


The above example is for a distributed-dispersed volume.
Adding logs for a plain dispersed volume:

[root@dhcp42-53 ~]# gluster volume create test-dispersed1 disperse 4 disperse-data 4 10.70.42.53:/bricks/brick2/dispersed1 10.70.42.160:/bricks/brick2/dispersed1 10.70.42.138:/bricks/brick2/dispersed1 10.70.42.164:/bricks/brick2/dispersed1
volume create: test-dispersed1: success: please start the volume to access data

 
Volume Name: test-dispersed1
Type: Disperse
Volume ID: ab9251ba-02c8-49c7-9b3c-2ac0409ff104
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (4 + 0) = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.53:/bricks/brick2/dispersed1
Brick2: 10.70.42.160:/bricks/brick2/dispersed1
Brick3: 10.70.42.138:/bricks/brick2/dispersed1
Brick4: 10.70.42.164:/bricks/brick2/dispersed1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

Comment 1 Worker Ant 2018-10-24 13:17:25 UTC
REVIEW: https://review.gluster.org/21478 (cluster/ec : Prevent volume create without redundant brick) posted (#1) for review on master by Sunil Kumar Acharya

Comment 2 Worker Ant 2018-10-24 19:56:40 UTC
COMMIT: https://review.gluster.org/21478 committed in master by "Sunil Kumar Acharya" <sheggodu@redhat.com> with a commit message- cluster/ec : Prevent volume create without redundant brick

Problem:
EC volumes can be created without any redundant brick.

Solution:
Updated the conditional check to avoid volume create without
redundant brick.

fixes: bz#1642448
Change-Id: I0cb334b1b9378d67fcb8abf793dbe312c3179c0b
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
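The fix described above tightens the parameter validation at volume-create time. As a rough illustration only (the function and variable names below are hypothetical, not the actual glusterd/cli code), the check amounts to: redundancy is derived as disperse-count minus disperse-data, it must be at least 1, and it must stay below half of disperse-count:

```c
#include <stdbool.h>

/* Hypothetical sketch of the validation the fix adds; names are
 * illustrative and do not match the real glusterd sources.
 *
 * For a dispersed volume, disperse-count = data + redundancy, so
 * redundancy is derived from the two user-supplied counts. A volume
 * with zero redundant bricks offers no fault tolerance and must be
 * rejected; likewise redundancy may not reach half of disperse-count.
 */
static bool
ec_counts_valid(int disperse_count, int data_count)
{
        int redundancy = disperse_count - data_count;

        /* at least one redundant brick is required */
        if (redundancy < 1)
                return false;

        /* redundancy must stay below half of disperse-count */
        if (2 * redundancy >= disperse_count)
                return false;

        return true;
}
```

Under this sketch, the reported configuration (disperse 4, disperse-data 4) yields redundancy 0 and is rejected, while e.g. disperse 6 with disperse-data 4 (redundancy 2) passes.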

Comment 3 Shyamsundar 2019-03-25 16:31:31 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

