Bug 1597252 - EC volume getting created without any redundant brick
Summary: EC volume getting created without any redundant brick
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: disperse
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 3
Assignee: Sunil Kumar Acharya
QA Contact: Upasana
URL:
Whiteboard:
Duplicates: 1613687 (view as bug list)
Depends On:
Blocks: 1642448
 
Reported: 2018-07-02 11:43 UTC by Upasana
Modified: 2019-02-04 07:41 UTC (History)
CC: 9 users

Fixed In Version: glusterfs-3.12.2-33
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1642448 (view as bug list)
Environment:
Last Closed: 2019-02-04 07:41:25 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0263 0 None None None 2019-02-04 07:41:37 UTC

Description Upasana 2018-07-02 11:43:43 UTC
Description of problem:
========================
EC volume is getting created without any redundant brick.
The disperse-count is 4, disperse-data is 4, and the redundancy count is 0.
Ideally this should fail, and it used to fail in earlier builds.
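For reference, the redundancy count of an EC (dispersed) subvolume is simply disperse-count minus disperse-data; a minimal sketch of that arithmetic (illustration only, not Gluster code):

```python
def redundancy(disperse_count: int, disperse_data: int) -> int:
    """Redundant bricks = total bricks per subvolume minus data bricks."""
    return disperse_count - disperse_data

# The configuration from this report: 4 total bricks, 4 data bricks.
# Redundancy is 0, so the volume cannot tolerate a single brick failure.
print(redundancy(4, 4))  # 0
```

With zero redundant bricks the volume offers no fault tolerance, which is why such a configuration is expected to be rejected at create time.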


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.12.2-12.el7rhgs.x86_64

I am seeing this issue even on the latest 3.3.1-async build, i.e. 3.8.4-54.13.

However, this was not the case previously, so it is unclear when the regression was introduced.

How reproducible: 
================
3/3
Always reproducible


Steps to Reproduce:
==================
1. Create an EC volume with disperse-count 4, disperse-data 4, and redundancy count 0.

[root@dhcp42-53 ~]# gluster volume create test-dispersed disperse 4 disperse-data 4 10.70.42.53:/bricks/brick2/dispersed 10.70.42.160:/bricks/brick2/dispersed 10.70.42.138:/bricks/brick2/dispersed 10.70.42.164:/bricks/brick2/dispersed 10.70.42.40:/bricks/brick2/dispersed 10.70.42.159:/bricks/brick2/dispersed 10.70.42.53:/bricks/brick1/dispersed 10.70.42.160:/bricks/brick1/dispersed 10.70.42.138:/bricks/brick1/dispersed 10.70.42.164:/bricks/brick1/dispersed 10.70.42.40:/bricks/brick1/dispersed 10.70.42.159:/bricks/brick1/dispersed
volume create: test-dispersed: success: please start the volume to access data
[root@dhcp42-53 ~]# 


Actual results:
===============
Volume creation succeeds.


Expected results: 
===============
Volume creation should fail, since the redundancy count is 0.



Additional info:
================

Volume Name: test-dispersed
Type: Distributed-Disperse
Volume ID: 8d85f62b-7122-4b7f-8fb2-06fe66ad29e5
Status: Created
Snapshot Count: 0
Number of Bricks: 3 x (4 + 0) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.53:/bricks/brick2/dispersed
Brick2: 10.70.42.160:/bricks/brick2/dispersed
Brick3: 10.70.42.138:/bricks/brick2/dispersed
Brick4: 10.70.42.164:/bricks/brick2/dispersed
Brick5: 10.70.42.40:/bricks/brick2/dispersed
Brick6: 10.70.42.159:/bricks/brick2/dispersed
Brick7: 10.70.42.53:/bricks/brick1/dispersed
Brick8: 10.70.42.160:/bricks/brick1/dispersed
Brick9: 10.70.42.138:/bricks/brick1/dispersed
Brick10: 10.70.42.164:/bricks/brick1/dispersed
Brick11: 10.70.42.40:/bricks/brick1/dispersed
Brick12: 10.70.42.159:/bricks/brick1/dispersed
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable


The above example is for a distributed-dispersed volume.
Adding logs for a plain dispersed volume:

[root@dhcp42-53 ~]# gluster volume create test-dispersed1 disperse 4 disperse-data 4 10.70.42.53:/bricks/brick2/dispersed1 10.70.42.160:/bricks/brick2/dispersed1 10.70.42.138:/bricks/brick2/dispersed1 10.70.42.164:/bricks/brick2/dispersed1
volume create: test-dispersed1: success: please start the volume to access data

 
Volume Name: test-dispersed1
Type: Disperse
Volume ID: ab9251ba-02c8-49c7-9b3c-2ac0409ff104
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (4 + 0) = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.53:/bricks/brick2/dispersed1
Brick2: 10.70.42.160:/bricks/brick2/dispersed1
Brick3: 10.70.42.138:/bricks/brick2/dispersed1
Brick4: 10.70.42.164:/bricks/brick2/dispersed1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

Comment 5 Upasana 2018-07-06 04:24:37 UTC
Yes, this is a regression. I tried this again on my setup:

[root@dhcp35-18 proc]# gluster volume create test-dispersed disperse 4 disperse-data 4 10.70.35.18:/gluster/brick1/distdispersed 10.70.35.57:/gluster/brick1/distdispersed 10.70.35.131:/gluster/brick1/distdispersed 10.70.35.66:/gluster/brick1/distdispersed 10.70.35.94:/gluster/brick1/distdispersed 10.70.35.122:/gluster/brick1/distdispersed 10.70.35.18:/gluster/brick2/distdispersed 10.70.35.57:/gluster/brick2/distdispersed
volume create: test-dispersed: success: please start the volume to access data
[root@dhcp35-18 proc]# gluster vol info
 
Volume Name: test-dispersed
Type: Distributed-Disperse
Volume ID: 7e2bb094-d78b-4f94-add4-5cb60c95c90d
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (4 + 0) = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.35.18:/gluster/brick1/distdispersed
Brick2: 10.70.35.57:/gluster/brick1/distdispersed
Brick3: 10.70.35.131:/gluster/brick1/distdispersed
Brick4: 10.70.35.66:/gluster/brick1/distdispersed
Brick5: 10.70.35.94:/gluster/brick1/distdispersed
Brick6: 10.70.35.122:/gluster/brick1/distdispersed
Brick7: 10.70.35.18:/gluster/brick2/distdispersed
Brick8: 10.70.35.57:/gluster/brick2/distdispersed
Options Reconfigured:
transport.address-family: inet
nfs.disable: on


Updated the sosreports in - http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/ubansal/1597252/

Comment 12 Sunil Kumar Acharya 2018-10-24 13:18:45 UTC
Upstream patch: https://review.gluster.org/#/c/glusterfs/+/21478/
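The expected behavior is that volume creation rejects a configuration whose redundancy would be zero. A hypothetical sketch of such a check follows; the function name and message are illustrative assumptions, not the actual code from the upstream patch linked above:

```python
def validate_disperse_config(disperse_count: int, disperse_data: int) -> None:
    """Reject EC configurations that leave no redundant bricks.

    Hypothetical re-implementation of the create-time validation;
    the real fix lives in the upstream patch referenced above.
    """
    redundancy = disperse_count - disperse_data
    if redundancy <= 0:
        raise ValueError(
            f"disperse-data ({disperse_data}) must be less than "
            f"disperse-count ({disperse_count}); redundancy would be {redundancy}"
        )

validate_disperse_config(4, 3)   # OK: redundancy 1
# validate_disperse_config(4, 4) would raise ValueError (redundancy 0)
```

With a check like this in place, the `gluster volume create ... disperse 4 disperse-data 4 ...` commands shown in this report would fail instead of succeeding.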

Comment 13 Ashish Pandey 2018-11-16 07:39:01 UTC
*** Bug 1613687 has been marked as a duplicate of this bug. ***

Comment 24 errata-xmlrpc 2019-02-04 07:41:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0263

