Bug 1232183 - cli correction: creating multiple bricks of a disperse volume on the same server shows "replicate volume" instead of "disperse volume"
Summary: cli correction: creating multiple bricks of a disperse volume on the same server shows "replicate volume" instead of "disperse volume"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1232185
 
Reported: 2015-06-16 09:09 UTC by Ashish Pandey
Modified: 2016-06-16 13:12 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1232185
Environment:
Last Closed: 2016-06-16 13:12:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ashish Pandey 2015-06-16 09:09:41 UTC
Description of problem:

A correction needs to be made to the CLI output.

If multiple bricks of a disperse volume are created on the same server, the resulting error message refers to a replicate volume instead of a disperse volume.

[root@ninja ~]# gluster v create testvol disperse 11 redundancy 3 ninja:/rhs/brick1/testvol1 transformers:/rhs/brick1/testvol2 interstellar:/rhs/brick1/testvol3 ninja:/rhs/brick2/testvol4 transformers:/rhs/brick2/testvol5 interstellar:/rhs/brick2/testvol6 ninja:/rhs/brick3/testvol7 transformers:/rhs/brick3/testvol8 interstellar:/rhs/brick3/testvol9 ninja:/rhs/brick4/testvol10 transformers:/rhs/brick4/testvol11
volume create: testvol: failed: Multiple bricks of a *replicate volume* are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior. 
[root@ninja ~]# 
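
For background, glusterd warns when bricks of a volume share a server because that placement weakens fault tolerance. Below is a minimal C sketch, not glusterd's actual implementation, of how such a condition can be detected by comparing the host part of each "host:/path" brick string; all names here are illustrative:

    #include <stdio.h>
    #include <string.h>

    /* Extract the host part of a "host:/path" brick string
     * (illustrative helper, not a glusterd function). */
    static void
    brick_host (const char *brick, char *host, size_t len)
    {
            const char *colon = strchr (brick, ':');
            size_t n = colon ? (size_t) (colon - brick) : strlen (brick);

            if (n >= len)
                    n = len - 1;
            memcpy (host, brick, n);
            host[n] = '\0';
    }

    /* Return 1 if any two bricks share a host, else 0. */
    static int
    bricks_share_host (const char *bricks[], int count)
    {
            char a[256], b[256];

            for (int i = 0; i < count; i++) {
                    brick_host (bricks[i], a, sizeof (a));
                    for (int j = i + 1; j < count; j++) {
                            brick_host (bricks[j], b, sizeof (b));
                            if (strcmp (a, b) == 0)
                                    return 1;
                    }
            }
            return 0;
    }

    int
    main (void)
    {
            const char *bricks[] = {
                    "ninja:/rhs/brick1/testvol1",
                    "transformers:/rhs/brick1/testvol2",
                    "ninja:/rhs/brick2/testvol4",
            };

            printf ("%s\n", bricks_share_host (bricks, 3) ?
                    "multiple bricks on the same server" :
                    "all bricks on distinct servers");
            return 0;
    }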

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Attempt to create a disperse volume with multiple bricks on the same server, without appending 'force' (as in the command above).
2. Observe the wording of the failure message.

Actual results:
The error message says "replicate volume" when creation of a disperse volume fails.

Expected results:
The error message should say "disperse volume" when creation of a disperse volume fails.
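
For illustration only (the exact post-fix wording may differ slightly), the corrected failure message would read along these lines:

    volume create: testvol: failed: Multiple bricks of a disperse volume are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.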

Additional info:

Comment 1 Anand Avati 2015-06-16 10:03:19 UTC
REVIEW: http://review.gluster.org/11250 (glusterd: Correction in Error message for disperse volume create) posted (#1) for review on master by Ashish Pandey (aspandey)

Comment 3 Ashish Pandey 2015-06-16 10:29:33 UTC
Please ignore the previous comment (comment 2).
The patch for this bug is http://review.gluster.org/#/c/11250

Comment 4 Anand Avati 2015-07-07 06:33:33 UTC
COMMIT: http://review.gluster.org/11250 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit 3e1866aee751a8e7870cdce5b171a9007029e63c
Author: Ashish Pandey <aspandey>
Date:   Tue Jun 16 15:18:50 2015 +0530

    glusterd: Correction in Error message for disperse
     volume create
    
     Problem: If all the bricks are on the same server
     and a "disperse" volume is created without using
     "force", the failure message mentions "replicate"
     as the volume type.
    
     Solution: Add a failure message for disperse
     volumes too.
    
    Change-Id: I9e466b1fe9dae8cf556903b1a2c4f0b270159841
    BUG: 1232183
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/11250
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
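
For reference, here is a minimal C sketch, assuming hypothetical names (glusterd's real code lives in its volume-ops handling and differs in structure), of the kind of change the commit describes: choosing the volume-type word in the "bricks on the same server" warning based on the volume type instead of hard-coding "replicate":

    #include <stdio.h>

    /* Illustrative volume types; not glusterd's real enum. */
    enum vol_type { VOL_REPLICATE, VOL_DISPERSE };

    /* Pick the volume-type word for the warning message.
     * Before the fix, "replicate" was used unconditionally. */
    static const char *
    brick_warning_type (enum vol_type type)
    {
            switch (type) {
            case VOL_DISPERSE:
                    return "disperse";
            case VOL_REPLICATE:
            default:
                    return "replicate";
            }
    }

    int
    main (void)
    {
            enum vol_type type = VOL_DISPERSE;

            printf ("volume create failed: Multiple bricks of a %s "
                    "volume are present on the same server. This setup "
                    "is not optimal. Use 'force' at the end of the "
                    "command if you want to override this behavior.\n",
                    brick_warning_type (type));
            return 0;
    }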

Comment 5 Mike McCune 2016-03-28 22:16:42 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

Comment 6 Niels de Vos 2016-06-16 13:12:06 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

