Description of problem:
A correction is needed in the CLI output: if multiple bricks of a disperse volume are created on the same server, the warning message refers to a "replicate volume" instead of a "disperse volume".
[root@ninja ~]# gluster v create testvol disperse 11 redundancy 3 ninja:/rhs/brick1/testvol1 transformers:/rhs/brick1/testvol2 interstellar:/rhs/brick1/testvol3 ninja:/rhs/brick2/testvol4 transformers:/rhs/brick2/testvol5 interstellar:/rhs/brick2/testvol6 ninja:/rhs/brick3/testvol7 transformers:/rhs/brick3/testvol8 interstellar:/rhs/brick3/testvol9 ninja:/rhs/brick4/testvol10 transformers:/rhs/brick4/testvol11
volume create: testvol: failed: Multiple bricks of a *replicate volume* are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a disperse volume with multiple bricks on the same server, without appending 'force' (as in the command above).

Actual results:
The failure message says "replicate volume" when creating a disperse-type volume fails.

Expected results:
The failure message should say "disperse volume" when creating a disperse-type volume fails.
REVIEW: http://review.gluster.org/11250 ( glusterd: Correction in Error message for disperse volume create) posted (#1) for review on master by Ashish Pandey (email@example.com)
Please ignore the previous comment -2
patch for this bug is http://review.gluster.org/#/c/11250
COMMIT: http://review.gluster.org/11250 committed in master by Krishnan Parthasarathi (firstname.lastname@example.org)
Author: Ashish Pandey <email@example.com>
Date: Tue Jun 16 15:18:50 2015 +0530
glusterd: Correction in Error message for disperse
Problem: If all the bricks are on the same server and a "disperse" volume is created without using "force", the failure message mentions "replicate".
Solution: Add a corresponding failure message for disperse volumes.
Signed-off-by: Ashish Pandey <firstname.lastname@example.org>
Tested-by: NetBSD Build System <email@example.com>
Tested-by: Gluster Build System <firstname.lastname@example.org>
Reviewed-by: Pranith Kumar Karampuri <email@example.com>
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see firstname.lastname@example.org with any questions
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.
glusterfs-3.8.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.