Description of problem:

Correction needed in the CLI output: if multiple bricks of a disperse volume are created on the same server, the warning message says "replicate volume" instead of "disperse volume".

[root@ninja ~]# gluster v create testvol disperse 11 redundancy 3 ninja:/rhs/brick1/testvol1 transformers:/rhs/brick1/testvol2 interstellar:/rhs/brick1/testvol3 ninja:/rhs/brick2/testvol4 transformers:/rhs/brick2/testvol5 interstellar:/rhs/brick2/testvol6 ninja:/rhs/brick3/testvol7 transformers:/rhs/brick3/testvol8 interstellar:/rhs/brick3/testvol9 ninja:/rhs/brick4/testvol10 transformers:/rhs/brick4/testvol11
volume create: testvol: failed: Multiple bricks of a *replicate volume* are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
[root@ninja ~]#

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a disperse volume with multiple bricks placed on the same server, without using 'force' (as in the transcript above).

Actual results:
The message says "replicate volume" when volume creation fails for a disperse-type volume.

Expected results:
The message should say "disperse volume" when volume creation fails for a disperse-type volume.

Additional info:
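For context, here is a minimal standalone sketch of the kind of check that triggers this warning. This is not the actual glusterd source; the function name bricks_share_server and the host list are illustrative only. Within each disperse (or replica) set, the volume is flagged if two bricks resolve to the same server:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool
bricks_share_server (const char **hosts, int brick_count, int set_size)
{
        /* Walk each disperse/replica set and compare hosts pairwise. */
        for (int set = 0; set < brick_count; set += set_size)
                for (int i = set; i < set + set_size && i < brick_count; i++)
                        for (int j = i + 1; j < set + set_size && j < brick_count; j++)
                                if (strcmp (hosts[i], hosts[j]) == 0)
                                        return true;
        return false;
}

int
main (void)
{
        /* Brick layout from the transcript: 11 bricks over 3 servers,
         * so one disperse-11 set necessarily reuses servers. */
        const char *hosts[] = { "ninja", "transformers", "interstellar",
                                "ninja", "transformers", "interstellar",
                                "ninja", "transformers", "interstellar",
                                "ninja", "transformers" };

        if (bricks_share_server (hosts, 11, 11))
                puts ("Multiple bricks of the volume are present on the same server.");
        return 0;
}

With the layout above the check fires (ninja holds four of the eleven bricks in the single disperse set), which is why the command fails without 'force'.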
REVIEW: http://review.gluster.org/11250 (glusterd: Correction in Error message for disperse volume create) posted (#1) for review on master by Ashish Pandey (aspandey)
Please ignore the previous comment; the patch for this bug is http://review.gluster.org/#/c/11250
COMMIT: http://review.gluster.org/11250 committed in master by Krishnan Parthasarathi (kparthas)
------
commit 3e1866aee751a8e7870cdce5b171a9007029e63c
Author: Ashish Pandey <aspandey>
Date: Tue Jun 16 15:18:50 2015 +0530

glusterd: Correction in Error message for disperse volume create

Problem: If all the bricks are on the same server and a "disperse" volume is created without using "force", the failure message refers to a "replicate" volume.

Solution: Add a failure message for disperse volumes too.

Change-Id: I9e466b1fe9dae8cf556903b1a2c4f0b270159841
BUG: 1232183
Signed-off-by: Ashish Pandey <aspandey>
Reviewed-on: http://review.gluster.org/11250
Tested-by: NetBSD Build System <jenkins.org>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
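A hedged, standalone sketch of the fix described in this commit: choose the volume-type word for the warning from the requested cluster type instead of hardcoding "replicate". The enum and function names here are illustrative (the real glusterfs code uses GF_CLUSTER_TYPE_* constants); this is not the committed patch itself.

#include <stdio.h>

/* Loosely mirrors glusterfs's gf_cluster_type; illustrative only. */
typedef enum {
        CLUSTER_TYPE_NONE,
        CLUSTER_TYPE_REPLICATE,
        CLUSTER_TYPE_DISPERSE,
} cluster_type_t;

static const char *
vol_type_word (cluster_type_t type)
{
        switch (type) {
        case CLUSTER_TYPE_REPLICATE:
                return "replicate";
        case CLUSTER_TYPE_DISPERSE:
                return "disperse";
        default:
                return "distribute";
        }
}

int
main (void)
{
        char err_str[256];

        /* Build the warning with the correct volume-type word. */
        snprintf (err_str, sizeof (err_str),
                  "volume create: testvol: failed: Multiple bricks of a "
                  "%s volume are present on the same server.",
                  vol_type_word (CLUSTER_TYPE_DISPERSE));
        puts (err_str); /* now says "disperse volume", not "replicate" */
        return 0;
}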
This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user