+++ This bug was initially created as a clone of Bug #1232183 +++

Description of problem:
A correction needs to be made to the CLI output. If multiple bricks of a disperse volume are placed on the same server, the warning refers to a "replicate volume" instead of a "disperse volume".

[root@ninja ~]# gluster v create testvol disperse 11 redundancy 3 ninja:/rhs/brick1/testvol1 transformers:/rhs/brick1/testvol2 interstellar:/rhs/brick1/testvol3 ninja:/rhs/brick2/testvol4 transformers:/rhs/brick2/testvol5 interstellar:/rhs/brick2/testvol6 ninja:/rhs/brick3/testvol7 transformers:/rhs/brick3/testvol8 interstellar:/rhs/brick3/testvol9 ninja:/rhs/brick4/testvol10 transformers:/rhs/brick4/testvol11
volume create: testvol: failed: Multiple bricks of a *replicate volume* are present on the same server. This setup is not optimal. Use 'force' at the end of the command if you want to override this behavior.
[root@ninja ~]#

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
The message says "replicate volume" when volume creation fails for a disperse-type volume.

Expected results:
The message should say "disperse volume" when volume creation fails for a disperse-type volume.

Additional info:
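A minimal sketch of the kind of change the report asks for: instead of hard-coding "replicate" in the brick-order warning, glusterd would pick the word from the volume type being created. This is not the actual patch from review 11251; the enum values and helper below are illustrative stand-ins (glusterd itself uses its own GF_CLUSTER_TYPE_* constants and error buffers).

#include <stdio.h>

/* Stand-in volume type tags for illustration only. */
enum vol_type { VOL_REPLICATE, VOL_DISPERSE };

/* Choose the word used in the brick-order warning based on the volume type,
 * rather than always printing "replicate". */
static const char *vol_type_word(enum vol_type type)
{
    return (type == VOL_DISPERSE) ? "disperse" : "replicate";
}

int main(void)
{
    char err_str[256];

    snprintf(err_str, sizeof(err_str),
             "Multiple bricks of a %s volume are present on the same server. "
             "This setup is not optimal. Use 'force' at the end of the "
             "command if you want to override this behavior.",
             vol_type_word(VOL_DISPERSE));

    /* For a disperse-type create, the warning now names "disperse". */
    puts(err_str);
    return 0;
}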
REVIEW: http://review.gluster.org/11251 (glusterd: Correction in Error message for disperse volume create) posted (#2) for review on release-3.7 by Ashish Pandey (aspandey)
REVIEW: http://review.gluster.org/11251 (glusterd: Correction in Error message for disperse volume create) posted (#3) for review on release-3.7 by Ashish Pandey (aspandey)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user