Bug 1027270 - remove-brick : Improvements needed in remove-brick grammar, command execution output messages and documentation
Product: GlusterFS
Classification: Community
Component: glusterd
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Ravishankar N
Depends On: 1025240
Reported: 2013-11-06 08:01 EST by Ravishankar N
Modified: 2014-04-17 07:50 EDT (History)
4 users

See Also:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1025240
Last Closed: 2014-04-17 07:50:19 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Ravishankar N 2013-11-06 08:01:26 EST
+++ This bug was initially created as a clone of Bug #1025240 +++

Description of problem:
Grammar: Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}

remove-brick can be executed without any options, i.e. none of the options "{start|stop|status|commit|force}" is mandatory. Hence these options should be enclosed in square brackets "[ ]" instead.
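Concretely, the current usage string and the suggested correction (square brackets marking the whole action group as optional) would look like:

```
current  : volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}
suggested: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... [start|stop|status|commit|force]
```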

Command execution output message improvements:
Command 1 :  gluster v remove-brick replica 2 rhs-client13:/rhs/bricks/b3 start

wrong brick type: 2, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}

In this case we have not specified the volume name. The error message should just be the "Usage" line. The additional message "wrong brick type: 2, use <HOSTNAME>:<export-dir-abs-path>" is confusing and makes no sense in this context. Why would the brick type be "<HOSTNAME>:<export-dir-abs-path>"?

Command 2 : gluster v remove-brick vol_rep replica 2 rhs-client13:/rhs/bricks/b3 start

volume remove-brick start: failed: Rebalancing not needed when reducing replica count. Try without the 'start' option

a. Instead of the "Rebalancing not needed" message, it would be more appropriate to say "Migration of data is not needed".

b. "Try without the 'start' option" can be replaced with "Use the 'force' option".

Documentation improvements:

Documentation link : http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1-RC/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Shrinking.html

1. The existing documentation has no information about using the remove-brick command to reduce the replica count, i.e. "gluster volume remove-brick VOLNAME replica <replica_count> BRICK force". The changes that went into the patch "https://code.engineering.redhat.com/gerrit/#/c/11755/" need to be documented.
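For the documentation, a transcript of the replica-reduction case described above might look like the following (VOLNAME and the brick path are illustrative placeholders, not actual output captured for this bug):

```
# Reduce a replica-3 volume to replica 2 by removing one brick.
# No data migration is needed, so 'force' is used instead of 'start'.
gluster volume remove-brick VOLNAME replica 2 HOSTNAME:/export/brick force
```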

Version-Release number of selected component (if applicable):
glusterfs built on Oct 21 2013 14:00:58

--- Additional comment from RHEL Product and Program Management on 2013-10-31 06:09:47 EDT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from Ravishankar N on 2013-11-06 08:00:13 EST ---

Command 1 :  gluster v remove-brick replica 2 rhs-client13:/rhs/bricks/b3 start

wrong brick type: 2, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}
Though the 'wrong brick type 2' log might not make sense at first glance, the 'Usage' log helps interpret the output in a meaningful way, more so for sys-admins. According to the Usage syntax, the word that follows 'gluster v remove-brick' is parsed as the VOLNAME. Here it happens to be 'replica'. If the next phrase is the optional keyword 'replica', the subsequent argument must be an integer (the replica count). Since that is not the case here, the CLI expects the next phrase ('2' in this case) to be a brick path of the format "<HOSTNAME>:<export-dir-abs-path>". If there is a format mismatch, it gives out the error message.
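The parsing order described above can be sketched as follows. This is a hypothetical Python model of the argument-scanning logic, not the actual glusterd C parser; the function name and the brick-path regex are assumptions made for illustration:

```python
import re

def parse_remove_brick(words):
    """Model of: <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}."""
    words = list(words)
    volname = words.pop(0)  # the first word is always taken as VOLNAME
    if words and words[0] == "replica":
        words.pop(0)
        count = words.pop(0)  # the word after the 'replica' keyword must be an integer
        if not count.isdigit():
            raise ValueError("replica count must be an integer, got: %s" % count)
    # every remaining word up to the action keyword must look like a brick path
    brick_re = re.compile(r"^[^:]+:/.+$")  # <HOSTNAME>:<export-dir-abs-path>
    actions = {"start", "stop", "status", "commit", "force"}
    bricks = []
    for w in words:
        if w in actions:
            break
        if not brick_re.match(w):
            raise ValueError(
                "wrong brick type: %s, use <HOSTNAME>:<export-dir-abs-path>" % w)
        bricks.append(w)
    return volname, bricks
```

With the volume name omitted, as in Command 1, the model swallows 'replica' as VOLNAME, the replica branch is never entered (the next word is '2', not the keyword), and '2' then fails the brick-path check, producing exactly the "wrong brick type: 2" message discussed above.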

On a lighter note, the message is akin to a cryptic compiler error which might not make full sense immediately but tells you exactly what the error is :)

The other improvements in the description can be addressed.
Comment 1 Anand Avati 2013-11-06 08:13:01 EST
REVIEW: http://review.gluster.org/6001 (glusterd: modify remove-brick CLI message.) posted (#4) for review on master by Ravishankar N (ravishankar@redhat.com)
Comment 2 Anand Avati 2013-11-08 02:54:32 EST
COMMIT: http://review.gluster.org/6001 committed in master by Vijay Bellur (vbellur@redhat.com) 
commit 11e28d3aedabac01b5cfc219745681426b5cf416
Author: Ravishankar N <ravishankar@redhat.com>
Date:   Wed Nov 6 18:40:25 2013 +0530

    glusterd: modify remove-brick CLI message.
    In the current context "replica_cnt" is used just to know whether the
    specific key exists or not by calling "dict_get_int32", which we can
    replace by "dict_get ()". And changing the log message as it is more
    appropriate to say "migration of data" rather than "rebalance".
    This patch refactors commit 51c6fa7a354826744de98 against BZ 961669
    reviewed on : http://review.gluster.org/5566
    Change-Id: I48eae206a28d4083975e64407ed8fe4539f9c24b
    BUG: 1027270
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Original patch: Susant Palai <spalai@redhat.com>
    Reviewed-on: http://review.gluster.org/6001
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: susant palai <spalai@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Comment 3 Niels de Vos 2014-04-17 07:50:19 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
