Bug 986896 - Output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status' command is ambiguous for 'status' field
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.4.0-beta
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Susant Kumar Palai
Depends On:
Blocks:
Reported: 2013-07-22 07:04 EDT by Susant Kumar Palai
Modified: 2014-04-17 07:43 EDT (History)
CC: 3 users

See Also:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-04-17 07:43:48 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Susant Kumar Palai 2013-07-22 07:04:35 EDT
Description of problem:

In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field value for the Red Hat Storage (RHS) server nodes that do not require any action is given as 'not started'. This is ambiguous: it may be taken to mean that the action has not yet started but will start later, when the real meaning is 'no action required'.

--------------------------------------------------------------------

# gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 start
Remove Brick start successful

# gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6  status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                1             0             9             0    in progress
                                   RHS02                0             0            31             0      completed
                                   RHS03                0             0             0             0    not started
                                   RHS04                0             0             0             0    not started

--------------------------------------------------------------------
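The fix eventually merged (review 5383) takes the approach of omitting non-participating nodes from the table rather than relabelling them. A minimal sketch of that filtering behaviour, using illustrative record layout and helper names rather than the actual gluster CLI internals:

```python
# Nodes whose remove-brick status is "not started" never took part in
# the operation, so the fixed CLI simply leaves them out of the table.
NOT_STARTED = "not started"

def rows_to_display(node_statuses):
    """Keep only nodes that participate in the remove-brick process."""
    return [(node, status) for node, status in node_statuses
            if status != NOT_STARTED]

# Node states as shown in the ambiguous output above.
statuses = [
    ("localhost", "in progress"),
    ("RHS02", "completed"),
    ("RHS03", NOT_STARTED),
    ("RHS04", NOT_STARTED),
]

for node, status in rows_to_display(statuses):
    print(f"{node:<12} {status}")
```

With this filtering, only localhost and RHS02 appear in the status table; RHS03 and RHS04 are dropped entirely instead of being shown as 'not started'.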


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:

In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field value for the Red Hat Storage (RHS) server nodes that do not require any action is given as 'not started'.

Expected results:

In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field value for the Red Hat Storage (RHS) server nodes that do not require any action should be 'no action required', to avoid ambiguity.

Additional info:
Comment 1 Anand Avati 2013-07-24 07:49:13 EDT
REVIEW: http://review.gluster.org/5383 (cli :remove-brick process output leads to ambiguity) posted (#1) for review on master by susant palai (spalai@redhat.com)
Comment 2 Anand Avati 2013-07-24 14:06:01 EDT
COMMIT: http://review.gluster.org/5383 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit e45e0037f6df6a0fab846a83fb2c99bb09417cf4
Author: susant <spalai@redhat.com>
Date:   Wed Jul 24 14:11:55 2013 +0530

    cli :remove-brick process output leads to ambiguity
    
    The output of remove-brick status as "Not started" leads to
    ambiguity. We should not show the status of the server nodes
    which do not participate in the remove-brick process.
    
    Change-Id: I85fea40deb15f3e2dd5487d881f48c9aff7221de
    BUG: 986896
    Signed-off-by: susant <spalai@redhat.com>
    Reviewed-on: http://review.gluster.org/5383
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Comment 3 Niels de Vos 2014-04-17 07:43:48 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
