Bug 986896

Summary: Output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status' command is ambiguous for 'status' field
Product: [Community] GlusterFS
Reporter: Susant Kumar Palai <spalai>
Component: cli
Assignee: Susant Kumar Palai <spalai>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3.4.0-beta
CC: gluster-bugs, nsathyan, spalai
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-04-17 11:43:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Susant Kumar Palai 2013-07-22 11:04:35 UTC
Description of problem:

In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field for the Red Hat Storage (RHS) server nodes that do not require any action (because they host none of the bricks being removed) is shown as 'not started'. This is ambiguous: it can be read as meaning the action has not started yet but will start later, when the real meaning is 'no action required'.

--------------------------------------------------------------------

# gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 start
Remove Brick start successful

# gluster volume remove-brick RHS_VM_imagestore RHS01:/brick6 RHS02:/brick6 status
                               Node  Rebalanced-files       size    scanned   failures        status
                          ---------  ----------------  ---------  ---------  ---------  ------------
                          localhost                 1          0          9          0   in progress
                              RHS02                 0          0         31          0     completed
                              RHS03                 0          0          0          0   not started
                              RHS04                 0          0          0          0   not started

--------------------------------------------------------------------


Version-Release number of selected component (if applicable):

glusterfs 3.4.0-beta

How reproducible:


Steps to Reproduce:
1. Start removing bricks from a volume: gluster volume remove-brick <VOLUME> <BRICK(s)> start
2. Check the progress: gluster volume remove-brick <VOLUME> <BRICK(s)> status
3. Look at the 'status' field for nodes that host none of the bricks being removed.
  
Actual results:

In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field for the Red Hat Storage (RHS) server nodes that do not require any action is shown as 'not started'.

Expected results:

In the output of 'gluster volume remove-brick <VOLUME> <BRICK(s)> status', the status field for the RHS server nodes that do not require any action should read 'no action required', to avoid ambiguity.
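
For illustration only (this is not actual CLI output), the same run would then read something like:

                               Node  Rebalanced-files       size    scanned   failures              status
                          ---------  ----------------  ---------  ---------  ---------  ------------------
                          localhost                 1          0          9          0         in progress
                              RHS02                 0          0         31          0           completed
                              RHS03                 0          0          0          0  no action required
                              RHS04                 0          0          0          0  no action required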

Additional info:

Comment 1 Anand Avati 2013-07-24 11:49:13 UTC
REVIEW: http://review.gluster.org/5383 (cli :remove-brick process output leads to ambiguity) posted (#1) for review on master by susant palai (spalai)

Comment 2 Anand Avati 2013-07-24 18:06:01 UTC
COMMIT: http://review.gluster.org/5383 committed in master by Vijay Bellur (vbellur) 
------
commit e45e0037f6df6a0fab846a83fb2c99bb09417cf4
Author: susant <spalai>
Date:   Wed Jul 24 14:11:55 2013 +0530

    cli :remove-brick process output leads to ambiguity
    
    The output of remove-brick status as "Not started" leads to
    ambiguity. We should not show the status of the Server nodes
    which do not participate in the remove-brick process.
    
    Change-Id: I85fea40deb15f3e2dd5487d881f48c9aff7221de
    BUG: 986896
    Signed-off-by: susant <spalai>
    Reviewed-on: http://review.gluster.org/5383
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
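
For context, the approach described in the commit message is simply to skip non-participating nodes when the CLI prints the status table. Below is a minimal, self-contained sketch of that filtering idea; all names, types, and the table layout are hypothetical stand-ins, not the actual GlusterFS CLI code from the patch.

/* Hedged sketch only: enum values, struct fields, and formatting are
 * hypothetical, not the real cli code changed by the patch above. */
#include <stdio.h>
#include <stddef.h>

enum defrag_status {
        DEFRAG_NOT_STARTED = 0,   /* node hosts none of the removed bricks */
        DEFRAG_IN_PROGRESS,
        DEFRAG_COMPLETED,
};

struct node_entry {
        const char        *hostname;
        enum defrag_status status;
        unsigned long      rebalanced_files;
        unsigned long      scanned;
        unsigned long      failures;
};

static const char *
status_str (enum defrag_status s)
{
        switch (s) {
        case DEFRAG_IN_PROGRESS: return "in progress";
        case DEFRAG_COMPLETED:   return "completed";
        default:                 return "not started";
        }
}

/* Print one row per node that actually takes part in the remove-brick
 * operation; non-participating nodes are skipped instead of being
 * reported as "not started". */
static void
print_remove_brick_status (const struct node_entry *nodes, size_t count)
{
        printf ("%20s %16s %12s %12s %15s\n",
                "Node", "Rebalanced-files", "scanned", "failures", "status");
        for (size_t i = 0; i < count; i++) {
                if (nodes[i].status == DEFRAG_NOT_STARTED)
                        continue;
                printf ("%20s %16lu %12lu %12lu %15s\n",
                        nodes[i].hostname, nodes[i].rebalanced_files,
                        nodes[i].scanned, nodes[i].failures,
                        status_str (nodes[i].status));
        }
}

int
main (void)
{
        /* Sample data mirroring the output shown in the bug description. */
        const struct node_entry nodes[] = {
                { "localhost", DEFRAG_IN_PROGRESS, 1,  9, 0 },
                { "RHS02",     DEFRAG_COMPLETED,   0, 31, 0 },
                { "RHS03",     DEFRAG_NOT_STARTED, 0,  0, 0 },
                { "RHS04",     DEFRAG_NOT_STARTED, 0,  0, 0 },
        };

        print_remove_brick_status (nodes, sizeof (nodes) / sizeof (nodes[0]));
        return 0;
}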

Comment 3 Niels de Vos 2014-04-17 11:43:48 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user