Bug 1396109 - Remove-brick status output is showing status of fix-layout instead of original remove-brick status output
Summary: Remove-brick status output is showing status of fix-layout instead of original remove-brick status output
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On: 1389697
Blocks: 1386127
 
Reported: 2016-11-17 13:04 UTC by Nithya Balachandran
Modified: 2017-03-08 10:18 UTC
CC: 4 users

Fixed In Version: glusterfs-3.9.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1389697
Environment:
Last Closed: 2017-03-08 10:18:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Nithya Balachandran 2016-11-17 13:05:55 UTC
Steps to reproduce the issue:

1. On a 2-node cluster, create a volume with 1 brick on each node.
2. From node1, run 
gluster v rebalance <volname> fix-layout start

3. Once the fix-layout has completed, from node1, run 
gluster v remove-brick <volname> <brick on node2> start

4. On node1, run
gluster v remove-brick <volname> <brick on node2> status


This prints the fix-layout status output instead of the remove-brick status.

Running the same command on node2 prints the correct output:
gluster v remove-brick <volname> <brick on node2> status
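
For convenience, the steps above can be run as a script. The following is a sketch, not from the original report: the hostnames (node1/node2), volume name (testvol), and brick paths are placeholder assumptions, and the completion check is deliberately simplistic. Run it from node1 against an already-probed two-node trusted pool.

#!/bin/bash
# Sketch of the reproduction steps above. Placeholder assumptions:
# hostnames node1/node2, volume testvol, bricks /bricks/b1 and
# /bricks/b2. Run from node1.
set -e

VOL=testvol
BRICK2=node2:/bricks/b2

gluster volume create $VOL node1:/bricks/b1 $BRICK2
gluster volume start $VOL

# Step 2: run a fix-layout first.
gluster volume rebalance $VOL fix-layout start
# Wait for the fix-layout to complete (simplistic check: the status
# output reports "completed" for finished nodes).
until gluster volume rebalance $VOL status | grep -q completed; do
    sleep 2
done

# Step 3: start removing the brick hosted on node2.
gluster volume remove-brick $VOL $BRICK2 start

# Step 4: query status from node1 (this node). On affected builds this
# prints the fix-layout status instead of the remove-brick status.
gluster volume remove-brick $VOL $BRICK2 status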

--- Additional comment from Worker Ant on 2016-10-28 06:10:50 EDT ---

REVIEW: http://review.gluster.org/15749 (cli/rebalance: remove brick status is incorrect) posted (#1) for review on master by N Balachandran (nbalacha)

--- Additional comment from Worker Ant on 2016-11-16 23:38:10 EST ---

REVIEW: http://review.gluster.org/15749 (cli/rebalance: remove brick status is incorrect) posted (#2) for review on master by N Balachandran (nbalacha)

--- Additional comment from Worker Ant on 2016-11-17 05:14:19 EST ---

COMMIT: http://review.gluster.org/15749 committed in master by Atin Mukherjee (amukherj) 
------
commit 35b085ba345cafb2b0ee978a4c4475ab0dcba5a6
Author: N Balachandran <nbalacha>
Date:   Fri Oct 28 15:21:52 2016 +0530

    cli/rebalance: remove brick status is incorrect
    
    If a remove brick operation is preceded by a fix-layout,
    running remove-brick status on a node which does not
    contain any of the bricks that were removed displays
    fix-layout status.
    
    The defrag_cmd variable was not updated in glusterd
    for the nodes not hosting removed bricks causing the
    status parsing to go wrong. This is now updated.
    Also made minor modifications to the spacing in
    the fix-layout status output.
    
    Change-Id: Ib735ce26be7434cd71b76e4c33d9b0648d0530db
    BUG: 1389697
    Signed-off-by: N Balachandran <nbalacha>
    Reviewed-on: http://review.gluster.org/15749
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
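
A quick way to see the symptom (and to verify the fix) is to run the same status query on both nodes and compare the output. This is a sketch using the placeholder names from the script above; it is not part of the original report:

# Query remove-brick status from each node (placeholder names).
# On unfixed builds, node1 (which hosts none of the removed bricks)
# prints the fix-layout style status; node2 prints the correct
# remove-brick migration status. With the fix, both nodes match.
ssh node1 gluster volume remove-brick testvol node2:/bricks/b2 status
ssh node2 gluster volume remove-brick testvol node2:/bricks/b2 status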

Comment 2 Worker Ant 2016-11-17 13:13:41 UTC
REVIEW: http://review.gluster.org/15870 (cli/rebalance: remove brick status is incorrect) posted (#1) for review on release-3.9 by N Balachandran (nbalacha)

Comment 3 Worker Ant 2016-11-17 18:15:56 UTC
COMMIT: http://review.gluster.org/15870 committed in release-3.9 by Atin Mukherjee (amukherj) 
------
commit 7579370add44ee7d6cd854584964706f7248c035
Author: N Balachandran <nbalacha>
Date:   Thu Nov 17 18:16:18 2016 +0530

    cli/rebalance: remove brick status is incorrect
    
    If a remove brick operation is preceded by a fix-layout,
    running remove-brick status on a node which does not
    contain any of the bricks that were removed displays
    fix-layout status.
    
    The defrag_cmd variable was not updated in glusterd
    for the nodes not hosting removed bricks causing the
    status parsing to go wrong. This is now updated.
    Also made minor modifications to the spacing in
    the fix-layout status output.
    
    > Change-Id: Ib735ce26be7434cd71b76e4c33d9b0648d0530db
    > BUG: 1389697
    > Signed-off-by: N Balachandran <nbalacha>
    > Reviewed-on: http://review.gluster.org/15749
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: Atin Mukherjee <amukherj>
    (cherry picked from commit 35b085ba345cafb2b0ee978a4c4475ab0dcba5a6)
    
    Change-Id: I3da89c61da07bc5e037527aafc84d184dcd1f764
    BUG: 1396109
    Signed-off-by: N Balachandran <nbalacha>
    Reviewed-on: http://review.gluster.org/15870
    Tested-by: Atin Mukherjee <amukherj>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>

Comment 4 Kaushal 2017-03-08 10:18:51 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.9.1, please open a new bug report.

glusterfs-3.9.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029725.html
[2] https://www.gluster.org/pipermail/gluster-users/

