Bug 1040371 - For the nodes which do not participate in remove-brick, remove-brick status gives the output of rebalance.
Summary: For the nodes which do not participate in remove-brick, remove-brick status gives the output of rebalance.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact:
URL:
Whiteboard:
Depends On: 1024725
Blocks:
 
Reported: 2013-12-11 10:16 UTC by Krutika Dhananjay
Modified: 2014-11-11 08:25 UTC (History)
CC List: 11 users

Fixed In Version: glusterfs-3.6.0beta1
Clone Of: 1024725
Environment:
Last Closed: 2014-11-11 08:25:32 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments:

Description Krutika Dhananjay 2013-12-11 10:16:21 UTC
+++ This bug was initially created as a clone of Bug #1024725 +++

Description of problem:
For the nodes which do not participate in remove-brick, remove-brick status gives the output of rebalance.

How reproducible:
Always

Steps to Reproduce:
0. Create a 2-node cluster.
1. Create a distribute volume with 2 bricks, one on each server.
2. Mount the volume and create data in it.
3. Add a new brick to the volume.
4. Start rebalance on the volume and then stop it.
5. Now rebalance status shows "stopped" for the nodes where rebalance was running. The following is the output:

[root@localhost ~]# gluster vol rebalance vol_dis_rep status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                2         2.0GB            61             0            20            completed              66.00
                            10.70.37.140                0        0Bytes            60             0             0            completed               0.00
                             10.70.37.75                0        0Bytes             0             0             0          not started               0.00
                             10.70.37.43                0        0Bytes             0             0             0              stopped               0.00
volume rebalance: vol_dis_rep: success: 

6. Now start remove-brick.
7. Once started, check the output. The following is what it displays (a consolidated reproducer sketch of all the steps follows after this output).

[root@localhost ~]# gluster vol remove-brick vol_dis_rep 10.70.37.108:/rhs/brick3/b5 10.70.37.140:/rhs/brick3/b6  status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                2         2.0GB            61             0             0      completed            66.00
                            10.70.37.140                0        0Bytes            60             0             0      completed             0.00
                             10.70.37.75                0        0Bytes             0             0             0    not started             0.00
                             10.70.37.43                0        0Bytes             0             0             0        stopped             0.00
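
The steps above can be condensed into a rough reproducer script. This is only a sketch of the procedure described in this report, assuming a two-node setup; the host names (server1, server2), volume name (testvol), brick paths, mount point and file counts below are illustrative placeholders, not the ones from the outputs above.

#!/bin/bash
# Reproducer sketch -- all host names, the volume name and brick paths are placeholders.
set -e

# Steps 0/1: two peers in the cluster, distribute volume with one brick per server.
gluster peer probe server2
gluster volume create testvol server1:/bricks/b1 server2:/bricks/b2 force
gluster volume start testvol

# Step 2: mount the volume and create some data on it.
mkdir -p /mnt/testvol
mount -t glusterfs server1:/testvol /mnt/testvol
for i in $(seq 1 100); do
    dd if=/dev/urandom of=/mnt/testvol/file$i bs=1M count=1
done

# Step 3: add a new brick to the volume.
gluster volume add-brick testvol server1:/bricks/b3 force

# Steps 4/5: start rebalance, stop it, then check its status.
gluster volume rebalance testvol start
gluster volume rebalance testvol stop
gluster volume rebalance testvol status

# Steps 6/7: start remove-brick on the newly added brick and check its status.
gluster volume remove-brick testvol server1:/bricks/b3 start
gluster volume remove-brick testvol server1:/bricks/b3 status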



Actual results:
For the nodes on which remove-brick is not started, it shows the output of rebalance status.

Expected results:
For the nodes on which remove-brick is not started, it should show the status as "not started", or the nodes which do not participate in the remove-brick should not be shown in the status output.

Comment 1 Anand Avati 2013-12-11 11:16:26 UTC
REVIEW: http://review.gluster.org/6482 (glusterd: Fix incorrect remove-brick status) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

Comment 2 Anand Avati 2013-12-16 12:46:08 UTC
COMMIT: http://review.gluster.org/6482 committed in master by Vijay Bellur (vbellur) 
------
commit 7f70a9d2b2a0c3141ccdabb79401d39c871e47a9
Author: Krutika Dhananjay <kdhananj>
Date:   Mon Dec 9 17:12:49 2013 +0530

    glusterd: Fix incorrect remove-brick status
    
    PROBLEM:
    
    'remove-brick status' was reported to be showing the status
    of a previous rebalance op that was aborted, on the node which
    doesn't participate in the remove-brick operation.
    
    FIX:
    
    Unconditionally reset defrag status to NOT_STARTED whenever a
    remove-brick or a rebalance op is attempted.
    
    Change-Id: Iddf3a14a2ef352e77e0f690fe65aa36ec3011257
    BUG: 1040371
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/6482
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Reviewed-by: Kaushal M <kaushal>
    Reviewed-by: Vijay Bellur <vbellur>
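
Since the fix resets the defrag status to NOT_STARTED whenever a remove-brick or rebalance op is attempted, the behaviour can be re-checked by repeating the last two steps of the reproducer sketch in the description. As before, the volume name, host and brick path are illustrative placeholders:

gluster volume remove-brick testvol server1:/bricks/b3 start
gluster volume remove-brick testvol server1:/bricks/b3 status
# With the fix applied, the status rows for peers that host none of the removed
# bricks should read "not started" rather than the leftover figures from the
# earlier, aborted rebalance run.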

Comment 3 Niels de Vos 2014-09-22 12:33:35 UTC
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 4 Niels de Vos 2014-11-11 08:25:32 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

