Bug 1089668 - DHT - rebalance - gluster volume rebalance <volname> status shows output even though User hasn't run rebalance on that volume (it shows remove-brick status)
Summary: DHT - rebalance - gluster volume rebalance <volname> status shows output even...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: pre-release
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Gaurav Kumar Garg
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-04-21 13:10 UTC by Gaurav Kumar Garg
Modified: 2016-06-05 23:37 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.6.0beta1
Clone Of: 951488
Environment:
Last Closed: 2014-11-11 08:30:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments:

Description Gaurav Kumar Garg 2014-04-21 13:10:21 UTC
+++ This bug was initially created as a clone of Bug #951488 +++

Description of problem:
'gluster volume rebalance <volname> status' shows output even though the user has not run rebalance on that volume; what it actually shows is the remove-brick status.

Version-Release number of selected component (if applicable):
3.3.0.7rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a distributed volume with two or more bricks (sub-volumes) and start it (see the sketch after these steps for an assumed create/start command).
[root@cutlass issue9]# gluster volume info issue11
 
Volume Name: issue11
Type: Distribute
Volume ID: c6a094bc-649e-48c4-ac6c-f015e55a0a40
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: mia.lab.eng.blr.redhat.com:/brick1/11
Brick2: fan.lab.eng.blr.redhat.com:/brick1/11
Brick3: fred.lab.eng.blr.redhat.com:/brick1/11



2. FUSE-mount the volume on a client using "mount -t glusterfs <server>:/<volume> <mount-point>":

mount -t glusterfs XXX:/issue11 /mnt/issue11

3. From the mount point, create some files at the root of the volume:
 for i in `seq 20 50` ; do touch /mnt/issue11/n$i ; done

4. Remove a brick (or bricks) from the volume:
gluster volume remove-brick issue11 fred.lab.eng.blr.redhat.com:/brick1/11 start


5. Do not run the rebalance command, but check the status of the rebalance command:

[root@cutlass issue9]# gluster volume rebalance issue11 status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                0            0            0            0    not started
              mia.lab.eng.blr.redhat.com                0            0            0            0    not started
                             10.70.34.91                0            0            0            0    not started
             fred.lab.eng.blr.redhat.com                7            0           31            0      completed
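
For reference, a minimal sketch of how the distributed volume from step 1 could have been created and started. The create/start commands were not captured in the original report and are an assumption; the brick paths are taken from the volume info output above.

# Assumed commands (not in the original report): create a 3-brick distributed volume and start it
gluster volume create issue11 mia.lab.eng.blr.redhat.com:/brick1/11 fan.lab.eng.blr.redhat.com:/brick1/11 fred.lab.eng.blr.redhat.com:/brick1/11
gluster volume start issue11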

  
Actual results:
Even though the user has not run the rebalance command, the rebalance status command shows output (the status of the remove-brick operation).


Expected results:
1) As part of remove-brick (to migrate data), a rebalance process is run in the background, but its status should not be shown when the user checks the rebalance status; see the example after this list for the command intended for that purpose.

2) The status output is misleading: for all other bricks it says 'not started'. Does that mean the process has not started yet while a rebalance is still running in the background?
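
For checking the data migration triggered by remove-brick, the gluster CLI already provides a dedicated status command; a minimal example using the volume and brick from the steps above:

# Check the progress of the remove-brick data migration instead of rebalance status
gluster volume remove-brick issue11 fred.lab.eng.blr.redhat.com:/brick1/11 status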



Additional info:

--- Additional comment from Scott Haines on 2013-09-27 13:07:45 EDT ---

Targeting for 3.0.0 (Denali) release.

Comment 1 Anand Avati 2014-04-21 13:46:31 UTC
REVIEW: http://review.gluster.org/7517 (glusterd: rebalance status does not show rebalance status when user has not started rebalance) posted (#1) for review on master by Gaurav Kumar Garg (ggarg)

Comment 2 Anand Avati 2014-04-22 07:25:42 UTC
REVIEW: http://review.gluster.org/7517 (glusterd: Differentiate rebalance status and remove-brick status messages) posted (#2) for review on master by Gaurav Kumar Garg (ggarg)

Comment 3 Anand Avati 2014-04-22 08:57:58 UTC
REVIEW: http://review.gluster.org/7517 (glusterd: Differentiate rebalance status and remove-brick status messages) posted (#3) for review on master by Gaurav Kumar Garg (ggarg)

Comment 4 Anand Avati 2014-04-22 12:05:15 UTC
REVIEW: http://review.gluster.org/7517 (glusterd: Differentiate rebalance status and remove-brick status messages) posted (#4) for review on master by Gaurav Kumar Garg (ggarg)

Comment 5 Anand Avati 2014-04-22 12:26:02 UTC
REVIEW: http://review.gluster.org/7517 (glusterd: Differentiate rebalance status and remove-brick status messages) posted (#5) for review on master by Gaurav Kumar Garg (ggarg)

Comment 6 Anand Avati 2014-04-25 08:40:57 UTC
REVIEW: http://review.gluster.org/7517 (glusterd: Differentiate rebalance status and remove-brick status messages) posted (#6) for review on master by Gaurav Kumar Garg (ggarg)

Comment 7 Anand Avati 2014-05-02 16:31:53 UTC
COMMIT: http://review.gluster.org/7517 committed in master by Vijay Bellur (vbellur) 
------
commit dd5e318e020fab5914567885c1b83815b39d46f9
Author: ggarg <ggarg>
Date:   Mon Apr 21 18:59:00 2014 +0530

    glusterd: Differentiate rebalance status and remove-brick status messages
    
    Previously, when the user triggered 'gluster volume remove-brick VOLNAME
    BRICK start', the command 'gluster volume rebalance <volname> status'
    showed output even though the user had not triggered "rebalance start";
    likewise, when the user triggered 'gluster volume rebalance <volname>
    start', the command 'gluster volume remove-brick VOLNAME BRICK status'
    showed output even though the user had not run remove-brick start.
    
    The regression test failed with the previous patch; test/dht.rc and
    test/bug/bug-973073 were edited to avoid the regression test failure.
    
    With this fix, rebalance and remove-brick status messages are now
    differentiated.
    
    Signed-off-by: ggarg <ggarg>
    
    Change-Id: I7f92ad247863b9f5fbc0887cc2ead07754bcfb4f
    BUG: 1089668
    Reviewed-on: http://review.gluster.org/7517
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Humble Devassy Chirammal <humble.devassy>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 8 Gaurav Kumar Garg 2014-05-26 07:33:54 UTC
With this code change, if the user runs "gluster volume rebalance <volname> status" and rebalance has not been started, an error message is shown saying that rebalance has not been started.
Similarly for remove-brick: if the user runs "gluster volume remove-brick <volname> <brick> status" and remove-brick has not been started, an error message is shown saying that remove-brick has not been started.
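
A minimal illustration of the post-fix behaviour described above (the exact error text is not quoted here and may differ):

# Rebalance was never started on this volume; with the fix, this is
# expected to fail with an error saying rebalance has not been started.
gluster volume rebalance issue11 status

# Likewise, if no remove-brick is in progress for this brick, with the fix
# this is expected to fail with an error saying remove-brick has not been started.
gluster volume remove-brick issue11 fred.lab.eng.blr.redhat.com:/brick1/11 status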

Comment 9 Atin Mukherjee 2014-05-26 07:45:08 UTC
In addition to the previous comment, note that remove-brick/rebalance start always begins a new transaction for glusterd. Hence, if a remove-brick/rebalance transaction is stopped midway, a subsequent remove-brick/rebalance status will not show the status of the previous transaction; it will error out saying that remove-brick/rebalance has not been started.
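
For example (a sketch of the transaction behaviour described above; exact messages may vary):

gluster volume rebalance issue11 start
gluster volume rebalance issue11 stop     # stops the rebalance transaction midway
# A later status query belongs to a new transaction; with the fix it errors out
# saying rebalance is not started instead of showing the old transaction's status.
gluster volume rebalance issue11 status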

Comment 10 Anand Avati 2014-06-12 13:24:00 UTC
REVIEW: http://review.gluster.org/8050 (tests: fix for spurious failure:) posted (#1) for review on master by Gaurav Kumar Garg (ggarg)

Comment 11 Anand Avati 2014-06-12 15:32:51 UTC
COMMIT: http://review.gluster.org/8050 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 997d2fbd2dd31c0b16bf9323757cdc97200f6629
Author: ggarg <ggarg>
Date:   Thu Jun 12 18:51:51 2014 +0530

    tests: fix for spurious failure:
    
    Change-Id: I39cc497f12c83aa055acb6e88e4c3e1e8774e577
    BUG: 1089668
    Signed-off-by: ggarg <ggarg>
    Reviewed-on: http://review.gluster.org/8050
    Reviewed-by: Sachin Pandit <spandit>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>

Comment 12 SATHEESARAN 2014-09-19 09:37:22 UTC
Gaurav, 

It was decided to leave the bug status as MODIFIED when you are done with the bug, so I have moved this bug to the MODIFIED state.

Comment 13 Niels de Vos 2014-09-22 12:38:19 UTC
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether the release resolves this bug report for you. If the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 14 Niels de Vos 2014-11-11 08:30:17 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

