Bug 1031887 - Rebalance : Status command shows the node which does not participate in rebalance.
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Kaushal
Depends On:
Blocks: 1019846
Reported: 2013-11-18 23:50 EST by Kaushal
Modified: 2014-04-17 07:50 EDT
CC List: 8 users

See Also:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1019846
Environment:
Last Closed: 2014-04-17 07:50:49 EDT
Type: Bug


Attachments: None
Description Kaushal 2013-11-18 23:50:59 EST
+++ This bug was initially created as a clone of Bug #1019846 +++

Description of problem:
Status command shows the node which does not participate in rebalance.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a distributed volume with 2 bricks.

2. FUSE-mount the volume and create some files:
for i in {1..300}; do dd if=/dev/urandom of=f"$i" bs=10M count=1; done

3. Add a brick to the volume and start a rebalance.

4. Once the rebalance completes, add another node to the cluster and run the command "gluster vol rebalance <volName> status".

The following is the output displayed.
Node             Rebalanced-files      size   scanned   failures   skipped        status   run time in secs
---------        ----------------   -------   -------   --------   -------   -----------   ----------------
localhost                       0    0Bytes        10          0         0     completed               0.00
10.70.37.61                     0    0Bytes        10          0         0     completed               0.00
10.70.37.167                    0    0Bytes        10          0         0     completed               0.00
10.70.37.69                     0    0Bytes         0          0         0   not started               0.00
volume rebalance: vol_dis: success: 

5. Gluster volume info output.

Volume Name: vol_dis
Type: Distribute
Volume ID: 982f06fb-619b-4e5f-b647-605074c1f468
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.166:/rhs/brick1/b1
Brick2: 10.70.37.61:/rhs/brick1/b2
Brick3: 10.70.37.167:/rhs/brick1/b3
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

From the above it is clear that node 10.70.37.69 hosts no bricks of this volume.
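Until a fixed build is deployed, the spurious rows can be filtered on the client side. A minimal sketch, assuming the status column contains the literal string "not started" for non-participating nodes, exactly as in the output above (the helper name is hypothetical, not part of the gluster CLI):

```shell
# Hypothetical workaround: drop "not started" rows from rebalance status
# output. Relies on the status string shown in the output above; adjust
# the pattern if the CLI wording differs in your version.
filter_idle_nodes() {
    awk '!/not started/'
}

# Usage against a live cluster:
#   gluster volume rebalance vol_dis status | filter_idle_nodes
```

This only hides the cosmetic rows; it does not change which nodes glusterd actually starts the rebalance process on.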



Actual results:
The node that was not involved in the rebalance also appears in the status output.

Expected results:
A node that did not participate in the rebalance should not be listed in the status output.

Additional info:

--- Additional comment from RHEL Product and Program Management on 2013-10-16 19:25:44 IST ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from Dusmant on 2013-10-17 15:32:13 IST ---

Remove-brick status has exactly the same issue as rebalance status. The problems are probably related.
Comment 1 Kaushal 2013-11-21 02:29:26 EST
Commits bc9f0bb5ce (cli: List only nodes which have rebalance started in rebalance status) and 3c38ba1e7b (glusterd: Start rebalance only where required) have been merged into master.
Comment 2 Anand Avati 2013-12-03 18:19:38 EST
COMMIT: http://review.gluster.org/6337 committed in master by Anand Avati (avati@redhat.com) 
------
commit 916785766777ea74c30df17b6e2c572bc1c9a534
Author: Kaushal M <kaushal@redhat.com>
Date:   Fri Nov 22 13:03:57 2013 +0530

    cli: More checks in rebalance status output
    
    Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
    BUG: 1031887
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/6337
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
Comment 3 Anand Avati 2013-12-23 03:59:15 EST
REVIEW: http://review.gluster.org/6561 (cli: More checks in rebalance status output) posted (#1) for review on release-3.5 by Krishnan Parthasarathi (kparthas@redhat.com)
Comment 4 Anand Avati 2013-12-23 09:56:25 EST
COMMIT: http://review.gluster.org/6561 committed in release-3.5 by Vijay Bellur (vbellur@redhat.com) 
------
commit 3ef4b7eb9d1f4e305e1b7c85ee5bb51d7b18e305
Author: Krishnan Parthasarathi <kparthas@redhat.com>
Date:   Mon Dec 23 14:07:40 2013 +0530

    cli: More checks in rebalance status output
    
    Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
    BUG: 1031887
    Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-on: http://review.gluster.org/6561
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Comment 5 Niels de Vos 2014-04-17 07:50:49 EDT
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
