Bug 1031887

Summary: Rebalance: Status command shows the node which does not participate in rebalance.
Product: [Community] GlusterFS
Reporter: Kaushal <kaushal>
Component: glusterd
Assignee: Kaushal <kaushal>
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Version: mainline
CC: dpati, dtsang, gluster-bugs, knarra, mmahoney, pprakash, ssampat, vbellur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Story Points: ---
Clone Of: 1019846
Last Closed: 2014-04-17 11:50:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Blocks: 1019846

Description Kaushal 2013-11-19 04:50:59 UTC
+++ This bug was initially created as a clone of Bug #1019846 +++

Description of problem:
Status command shows the node which does not participate in rebalance.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a distributed volume with 2 bricks.

2. FUSE-mount the volume and, from the mount point, create some files:
for i in {1..300}; do dd if=/dev/urandom of=f"$i" bs=10M count=1; done

3. Add a brick to the volume and start rebalance.

4. Once rebalance completes, add another node to the cluster and run the command "gluster vol rebalance <volName> status".

The following output is displayed:
                                    Node Rebalanced-files          size       scanned      failures       skipped         status  run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------    -----------   ---------------
                               localhost                0        0Bytes            10             0             0      completed             0.00
                             10.70.37.61                0        0Bytes            10             0             0      completed             0.00
                            10.70.37.167                0        0Bytes            10             0             0      completed             0.00
                             10.70.37.69                0        0Bytes             0             0             0    not started             0.00
volume rebalance: vol_dis: success: 

5. Output of "gluster volume info":

Volume Name: vol_dis
Type: Distribute
Volume ID: 982f06fb-619b-4e5f-b647-605074c1f468
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.166:/rhs/brick1/b1
Brick2: 10.70.37.61:/rhs/brick1/b2
Brick3: 10.70.37.167:/rhs/brick1/b3
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

From the above it is clear that there are no bricks on the node 10.70.37.69.
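
For convenience, the reproduction steps above can be scripted roughly as below. This is only a sketch: the host names node1..node4, the brick paths, and the mount point are placeholders, not the actual machines from this report.

#!/bin/bash
VOL=vol_dis
MNT=/mnt/$VOL

# 1. Create a distributed volume with 2 bricks and start it
gluster volume create $VOL node1:/bricks/b1 node2:/bricks/b2
gluster volume start $VOL

# 2. FUSE-mount the volume and create some files from the mount point
mkdir -p $MNT
mount -t glusterfs node1:/$VOL $MNT
( cd $MNT && for i in {1..300}; do dd if=/dev/urandom of=f"$i" bs=10M count=1; done )

# 3. Add a third brick and start rebalance
gluster volume add-brick $VOL node3:/bricks/b3
gluster volume rebalance $VOL start

# 4. Once rebalance completes, probe a brick-less node into the
#    cluster and check the status; before the fix, the new node is
#    wrongly listed with status "not started"
gluster peer probe node4
gluster volume rebalance $VOL status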



Actual results:
The node which was not involved in the rebalance also shows up in the status output.

Expected results:
A node which has not participated in the rebalance should not be listed in the status output.
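
For illustration, the expected status output in step 4, derived from the actual output above simply by dropping the non-participating node 10.70.37.69, would be:

                                    Node Rebalanced-files          size       scanned      failures       skipped         status  run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------    -----------   ---------------
                               localhost                0        0Bytes            10             0             0      completed             0.00
                             10.70.37.61                0        0Bytes            10             0             0      completed             0.00
                            10.70.37.167                0        0Bytes            10             0             0      completed             0.00
volume rebalance: vol_dis: success: 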

Additional info:

--- Additional comment from RHEL Product and Program Management on 2013-10-16 19:25:44 IST ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from Dusmant on 2013-10-17 15:32:13 IST ---

Remove-brick status also has exactly the same issue as rebalance. They are probably all related.
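
For reference, the remove-brick variant that exhibits the same kind of output can be exercised as below, reusing the placeholder names from the reproduction sketch above:

# remove-brick migration reports through the same status table, so it
# showed the same spurious "not started" rows before the fix
gluster volume remove-brick vol_dis node1:/bricks/b1 start
gluster volume remove-brick vol_dis node1:/bricks/b1 status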

Comment 1 Kaushal 2013-11-21 07:29:26 UTC
Commits bc9f0bb5ce (cli: List only nodes which have rebalance started in rebalance status) and 3c38ba1e7b (glusterd: Start rebalance only where required) have been merged into master.
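
A quick way to sanity-check the fixed behaviour, again with the placeholder names from the reproduction sketch above:

# With the fix, a freshly probed node that runs no rebalance process
# must not appear in the status table at all
gluster peer probe node4
gluster volume rebalance vol_dis status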

Comment 2 Anand Avati 2013-12-03 23:19:38 UTC
COMMIT: http://review.gluster.org/6337 committed in master by Anand Avati (avati) 
------
commit 916785766777ea74c30df17b6e2c572bc1c9a534
Author: Kaushal M <kaushal>
Date:   Fri Nov 22 13:03:57 2013 +0530

    cli: More checks in rebalance status output
    
    Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
    BUG: 1031887
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/6337
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>

Comment 3 Anand Avati 2013-12-23 08:59:15 UTC
REVIEW: http://review.gluster.org/6561 (cli: More checks in rebalance status output) posted (#1) for review on release-3.5 by Krishnan Parthasarathi (kparthas)

Comment 4 Anand Avati 2013-12-23 14:56:25 UTC
COMMIT: http://review.gluster.org/6561 committed in release-3.5 by Vijay Bellur (vbellur) 
------
commit 3ef4b7eb9d1f4e305e1b7c85ee5bb51d7b18e305
Author: Krishnan Parthasarathi <kparthas>
Date:   Mon Dec 23 14:07:40 2013 +0530

    cli: More checks in rebalance status output
    
    Change-Id: Ibd2edc5608ae6d3370607bff1c626c8347c4deda
    BUG: 1031887
    Signed-off-by: Krishnan Parthasarathi <kparthas>
    Reviewed-on: http://review.gluster.org/6561
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 5 Niels de Vos 2014-04-17 11:50:49 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still present in glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user