Bug 1019846 - Rebalance: Status command shows the node which does not participate in rebalance.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Kaushal
QA Contact: senaik
Keywords: ZStream
Duplicates: 1034643
Depends On: 1031887
Blocks:
Reported: 2013-10-16 09:38 EDT by RamaKasturi
Modified: 2015-09-01 08:23 EDT
CC List: 12 users

See Also:
Fixed In Version: glusterfs-3.4.0.44.1u2rhs
Doc Type: Bug Fix
Doc Text:
Previously, the rebalance status command would also display peers that did not have any associated bricks. With this fix, the rebalance status command works as expected.
Story Points: ---
Clone Of:
Clones: 1031887
Environment:
Last Closed: 2014-02-25 02:52:52 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
Cloudforms Team: ---


Attachments: None
Description RamaKasturi 2013-10-16 09:38:34 EDT
Description of problem:
The rebalance status command shows a node that does not participate in the rebalance.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a distributed volume with 2 bricks.

2. Fuse mount the volume and create some files:
for i in {1..300}; do dd if=/dev/urandom of=f"$i" bs=10M count=1; done

3. Add a brick to the volume and start rebalance.

4. Once rebalance completes, add another node and run the command "gluster vol rebalance <volName> status".

The following output is displayed:
Node            Rebalanced-files    size      scanned    failures    skipped    status         run time in secs
------------    ----------------    ------    -------    --------    -------    -----------    ----------------
localhost       0                   0Bytes    10         0           0          completed      0.00
10.70.37.61     0                   0Bytes    10         0           0          completed      0.00
10.70.37.167    0                   0Bytes    10         0           0          completed      0.00
10.70.37.69     0                   0Bytes    0          0           0          not started    0.00
volume rebalance: vol_dis: success:
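The reproduction steps above can be condensed into a shell sketch. The host IPs and brick paths are taken from the volume info below; the mount point is illustrative:

```shell
# 1. Create and start a distributed volume with 2 bricks
gluster volume create vol_dis 10.70.37.166:/rhs/brick1/b1 10.70.37.61:/rhs/brick1/b2
gluster volume start vol_dis

# 2. Fuse mount the volume and create some files
mount -t glusterfs 10.70.37.166:/vol_dis /mnt/vol_dis
cd /mnt/vol_dis
for i in {1..300}; do dd if=/dev/urandom of=f"$i" bs=10M count=1; done

# 3. Add a brick and start rebalance
gluster volume add-brick vol_dis 10.70.37.167:/rhs/brick1/b3
gluster volume rebalance vol_dis start

# 4. After rebalance completes, probe a new peer that has no bricks,
#    then check the status output
gluster peer probe 10.70.37.69
gluster volume rebalance vol_dis status
```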

5. Gluster volume info output.

Volume Name: vol_dis
Type: Distribute
Volume ID: 982f06fb-619b-4e5f-b647-605074c1f468
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.166:/rhs/brick1/b1
Brick2: 10.70.37.61:/rhs/brick1/b2
Brick3: 10.70.37.167:/rhs/brick1/b3
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

From the above it is clear that there are no bricks on the node 10.70.37.69.
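The expected behaviour can be illustrated with a small shell sketch (hypothetical illustration, not glusterd code): a peer should appear in the status output only if it hosts at least one brick of the volume.

```shell
# Bricks of vol_dis (from the volume info above) and the peers in the cluster
bricks="10.70.37.166:/rhs/brick1/b1
10.70.37.61:/rhs/brick1/b2
10.70.37.167:/rhs/brick1/b3"
peers="10.70.37.166 10.70.37.61 10.70.37.167 10.70.37.69"

# Hosts that actually carry a brick of this volume
brick_hosts=$(printf '%s\n' "$bricks" | cut -d: -f1 | sort -u)

# Only these peers should be listed by "gluster vol rebalance ... status"
for p in $peers; do
    if printf '%s\n' "$brick_hosts" | grep -qx "$p"; then
        echo "list: $p"
    else
        echo "skip: $p (no bricks)"
    fi
done
```

With the fix, 10.70.37.69 is filtered out in exactly this way, matching the verified output in comment 5.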



Actual results:
The node which was not involved in rebalance also comes up in the status output.

Expected results:
The node which has not participated in rebalance should not be listed as part of the status output.

Additional info:
Comment 2 Dusmant 2013-10-17 06:02:13 EDT
Remove-brick status also has the exact same issue as rebalance. They are probably all related.
Comment 4 Vijaikumar Mallikarjuna 2013-11-26 06:03:03 EST
*** Bug 1034643 has been marked as a duplicate of this bug. ***
Comment 5 senaik 2013-12-11 06:51:41 EST
Version: 3.4.0.44.1u2rhs

The node which is not participating in rebalance does not come up in the status output.

Steps : 
======
gluster peer status
Number of Peers: 3

Hostname: 10.70.34.88
Uuid: b88e9ce9-504f-45ba-8a71-206c3c6df6f9
State: Peer in Cluster (Connected)

Hostname: 10.70.34.85
Uuid: b2461aa6-24bb-4d70-b43b-d6a73ab84698
State: Peer in Cluster (Connected)

Hostname: 10.70.34.87
Uuid: 2972dc01-4c8b-4d73-916b-8797a65a4e51
State: Peer in Cluster (Connected)


1) Create a distribute/distributed-replicate volume with 3 bricks and start it such that there are no bricks on one of the peers.

gluster v create vol3 10.70.34.86:/rhs/brick1/d1 10.70.34.88:/rhs/brick1/d2 10.70.34.85:/rhs/brick1/d3
volume create: vol3: success: please start the volume to access data
[root@boost c3]# gluster v start vol3
volume start: vol3: success

2) Fuse/NFS mount the volume and create some files.

3) Add bricks to the volume and start rebalance.

gluster v add-brick vol3 10.70.34.86:/rhs/brick1/d4 10.70.34.88:/rhs/brick1/d5
volume add-brick: success
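Step 3 mentions starting the rebalance; only the add-brick command is shown above, so the implied start command would be:

```shell
# start rebalancing vol3 after the add-brick (implied by step 3)
gluster volume rebalance vol3 start
```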

4) Check rebalance status:

[root@boost d3]# gluster v rebalance vol3 status
Node           Rebalanced-files    size      scanned    failures    skipped    status       run time in secs
-----------    ----------------    ------    -------    --------    -------    ---------    ----------------
localhost      0                   0Bytes    106        0           13         completed    0.00
10.70.34.88    0                   0Bytes    106        0           15         completed    0.00
10.70.34.86    10                  10.0MB    110        0           0          completed    1.00
volume rebalance: vol3: success:

The node which is not participating in rebalance is not shown in the rebalance status.

5) Restarted glusterd and checked rebalance status again; the output is the same.

Moving the bug to 'Verified' state
Comment 6 Pavithra 2014-01-03 01:13:53 EST
Please verify the doc text for technical accuracy.
Comment 7 Kaushal 2014-01-03 02:13:51 EST
The doc text looks okay.
Comment 9 errata-xmlrpc 2014-02-25 02:52:52 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
