Bug 982104 - Rebalance Status message not showing correct status
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Kaushal
QA Contact: senaik
Keywords: ZStream
Depends On: 1006247
Blocks:
 
Reported: 2013-07-08 03:19 EDT by senaik
Modified: 2015-09-01 08:24 EDT
CC List: 9 users

See Also:
Fixed In Version: glusterfs-3.4.0.44.1u2rhs
Doc Type: Bug Fix
Doc Text:
Previously, the add-brick command reset the rebalance status. As a result, the 'rebalance status' command displayed an incorrect status. With this fix, the 'rebalance status' command works as expected.
Story Points: ---
Clone Of:
Clones: 1006247
Environment:
Last Closed: 2014-02-25 02:32:46 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description senaik 2013-07-08 03:19:29 EDT
Description of problem:
======================== 
After executing a rebalance operation on the volume, add some more bricks and check the rebalance status. It shows 'not started', but still lists the files that were rebalanced in the previous operation.


Version-Release number of selected component (if applicable):
============================================================ 
3.4.0.12rhs.beta3-1.el6rhs.x86_64


How reproducible:

Steps to Reproduce:
=================== 
1. Create a distributed volume

2. Add 2 bricks and start rebalance

3. Check rebalance status:

gluster v rebalance vol_11 status
Node   Rebalanced-files  size    scanned   failures    status run time in secs

localhost   28         280.0MB    305         0      completed      9.00
10.70.34.85 26         260.0MB    278         0      completed      9.00
10.70.34.86 40         400.0MB    344         0      completed      10.00

4. Add 2 more bricks 

5. Check rebalance status (without starting another rebalance operation); a command sketch of steps 1-5 is included after the output below.

gluster v rebalance vol_11 status
Node   Rebalanced-files  size    scanned   failures    status run time in secs

localhost   28         280.0MB    305         0      not started    9.00
10.70.34.85 26         260.0MB    278         0      not started    9.00
10.70.34.86 40         400.0MB    344         0      not started    10.00
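
For reference, a minimal command sketch of steps 1-5 (the volume name, hostnames, and brick paths are illustrative, not taken from the reporter's actual setup):

# 1. Create and start a distributed volume
gluster volume create vol_11 10.70.34.85:/rhs/brick1/b1 10.70.34.86:/rhs/brick1/b2
gluster volume start vol_11

# 2. Add 2 bricks and start rebalance
gluster volume add-brick vol_11 10.70.34.85:/rhs/brick1/b3 10.70.34.86:/rhs/brick1/b4
gluster volume rebalance vol_11 start

# 3. Check rebalance status
gluster volume rebalance vol_11 status

# 4. Add 2 more bricks (no new rebalance is started)
gluster volume add-brick vol_11 10.70.34.85:/rhs/brick1/b5 10.70.34.86:/rhs/brick1/b6

# 5. Check rebalance status again without starting another rebalance
gluster volume rebalance vol_11 status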

Actual results:
==============
Status shows 'not started', whereas the Rebalanced-files column shows the number of files rebalanced in the previous operation.

Expected results:
================ 
If the status shows 'not started', then the other fields (rebalanced files, size, scanned, and run time) should show '0'.

Ideally, if a new rebalance operation has not been started, the status should continue to show the status of the previous rebalance operation.

Additional info:
Comment 3 Sahina Bose 2013-08-28 02:10:12 EDT
Another related issue:

When a gluster rebalance has completed and a brick is added to the volume afterwards, 'gluster volume status all' gives incorrect output:

[root@localhost ~]# gluster volume status all
Status of volume: dv1
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.42.152:/brcks/dvb1				49156	Y	11341
Brick 10.70.42.152:/brcks/dvb2				49157	Y	11351
Brick 10.70.42.152:/brcks/dvb3				49158	Y	27642
NFS Server on localhost					2049	Y	27652
 
           Task                                      ID         Status
           ----                                      --         ------
      Rebalance    4a96c34d-fe5e-48b3-b349-80621abb85f3              0


Here, that task ID belongs to the previous rebalance operation.
The status is incorrectly shown as "Not Started".

This affects the task monitoring in RHSC. Can this be fixed, please?
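
A minimal sequence that reproduces the above (the volume name and added brick path are illustrative):

# Run a rebalance on dv1 and wait for it to complete
gluster volume rebalance dv1 start
gluster volume rebalance dv1 status

# Add a brick after the rebalance has completed
gluster volume add-brick dv1 10.70.42.152:/brcks/dvb4

# The task section now shows the old rebalance task ID with an incorrect 'Not Started' status
gluster volume status all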
Comment 6 Dusmant 2013-10-28 13:48:04 EDT
RHSC has a dependency on this bug, hence moving the priority to High.
Comment 7 senaik 2013-12-27 06:58:23 EST
Version : glusterfs 3.4.0.52rhs
=======

Repeated the steps mentioned in 'Steps to Reproduce'. After adding bricks once the rebalance completed, checking the rebalance status shows the output below:

[root@jay tmp]# gluster v add-brick vol3 10.70.34.89:/rhs/brick1/c7 10.70.34.87:/rhs/brick1/c8
volume add-brick: success
[root@jay tmp]# gluster v rebalance vol3 status
Node          Rebalanced-files    size      scanned   failures   skipped   status      run time in secs
----          ----------------    ----      -------   --------   -------   ------      ----------------
localhost             0           0Bytes       53         0         2      completed         0.00
10.70.34.88           0           0Bytes       54         0         2      completed         0.00
10.70.34.87          15           15.0MB       60         0         0      completed         1.00
10.70.34.89           6           6.0MB        61         0         0      completed         0.00
volume rebalance: vol3: success: 

Checking 'gluster volume status all' shows the Rebalance task as 'completed', with the same task ID as when the rebalance operation was started:

[root@jay tmp]# gluster v rebalance vol4 start
volume rebalance: vol4: success: Starting rebalance on volume vol4 has been successful.
ID: ffc8ce22-6dc4-43a3-b9df-b30d54e18646


Status of volume: vol4
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.34.86:/rhs/brick1/d1			49159	Y	14375
Brick 10.70.34.87:/rhs/brick1/d2			49166	Y	7937
Brick 10.70.34.88:/rhs/brick1/d3			49155	Y	8617
Brick 10.70.34.89:/rhs/brick1/d4			49160	Y	22961
Brick 10.70.34.87:/rhs/brick1/d5			49167	Y	7973
Brick 10.70.34.89:/rhs/brick1/d6			49161	Y	23005
Brick 10.70.34.87:/rhs/brick1/d7			49168	Y	8021
NFS Server on localhost					2049	Y	14535
NFS Server on 10.70.34.88				2049	Y	8693
NFS Server on 10.70.34.87				2049	Y	8039
NFS Server on 10.70.34.89				2049	Y	23017
 
Task Status of Volume vol4
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : ffc8ce22-6dc4-43a3-b9df-b30d54e18646
Status               : completed           
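
The volume and task status above appears to come from the per-volume status command (the exact invocation is not shown in this comment); assuming the standard form:

gluster volume status vol4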
 
Marking the bug as 'Verified'
Comment 8 Pavithra 2014-01-03 00:56:53 EST
Can you please verify if the doc text is technically correct?
Comment 9 Kaushal 2014-01-03 02:11:15 EST
The doc text looks fine.
Comment 11 errata-xmlrpc 2014-02-25 02:32:46 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
