Bug 1040345 - Rebalance: Restarting glusterd after rebalance is completed shows rebalance status as 'in progress' even though no files are being rebalanced
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Nithya Balachandran
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1286154
 
Reported: 2013-12-11 09:05 UTC by senaik
Modified: 2015-11-27 12:09 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1286154
Environment:
Last Closed: 2015-11-27 12:09:07 UTC
Embargoed:



Description senaik 2013-12-11 09:05:51 UTC
Description of problem:
=======================
After rebalance has completed, restarting glusterd and then checking the rebalance status shows rebalance as 'in progress', even though no files are being rebalanced.


Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.4.0.44.1u2rhs

How reproducible:
=================
always


Steps to Reproduce:
===================
1. Create a distribute volume with 3 bricks and start it.
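
A minimal command sketch for this step (the volume name matches the report; hostnames and brick paths are assumptions):

gluster volume create vol3 server1:/bricks/vol3/b1 server2:/bricks/vol3/b2 server3:/bricks/vol3/b3
gluster volume start vol3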

2. Fuse mount the volume and create files.
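
For example (mount point and file sizes are assumptions; the scanned count of 100 in the status output below suggests roughly 100 files):

mount -t glusterfs server1:/vol3 /mnt/vol3
for i in $(seq 1 100); do dd if=/dev/urandom of=/mnt/vol3/file$i bs=1k count=10; done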

3. Add bricks and start rebalance.
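
For example (the added brick's host and path are assumptions):

gluster volume add-brick vol3 server1:/bricks/vol3/b4
gluster volume rebalance vol3 start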

4. After rebalance is completed, stop and start glusterd:

[root@boost d3]# gluster v rebalance vol3 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes           100             0             0            completed               0.00
                             10.70.34.88                0        0Bytes           100             0             0            completed               0.00
                             10.70.34.86                0        0Bytes           100             0             0            completed               0.00
volume rebalance: vol3: success: 

[root@boost d3]# service glusterd stop
Stopping glusterd:                                         [  OK  ]
[root@boost d3]# service glusterd start
Starting glusterd:                                         [  OK  ]


5. Check rebalance status immediately. The status shows rebalance as 'in progress'. [Rebalance was NOT started again after restarting glusterd and no files are being rebalanced.]

gluster v rebalance vol3 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0          in progress               0.00
                             10.70.34.88                0        0Bytes             0             0             0          in progress               0.00
                             10.70.34.86                0        0Bytes           100             0             0            completed               0.00
volume rebalance: vol3: success: 
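
As a possible follow-up diagnostic (a sketch, assuming glusterd persists per-volume rebalance state in node_state.info under its working directory; this file is not referenced in the report), the stored state could be compared before and after the glusterd restart to see whether the persisted status or only the reported status changes:

cat /var/lib/glusterd/vols/vol3/node_state.info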



Actual results:
===============
After restarting glusterd, rebalance status shows 'in progress', even though no files are being rebalanced and no new rebalance start command was executed.

Expected results:
=================
After restarting glusterd, rebalance status should continue to show the previous rebalance operation as completed, unless a new add-brick or rebalance start is executed.


Additional info:

Comment 4 Susant Kumar Palai 2015-11-27 12:09:07 UTC
Cloning this to 3.1, to be fixed in a future release.

