Bug 1046908 - [Rebalance]: on restarting glusterd, the completed rebalance is starting again on that node
Summary: [Rebalance]: on restarting glusterd, the completed rebalance is starting again on that node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Susant Kumar Palai
QA Contact: shylesh
URL:
Whiteboard:
Duplicates: 996003 1136798
Depends On:
Blocks: 923774 1075087 1136310 1136798
 
Reported: 2013-12-27 11:02 UTC by senaik
Modified: 2021-09-09 11:32 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.6.0.14-1.el6rhs
Doc Type: Bug Fix
Doc Text:
Previously, glusterd did not clear the persisted rebalance state after a rebalance operation completed. As a result, rebalance processes that had already completed were started again after a node reboot or a glusterd restart. With this fix, completed rebalance processes are not restarted.
Clone Of:
Cloned to: 1075087 1136310
Environment:
Last Closed: 2014-09-22 19:31:00 UTC
Embargoed:




Links:
Red Hat Product Errata RHEA-2014:1278 (SHIPPED_LIVE, priority: normal) - Red Hat Storage Server 3.0 bug fix and enhancement update - last updated 2014-09-22 23:26:55 UTC

Description senaik 2013-12-27 11:02:45 UTC
Description of problem:
=======================
After adding a brick and restarting glusterd, the rebalance status shows an already-completed rebalance as "in progress" on that node.


Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.4.0.52rhs


How reproducible:
================
not tried


Steps to Reproduce:
==================
4 node cluster 

1. Create a distribute volume with 3 bricks (across 3 nodes) and start it.

2. Fuse-mount the volume and create some files.

3. Add bricks and start rebalance (a consolidated command sketch is given after these steps).

4. Check the rebalance status - only the nodes participating in the rebalance are shown:

gluster v rebalance vol2 status
Node           Rebalanced-files    size      scanned    failures    skipped    status       run time in secs
----------     ----------------    ------    -------    --------    -------    ---------    ----------------
localhost             7            7.0MB       57          0          2       completed          0.00
10.70.34.88           0            0Bytes      56          0          5       completed          0.00
10.70.34.87           3            3.0MB       58          0          2       completed          0.00
volume rebalance: vol2: success: 

5. Add 2 more bricks (including the 4th node) and check the rebalance status:
 gluster v add-brick vol2 10.70.34.89:/rhs/brick1/b6 10.70.34.87:/rhs/brick1/b7
volume add-brick: success

gluster v rebalance vol2 status
Node           Rebalanced-files    size      scanned    failures    skipped    status       run time in secs
----------     ----------------    ------    -------    --------    -------    ---------    ----------------
localhost             7            7.0MB       57          0          2       completed          0.00
10.70.34.88           0            0Bytes      56          0          5       completed          0.00
10.70.34.87           3            3.0MB       58          0          2       completed          0.00
volume rebalance: vol2: success: 

6. On another node, check the rebalance status:

gluster v rebalance vol2 status
Node           Rebalanced-files    size      scanned    failures    skipped    status       run time in secs
----------     ----------------    ------    -------    --------    -------    ---------    ----------------
localhost             3            3.0MB       58          0          2       completed          0.00
10.70.34.86           7            7.0MB       57          0          2       completed          0.00
10.70.34.88           0            0Bytes      56          0          5       completed          0.00
volume rebalance: vol2: success: 

7. Restart glusterd and check the rebalance status again:

[root@junior brick1]# service glusterd restart
Stopping glusterd:                                         [  OK  ]
Starting glusterd:                                         [  OK  ]
[root@junior brick1]# gluster v rebalance vol2 status
Node           Rebalanced-files    size      scanned    failures    skipped    status         run time in secs
----------     ----------------    ------    -------    --------    -------    -----------    ----------------
localhost             0            0Bytes       0          0          0       in progress          0.00
10.70.34.86           7            7.0MB       57          0          2       completed            0.00
10.70.34.88           0            0Bytes      56          0          5       completed            0.00
volume rebalance: vol2: success: 

The rebalance process is started again after restarting glusterd, but the 4th node is not listed in the status output.
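
For completeness, a minimal command sketch of steps 1-3 (volume name, node addresses and brick paths follow this report; which bricks were added at step 3 is inferred from the volume info under Additional info, and the mount point is an assumption):

gluster volume create vol2 10.70.34.86:/rhs/brick1/b1 10.70.34.87:/rhs/brick1/b2 10.70.34.88:/rhs/brick1/b3
gluster volume start vol2
mount -t glusterfs 10.70.34.86:/vol2 /mnt/vol2       # fuse mount; create some files under /mnt/vol2
gluster volume add-brick vol2 10.70.34.86:/rhs/brick1/b4 10.70.34.87:/rhs/brick1/b5
gluster volume rebalance vol2 start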

Actual results:
==============
Restarting glusterd after the rebalance process has completed starts the rebalance process again.

Expected results:
================
Unless a rebalance is explicitly started, the rebalance process should not start on the volume; in particular, restarting glusterd must not restart a rebalance that has already completed.


Additional info:
===============

[root@jay tmp]# gluster v i vol2
 
Volume Name: vol2
Type: Distribute
Volume ID: 36ed3ae5-b1d6-4f69-816f-aee572da17a3
Status: Started
Number of Bricks: 9
Transport-type: tcp
Bricks:
Brick1: 10.70.34.86:/rhs/brick1/b1
Brick2: 10.70.34.87:/rhs/brick1/b2
Brick3: 10.70.34.88:/rhs/brick1/b3
Brick4: 10.70.34.86:/rhs/brick1/b4
Brick5: 10.70.34.87:/rhs/brick1/b5
Brick6: 10.70.34.89:/rhs/brick1/b6
Brick7: 10.70.34.87:/rhs/brick1/b7
Brick8: 10.70.34.89:/rhs/brick1/b8
Brick9: 10.70.34.87:/rhs/brick1/b9

[root@jay tmp]# getfattr -d -m . -e hex /rhs/brick1/b1/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b1/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000e38e38e0ffffffff
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3


 getfattr -d -m . -e hex /rhs/brick1/b2/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b2/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000005555555471c71c6f
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3


 getfattr -d -m . -e hex /rhs/brick1/b3/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b3/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000071c71c708e38e38b
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3


getfattr -d -m . -e hex /rhs/brick1/b4/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b4/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000038e38e3855555553
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3

 getfattr -d -m . -e hex /rhs/brick1/b5/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b5/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000001c71c71b
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3


getfattr -d -m . -e hex /rhs/brick1/b6/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b6/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000001c71c71c38e38e37
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3


getfattr -d -m . -e hex /rhs/brick1/b7/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b7/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000c71c71c4e38e38df
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3


getfattr -d -m . -e hex /rhs/brick1/b8/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b8/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8c71c71c3
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3

 getfattr -d -m . -e hex /rhs/brick1/b9/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b9/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000008e38e38caaaaaaa7
trusted.glusterfs.volume-id=0x36ed3ae5b1d64f69816faee572da17a3

Comment 3 spandura 2014-02-13 06:23:59 UTC
I am able to re-create the same issue multiple times. 

Case executed:-
===============
1. Create a 2 x 2 distribute-replicate volume. Start the volume.

2. Create a FUSE mount. Create files/dirs from the mount point.

3. Add 2 more bricks to the volume to change the type to 3 x 2.

4. Start rebalance.

5. Wait for the rebalance to complete.

6. Restart "glusterd" on any of the storage nodes.

Result:-
=========
Restarting "glusterd" restarts the "rebalance" process for the already-completed rebalance. (A command sketch of the steps above follows.)

Comment 5 Krutika Dhananjay 2014-02-13 07:24:02 UTC
Updating the bug with my findings:

Root Cause:
==========

Glusterd remembers the status of the rebalance process (if one is running) for every volume in the file /var/lib/glusterd/vols/<volname>/node_state.info.

Example:

[root@localhost rhs]# gluster v status dis tasks
Task Status of Volume dis
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : b3bbbf09-d783-4e18-a81e-8f1ee846edf0
Status               : completed  


[root@localhost rhs]# cat /var/lib/glusterd/vols/dis/node_state.info 
rebalance_status=1
rebalance_op=19
rebalance-id=b3bbbf09-d783-4e18-a81e-8f1ee846edf0

However, after the rebalance is complete, this rebalance state is not cleaned up. When glusterd is stopped and started again, it tries to restart all the daemons that it thinks it had spawned before being brought down: it reads the (now obsolete) rebalance configuration from node_state.info and restarts the rebalance process.

For a volume that is no longer undergoing rebalance on a given node, its node_state.info should look like the following:

[root@localhost rhs]# gluster v status kd tasks
Task Status of Volume kd
------------------------------------------------------------------------------
There are no active volume tasks


[root@localhost rhs]# cat /var/lib/glusterd/vols/kd/node_state.info 
rebalance_status=0
rebalance_op=0
rebalance-id=00000000-0000-0000-0000-000000000000
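
A quick way to spot the stale state on an affected node (a sketch, using only the task-status command and the node_state.info file shown above): after the task status reports "completed", a node_state.info that still carries rebalance_status=1 and a non-null rebalance-id is what causes the rebalance process to be respawned on the next glusterd restart.

gluster volume status <volname> tasks
cat /var/lib/glusterd/vols/<volname>/node_state.info
# or scan every volume on the node at once:
grep -H rebalance /var/lib/glusterd/vols/*/node_state.info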

Comment 6 Susant Kumar Palai 2014-05-27 08:57:47 UTC
upstream patch : http://review.gluster.org/#/c/7214/

Comment 7 Susant Kumar Palai 2014-06-03 06:50:33 UTC
*** Bug 996003 has been marked as a duplicate of this bug. ***

Comment 8 shylesh 2014-06-18 10:19:10 UTC
Verified on 3.6.0.18-1.el6rhs.x86_64.
Restarting glusterd or rebooting the machine no longer triggers a rebalance that has already completed.
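
A verification sketch along the lines of the original report (volume name vol2 as above; run on the node whose glusterd is restarted):

service glusterd restart
gluster v rebalance vol2 status          # every node should still report "completed"
ps -ef | grep rebalance | grep -v grep   # no rebalance process should have been respawned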

Comment 9 Pavithra 2014-08-05 06:00:04 UTC
Hi Susant,

Please review the edited doc text for technical accuracy and sign off.

Comment 10 Susant Kumar Palai 2014-08-05 09:36:17 UTC
Yes Pavithra, the doc looks fine.

Comment 11 Nagaprasad Sathyanarayana 2014-09-04 04:32:09 UTC
*** Bug 1136798 has been marked as a duplicate of this bug. ***

Comment 13 errata-xmlrpc 2014-09-22 19:31:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

