Bug 1004692 - Rebalance: Files are rebalanced every time rebalance is started after add-brick and rebalance is run multiple times while I/O is in progress on the client
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nithya Balachandran
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-05 09:04 UTC by senaik
Modified: 2015-11-27 12:10 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-27 12:10:07 UTC
Target Upstream Version:



Description senaik 2013-09-05 09:04:43 UTC
Description of problem:
======================== 
When bricks are added while I/O is in progress on the client, files are rebalanced every time rebalance is started.

Version-Release number of selected component (if applicable):
============================================================= 
glusterfs 3.4.0.30rhs


How reproducible:
=================
not tried 

Steps to Reproduce:
==================== 

1.Create a distribute volume with 4 bricks
 
2. FUSE/NFS mount the volume, create a directory, and create some files:
for i in {1..500} ; do dd if=/dev/urandom of=f"$i" bs=10M count=1; done

3. While file creation is in progress, add 2 bricks to the volume, start rebalance, and check the rebalance status:

gluster v rebalance vol1 start

gluster v rebalance vol1 status

Node   Rebalanced-files size  scanned failures  skipped  status run time in secs
----   ---------------- ----  -------  -------  --------  -----  ---------------
localhost      0      0Bytes   43        0        8    completed     0.00
10.70.34.86    4      40.0MB   44        0        4    completed     1.00
10.70.34.89    0      0Bytes   43        0        0    completed     1.00
10.70.34.88    9      90.0MB   56        0        2    completed     1.00
10.70.34.87    9      90.0MB   48        0        7    completed     2.00

volume rebalance: vol1: success: 

4. Execute the rebalance start force command:

gluster v rebalance vol1 start force

Node   Rebalanced-files size  scanned failures  skipped  status run time in secs
----   ---------------- ----  -------  -------  --------  -----  ---------------
localhost      11     110.0MB  146        0        0    completed     2.00
10.70.34.86    14     140.0MB  152        0        0    completed     3.00
10.70.34.89    0      0Bytes   132        0        0    completed     0.00
10.70.34.88    6       60.0MB  138        0        0    completed     1.00
10.70.34.87    12     120.0MB  155        0        0    completed     2.00
volume rebalance: vol1: success: 

5. Start rebalance again (I/O is still in progress on the client) and check the status:

Node   Rebalanced-files size  scanned failures  skipped  status run time in secs
----   ---------------- ----  -------  -------  --------  -----  ---------------
localhost      0       0Bytes  257        0        19    completed     1.00
10.70.34.86    11     110.0MB  254        0        31    completed     3.00
10.70.34.89    0      0Bytes   257        0        0     completed     0.00
10.70.34.88    13     130.0MB  196        0        0     in progress   3.00
10.70.34.87    5      50.0MB   259        0        22    completed     2.00
volume rebalance: vol1: success: 


Actual results:
=============== 
Files are rebalanced every time rebalance is started.

Expected results:
================
After adding the bricks and starting rebalance, the layout has already been fixed. Even while file creation is still in progress on the client, new files must hash to the existing bricks (since no further bricks have been added), so subsequent rebalance runs should not migrate files again.
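The expectation above rests on DHT placement being a pure function of the file name and the fixed directory layout. A minimal sketch of that idea (assuming a toy CRC32 hash and equal per-brick ranges for illustration; Gluster's real DHT uses a different hash and stores layout ranges in the trusted.glusterfs.dht xattr, and the function names below are hypothetical):

```python
import zlib

def make_layout(bricks):
    """Split the 32-bit hash space into one equal range per brick."""
    span = 2**32 // len(bricks)
    return [(b, i * span, (i + 1) * span - 1) for i, b in enumerate(bricks)]

def brick_for(name, layout):
    """Deterministically map a file name to the brick owning its hash range."""
    h = zlib.crc32(name.encode()) & 0xFFFFFFFF
    for brick, lo, hi in layout:
        if lo <= h <= hi:
            return brick
    return layout[-1][0]  # remainder at the top of the range rounds into the last brick

# A fixed 6-brick layout, mirroring the volume in this report.
layout = make_layout(["a1", "a2", "a3", "a4", "a5", "a6"])

# With an unchanged layout, every file hashes to the same brick on each pass,
# so a repeated rebalance should find nothing left to migrate.
placement1 = {f"f{i}": brick_for(f"f{i}", layout) for i in range(1, 501)}
placement2 = {f"f{i}": brick_for(f"f{i}", layout) for i in range(1, 501)}
assert placement1 == placement2
```

Under this model, a second rebalance over the same layout is a no-op; the repeated migrations reported above suggest files were being moved even though their hashed target had not changed.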


Additional info:
================ 
[root@boost brick1]# gluster v i vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 3e7b0869-77f9-4a4a-a148-ab9402e79add
Status: Started
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: 10.70.34.85:/rhs/brick1/a1
Brick2: 10.70.34.86:/rhs/brick1/a2
Brick3: 10.70.34.87:/rhs/brick1/a3
Brick4: 10.70.34.88:/rhs/brick1/a4
Brick5: 10.70.34.89:/rhs/brick1/a5
Brick6: 10.70.34.88:/rhs/brick1/a6

