Bug 1529440 - Files are not rebalanced if destination brick (available size) is of smaller size than source brick (available size)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On: 1475133
Blocks: 1530455
 
Reported: 2017-12-28 05:51 UTC by Nithya Balachandran
Modified: 2018-03-15 11:24 UTC
CC List: 5 users

Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1475133
Clones: 1530455
Environment:
Last Closed: 2018-03-15 11:24:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nithya Balachandran 2017-12-28 05:51:01 UTC
+++ This bug was initially created as a clone of Bug #1475133 +++

Description of problem:
=======================
When a brick of smaller size is added and a rebalance is triggered, files are not moved from the src_brick to the dst_brick if src_available_space > dst_available_space.
This situation causes the following problems:
1) We should not be comparing dst_available_space against src_available_space; rather, we should check whether the file being migrated has sufficient space to fit on the dst_brick (see the sketch after this description).
2) Even though nothing is migrated, we scan the whole filesystem every time a rebalance is triggered. This is a waste of CPU cycles.
Agreed, layout fixing happens, but that fixing would already have been done as part of the first rebalance completion. We probably do not have to retry it every time: newly written files are distributed between the bricks anyway, and it is only old files (i.e. those present before the add-brick) that need to be rebalanced, and the first rebalance would ideally have done the fix-layout. Yes, there can be exceptions, such as the layout fixing not having completed because the user aborted or stopped the rebalance, or because of unexpected events.
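
A minimal sketch of the two checks contrasted above, with hypothetical names (can_migrate_absolute, can_migrate_file); this is illustrative C only, not the actual dht-rebalance.c code:

    #include <stdint.h>

    /* The problematic behaviour: migration is refused whenever the
     * destination brick has less free space than the source brick,
     * regardless of how large the file being migrated actually is. */
    static int
    can_migrate_absolute(uint64_t src_avail, uint64_t dst_avail)
    {
        return dst_avail > src_avail;   /* file size never considered */
    }

    /* The check suggested in point 1: migrate as long as the file
     * itself fits on the destination, keeping some headroom. */
    static int
    can_migrate_file(uint64_t dst_avail, uint64_t file_size,
                     uint64_t min_free)
    {
        return dst_avail >= file_size + min_free;
    }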



How reproducible:
=========
always

Steps to Reproduce:
1. Create a distribute (or dist-rep) volume with all bricks of, say, 10GB.
2. Do some IO.
3. Now add a brick of size smaller than 10GB, say 5GB.
4. Trigger rebalance.
5. Add new files.
6. From the backend, add a 5GB file on the old bricks (to make dst_avail_size > src_avail_size).

Actual results:
==========
After step 4, the rebalance completes, but no files are migrated.
With step 5, the new files are distributed as per hashing.
With step 6, the rebalance now migrates some old files to the dst_brick, but only while src_avail_size < dst_avail_size holds. (For example, if the old 10GB brick has 7GB free and the new 5GB brick has 5GB free, 7GB > 5GB blocks migration; after filling the old brick with a 5GB file, 2GB < 5GB, so migration proceeds.)

Comment 1 Nithya Balachandran 2017-12-28 05:52:46 UTC
I have modified the check to consider percentages instead of absolute values, as we want to keep a fairly equitable distribution across bricks.

Comment 2 Worker Ant 2017-12-28 06:00:17 UTC
REVIEW: https://review.gluster.org/19101 (cluster/dht: Use percentages for space check) posted (#1) for review on master by N Balachandran

Comment 3 Worker Ant 2018-01-02 18:40:22 UTC
COMMIT: https://review.gluster.org/19101 committed in master by "N Balachandran" <nbalacha> with a commit message- cluster/dht: Use percentages for space check

With heterogeneous bricks now being supported in DHT,
we could run into issues where files are not migrated
even though there is sufficient space in newly added
bricks, which just happen to be considerably smaller
than the older bricks. Using percentages instead of
absolute available space for the space checks can
mitigate that to some extent.

Marking bug-1247563.t as bad, as it used to depend on the
easier code path to prevent a file from migrating. This
will be removed once we find a way to force a file
migration failure.

Change-Id: I3452520511f304dbf5af86f0632f654a92fcb647
BUG: 1529440
Signed-off-by: N Balachandran <nbalacha>
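
As a rough illustration of the percentage-based check the commit message describes (a sketch only; the function name and field choices are assumed, and the real change is the patch at https://review.gluster.org/19101):

    #include <stdint.h>

    /* Illustrative sketch, not the actual dht-rebalance.c code:
     * compare free space as a percentage of each brick's total size
     * rather than as absolute bytes, so a small but mostly-empty
     * new brick can still receive files from a larger, fuller one. */
    static int
    should_migrate(uint64_t src_total, uint64_t src_avail,
                   uint64_t dst_total, uint64_t dst_avail)
    {
        uint64_t src_pct = (src_avail * 100) / src_total;
        uint64_t dst_pct = (dst_avail * 100) / dst_total;

        /* Absolute check: a 10GB brick with 4GB free never sends
         * files to a 5GB brick with 3GB free (3GB < 4GB), even
         * though the new brick is 60% empty and the old one only
         * 40% empty. Percentage check: 60 > 40, so migrate. */
        return dst_pct > src_pct;
    }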

Comment 4 Shyamsundar 2018-03-15 11:24:00 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

