Bug 1366494 - Rebalance is not considering the brick sizes while fixing the layout
Summary: Rebalance is not considering the brick sizes while fixing the layout
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On: 1257182
Blocks: 1374135
 
Reported: 2016-08-12 06:41 UTC by Nithya Balachandran
Modified: 2017-03-06 17:22 UTC (History)
CC: 10 users

Fixed In Version: glusterfs-3.10.0
Clone Of: 1257182
Clones: 1374135
Environment:
Last Closed: 2017-03-06 17:22:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nithya Balachandran 2016-08-12 06:41:33 UTC
+++ This bug was initially created as a clone of Bug #1257182 +++

Problem statement:
============================

Rebalance does not consider the brick sizes while fixing the layout of the volume.

Steps/procedure (a scripted sketch of these steps follows the list):

1. Create a distribute volume using one brick of 100GB.
2. Mount it on the client using FUSE and create a directory with 1000 files.
3. Add a 200GB brick from another node and run rebalance from the same node.
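
For convenience, a minimal reproduction sketch that drives the gluster CLI from Python is given below. The volume name, hostnames, brick paths and mount point are placeholders taken from this report and would need to be adapted to the actual test setup.

# Reproduction sketch (placeholders for volume name, hosts, brick paths
# and mount point -- adjust to the test environment).
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Distribute volume with a single 100GB brick
run(["gluster", "volume", "create", "dht4",
     "rhs-client4.lab.eng.blr.redhat.com:/rhs/brick1/dht4"])
run(["gluster", "volume", "start", "dht4"])

# 2. FUSE mount, then create a directory holding 1000 files
run(["mount", "-t", "glusterfs",
     "rhs-client4.lab.eng.blr.redhat.com:/dht4", "/mnt/dht4"])
run(["mkdir", "-p", "/mnt/dht4/data"])
for i in range(1000):
    open("/mnt/dht4/data/file-%04d" % i, "w").close()

# 3. Add a 200GB brick from another node and rebalance
run(["gluster", "volume", "add-brick", "dht4",
     "rhs-client9.lab.eng.blr.redhat.com:/rhs/brick2/dht4"])
run(["gluster", "volume", "rebalance", "dht4", "start"])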


Actual results:
================
Though brick2 is 200GB, it holds only 327 files while the other (100GB) brick holds 676. The directory layout ranges are given below.




[root@rhs-client9 dht4]# getfattr -d -m . -e hex /rhs/brick2/dht4/data
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/dht4/data
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x2bcf9f94144a4decb533a419885784cc
trusted.glusterfs.dht=0x0000000100000000aaa972d0ffffffff (200 GB Brick)


[root@rhs-client4 dht4]# getfattr -d -m . -e hex /rhs/brick1/dht4/data
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/dht4/data
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x2bcf9f94144a4decb533a419885784cc
trusted.glusterfs.dht=0x000000010000000000000000aaa972cf (100 GB Brick)



Expected results:
==================
While fixing the layout, rebalance should consider the brick sizes and assign each brick a hash range proportional to its size (see the sketch below).
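
As an illustration of what "considering the brick sizes" means here (simple arithmetic only, not the actual DHT algorithm), a 100GB + 200GB pair should split the 32-bit hash space roughly 1:2:

# Illustrative split of the 32-bit hash space in proportion to brick
# sizes (100GB : 200GB); not the actual DHT layout algorithm.
HASH_SPACE = 1 << 32

bricks = [("brick1 (100GB)", 100), ("brick2 (200GB)", 200)]
total = sum(size for _, size in bricks)

cum = 0
prev_end = -1
for name, size in bricks:
    cum += size
    end = HASH_SPACE * cum // total - 1
    print("%-16s 0x%08x - 0x%08x" % (name, prev_end + 1, end))
    prev_end = end

# Expected outcome: the 100GB brick gets roughly 0x00000000-0x55555554
# and the 200GB brick roughly 0x55555555-0xffffffff.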


Output:
===================
[root@rhs-client4 dht4]# gluster vol status dht4
Status of volume: dht4
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client4.lab.eng.blr.redhat.com:/r
hs/brick1/dht4                              49158     0          Y       20117
Brick rhs-client9.lab.eng.blr.redhat.com:/r
hs/brick2/dht4                              49157     0          Y       29628
NFS Server on localhost                     2049      0          Y       20301
NFS Server on rhs-client39.lab.eng.blr.redh
at.com                                      N/A       N/A        N       N/A  
NFS Server on rhs-client9.lab.eng.blr.redha
t.com                                       N/A       N/A        N       N/A  
 
Task Status of Volume dht4
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : b93f08b3-e59c-4e30-bd0f-b405e553bdb3
Status               : completed        

[root@rhs-client9 dht4]# df -h | grep brick2
/dev/mapper/rhel_rhs--client9-vol1  200G   60M  200G   1% /rhs/brick2

[root@rhs-client4 dht4]# df -h | grep brick1
/dev/mapper/rhgs_rhs--client4-vol1  100G   84M  100G   1% /rhs/brick1

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-08-26 08:33:08 EDT ---

This bug is automatically being proposed for the current z-stream release of Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Raghavendra G on 2016-06-28 02:42:03 EDT ---

The ranges allocated are:

>>> 0xffffffff - 0xaaa972d0
1431735599
>>> 0xaaa972cf
2863231695

Though the ranges are roughly in the ratio 1:2, they are allocated to the wrong bricks: the larger range went to the smaller brick. This needs to be fixed.
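
For reference, the sketch below decodes the two trusted.glusterfs.dht values quoted earlier (the last two big-endian 32-bit words are the start and end of the assigned hash range) and prints each range's share of the hash space:

# Decode the trusted.glusterfs.dht values quoted above.  The last two
# big-endian 32-bit words are the start and end of the hash range
# assigned to the brick; the leading words are layout metadata.
layouts = {
    "/rhs/brick2/dht4 (200GB)": "0000000100000000aaa972d0ffffffff",
    "/rhs/brick1/dht4 (100GB)": "000000010000000000000000aaa972cf",
}

for brick, value in layouts.items():
    start = int(value[16:24], 16)
    end = int(value[24:32], 16)
    size = end - start + 1
    print("%-25s 0x%08x - 0x%08x  (%.1f%% of the hash space)"
          % (brick, start, end, 100.0 * size / 2**32))

# Prints ~33.3% for the 200GB brick and ~66.7% for the 100GB brick:
# the split is roughly 1:2, but assigned the wrong way round.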

--- Additional comment from John Skeoch on 2016-07-13 18:35:18 EDT ---

User rmekala's account has been closed

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-08-09 07:17:24 EDT ---

Since this bug has been approved for the RHGS 3.2.0 release of Red Hat Gluster Storage 3, through release flag 'rhgs-3.2.0+', and through the Internal Whiteboard entry of '3.2.0', the Target Release is being automatically set to 'RHGS 3.2.0'

Comment 1 Nithya Balachandran 2016-08-16 04:51:50 UTC
RCA:

The volume was created with a single brick. On adding a second, much larger brick and running a rebalance, the layout is recalculated for all existing directories by calling dht_fix_layout_of_directory (). This function generates a new weighted layout in dht_selfheal_layout_new_directory (), but then calls dht_selfheal_layout_maximize_overlap () on the newly generated layout. That function does not consider the relative brick sizes, and because the original brick had the complete layout (0x00000000-0xffffffff), the ranges are swapped to maximize overlap with the old layout.
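
A toy model of the failure mode (not the actual gluster code): the old layout gave the 100GB brick the entire hash space, so a pure overlap-maximization pass that ignores brick weights will always hand the larger new chunk back to that brick.

# Toy model of the RCA above, not the actual gluster code.  Ranges are
# (start, end) pairs over the 32-bit hash space.
def overlap(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(0, hi - lo + 1)

old_layout = {"brick1": (0x00000000, 0xffffffff),  # original 100GB brick
              "brick2": (0, -1)}                    # newly added, no range yet

small_chunk = (0x00000000, 0x55555554)              # ~1/3 of the space
large_chunk = (0x55555555, 0xffffffff)              # ~2/3 of the space

# Weight-respecting assignment: small chunk stays on the 100GB brick.
keep = overlap(old_layout["brick1"], small_chunk) + \
       overlap(old_layout["brick2"], large_chunk)
# Swapped assignment: large chunk moves to the 100GB brick.
swap = overlap(old_layout["brick1"], large_chunk) + \
       overlap(old_layout["brick2"], small_chunk)

print("overlap, weights respected: %d" % keep)   # ~1.43e9
print("overlap, chunks swapped:    %d" % swap)   # ~2.86e9 -> wins
# Maximizing overlap alone therefore picks the swapped layout, putting
# the larger range on the smaller brick.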

Comment 2 Jeff Darcy 2016-08-16 13:00:27 UTC
Nithya's analysis is correct.  We generate a new layout based on brick sizes, then attempt to optimize it for maximum overlap with the current layout.  That optimization is important to minimize data movement, but unfortunately it's broken in this case because it doesn't account properly for where each range already resides.  I wrote that function, BTW, so it's my fault.  For now, we should probably just disable the optimization phase when we're weighting by brick size.  Longer term, what we need to do is fix dht_selfheal_layout_maximize_overlap.  There's a place where it tries to determine whether a particular swap would be an improvement or not.  That particular calculation needs to be enhanced to account for the *actual* current and proposed locations for a range, instead of (effectively) inferring those locations from ordinal positions.
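
A minimal sketch of the short-term guard described above (hypothetical names; the real change lives in the DHT self-heal code in C):

# Sketch of the short-term fix: skip overlap maximization whenever the
# layout is weighted by brick size and the bricks actually differ.
# Hypothetical names, not the gluster source.
def should_maximize_overlap(weighted_rebalance, brick_sizes):
    if weighted_rebalance and len(set(brick_sizes)) > 1:
        return False   # swapping would undo the size-based weighting
    return True        # equal bricks or unweighted: safe to minimize data movement

print(should_maximize_overlap(True, [100, 200]))   # False -> keep weighted layout
print(should_maximize_overlap(False, [100, 200]))  # True
print(should_maximize_overlap(True, [100, 100]))   # True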

Comment 3 Worker Ant 2016-09-06 05:48:47 UTC
REVIEW: http://review.gluster.org/15403 (cluster/dht: Skip layout overlap maximization on weighted rebalance) posted (#1) for review on master by N Balachandran (nbalacha)

Comment 4 Worker Ant 2016-09-06 17:12:56 UTC
REVIEW: http://review.gluster.org/15403 (cluster/dht: Skip layout overlap maximization on weighted rebalance) posted (#2) for review on master by N Balachandran (nbalacha)

Comment 5 Worker Ant 2016-09-07 08:11:50 UTC
REVIEW: http://review.gluster.org/15403 (cluster/dht: Skip layout overlap maximization on weighted rebalance) posted (#3) for review on master by N Balachandran (nbalacha)

Comment 6 Worker Ant 2016-09-07 16:49:46 UTC
REVIEW: http://review.gluster.org/15403 (cluster/dht: Skip layout overlap maximization on weighted rebalance) posted (#4) for review on master by N Balachandran (nbalacha)

Comment 7 Worker Ant 2016-09-08 19:00:54 UTC
COMMIT: http://review.gluster.org/15403 committed in master by Jeff Darcy (jdarcy) 
------
commit 2209f6d49bc8d16beed52b1d145be936d2da941a
Author: N Balachandran <nbalacha>
Date:   Tue Sep 6 11:13:20 2016 +0530

    cluster/dht: Skip layout overlap maximization on weighted rebalance
    
    dht_selfheal_layout_maximize_overlap () does not consider
    chunk sizes while calculating overlaps. This operation is
    temporarily enabled only if weighted rebalance is disabled
    or all bricks are the same size.
    
    Change-Id: I5ed16cdff2551b826a1759ca8338921640bfc7b3
    BUG: 1366494
    Signed-off-by: N Balachandran <nbalacha>
    Reviewed-on: http://review.gluster.org/15403
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    NetBSD-regression: NetBSD Build System <jenkins.org>

Comment 8 Shyamsundar 2017-03-06 17:22:10 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

