Bug 956208 - DHT - rebalance - After a user runs rebalance with the 'start force' option and it completes successfully, with nothing changed on the volume, a re-run of rebalance still migrates files
Summary: DHT - rebalance - Once User has run rebalance with 'start force' option and ...
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact: Matt Zywusko
URL:
Whiteboard:
Depends On:
Blocks: 1286070
 
Reported: 2013-04-24 13:17 UTC by Rachana Patel
Modified: 2015-11-27 10:31 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1286070 (view as bug list)
Environment:
Last Closed: 2015-11-27 10:31:37 UTC
Embargoed:


Attachments

Description Rachana Patel 2013-04-24 13:17:04 UTC
Description of problem:
DHT - rebalance - After a user runs rebalance with the 'start force' option and it completes successfully, with nothing changed on the volume, a re-run of rebalance still migrates files.

Version-Release number of selected component (if applicable):
3.4.0.1rhs-1.el6rhs.x86_64

How reproducible:
intermittent

Steps to Reproduce:
1. Create a distributed volume with 2 or more sub-volumes and start the volume.

2. FUSE-mount the volume from client-1 using "mount -t glusterfs server:/<volume> <client-1_mount_point>", e.g.:

mount -t glusterfs XXX:/<volname> /mnt/XXX

3. Change the subvolumes-per-directory option on the volume.

4. Run the rebalance command for that volume with the force option and check its status.

5. Execute the command 'gluster volume rebalance <volname> status':
  
[root@mia ~]# gluster volume rebalance v2 start force
volume rebalance: v2: success: Starting rebalance on volume v2 has been successful.
ID: d21026ea-db01-4d02-ae53-79965de20385
[root@mia ~]# gluster volume rebalance v2 status
                                    Node Rebalanced-files          size       scanned      failures         status run time in secs
                               ---------      -----------   -----------   -----------   -----------   ------------   --------------
                               localhost              310        0Bytes          1231             0      completed             8.00
             fred.lab.eng.blr.redhat.com                0        0Bytes          1230             0      completed             4.00
             fred.lab.eng.blr.redhat.com                0        0Bytes          1230             0      completed             4.00
              fan.lab.eng.blr.redhat.com              271        0Bytes          1490             0      completed             8.00
volume rebalance: v2: success:

6. Don't do any I/O on the mount point (don't access any file or directory).

Now execute the same commands again and check the output:
[root@mia ~]# gluster volume rebalance v2 start force
volume rebalance: v2: success: Starting rebalance on volume v2 has been successful.
ID: 67a8d540-1e53-4c86-8004-f736f65e74dc
[root@mia ~]# gluster volume rebalance v2 status
                                    Node Rebalanced-files          size       scanned      failures         status run time in secs
                               ---------      -----------   -----------   -----------   -----------   ------------   --------------
                               localhost               14        0Bytes          1230             0      completed             2.00
             fred.lab.eng.blr.redhat.com                0        0Bytes          1230             0      completed             3.00
             fred.lab.eng.blr.redhat.com                0        0Bytes          1230             0      completed             3.00
              fan.lab.eng.blr.redhat.com               11        0Bytes          1241             0      completed             2.00
volume rebalance: v2: success: 
 
Actual results:
The status shows that files are rebalanced again on localhost and fan.


If rebalance was run with the 'start force' option and nothing changed on the volume, why does re-executing rebalance move some files?
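The reported symptom reduces to rebalance's migrate-or-skip decision. A minimal sketch of the assumed behaviour (not GlusterFS code; names are illustrative): a file is migrated whenever the subvolume its name hashes to under the current layout differs from the subvolume that actually holds the data.

```python
# Illustrative sketch, not GlusterFS source: rebalance migrates a file
# when its hashed subvolume (per the freshly fixed layout) differs from
# its cached subvolume (where the data currently lives).

def needs_migration(hashed_subvol, cached_subvol):
    """Return True if the file must be moved to its hashed subvolume."""
    return hashed_subvol != cached_subvol

# First run: files already sit on the subvolume they hash to -> no moves.
print(needs_migration("node-1", "node-1"))  # False
# Second run: a newly fixed layout can assign the same hash range to a
# different node, so unchanged files suddenly hash elsewhere.
print(needs_migration("node-2", "node-1"))  # True
```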

Additional info:

Comment 3 Scott Haines 2013-09-27 17:07:47 UTC
Targeting for 3.0.0 (Denali) release.

Comment 5 vsomyaju 2014-03-11 07:38:25 UTC
Observation:

a. Added a gf_log message that prints:
    OUTPUT-<client-name>, <start of layout>, <stop/end of layout>

A different layout is formed in each run by the node-1 rebalance process.

Run-1: "gluster volume rebalance volume1 start force"



RUN-1:   Layout for root fixed by node-1 rebal.
-----------------------------------------------


[2014-03-11 07:26:55.576688] I [dht-common.c:2646:dht_setxattr] 0-volume1-dht: fixing the layout of /

[2014-03-11 07:26:55.576739] I [dht-selfheal.c:786:dht_fix_layout_of_directory] 0-volume1-dht: OUTPUT-volume1-client-0, 0, 4294967295

[2014-03-11 07:26:55.576757] I [dht-selfheal.c:791:dht_fix_layout_of_directory] 0-volume1-dht: OUTPUT-volume1-client-1, 0, 0

[2014-03-11 07:26:55.576766] I [dht-selfheal.c:795:dht_fix_layout_of_directory] 0-volume1-dht: OUTPT-volume1-client-2, 0, 0


RUN-2:    Layout for root fixed by Node-2.
-------------------------------------------------

[2014-03-11 07:29:14.470715] I [dht-selfheal.c:786:dht_fix_layout_of_directory] 0-volume1-dht: OUTPUT-volume1-client-0, 0, 0

[2014-03-11 07:29:14.470734] I [dht-selfheal.c:791:dht_fix_layout_of_directory] 0-volume1-dht: OUTPUT-volume1-client-1, 0, 4294967295

[2014-03-11 07:29:14.470740] I [dht-selfheal.c:795:dht_fix_layout_of_directory] 0-volume1-dht: OUTPT-volume1-client-2, 0, 0




In the FIRST run, all files on node-1 both hash to and are cached on node-1, so no files are migrated from node-1.

In the SECOND run, all files on node-1 now hash to node-2, so they need to be migrated from node-1 to node-2.
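The two log extracts above can be modelled directly: each subvolume owns a contiguous range of the 32-bit hash space, and in run 1 the whole range [0, 4294967295] went to client-0 while in run 2 it went to client-1. The toy model below (not GlusterFS source; `crc32` is only a stand-in for DHT's real filename hash) shows why the same unchanged file ends up owned by a different subvolume after the second layout fix:

```python
# Toy model of the DHT layout from Comment 5's logs (not GlusterFS code).
# Each subvolume owns an inclusive [start, stop] slice of the 32-bit
# hash space; a (0, 0) pair means the subvolume holds no range.
import zlib

RANGE_MAX = 4294967295  # 2**32 - 1, as seen in the gf_log output above

def dht_hash(name):
    """Stand-in 32-bit hash of a file name (DHT uses its own hash)."""
    return zlib.crc32(name.encode()) & RANGE_MAX

def owner(layout, name):
    """Return the subvolume whose range covers the file's hash."""
    h = dht_hash(name)
    for subvol, (start, stop) in layout.items():
        if start == 0 and stop == 0:
            continue  # empty range: no assignment
        if start <= h <= stop:
            return subvol
    return None

# Layouts taken verbatim from the RUN-1 and RUN-2 log lines:
run1 = {"client-0": (0, RANGE_MAX), "client-1": (0, 0), "client-2": (0, 0)}
run2 = {"client-0": (0, 0), "client-1": (0, RANGE_MAX), "client-2": (0, 0)}

f = "file-001"  # hypothetical file name; any name behaves the same here
print(owner(run1, f))  # client-0
print(owner(run2, f))  # client-1 -> same file, new owner, so it migrates
```

Since the layout-fixing step is free to hand out ranges in a different order on each run, a forced rebalance on an unchanged volume can still reassign ownership and trigger migrations.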

Comment 6 Nagaprasad Sathyanarayana 2014-05-06 11:43:41 UTC
Dev ack to 3.0 RHS BZs

Comment 8 Raghavendra G 2015-02-24 06:21:16 UTC
Most likely a duplicate of bz 1162306; further analysis is needed to confirm.

