Bug 1229238 - Data Tiering: Adding new bricks to a tiered volume (which default to the cold tier) skews the DHT hash ranges
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1206596
Blocks:
 
Reported: 2015-06-08 10:19 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:44 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1206596
Environment:
Last Closed: 2015-06-14 20:36:27 UTC
Embargoed:



Description Nag Pavan Chilakam 2015-06-08 10:19:36 UTC
+++ This bug was initially created as a clone of Bug #1206596 +++

Description of problem:
======================
Currently, when a user adds a brick to a tiered volume, the brick becomes part of the cold tier.
However, checking the getfattr output of the new brick against the existing cold bricks shows that the DHT hash ranges get scrambled.
This is the pattern I observed:
Assume we have 3 cold bricks and the hash range spans 0-100.
Say B1 holds 0-33, B2 holds 34-66, and B3 holds 67-100.
When I now add a new brick, it should not receive a hash range at all, since I have not run a fix-layout or a rebalance. Instead, the ranges get rearranged like this:
B1 -> 0-33
B2 -> 67-100 (same range as another brick)
B3 -> 67-100
New brick -> takes over the hash range 34-66
This can be a serious issue for file distribution; see the toy overlap check below.
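To make the problem concrete, here is a toy check (illustrative only, not GlusterFS code; the start:stop pairs are the post-add-brick values from the example above, in the toy 0-100 hash space). A sane DHT layout must consist of disjoint ranges, and this one does not:

# Toy illustration, not GlusterFS code: sort the per-brick ranges by start
# and flag any range whose start falls inside the previous range.
ranges="0:33 67:100 67:100 34:66"
prev_stop=-1
for r in $(echo "$ranges" | tr ' ' '\n' | sort -t: -k1,1n); do
    start=${r%%:*}; stop=${r##*:}
    if [ "$start" -le "$prev_stop" ]; then
        echo "overlap: range $r collides with a lower range"
    fi
    prev_stop=$stop
done
# Prints: overlap: range 67:100 collides with a lower range

With two bricks claiming 67-100, which brick a file in that range lands on becomes ambiguous.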

Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.803.gitf64666f.autobuild/

glusterfs 3.7dev built on Mar 26 2015 01:04:24


How reproducible:
================
Easily.


Steps to Reproduce:
==================
1. Create a distribute volume.
2. Attach a tier to the volume using attach-tier.
3. Issue a volume info or volume status command, and note down the xattrs of each brick using getfattr.
4. Add a new brick. It can be seen that the brick is added to the cold tier without any choice; reissue the getfattr command and compare. (A command sketch of these steps follows below.)
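For reference, a sketch of the above steps using the hostnames, paths, and volume name from this report (the attach-tier syntax is the glusterfs 3.7 CLI; adjust bricks/hosts to your setup):

# 1. Create a plain distribute volume
gluster volume create tiervol10 rhs-client44:/pavanbrick1/tiervol10/b1 \
    rhs-client37:/pavanbrick1/tiervol10/b1 rhs-client38:/pavanbrick1/tiervol10/b1
gluster volume start tiervol10
# 2. Attach a (hot) tier to the volume
gluster volume attach-tier tiervol10 rhs-client44:/pavanbrick2/tiervol10/hb1 \
    rhs-client37:/pavanbrick2/tiervol10/hb1
# 3. Record the layout xattrs (run on each server hosting a brick)
getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
# 4. Add a new brick (it lands in the cold tier) and re-check the xattrs
gluster volume add-brick tiervol10 rhs-client38:/pavanbrick2/tiervol10/newbrick
getfattr -d -e hex -m . /pavanbrick1/tiervol10/* /pavanbrick2/tiervol10/*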



Expected results:
==================
The DHT hash ranges should not get skewed, and the new brick should not receive a hash range at all until a fix-layout or rebalance is issued manually.

Additional info (CLI):
===================
[root@rhs-client44 ~]# gluster v info tiervol10
 
Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 5 x 1 = 5
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick3: rhs-client44:/pavanbrick1/tiervol10/b1
Brick4: rhs-client37:/pavanbrick1/tiervol10/b1
Brick5: rhs-client38:/pavanbrick1/tiervol10/b1

[root@rhs-client44 ~]# gluster v add-brick tiervol10 rhs-client38:/pavanbrick2/tiervol10/newbrick
volume add-brick: success
[root@rhs-client44 ~]# gluster v info tiervol10
 
Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 6 x 1 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick3: rhs-client44:/pavanbrick1/tiervol10/b1
Brick4: rhs-client37:/pavanbrick1/tiervol10/b1
Brick5: rhs-client38:/pavanbrick1/tiervol10/b1
Brick6: rhs-client38:/pavanbrick2/tiervol10/newbrick
[root@rhs-client44 ~]# gluster v status tiervol10
Status of volume: tiervol10
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client44:/pavanbrick2/tiervol10/h
b1                                          49171     0          Y       2784 
Brick rhs-client37:/pavanbrick2/tiervol10/h
b1                                          49167     0          Y       17933
Brick rhs-client44:/pavanbrick1/tiervol10/b
1                                           49168     0          Y       29334
Brick rhs-client37:/pavanbrick1/tiervol10/b
1                                           49164     0          Y       1075 
Brick rhs-client38:/pavanbrick1/tiervol10/b
1                                           49161     0          Y       19137
Brick rhs-client38:/pavanbrick2/tiervol10/n
ewbrick                                     49162     0          Y       20362
NFS Server on localhost                     2049      0          Y       2956 
NFS Server on rhs-client37                  2049      0          Y       18060
NFS Server on 10.70.36.62                   2049      0          Y       20383
 
Task Status of Volume tiervol10
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@rhs-client44 ~]# gluster v detach-tier tiervol10
volume remove-brick unknown: success
[root@rhs-client44 ~]# gluster v info tiervol10
 
Volume Name: tiervol10
Type: Distribute
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client37:/pavanbrick1/tiervol10/b1
Brick2: rhs-client38:/pavanbrick1/tiervol10/b1
Brick3: rhs-client38:/pavanbrick2/tiervol10/newbrick




=================COLD BRICKS============================================
###########before adding new brick#################
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaa79b0effffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877



[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000005553cd87aaa79b0d
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877


[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000005553cd86
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

########after adding new brick####################
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaa79b0effffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaa79b0effffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000005553cd86
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877





=================NEWLY ADDED BRICK======================================
[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/newbrick
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000005553cd87aaa79b0d
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
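The hash range can be read straight out of the trusted.glusterfs.dht value: it is four big-endian 32-bit words, with the range start and stop in the last two (the first two words are header fields; their exact meaning in this version is left aside here). A small helper sketch (hypothetical, for illustration) decoding the values above:

# Hedged helper: print the last two 32-bit words (range start/stop) of a
# trusted.glusterfs.dht value, assuming the layout described above.
decode_dht() {
    x=${1#0x}
    printf 'start=0x%s (%u)  stop=0x%s (%u)\n' \
        "${x:16:8}" "$((16#${x:16:8}))" "${x:24:8}" "$((16#${x:24:8}))"
}
decode_dht 0x0000000100000000aaa79b0effffffff   # rhs-client44 after add-brick
decode_dht 0x0000000100000000aaa79b0effffffff   # rhs-client37 after add-brick: same range!
decode_dht 0x00000001000000005553cd87aaa79b0d   # newbrick: took over 0x5553cd87-0xaaa79b0d

This matches the reported skew: after add-brick, rhs-client44 and rhs-client37 both claim 0xaaa79b0e-0xffffffff, while the range 0x5553cd87-0xaaa79b0d that rhs-client38 used to hold moved to the new brick.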

=================HOT BRICKS=========================================
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/hb1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffe7c30
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x00000001000000009995e878ffffffff

[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/hb1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007ffe7c31ffffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x00000001000000009995e878ffffffff

--- Additional comment from nchilaka on 2015-04-20 01:42:44 EDT ---

As discussed with stakeholders, removing the qe_tracker_everglades tag from all add/remove-brick issues.

Comment 2 Dan Lambright 2015-06-14 20:36:27 UTC
Layout ranges have no meaning in tiered volumes.

