Bug 1206602 - Data Tiering: Newly added bricks not getting tier-gfid
Summary: Data Tiering: Newly added bricks not getting tier-gfid
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Joseph Elwin Fernandes
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1260923
 
Reported: 2015-03-27 13:44 UTC by Nag Pavan Chilakam
Modified: 2016-06-20 00:01 UTC (History)
4 users (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-09 19:00:15 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Nag Pavan Chilakam 2015-03-27 13:44:07 UTC
Description of problem:
======================
Currently, if a user adds a brick, the brick becomes part of the cold tier.
But if we issue getfattr -d -e hex -m . <newbrick>, it does not display the tier-gfid, even though it displays other information such as the volume-id.
eg:
[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/newbrick
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000005553cd87aaa79b0d
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b



The tier-gfid will need to be assigned as soon as a brick is added.


Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.803.gitf64666f.autobuild/

glusterfs 3.7dev built on Mar 26 2015 01:04:24


How reproducible:
================
Easily


Steps to Reproduce:
==================
1. Create a distribute volume.
2. Attach a tier to the volume using attach-tier.
3. Issue a volume info or volume status command.
4. Now try to add a new brick. It can be seen that the brick gets added to the cold tier, but the tier-gfid is not shown when we fetch the xattrs.

NOTE:
=====
How am I confirming it is getting added to the cold tier and not the hot tier?
Ans: there are two ways:
1) The hash ranges of the cold bricks get messed up.
2) When we issue a detach-tier, this brick stays with the volume and does not get detached. Hence I conclude it is a cold brick.
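The steps above can be sketched as a shell session. The gluster commands are shown as comments because they need a live cluster (the server and brick names are illustrative); the executable part checks sample getfattr output, copied verbatim from the newly added brick below, for the missing tier-gfid xattr:

```shell
#!/bin/sh
# Reproduction sketch (illustrative names; requires a live cluster):
#
#   gluster volume create tiervol10 server1:/bricks/b1 server2:/bricks/b1
#   gluster volume start tiervol10
#   gluster volume attach-tier tiervol10 server1:/bricks/hb1 server2:/bricks/hb1
#   gluster volume add-brick tiervol10 server3:/bricks/newbrick
#   getfattr -d -e hex -m . server3:/bricks/newbrick   # run on server3
#
# Actual xattrs reported on the newly added brick in this bug;
# note that trusted.tier-gfid is absent, unlike the original bricks.
xattrs='trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000005553cd87aaa79b0d
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b'

if printf '%s\n' "$xattrs" | grep -q '^trusted\.tier-gfid='; then
    echo "tier-gfid present"
else
    echo "tier-gfid missing"
fi
```

Running the same grep against a hot or cold brick's xattrs (which do carry trusted.tier-gfid) prints "tier-gfid present", which is what the fix should make true for newly added bricks as well.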


Expected results:
==================
The tier-gfid must be set on the newly added brick.

Additional info(CLI):
===================
[root@rhs-client44 ~]# gluster v info tiervol10
 
Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 5 x 1 = 5
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick3: rhs-client44:/pavanbrick1/tiervol10/b1
Brick4: rhs-client37:/pavanbrick1/tiervol10/b1
Brick5: rhs-client38:/pavanbrick1/tiervol10/b1

[root@rhs-client44 ~]# gluster v add-brick tiervol10 rhs-client38:/pavanbrick2/tiervol10/newbrick
volume add-brick: success
[root@rhs-client44 ~]# gluster v info tiervol10
 
Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 6 x 1 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick3: rhs-client44:/pavanbrick1/tiervol10/b1
Brick4: rhs-client37:/pavanbrick1/tiervol10/b1
Brick5: rhs-client38:/pavanbrick1/tiervol10/b1
Brick6: rhs-client38:/pavanbrick2/tiervol10/newbrick
[root@rhs-client44 ~]# gluster v status tiervol10
Status of volume: tiervol10
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client44:/pavanbrick2/tiervol10/h
b1                                          49171     0          Y       2784 
Brick rhs-client37:/pavanbrick2/tiervol10/h
b1                                          49167     0          Y       17933
Brick rhs-client44:/pavanbrick1/tiervol10/b
1                                           49168     0          Y       29334
Brick rhs-client37:/pavanbrick1/tiervol10/b
1                                           49164     0          Y       1075 
Brick rhs-client38:/pavanbrick1/tiervol10/b
1                                           49161     0          Y       19137
Brick rhs-client38:/pavanbrick2/tiervol10/n
ewbrick                                     49162     0          Y       20362
NFS Server on localhost                     2049      0          Y       2956 
NFS Server on rhs-client37                  2049      0          Y       18060
NFS Server on 10.70.36.62                   2049      0          Y       20383
 
Task Status of Volume tiervol10
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@rhs-client44 ~]# gluster v detach-tier tiervol10
volume remove-brick unknown: success
[root@rhs-client44 ~]# gluster v info tiervol10
 
Volume Name: tiervol10
Type: Distribute
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client37:/pavanbrick1/tiervol10/b1
Brick2: rhs-client38:/pavanbrick1/tiervol10/b1
Brick3: rhs-client38:/pavanbrick2/tiervol10/newbrick


=================HOT BRICKS=========================================
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/hb1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffe7c30
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x00000001000000009995e878ffffffff

[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/hb1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007ffe7c31ffffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x00000001000000009995e878ffffffff







=================COLD BRICKS============================================
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaa79b0effffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaa79b0effffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000005553cd86
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877





Newly added brick
=================
[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/newbrick
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000005553cd87aaa79b0d
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b

Comment 1 Nag Pavan Chilakam 2015-03-27 13:47:30 UTC
I even tried to run a rebalance to see if it fixes this issue, so that we could at least have a workaround (though not a recommended one), but rebalance fails on a tiered volume as raised in bug#1205624.
Neither does a fix-layout alone fix the issue; it too fails.

Comment 2 Nag Pavan Chilakam 2015-04-20 05:42:54 UTC
As discussed with stakeholders, removing the qe_tracker_everglades tag for all add/remove brick issues.

Comment 3 Nag Pavan Chilakam 2015-11-05 12:18:07 UTC
Add/remove brick is not supported on tiered volumes, hence marking as WONTFIX instead of NOTABUG.

