Bug 1206517 - Data Tiering:Distribute-replicate type Volume not getting converted to a tiered volume on attach-tier
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Dan Lambright
QA Contact:
URL:
Whiteboard:
Depends On: 1194753 1207867
Blocks: qe_tracker_everglades glusterfs-3.7.0 1260923
 
Reported: 2015-03-27 10:35 UTC by Nag Pavan Chilakam
Modified: 2015-10-30 17:32 UTC

Fixed In Version: glusterfs-3.7dev-0.869.gitf5e4c94.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-14 17:27:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Nag Pavan Chilakam 2015-03-27 10:35:10 UTC
Description of problem:
=======================
When attaching a tier to a distribute-replicate volume, the attach command succeeds but the volume does not get converted to a tiered volume.
Given that distribute-replicate volumes are the most widely deployed volume type, tiering must be supported on them as well.

Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.803.gitf64666f.autobuild/

glusterfs 3.7dev built on Mar 26 2015 01:04:24

How reproducible:
=================
Easy to reproduce


Steps to Reproduce:
==================
1. Create a gluster volume of distribute-replicate type and start the volume.
2. Attach a tier to the volume using attach-tier.
3. Check the volume type. It still shows as Distributed-Replicate instead of Tier.
4. Check the xattrs of the bricks; they do not have any tier attributes, even after mounting.
5. After mounting the volume, write some files to it.
It can be seen that the files simply get distributed and replicated over all the bricks and their respective replica pairs, regardless of whether the bricks are part of the cold or hot tier.


Actual results:
===============
The volume does not get converted to a tiered volume:
neither does the volume info show the new type, nor does the dht.tier xattr get added to the bricks.
The files also get distributed over the bricks just like on a regular distribute-replicate volume.

Expected results:
================
A distribute-replicate volume should be convertible to a tiered volume and behave like one.
Currently, attach-tier only behaves like an add-brick command.


Additional info (CLI logs):
===============
[root@rhs-client44 ~]# gluster v create tier_distrep replica 2 rhs-client44:/pavanbrick1/tier_distrep/b1 rhs-client37:/pavanbrick1/tier_distrep/b1m rhs-client37:/pavanbrick1/tier_distrep/b2 rhs-client38:/pavanbrick1/tier_distrep/b2m rhs-client44:/pavanbrick1/tier_distrep/b3m rhs-client38:/pavanbrick1/tier_distrep/b3
volume create: tier_distrep: success: please start the volume to access data
[root@rhs-client44 ~]# gluster v info tier_distrep
 
Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick2: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick3: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick4: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick6: rhs-client38:/pavanbrick1/tier_distrep/b3
[root@rhs-client44 ~]# gluster v attach-tier
Usage: volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>...
[root@rhs-client44 ~]# gluster v attach-tier tier_distrep rhs-client44:/pavanbrick2/tier_distrep/hb1 rhs-client37:/pavanbrick2/tier_distrep/hb1m rhs-client37:/pavanbrick2/tier_distrep/hb2 rhs-client38:/pavanbrick2/tier_distrep/hb2m
volume add-brick: success
[root@rhs-client44 ~]# gluster v info tier_distrep
 
Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Created
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: rhs-client38:/pavanbrick2/tier_distrep/hb2m
Brick2: rhs-client37:/pavanbrick2/tier_distrep/hb2
Brick3: rhs-client37:/pavanbrick2/tier_distrep/hb1m
Brick4: rhs-client44:/pavanbrick2/tier_distrep/hb1
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick6: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick7: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick8: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick9: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick10: rhs-client38:/pavanbrick1/tier_distrep/b3
[root@rhs-client44 ~]# gluster v status tier_distrep
Volume tier_distrep is not started
[root@rhs-client44 ~]# gluster v start tier_distrep
volume start: tier_distrep: success
[root@rhs-client44 ~]# gluster v info tier_distrep
 
Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: rhs-client38:/pavanbrick2/tier_distrep/hb2m
Brick2: rhs-client37:/pavanbrick2/tier_distrep/hb2
Brick3: rhs-client37:/pavanbrick2/tier_distrep/hb1m
Brick4: rhs-client44:/pavanbrick2/tier_distrep/hb1
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick6: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick7: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick8: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick9: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick10: rhs-client38:/pavanbrick1/tier_distrep/b3
[root@rhs-client44 ~]# gluster v status tier_distrep
Status of volume: tier_distrep
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client38:/pavanbrick2/tier_distrep/hb2m    49155     0          Y       1927
Brick rhs-client37:/pavanbrick2/tier_distrep/hb2     49155     0          Y       32498
Brick rhs-client37:/pavanbrick2/tier_distrep/hb1m    49156     0          Y       32518
Brick rhs-client44:/pavanbrick2/tier_distrep/hb1     49161     0          Y       28127
Brick rhs-client44:/pavanbrick1/tier_distrep/b1      49162     0          Y       28147
Brick rhs-client37:/pavanbrick1/tier_distrep/b1m     49157     0          Y       32538
Brick rhs-client37:/pavanbrick1/tier_distrep/b2      49158     0          Y       32558
Brick rhs-client38:/pavanbrick1/tier_distrep/b2m     49156     0          Y       1950
Brick rhs-client44:/pavanbrick1/tier_distrep/b3m     49163     0          Y       28167
Brick rhs-client38:/pavanbrick1/tier_distrep/b3      49157     0          Y       1973
NFS Server on localhost                     2049      0          Y       28188
Self-heal Daemon on localhost               N/A       N/A        Y       28197
NFS Server on 10.70.36.62                   2049      0          Y       2001 
Self-heal Daemon on 10.70.36.62             N/A       N/A        Y       2013 
NFS Server on rhs-client37                  2049      0          Y       32580
Self-heal Daemon on rhs-client37            N/A       N/A        Y       32588
 
Task Status of Volume tier_distrep
------------------------------------------------------------------------------
There are no active volume tasks




#######################
Xattrs


[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b1
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-0=0x000000000000000000000000
trusted.afr.tier_distrep-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000003331f8286663f04f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b3m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-4=0x000000000000000000000000
trusted.afr.tier_distrep-client-5=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009995e878ccc7e09f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client44 ~]# 
[root@rhs-client44 ~]# 
[root@rhs-client44 ~]# 
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb1
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-6=0x000000000000000000000000
trusted.afr.tier_distrep-client-7=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003331f827
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client44 ~]# 
#################################################################################################################
[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b2m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-2=0x000000000000000000000000
trusted.afr.tier_distrep-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000006663f0509995e877
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b3
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-4=0x000000000000000000000000
trusted.afr.tier_distrep-client-5=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009995e878ccc7e09f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client38 ~]# 
[root@rhs-client38 ~]# 
[root@rhs-client38 ~]# 
[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb2m
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000ccc7e0a0ffffffff
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

####################################################################################################################
[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b1m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-0=0x000000000000000000000000
trusted.afr.tier_distrep-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000003331f8286663f04f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b2
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-2=0x000000000000000000000000
trusted.afr.tier_distrep-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000006663f0509995e877
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client37 ~]# 
[root@rhs-client37 ~]# 
[root@rhs-client37 ~]# 
[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb1m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-6=0x000000000000000000000000
trusted.afr.tier_distrep-client-7=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003331f827
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick2/tier_distrep/hb2
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000ccc7e0a0ffffffff
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client37 ~]#
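
The dht values above can be decoded (a quick sketch, assuming the usual on-disk format of trusted.glusterfs.dht: four big-endian 32-bit integers for count, type, start, and stop of the hash range) to show the problem at a glance: the hot and cold bricks together received one flat 5-subvolume DHT layout covering the full 32-bit hash space, exactly what add-brick would produce, with no tier-specific xattrs anywhere. The decode_dht_layout helper below is illustrative, not part of any GlusterFS tooling:

```python
import struct

def decode_dht_layout(hex_xattr: str):
    """Decode a trusted.glusterfs.dht value into (count, type, start, stop),
    assuming four big-endian uint32 fields."""
    raw = bytes.fromhex(hex_xattr.removeprefix("0x"))
    return struct.unpack(">IIII", raw)

# dht xattrs captured from the replica pairs above (hb* = hot bricks, b* = cold bricks)
xattrs = {
    "hb1/hb1m": "0x0000000100000000000000003331f827",
    "b1/b1m":   "0x00000001000000003331f8286663f04f",
    "b2/b2m":   "0x00000001000000006663f0509995e877",
    "b3/b3m":   "0x00000001000000009995e878ccc7e09f",
    "hb2/hb2m": "0x0000000100000000ccc7e0a0ffffffff",
}

for pair, val in xattrs.items():
    cnt, ltype, start, stop = decode_dht_layout(val)
    # Each pair holds one contiguous slice of the hash space.
    print(f"{pair}: 0x{start:08x} - 0x{stop:08x}")
```

The five ranges tile 0x00000000 through 0xffffffff with no overlap and no distinction between hot and cold bricks, consistent with attach-tier having behaved as a plain add-brick.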

Comment 1 Dan Lambright 2015-03-28 03:10:50 UTC
Fix 10029 has been written for this problem. Note that it is assigned to bug 1198618.

Comment 2 Anand Avati 2015-03-30 18:29:25 UTC
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#1) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 3 Anand Avati 2015-04-01 15:05:28 UTC
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#2) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 4 Dan Lambright 2015-04-01 17:03:43 UTC
Note that fix 10054 depends on fix 10080.

Comment 5 Anand Avati 2015-04-03 18:08:41 UTC
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#3) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 6 Anand Avati 2015-04-03 18:53:22 UTC
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#4) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 7 Anand Avati 2015-04-08 04:58:22 UTC
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#5) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 8 Anand Avati 2015-04-08 07:29:52 UTC
COMMIT: http://review.gluster.org/10054 committed in master by Kaleb KEITHLEY (kkeithle@redhat.com) 
------
commit a8260044291cb6eee44974d8c52caa9f4cfb3993
Author: Dan Lambright <dlambrig@redhat.com>
Date:   Mon Mar 30 14:27:44 2015 -0400

    glusterd: Support distributed replicated volumes on hot tier
    
    We did not set up the graph properly for hot tiers with replicated
    subvolumes. Also add check that the file has not already been moved
    by another replicated brick on the same node.
    
    Change-Id: I9adef565ab60f6774810962d912168b77a6032fa
    BUG: 1206517
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/10054
    Reviewed-by: Joseph Fernandes <josferna@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>

Comment 9 Niels de Vos 2015-05-14 17:27:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


