Description of problem:
=======================
When attaching a tier to a distribute-replicate volume, the attach command succeeds but the volume does not get converted to a tiered volume. Given that dist-rep volumes are the most commonly deployed volume type, tiering must be supported on dist-rep volumes too.

Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.803.gitf64666f.autobuild/
glusterfs 3.7dev built on Mar 26 2015 01:04:24

How reproducible:
=================
Easy to reproduce

Steps to Reproduce:
===================
1. Create a gluster volume of distribute-replicate type and start the volume.
2. Attach a tier to the volume using attach-tier.
3. Check the volume type. It still shows as dist-rep instead of a tiered volume.
4. Check the xattrs of the bricks: they do not have any tier attributes, even after mounting.
5. After mounting the volume, write some files to it. All the files simply get distributed and replicated over all the bricks and their respective replica pairs, regardless of whether the bricks are part of the cold or hot tier.

Actual results:
===============
The volume does not get converted to a tiered volume: neither does the volume info change, nor does the tier attribute (dht.tier) get added. Files get dispersed just as on a regular dist-rep volume.

Expected results:
================
A dist-rep volume should be convertible to a tiered volume and should behave like one.
But currently attach-tier only works like an add-brick command.

Additional info (CLI logs):
===========================
[root@rhs-client44 ~]# gluster v create tier_distrep replica 2 rhs-client44:/pavanbrick1/tier_distrep/b1 rhs-client37:/pavanbrick1/tier_distrep/b1m rhs-client37:/pavanbrick1/tier_distrep/b2 rhs-client38:/pavanbrick1/tier_distrep/b2m rhs-client44:/pavanbrick1/tier_distrep/b3m rhs-client38:/pavanbrick1/tier_distrep/b3
volume create: tier_distrep: success: please start the volume to access data

[root@rhs-client44 ~]# gluster v info tier_distrep
Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick2: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick3: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick4: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick6: rhs-client38:/pavanbrick1/tier_distrep/b3

[root@rhs-client44 ~]# gluster v attach-tier
Usage: volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>...
[root@rhs-client44 ~]# gluster v attach-tier tier_distrep rhs-client44:/pavanbrick2/tier_distrep/hb1 rhs-client37:/pavanbrick2/tier_distrep/hb1m rhs-client37:/pavanbrick2/tier_distrep/hb2 rhs-client38:/pavanbrick2/tier_distrep/hb2m
volume add-brick: success

[root@rhs-client44 ~]# gluster v info tier_distrep
Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Created
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: rhs-client38:/pavanbrick2/tier_distrep/hb2m
Brick2: rhs-client37:/pavanbrick2/tier_distrep/hb2
Brick3: rhs-client37:/pavanbrick2/tier_distrep/hb1m
Brick4: rhs-client44:/pavanbrick2/tier_distrep/hb1
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick6: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick7: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick8: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick9: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick10: rhs-client38:/pavanbrick1/tier_distrep/b3

[root@rhs-client44 ~]# gluster v status tier_distrep
Volume tier_distrep is not started

[root@rhs-client44 ~]# gluster v start tier_distrep
volume start: tier_distrep: success

[root@rhs-client44 ~]# gluster v info tier_distrep
Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: rhs-client38:/pavanbrick2/tier_distrep/hb2m
Brick2: rhs-client37:/pavanbrick2/tier_distrep/hb2
Brick3: rhs-client37:/pavanbrick2/tier_distrep/hb1m
Brick4: rhs-client44:/pavanbrick2/tier_distrep/hb1
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick6: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick7: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick8: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick9: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick10: rhs-client38:/pavanbrick1/tier_distrep/b3

[root@rhs-client44 ~]# gluster v status tier_distrep
Status of volume: tier_distrep
Gluster process                                     TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client38:/pavanbrick2/tier_distrep/hb2m   49155     0          Y       1927
Brick rhs-client37:/pavanbrick2/tier_distrep/hb2    49155     0          Y       32498
Brick rhs-client37:/pavanbrick2/tier_distrep/hb1m   49156     0          Y       32518
Brick rhs-client44:/pavanbrick2/tier_distrep/hb1    49161     0          Y       28127
Brick rhs-client44:/pavanbrick1/tier_distrep/b1     49162     0          Y       28147
Brick rhs-client37:/pavanbrick1/tier_distrep/b1m    49157     0          Y       32538
Brick rhs-client37:/pavanbrick1/tier_distrep/b2     49158     0          Y       32558
Brick rhs-client38:/pavanbrick1/tier_distrep/b2m    49156     0          Y       1950
Brick rhs-client44:/pavanbrick1/tier_distrep/b3m    49163     0          Y       28167
Brick rhs-client38:/pavanbrick1/tier_distrep/b3     49157     0          Y       1973
NFS Server on localhost                             2049      0          Y       28188
Self-heal Daemon on localhost                       N/A       N/A        Y       28197
NFS Server on 10.70.36.62                           2049      0          Y       2001
Self-heal Daemon on 10.70.36.62                     N/A       N/A        Y       2013
NFS Server on rhs-client37                          2049      0          Y       32580
Self-heal Daemon on rhs-client37                    N/A       N/A        Y       32588

Task Status of Volume tier_distrep
------------------------------------------------------------------------------
There are no active volume tasks

#######################
Xattrs:
=======
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b1
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-0=0x000000000000000000000000
trusted.afr.tier_distrep-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000003331f8286663f04f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b3m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-4=0x000000000000000000000000
trusted.afr.tier_distrep-client-5=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009995e878ccc7e09f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb1
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-6=0x000000000000000000000000
trusted.afr.tier_distrep-client-7=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003331f827
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

#################################################################################################################
[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b2m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-2=0x000000000000000000000000
trusted.afr.tier_distrep-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000006663f0509995e877
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b3
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-4=0x000000000000000000000000
trusted.afr.tier_distrep-client-5=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009995e878ccc7e09f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb2m
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000ccc7e0a0ffffffff
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

####################################################################################################################
[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b1m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-0=0x000000000000000000000000
trusted.afr.tier_distrep-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000003331f8286663f04f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b2
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-2=0x000000000000000000000000
trusted.afr.tier_distrep-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000006663f0509995e877
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb1m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-6=0x000000000000000000000000
trusted.afr.tier_distrep-client-7=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003331f827
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick2/tier_distrep/hb2
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000ccc7e0a0ffffffff
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26
[root@rhs-client37 ~]#
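The trusted.glusterfs.dht values above can be decoded to show what went wrong: every brick carries a plain DHT layout range and no tier attribute. A minimal Python sketch of the decoding follows; the field layout (four big-endian 32-bit words: count, hash type, range start, range end) is an assumption based on DHT's on-disk layout format, not something stated in this report.

```python
import struct

def decode_dht(hexval: str):
    """Decode a trusted.glusterfs.dht value as dumped by `getfattr -e hex`.

    Assumed field layout: count, hash type, range start, range end,
    each a big-endian uint32. Returns (start, end) of the hash range.
    """
    raw = bytes.fromhex(hexval.removeprefix("0x"))
    cnt, hash_type, start, stop = struct.unpack(">4I", raw)
    return start, stop

# One distinct dht value per replica pair, taken from the dumps above.
layouts = sorted(
    decode_dht(v) for v in [
        "0x0000000100000000000000003331f827",  # hb1 / hb1m (meant to be hot)
        "0x00000001000000003331f8286663f04f",  # b1 / b1m
        "0x00000001000000006663f0509995e877",  # b2 / b2m
        "0x00000001000000009995e878ccc7e09f",  # b3m / b3
        "0x0000000100000000ccc7e0a0ffffffff",  # hb2 / hb2m (meant to be hot)
    ]
)

# The five ranges tile the whole 32-bit hash space contiguously:
# the "hot" bricks became two more ordinary DHT subvolumes,
# i.e. attach-tier behaved exactly like add-brick.
full = layouts[0][0] == 0 and layouts[-1][1] == 0xFFFFFFFF
contiguous = all(layouts[i][1] + 1 == layouts[i + 1][0] for i in range(4))
print(full, contiguous)
```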
Fix 10029 has been written for this problem. Note that it is assigned to bug 1198618.
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#1) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#2) for review on master by Dan Lambright (dlambrig)
Note fix 10054 depends on fix 10080.
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#3) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#4) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/10054 (glusterd: Support distributed replicated volumes on hot tier) posted (#5) for review on master by Dan Lambright (dlambrig)
COMMIT: http://review.gluster.org/10054 committed in master by Kaleb KEITHLEY (kkeithle)
------
commit a8260044291cb6eee44974d8c52caa9f4cfb3993
Author: Dan Lambright <dlambrig>
Date:   Mon Mar 30 14:27:44 2015 -0400

    glusterd: Support distributed replicated volumes on hot tier

    We did not set up the graph properly for hot tiers with replicated
    subvolumes. Also add check that the file has not already been moved
    by another replicated brick on the same node.

    Change-Id: I9adef565ab60f6774810962d912168b77a6032fa
    BUG: 1206517
    Signed-off-by: Dan Lambright <dlambrig>
    Reviewed-on: http://review.gluster.org/10054
    Reviewed-by: Joseph Fernandes <josferna>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
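For illustration of what "setting up the graph properly" means here, the intended shape of a tiered volume can be sketched as a tier translator with separate hot and cold subtrees, each grouping bricks into replica pairs before distributing over them. This is a hypothetical simplification (toy names and structures), not glusterd's actual volgen code.

```python
def replica_pairs(bricks, replica=2):
    """Group a flat, ordered brick list into replica sets of `replica` bricks."""
    return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]

def build_tier_graph(cold_bricks, hot_bricks, replica=2):
    """Toy tiered-volume graph: tier xlator over a hot and a cold
    distribute subtree, each built from replicated subvolumes."""
    def dist_over_replica(bricks):
        return {
            "type": "cluster/distribute",
            "subvols": [{"type": "cluster/replicate", "bricks": pair}
                        for pair in replica_pairs(bricks, replica)],
        }
    return {
        "type": "cluster/tier",
        "hot": dist_over_replica(hot_bricks),
        "cold": dist_over_replica(cold_bricks),
    }

# Brick names shortened from the report's volume.
graph = build_tier_graph(
    cold_bricks=["b1", "b1m", "b2", "b2m", "b3m", "b3"],
    hot_bricks=["hb1", "hb1m", "hb2", "hb2m"],
)
# Two replicated hot subvols and three cold ones, instead of the
# single flat 5 x 2 distribute seen in the buggy output above.
print(len(graph["hot"]["subvols"]), len(graph["cold"]["subvols"]))
```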
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user