+++ This bug was initially created as a clone of Bug #1206546 +++

Description of problem:
======================
Currently there is no easy way to identify hot and cold tier bricks from CLI commands such as "volume info" or "volume status". If the user wants to know which brick belongs to which tier, he/she must issue a getfattr call to inspect the tier hash value, which is cumbersome. It would be ideal to display the tiers and their respective bricks in the "volume info" command output itself.

For example, the current output of "volume info" for a distribute tiered volume is as below:

[root@rhs-client44 ~]# gluster v info tiervol10
Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 5 x 1 = 5
Transport-type: tcp
Bricks:
Brick1: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client44:/pavanbrick2/tiervol10/hb1
Brick3: rhs-client44:/pavanbrick1/tiervol10/b1
Brick4: rhs-client37:/pavanbrick1/tiervol10/b1
Brick5: rhs-client38:/pavanbrick1/tiervol10/b1

I would suggest something like the below as the output instead:

[root@rhs-client44 ~]# gluster v info tiervol10
Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 5 x 1 = 5
Transport-type: tcp
Bricks:
HOT TIER:
Brick1: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client44:/pavanbrick2/tiervol10/hb1
COLD TIER:
Brick1: rhs-client44:/pavanbrick1/tiervol10/b1
Brick2: rhs-client37:/pavanbrick1/tiervol10/b1
Brick3: rhs-client38:/pavanbrick1/tiervol10/b1

Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.803.gitf64666f.autobuild/
glusterfs 3.7dev built on Mar 26 2015 01:04:24

How reproducible:
================
Easily reproducible

Steps to Reproduce:
1. Create a distribute volume.
2. Attach a tier to the volume using attach-tier.
3. Issue a "volume info" or "volume status" command.

Actual results:
==============
No easy way to identify cold/hot tier bricks.

Expected results:
================
Enhance the "volume info" output to display hot and cold bricks separately.

--- Additional comment from Anand Avati on 2015-04-22 02:28:52 EDT ---

REVIEW: http://review.gluster.org/10328 (cli/tiering: display hot tier, and cold tier separately) posted (#1) for review on master by mohammed rafi kc (rkavunga)

--- Additional comment from Anand Avati on 2015-04-22 10:57:15 EDT ---

REVIEW: http://review.gluster.org/10328 (cli/tiering: display hot tier, and cold tier separately) posted (#2) for review on master by mohammed rafi kc (rkavunga)

--- Additional comment from Dan Lambright on 2015-04-22 12:54:43 EDT ---

--- Additional comment from Mohammed Rafi KC on 2015-04-23 07:18:13 EDT ---

upstream patch : http://review.gluster.org/#/c/10328/

--- Additional comment from Anand Avati on 2015-04-27 02:35:27 EDT ---

REVIEW: http://review.gluster.org/10328 (cli/tiering: display hot tier, and cold tier separately) posted (#3) for review on master by mohammed rafi kc (rkavunga)

--- Additional comment from Anand Avati on 2015-04-29 09:06:12 EDT ---

REVIEW: http://review.gluster.org/10328 (cli/tiering: display hot tier, and cold tier separately) posted (#4) for review on master by mohammed rafi kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-02 09:23:23 EDT ---

REVIEW: http://review.gluster.org/10328 (cli/tiering: display hot tier, and cold tier separately) posted (#5) for review on master by mohammed rafi kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-03 07:01:05 EDT ---

REVIEW: http://review.gluster.org/10328 (cli/tiering: display hot tier, and cold tier separately) posted (#6) for review on master by mohammed rafi kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-05 04:41:35 EDT ---

REVIEW: http://review.gluster.org/10328 (cli/tiering: display hot tier, and cold tier separately) posted (#7) for review on master by mohammed rafi kc (rkavunga)
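To illustrate what the proposed "HOT TIER:" / "COLD TIER:" grouping buys a consumer of the CLI output, here is a minimal sketch of a parser that splits the brick list into the two tiers. This is a hypothetical helper written for this report, not part of the gluster codebase; the section labels are assumed to match the suggested output above.

```python
def split_tier_bricks(volume_info):
    """Split brick lines from the proposed tiered 'gluster v info'
    output into hot-tier and cold-tier lists (hypothetical helper)."""
    tiers = {"hot": [], "cold": []}
    current = None
    for line in volume_info.splitlines():
        line = line.strip()
        if line == "HOT TIER:":
            current = "hot"
        elif line == "COLD TIER:":
            current = "cold"
        elif line.startswith("Brick") and current is not None:
            # "Brick1: host:/path" -> keep only the host:/path part
            tiers[current].append(line.split(": ", 1)[1])
    return tiers
```

With the suggested output, a script no longer needs a getfattr per brick to recover the tier membership.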
*** Bug 1219843 has been marked as a duplicate of this bug. ***
REVIEW: http://review.gluster.org/10675 (cli/tiering: display hot tier, and cold tier separately) posted (#2) for review on release-3.7 by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/10676 (cli/tiering: volume info should display details about tier) posted (#2) for review on release-3.7 by mohammed rafi kc (rkavunga)
COMMIT: http://review.gluster.org/10675 committed in release-3.7 by Krishnan Parthasarathi (kparthas)

------

commit 3958136d603b1dc11986b50723ab79457da45fee
Author: Mohammed Rafi KC <rkavunga>
Date: Wed Apr 22 11:17:08 2015 +0530

cli/tiering: display hot tier, and cold tier separately

Back port of http://review.gluster.org/#/c/10328

The CLI commands display brick information without a way to distinguish the hot tier from the cold tier. This patch changes all the related CLI output, without changing the corresponding XML output.

This patch changes the following outputs:

>> gluster volume info
Volume Name: patchy
Type: Tier
Volume ID: 7745d367-811a-4fe9-a500-d04e7afa94bf
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Hot Bricks:
Brick1: hostname:/home/brick21
Brick2: hostname:/home/brick20
Cold Bricks:
Brick3: hostname:/home/brick19
Brick4: hostname:/home/brick16
Brick5: hostname:/home/brick17
Brick6: hostname:/home/brick18

>> gluster volume status
Status of volume: patchy
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick hostname:/home/brick21               49152     0          Y       4690
Brick hostname:/home/brick20               49153     0          Y       4707
Cold Bricks:
Brick hostname:/home/brick19               49154     0          Y       4724
Brick hostname:/home/brick16               49155     0          Y       4741
Brick hostname:/home/brick17               49156     0          Y       4758
Brick hostname:/home/brick18               49157     0          Y       4775
NFS Server on localhost                    2049      0          Y       4793

Task Status of Volume patchy
------------------------------------------------------------------------------
There are no active volume tasks

>> gluster volume status patchy detail
Status of volume: patchy
Hot Bricks:
------------------------------------------------------------------------------
Brick            : Brick hostname:/home/brick21
TCP Port         : 49162
RDMA Port        : 0
Online           : Y
Pid              : 22677
File System      : ext4
Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options    : rw,seclabel,relatime,data=ordered
Inode Size       : 256
Disk Space Free  : 127.3GB
Total Disk Space : 165.4GB
Inode Count      : 11026432
Free Inodes      : 10998043
------------------------------------------------------------------------------
Brick            : Brick hostname:/home/brick20
TCP Port         : 49161
RDMA Port        : 0
Online           : Y
Pid              : 22660
File System      : ext4
Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options    : rw,seclabel,relatime,data=ordered
Inode Size       : 256
Disk Space Free  : 127.3GB
Total Disk Space : 165.4GB
Inode Count      : 11026432
Free Inodes      : 10998043
Cold Bricks:
------------------------------------------------------------------------------
Brick            : Brick hostname:/home/brick19
TCP Port         : 49157
RDMA Port        : 0
Online           : Y
Pid              : 22501
File System      : ext4
Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options    : rw,seclabel,relatime,data=ordered
Inode Size       : 256
Disk Space Free  : 127.3GB
Total Disk Space : 165.4GB
Inode Count      : 11026432
Free Inodes      : 10998043
------------------------------------------------------------------------------
Brick            : Brick hostname:/home/brick16
TCP Port         : 49158
RDMA Port        : 0
Online           : Y
Pid              : 22518
File System      : ext4
Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options    : rw,seclabel,relatime,data=ordered
Inode Size       : 256
Disk Space Free  : 127.3GB
Total Disk Space : 165.4GB
Inode Count      : 11026432
Free Inodes      : 10998043
------------------------------------------------------------------------------
Brick            : Brick hostname:/home/brick17
TCP Port         : 49159
RDMA Port        : 0
Online           : Y
Pid              : 22535
File System      : ext4
Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options    : rw,seclabel,relatime,data=ordered
Inode Size       : 256
Disk Space Free  : 127.3GB
Total Disk Space : 165.4GB
Inode Count      : 11026432
Free Inodes      : 10998043
------------------------------------------------------------------------------
Brick            : Brick hostname:/home/brick18
TCP Port         : 49160
RDMA Port        : 0
Online           : Y
Pid              : 22552
File System      : ext4
Device           : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options    : rw,seclabel,relatime,data=ordered
Inode Size       : 256
Disk Space Free  : 127.3GB
Total Disk Space : 165.4GB
Inode Count      : 11026432
Free Inodes      : 10998043

Change-Id: I7d584eb8782129c12876cce2ba8ffba6c0a620bd
BUG: 1219842
Signed-off-by: Mohammed Rafi KC <rkavunga>
Reviewed-on: http://review.gluster.org/10675
Reviewed-by: Dan Lambright <dlambrig>
Tested-by: Gluster Build System <jenkins.com>
Tested-by: NetBSD Build System
Reviewed-by: Krishnan Parthasarathi <kparthas>
Tested-by: Krishnan Parthasarathi <kparthas>
COMMIT: http://review.gluster.org/10676 committed in release-3.7 by Vijay Bellur (vbellur)

------

commit f19c019cef4abfdb065d72c088fabf7d3d7805ff
Author: Mohammed Rafi KC <rkavunga>
Date: Wed Apr 22 20:07:11 2015 +0530

cli/tiering: volume info should display details about tier

Back port of http://review.gluster.org/#/c/10339/

>> gluster volume info patchy
Volume Name: patchy
Type: Tier
Volume ID: 8bf1a1ca-6417-484f-821f-18973a7502a8
Status: Created
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: hostname:/home/brick30
Brick2: hostname:/home/brick31
Cold Bricks:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick3: hostname:/home/brick20
Brick4: hostname:/home/brick21
Brick5: hostname:/home/brick23
Brick6: hostname:/home/brick24
Brick7: hostname:/home/brick25
Brick8: hostname:/home/brick26

Change-Id: I7b9025af81263ebecd641b4b6897b20db8b67195
BUG: 1219842
Signed-off-by: Mohammed Rafi KC <rkavunga>
Reviewed-on: http://review.gluster.org/10676
Reviewed-by: Dan Lambright <dlambrig>
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins.com>
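The per-tier "Number of Bricks" lines in the commit above follow the CLI's usual convention: "subvols x replica = total" for replicated tiers and "subvols x (data + redundancy) = total" for dispersed tiers. A minimal sketch of that formatting logic, written here for illustration only (not the actual CLI code, and the parameter names are assumptions):

```python
def brick_count_str(subvols, replica=1, disperse=0, redundancy=0):
    """Format a 'Number of Bricks' value the way the gluster CLI
    displays it (illustrative sketch, not the real implementation)."""
    if disperse:
        # Dispersed tier: each subvolume holds data + redundancy bricks.
        data = disperse - redundancy
        return "%d x (%d + %d) = %d" % (subvols, data, redundancy,
                                        subvols * disperse)
    # Replicated (or plain distributed, replica=1) tier.
    return "%d x %d = %d" % (subvols, replica, subvols * replica)
```

For the volume above this yields "1 x 2 = 2" for the replicate hot tier and "1 x (4 + 2) = 6" for the disperse cold tier.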
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user