+++ This bug was initially created as a clone of Bug #1258340 +++

Description of problem:
========================
When a "detach tier start" is triggered on a tiered volume, the task in the volume status output is shown as "Remove brick" instead of "Detach tier". This is ambiguous.

Version-Release number of selected component (if applicable):
=============================================================
[root@nag-manual-node1 ~]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@nag-manual-node1 ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch
glusterfs-client-xlators-3.7.3-0.82.git6c4096f.el6.x86_64

Steps to Reproduce:
===================
1. Create a tiered volume and start it.
2. Issue "detach tier start".
3. Check the volume status.
Actual results:
===============
The task is shown as "Remove brick" rather than "Detach tier", as below:

[root@tettnang glusterfs]# gluster v status xyz
Status of volume: xyz
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick yarrow:/rhs/brick7/xyz                49163     0          Y       7147
Brick tettnang:/rhs/brick7/xyz              49161     0          Y       21091
Cold Bricks:
Brick tettnang:/rhs/brick1/xyz              49159     0          Y       20879
Brick yarrow:/rhs/brick1/xyz                49161     0          Y       7075
Brick tettnang:/rhs/brick2/xyz              49160     0          Y       20901
Brick yarrow:/rhs/brick2/xyz                49162     0          Y       7093
NFS Server on localhost                     N/A       N/A        N       N/A
NFS Server on zod                           N/A       N/A        N       N/A
NFS Server on yarrow                        N/A       N/A        N       N/A

Task Status of Volume xyz
------------------------------------------------------------------------------
Task                 : Remove brick
ID                   : ddfd6e52-d789-4d43-98cc-8378c9db5aa4
Removed bricks:
tettnang:/rhs/brick7/xyz
yarrow:/rhs/brick7/xyz
Status               : completed

Expected results:
=================
The task should be reported as "Detach tier".

--- Additional comment from nchilaka on 2015-08-31 02:38:53 EDT ---

Marking priority as urgent, given that the mislabeling is very obvious and visible to the user.

--- Additional comment from Mohammed Rafi KC on 2015-09-01 08:31:53 EDT ---
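One place the mislabeled task type surfaces, besides the plain-text output above, is the CLI's machine-readable output (`gluster volume status <vol> --xml`). The sketch below parses a sample task element and extracts its type field, which is what the fix changes from "Remove brick" to "Detach tier" for tiered volumes. The embedded XML is an assumption modeled on the plain-text output above, not captured from a live cluster, and `task_types` is a hypothetical helper.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample modeled on `gluster volume status xyz --xml`;
# the exact element layout is an assumption, not live cluster output.
SAMPLE = """<cliOutput>
  <volStatus>
    <volumes>
      <volume>
        <volName>xyz</volName>
        <tasks>
          <task>
            <type>Remove brick</type>
            <id>ddfd6e52-d789-4d43-98cc-8378c9db5aa4</id>
            <statusStr>completed</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>"""

def task_types(xml_text):
    """Return the <type> of every task reported in the status output."""
    root = ET.fromstring(xml_text)
    return [t.findtext("type") for t in root.iter("task")]

# The buggy CLI labels a detach-tier task as a remove-brick task:
print(task_types(SAMPLE))  # -> ['Remove brick']
```

With the fix from review 12149 applied, a detach-tier task on a tiered volume would be expected to carry the type "Detach tier" in both the plain-text and XML output.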
REVIEW: http://review.gluster.org/12149 (Tiering: change in status for remove brick and rebalance) posted (#1) for review on master by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#1) for review on master by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#2) for review on master by Dan Lambright (dlambrig)
REVIEW: http://review.gluster.org/12149 (Tiering: change in status for remove brick and rebalance) posted (#2) for review on master by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#3) for review on master by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/12149 (Tiering: change in status for remove brick and rebalance) posted (#3) for review on master by hari gowtham (hari.gowtham005)
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#4) for review on master by hari gowtham (hari.gowtham005)
The fix for this bug also addresses another bug: 1258441
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user