Bug 1261837 - Data Tiering: Volume task status showing as remove brick when detach tier is triggered
Summary: Data Tiering:Volume task status showing as remove brick when detach tier is t...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: low
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1258340
Blocks: 1260923
 
Reported: 2015-09-10 09:44 UTC by hari gowtham
Modified: 2016-06-16 13:36 UTC
CC: 5 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1258340
Environment:
Last Closed: 2016-06-16 13:36:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description hari gowtham 2015-09-10 09:44:43 UTC
+++ This bug was initially created as a clone of Bug #1258340 +++

Description of problem:
========================
When a detach tier start is triggered on a tiered volume, the task is shown in the volume status output as "Remove brick" instead of "Detach tier". This is ambiguous.


Version-Release number of selected component (if applicable):
===========================================================
 
[root@nag-manual-node1 ~]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@nag-manual-node1 ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch
glusterfs-client-xlators-3.7.3-0.82.git6c4096f.el6.x86_64




Steps to Reproduce:
===================
1. Create a tiered volume and start it.
2. Issue a detach tier start.
3. Check the volume status.
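
The steps above can be sketched as a CLI session. The volume name and brick paths below are placeholders, and this assumes the 3.7-era attach-tier/detach-tier CLI syntax:

```shell
# Create and start a plain volume (brick paths are example placeholders)
gluster volume create xyz tettnang:/rhs/brick1/xyz yarrow:/rhs/brick1/xyz
gluster volume start xyz

# Attach a hot tier, turning xyz into a tiered volume
gluster volume attach-tier xyz tettnang:/rhs/brick7/xyz yarrow:/rhs/brick7/xyz

# Begin detaching the hot tier
gluster volume detach-tier xyz start

# Inspect the task list -- the detach task is mislabelled "Remove brick"
gluster volume status xyz
```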

Actual results:
==============
The task is shown as "Remove brick" rather than "Detach tier", as below:
[root@tettnang glusterfs]# gluster v status xyz
Status of volume: xyz
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick yarrow:/rhs/brick7/xyz                49163     0          Y       7147 
Brick tettnang:/rhs/brick7/xyz              49161     0          Y       21091
Cold Bricks:
Brick tettnang:/rhs/brick1/xyz              49159     0          Y       20879
Brick yarrow:/rhs/brick1/xyz                49161     0          Y       7075 
Brick tettnang:/rhs/brick2/xyz              49160     0          Y       20901
Brick yarrow:/rhs/brick2/xyz                49162     0          Y       7093 
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on zod                           N/A       N/A        N       N/A  
NFS Server on yarrow                        N/A       N/A        N       N/A  
 
Task Status of Volume xyz
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : ddfd6e52-d789-4d43-98cc-8378c9db5aa4
Removed bricks:     
tettnang:/rhs/brick7/xyz
yarrow:/rhs/brick7/xyz
Status               : completed           



Expected results:
=================
The task should be shown as "Detach tier".

--- Additional comment from nchilaka on 2015-08-31 02:38:53 EDT ---

Marking priority as urgent, given that the issue is very obvious and visible to the user.

--- Additional comment from Mohammed Rafi KC on 2015-09-01 08:31:53 EDT ---

Comment 1 Vijay Bellur 2015-09-10 09:45:37 UTC
REVIEW: http://review.gluster.org/12149 (Tiering: change in status for remove brick and rebalance) posted (#1) for review on master by hari gowtham (hari.gowtham005)

Comment 2 Vijay Bellur 2015-09-11 07:26:12 UTC
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#1) for review on master by hari gowtham (hari.gowtham005)

Comment 3 Vijay Bellur 2015-09-12 14:17:50 UTC
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#2) for review on master by Dan Lambright (dlambrig)

Comment 4 Vijay Bellur 2015-09-15 10:22:50 UTC
REVIEW: http://review.gluster.org/12149 (Tiering: change in status for remove brick and rebalance) posted (#2) for review on master by hari gowtham (hari.gowtham005)

Comment 5 Vijay Bellur 2015-09-16 06:14:15 UTC
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#3) for review on master by hari gowtham (hari.gowtham005)

Comment 6 Vijay Bellur 2015-09-18 05:53:38 UTC
REVIEW: http://review.gluster.org/12149 (Tiering: change in status for remove brick and rebalance) posted (#3) for review on master by hari gowtham (hari.gowtham005)

Comment 7 Vijay Bellur 2015-09-18 06:08:48 UTC
REVIEW: http://review.gluster.org/12158 (Tier/cli: tier related information in volume info) posted (#4) for review on master by hari gowtham (hari.gowtham005)

Comment 8 hari gowtham 2015-09-29 06:42:12 UTC
The fix posted for this bug also addresses another bug: 1258441.

Comment 9 Niels de Vos 2016-06-16 13:36:20 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

