Bug 1258340 - Data Tiering: Volume task status showing as remove brick when detach tier is triggered
Summary: Data Tiering: Volume task status showing as remove brick when detach tier is triggered
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: low
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1260923 1261837
 
Reported: 2015-08-31 06:38 UTC by Nag Pavan Chilakam
Modified: 2015-10-30 17:32 UTC
CC: 4 users

Fixed In Version: glusterfs-3.7.5
Clone Of:
Cloned As: 1261837
Environment:
Last Closed: 2015-10-14 10:28:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nag Pavan Chilakam 2015-08-31 06:38:16 UTC
Description of problem:
========================
When a detach-tier start is triggered on a tiered volume, the task in the volume status output is shown as "Remove brick" instead of "Detach tier". This is ambiguous and misleading to the user.


Version-Release number of selected component (if applicable):
===========================================================
 
[root@nag-manual-node1 ~]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@nag-manual-node1 ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch
glusterfs-client-xlators-3.7.3-0.82.git6c4096f.el6.x86_64




Steps to Reproduce:
===================
1. Create a tiered volume and start it.
2. Issue a detach-tier start.
3. Check the volume status (a command sketch follows).
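
For reference, a minimal reproduction sketch using the attach-tier/detach-tier CLI of the 3.7 branch; the volume name, host names, and brick paths are illustrative, not the reporter's exact setup:

# create and start a plain distributed volume
gluster volume create xyz tettnang:/rhs/brick1/xyz yarrow:/rhs/brick1/xyz
gluster volume start xyz
# attach a hot tier, then trigger the detach
gluster volume attach-tier xyz tettnang:/rhs/brick7/xyz yarrow:/rhs/brick7/xyz
gluster volume detach-tier xyz start
# the Task Status section of the output now mislabels the task
gluster volume status xyz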

Actual results:
==============
The task is shown as "Remove brick" in the Task Status section rather than "Detach tier", as below:
[root@tettnang glusterfs]# gluster v status xyz
Status of volume: xyz
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick yarrow:/rhs/brick7/xyz                49163     0          Y       7147 
Brick tettnang:/rhs/brick7/xyz              49161     0          Y       21091
Cold Bricks:
Brick tettnang:/rhs/brick1/xyz              49159     0          Y       20879
Brick yarrow:/rhs/brick1/xyz                49161     0          Y       7075 
Brick tettnang:/rhs/brick2/xyz              49160     0          Y       20901
Brick yarrow:/rhs/brick2/xyz                49162     0          Y       7093 
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on zod                           N/A       N/A        N       N/A  
NFS Server on yarrow                        N/A       N/A        N       N/A  
 
Task Status of Volume xyz
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : ddfd6e52-d789-4d43-98cc-8378c9db5aa4
Removed bricks:     
tettnang:/rhs/brick7/xyz
yarrow:/rhs/brick7/xyz
Status               : completed           



Expected results:
=================
The task should be reported as "Detach tier".
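
For illustration only, the Task Status block would then read along these lines (same layout as the actual output above; field labels other than "Task" are assumed unchanged by the fix):

Task                 : Detach tier
ID                   : ddfd6e52-d789-4d43-98cc-8378c9db5aa4
Removed bricks:
tettnang:/rhs/brick7/xyz
yarrow:/rhs/brick7/xyz
Status               : completed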

Comment 1 Nag Pavan Chilakam 2015-08-31 06:38:53 UTC
Marking priority as urgent, given that the mislabeled task is immediately visible to the user.

Comment 2 Mohammed Rafi KC 2015-09-01 12:31:53 UTC
*** Bug 1258441 has been marked as a duplicate of this bug. ***

Comment 3 hari gowtham 2015-09-29 06:42:40 UTC
The fix for this bug also addresses bug 1258441 (marked as a duplicate in comment 2).

Comment 5 Pranith Kumar K 2015-10-14 10:37:59 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

