Bug 1258340 - Data Tiering: Volume task status showing as remove brick when detach tier is triggered
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: low
Target Milestone: ---
Target Release: ---
Assigned To: hari gowtham
QA Contact: bugs@gluster.org
Keywords: Triaged
Depends On:
Blocks: 1260923 1261837
Reported: 2015-08-31 02:38 EDT by nchilaka
Modified: 2015-10-30 13:32 EDT
CC: 4 users

See Also:
Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Cloned to: 1261837
Environment:
Last Closed: 2015-10-14 06:28:44 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description nchilaka 2015-08-31 02:38:16 EDT
Description of problem:
========================
When we trigger a detach tier start on a tiered volume, the task in the volume status output shows as "Remove brick" instead of "Detach tier". This is ambiguous.
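For reference, the operation is triggered with the detach-tier command; a minimal sketch, assuming a tiered volume named "xyz" and the 3.7-series CLI syntax:

# start detaching the hot tier from volume "xyz" (name is illustrative)
gluster volume detach-tier xyz start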


Version-Release number of selected component (if applicable):
===========================================================
 
[root@nag-manual-node1 ~]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@nag-manual-node1 ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch
glusterfs-client-xlators-3.7.3-0.82.git6c4096f.el6.x86_64




Steps to Reproduce:
===================
1. Create a tiered volume and start it.
2. Issue a detach tier start.
3. Check the volume status (a command sketch follows below).
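A fuller reproduction sketch, assuming two servers and reusing the host names and brick paths from the output below; attach-tier/detach-tier syntax is per the 3.7-series CLI:

# create and start a 2x2 distributed-replicate volume (the cold tier)
gluster volume create xyz replica 2 tettnang:/rhs/brick1/xyz yarrow:/rhs/brick1/xyz tettnang:/rhs/brick2/xyz yarrow:/rhs/brick2/xyz
gluster volume start xyz
# attach a replicated hot tier
gluster volume attach-tier xyz replica 2 tettnang:/rhs/brick7/xyz yarrow:/rhs/brick7/xyz
# start the detach, then check the task status
gluster volume detach-tier xyz start
gluster volume status xyz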

Actual results:
==============
The task shows as "Remove brick" in progress rather than "Detach tier", as below:
[root@tettnang glusterfs]# gluster v status xyz
Status of volume: xyz
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick yarrow:/rhs/brick7/xyz                49163     0          Y       7147 
Brick tettnang:/rhs/brick7/xyz              49161     0          Y       21091
Cold Bricks:
Brick tettnang:/rhs/brick1/xyz              49159     0          Y       20879
Brick yarrow:/rhs/brick1/xyz                49161     0          Y       7075 
Brick tettnang:/rhs/brick2/xyz              49160     0          Y       20901
Brick yarrow:/rhs/brick2/xyz                49162     0          Y       7093 
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on zod                           N/A       N/A        N       N/A  
NFS Server on yarrow                        N/A       N/A        N       N/A  
 
Task Status of Volume xyz
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : ddfd6e52-d789-4d43-98cc-8378c9db5aa4
Removed bricks:     
tettnang:/rhs/brick7/xyz
yarrow:/rhs/brick7/xyz
Status               : completed           



Expected results:
=================
The task should be reported as "Detach tier".
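For illustration, the task block from the status output above would then read along these lines (same ID, only the task label changes):

Task Status of Volume xyz
------------------------------------------------------------------------------
Task                 : Detach tier
ID                   : ddfd6e52-d789-4d43-98cc-8378c9db5aa4
Removed bricks:
tettnang:/rhs/brick7/xyz
yarrow:/rhs/brick7/xyz
Status               : completed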
Comment 1 nchilaka 2015-08-31 02:38:53 EDT
Marking priority as urgent, given that this is very obvious and visible to the user.
Comment 2 Mohammed Rafi KC 2015-09-01 08:31:53 EDT
*** Bug 1258441 has been marked as a duplicate of this bug. ***
Comment 3 hari gowtham 2015-09-29 02:42:40 EDT
The fix for this bug also resolves bug 1258441.
Comment 5 Pranith Kumar K 2015-10-14 06:37:59 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
