Bug 1207227 - Data Tiering:remove cold/hot brick seems to be behaving like or emulating detach-tier
Summary: Data Tiering: remove cold/hot brick seems to be behaving like or emulating detach-tier
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact: bugs@gluster.org
Blocks: 1229241 1260923
 
Reported: 2015-03-30 13:18 UTC by Nag Pavan Chilakam
Modified: 2018-10-08 09:53 UTC
CC: 4 users

Fixed In Version: glusterfs-4.1.4
Clones: 1229241
Last Closed: 2018-10-08 09:53:02 UTC
Regression: ---
Mount Type: ---
Documentation: ---



Description Nag Pavan Chilakam 2015-03-30 13:18:52 UTC
Description of problem:
=======================
In a tiered volume, removing a brick fails.
When I tried to remove a cold brick, the operation failed, but gluster volume status shows that it tried to remove the hot-tier bricks instead.
Is remove-brick on a tiered volume doing nothing more than emulating detach-tier? If so, that is a serious problem.

[root@interstellar glusterfs]# gluster v status vol2
Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/pavanbrick2/vol2/hb1    49155     0          Y       30169
Brick interstellar:/pavanbrick2/vol2/hb1    49155     0          Y       25245
Brick interstellar:/pavanbrick1/vol2/b1     49154     0          Y       25112
Brick transformers:/pavanbrick1/vol2/b1     49154     0          Y       30106
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.34.44                   N/A       N/A        N       N/A  
 
Task Status of Volume vol2
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : 2ef83cf9-4e2f-4bc0-8b8b-90ed3fb3fca7
Removed bricks:     
interstellar:/pavanbrick2/vol2/hb1
transformers:/pavanbrick2/vol2/hb1
Status               : failed              
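
For context, a minimal sketch of the two operations that should be distinct here, using the volume and brick names from this report (the detach-tier invocation is my assumption of the 3.7 tiering CLI, not taken from these logs):

# remove-brick should migrate data off only the named cold brick:
gluster v remove-brick vol2 transformers:/pavanbrick1/vol2/b1 start
# detach-tier is the separate operation that is meant to touch the hot tier:
gluster v detach-tier vol2 start

The failed task above, however, lists only the hot-tier bricks under "Removed bricks", which is what suggests remove-brick is taking the detach-tier path.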


Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.821.git0934432.autobuild//

[root@interstellar glusterfs]# gluster --version
glusterfs 3.7dev built on Mar 28 2015 01:05:28
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


How reproducible:
=================
Easy to reproduce


Steps to Reproduce:
==================
1. Create a gluster volume (I created a distribute type), start it, and attach a tier to it using attach-tier.
Following is the volume I created:
[root@interstellar glusterfs]# gluster v create vol2  interstellar:/pavanbrick1/vol2/b1 transformers:/pavanbrick1/vol2/b1
volume create: vol2: success: please start the volume to access data
[root@interstellar glusterfs]# gluster v start vol2
volume start: vol2: success

[root@interstellar glusterfs]# gluster v status vol2
Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick interstellar:/pavanbrick1/vol2/b1     49154     0          Y       25112
Brick transformers:/pavanbrick1/vol2/b1     49154     0          Y       30106
NFS Server on localhost                     2049      0          Y       25136
NFS Server on 10.70.34.44                   2049      0          Y       30129
 
Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks
 

[root@interstellar glusterfs]# gluster v attach-tier vol2  interstellar:/pavanbrick2/vol2/hb1 transformers:/pavanbrick2/vol2/hb1
volume add-brick: success
[root@interstellar glusterfs]# gluster v info vol2

Volume Name: vol2
Type: Tier
Volume ID: 8f6cd80e-f058-4713-a598-2bb641fa64cf
Status: Started
Number of Bricks: 4 x 1 = 4
Transport-type: tcp
Bricks:
Brick1: transformers:/pavanbrick2/vol2/hb1
Brick2: interstellar:/pavanbrick2/vol2/hb1
Brick3: interstellar:/pavanbrick1/vol2/b1
Brick4: transformers:/pavanbrick1/vol2/b1
[root@interstellar glusterfs]# gluster v status vol2
Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/pavanbrick2/vol2/hb1    49155     0          Y       30169
Brick interstellar:/pavanbrick2/vol2/hb1    49155     0          Y       25245
Brick interstellar:/pavanbrick1/vol2/b1     49154     0          Y       25112
Brick transformers:/pavanbrick1/vol2/b1     49154     0          Y       30106
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.34.44                   N/A       N/A        N       N/A  
 
Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks


2. Now I try to remove a cold brick as below:
[root@interstellar glusterfs]# gluster v remove-brick vol2 transformers:/pavanbrick1/vol2/b1 start
volume remove-brick start: success
ID: 2ef83cf9-4e2f-4bc0-8b8b-90ed3fb3fca7
[root@interstellar glusterfs]# gluster v remove-brick vol2 transformers:/pavanbrick1/vol2/b1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0               failed               0.00
                             10.70.34.44                0        0Bytes             0             0             0               failed               0.00


3. Note that the brick I tried to remove, transformers:/pavanbrick1/vol2/b1, is a cold brick, yet the volume status shows that the task tried to remove the hot bricks:
[root@interstellar glusterfs]# gluster v status vol2
Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/pavanbrick2/vol2/hb1    49155     0          Y       30169
Brick interstellar:/pavanbrick2/vol2/hb1    49155     0          Y       25245
Brick interstellar:/pavanbrick1/vol2/b1     49154     0          Y       25112
Brick transformers:/pavanbrick1/vol2/b1     49154     0          Y       30106
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.34.44                   N/A       N/A        N       N/A  
 
Task Status of Volume vol2
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : 2ef83cf9-4e2f-4bc0-8b8b-90ed3fb3fca7
Removed bricks:     
interstellar:/pavanbrick2/vol2/hb1
transformers:/pavanbrick2/vol2/hb1
Status               : failed              

Expected results:
================
Removing a cold brick should remove only that cold brick and leave the hot tier untouched; a sketch of the expected workflow follows.
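
A sketch of the expected behaviour, assuming the standard remove-brick start/status/commit sequence (the status and commit steps are illustrative, not output captured from this setup):

# start migrating data off the named cold brick only
gluster v remove-brick vol2 transformers:/pavanbrick1/vol2/b1 start
# poll until the migration for that brick reports completed
gluster v remove-brick vol2 transformers:/pavanbrick1/vol2/b1 status
# finalize the removal once migration completes
gluster v remove-brick vol2 transformers:/pavanbrick1/vol2/b1 commit

At no point should the hot-tier bricks (interstellar:/pavanbrick2/vol2/hb1, transformers:/pavanbrick2/vol2/hb1) appear under "Removed bricks".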


Additional info (CLI logs):
===========================

Comment 1 Nag Pavan Chilakam 2015-03-30 13:23:19 UTC
sosreports@rhsqe-repo:/var/www/html/sosreports/bug.1207227

Comment 2 Nag Pavan Chilakam 2015-04-20 05:43:30 UTC
As discussed with stakeholders, removing the qe_tracker_everglades tag (bz#1186580) for all add/remove-brick issues.

Comment 3 Mohammed Rafi KC 2015-04-23 11:46:08 UTC
upstream patch : http://review.gluster.org/#/c/10349/

Comment 4 Niels de Vos 2015-05-15 13:07:40 UTC
This change should not be in ON_QA; the patch posted for this bug is only available in the master branch and not in any release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.

Comment 5 Amar Tumballi 2018-10-08 09:53:02 UTC
This bug was in ON_QA status, which is not a valid status for the GlusterFS product in Bugzilla. We are closing it as CURRENTRELEASE to indicate the availability of the fix; please reopen if the issue is found again.

