Bug 1229242 - data tiering: force remove-brick is detaching-tier
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.1
Hardware: x86_64 Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Mohammed Rafi KC
QA Contact: nchilaka
Keywords: Triaged
Depends On: 1207238
Blocks: 1202842
Reported: 2015-06-08 06:22 EDT by nchilaka
Modified: 2016-09-17 11:40 EDT
CC: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1207238
Environment:
Last Closed: 2015-07-29 00:58:35 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 04:26:26 EDT

Description nchilaka 2015-06-08 06:22:12 EDT
+++ This bug was initially created as a clone of Bug #1207238 +++

Description of problem:
=======================
In a tiered volume, removing a brick fails. If we then try remove-brick with the force option, on either a cold-tier or a hot-tier brick, it simply detaches the tier instead of removing the named brick. This is a serious bug.


Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.821.git0934432.autobuild//

[root@interstellar glusterfs]# gluster --version
glusterfs 3.7dev built on Mar 28 2015 01:05:28
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


How reproducible:
=================
Easy to reproduce


Steps to Reproduce:
==================
1. Create a Gluster volume (a plain distribute volume in this case), start it, and attach a tier to it using attach-tier.

2. Run remove-brick with the force option as below:
gluster v remove-brick voly transformers:/pavanbrick2/voly/hb1 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success

3. This forcefully detaches the tier instead of removing the named brick (a consolidated reproduction sketch follows these steps).
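
For convenience, the steps above can be run end to end as the sketch below. The host names (interstellar, transformers) and brick paths are taken from the reporter's logs and are placeholders for your own nodes and bricks:

# minimal reproduction sketch (placeholder hosts and brick paths; adjust to your setup)
gluster volume create voly interstellar:/pavanbrick1/voly/b1 transformers:/pavanbrick1/voly/b1
gluster volume start voly
gluster volume attach-tier voly interstellar:/pavanbrick2/voly/hb1 transformers:/pavanbrick2/voly/hb1
# the bug: a forced remove-brick of a single hot-tier brick detaches the whole tier
gluster volume remove-brick voly transformers:/pavanbrick2/voly/hb1 force
gluster volume info voly    # hot-tier bricks are gone, although Type may still show "Tier"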

Expected results:
================
remove-brick should not detach the tier; it should either remove only the named brick or be rejected on a tiered volume.
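
For contrast, dropping the hot tier is expected to go through the dedicated detach-tier command rather than remove-brick. A minimal sketch, assuming the detach-tier CLI of this release line (exact subcommands may differ between builds; verify with gluster volume help):

# intended way to remove the hot tier; remove-brick should leave the tier untouched
gluster volume detach-tier voly start
gluster volume detach-tier voly commit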

Additional info (CLI logs):
=========================
[root@interstellar glusterfs]# gluster v create voly interstellar:/pavanbrick1/voly/b1 transformers:/pavanbrick1/voly/b1
volume create: voly: success: please start the volume to access data
[root@interstellar glusterfs]# gluster v start voly
volume start: voly: success
[root@interstellar glusterfs]# gluster v info voly
 
Volume Name: voly
Type: Distribute
Volume ID: 22412494-df85-4458-a80f-0e4c0cc76572
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: interstellar:/pavanbrick1/voly/b1
Brick2: transformers:/pavanbrick1/voly/b1
[root@interstellar glusterfs]# gluster v status voly
Status of volume: voly
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick interstellar:/pavanbrick1/voly/b1     49158     0          Y       27482
Brick transformers:/pavanbrick1/voly/b1     49158     0          Y       31954
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.34.44                   N/A       N/A        N       N/A  
 
Task Status of Volume voly
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@interstellar glusterfs]# gluster v attach-tier voly interstellar:/pavanbrick2/voly/hb1 transformers:/pavanbrick2/voly/hb1
volume add-brick: success
[root@interstellar glusterfs]# gluster v info voly
 
Volume Name: voly
Type: Tier
Volume ID: 22412494-df85-4458-a80f-0e4c0cc76572
Status: Started
Number of Bricks: 4 x 1 = 4
Transport-type: tcp
Bricks:
Brick1: transformers:/pavanbrick2/voly/hb1
Brick2: interstellar:/pavanbrick2/voly/hb1
Brick3: interstellar:/pavanbrick1/voly/b1
Brick4: transformers:/pavanbrick1/voly/b1
[root@interstellar glusterfs]# gluster v status voly
Status of volume: voly
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/pavanbrick2/voly/hb1    49159     0          Y       32013
Brick interstellar:/pavanbrick2/voly/hb1    49159     0          Y       27567
Brick interstellar:/pavanbrick1/voly/b1     49158     0          Y       27482
Brick transformers:/pavanbrick1/voly/b1     49158     0          Y       31954
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.34.44                   N/A       N/A        N       N/A  
 
Task Status of Volume voly
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@interstellar glusterfs]# gluster v remove-brick voly transformers:/pavanbrick2/voly/hb1 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
[root@interstellar glusterfs]# gluster v status voly
Status of volume: voly
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick interstellar:/pavanbrick1/voly/b1     49158     0          Y       27482
Brick transformers:/pavanbrick1/voly/b1     49158     0          Y       31954
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.34.44                   N/A       N/A        N       N/A  
 
Task Status of Volume voly
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@interstellar glusterfs]# gluster v info voly
 
Volume Name: voly
Type: Tier
Volume ID: 22412494-df85-4458-a80f-0e4c0cc76572
Status: Started
Number of Bricks: 2 x 1 = 2
Transport-type: tcp
Bricks:
Brick1: interstellar:/pavanbrick1/voly/b1
Brick2: transformers:/pavanbrick1/voly/b1
[root@interstellar glusterfs]#

--- Additional comment from nchilaka on 2015-03-30 09:39:35 EDT ---

sosreports@rhsqe-repo:/home/repo/sosreports/1207238

--- Additional comment from nchilaka on 2015-03-30 09:40:21 EDT ---

Also note that even though the remove-brick acts like a detach-tier, the volume type still shows as "Tier" in volume info.

--- Additional comment from nchilaka on 2015-04-20 01:43:42 EDT ---

As discussed with stakeholders, removing the qe_tracker_everglades tag (bz#1186580) from all add/remove-brick issues.

--- Additional comment from Mohammed Rafi KC on 2015-04-23 07:45:18 EDT ---

Upstream patch: http://review.gluster.org/#/c/10349/

--- Additional comment from Niels de Vos on 2015-05-15 09:07:43 EDT ---

This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in any release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.
Comment 2 Triveni Rao 2015-06-18 06:38:30 EDT
https://bugzilla.redhat.com/show_bug.cgi?id=1229242 - data tiering: force remove-brick is detaching-tier



[root@rhsqa14-vm3 ~]# gluster v create test 10.70.47.159:/rhs/brick1/t0 10.70.46.2:/rhs/brick1/t0 10.70.47.159:/rhs/brick2/t0 10.70.46.2:/rhs/brick2/t0
volume create: test: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v start test
volume start: test: success
[root@rhsqa14-vm3 ~]# gluster v info
 
Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: test
Type: Distribute
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/t0
Brick2: 10.70.46.2:/rhs/brick1/t0
Brick3: 10.70.47.159:/rhs/brick2/t0
Brick4: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v attach-tier test 10.70.47.159:/rhs/brick3/t0 10.70.46.2:/rhs/brick3/t0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: test: success: Rebalance on test has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: af7dd4b2-b4b7-4d72-9e12-847e3c231eea

[root@rhsqa14-vm3 ~]# gluster v info test
 
Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 start
volume remove-brick start: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# 
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# 


[root@rhsqa14-vm3 ~]# gluster  v info test
 
Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# 


Tried with single-brick removal also:

[root@rhsqa14-vm3 ~]# gluster v info test
 
Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 start
volume remove-brick start: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y 
volume remove-brick commit force: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# 

[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root@rhsqa14-vm3 ~]# 


This bug is verified: with I/O running, remove-brick was attempted on the tiered volume and was not successful (the operation is now rejected, as shown above).
Comment 4 errata-xmlrpc 2015-07-29 00:58:35 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
