Bug 1033469 - Do not allow removal of replicas using "remove-brick" command
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Ravishankar N
QA Contact: spandura
URL:
Whiteboard:
Depends On:
Blocks: 1021928
 
Reported: 2013-11-22 07:35 UTC by spandura
Modified: 2015-05-15 18:16 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.4.0.47.1u2rhs-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-25 08:05:21 UTC




Links:
Red Hat Product Errata RHEA-2014:0208 (normal, SHIPPED_LIVE): Red Hat Storage 2.1 enhancement and bug fix update #2, last updated 2014-02-25 12:20:30 UTC

Description spandura 2013-11-22 07:35:23 UTC
Description of problem:
======================
Consider the case of a 1 x 3 replicate volume (brick1, brick2, brick3). If we want to remove a replica (brick1) and reduce the replica count to 2, we use "gluster volume remove-brick <VOLNAME> replica 2 <BRICK1> force".
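For illustration only, such a command would look like the following, assuming a volume named vol_dis_1_rep_2 (the name used in the attribute examples below) and a hypothetical host and brick path that are not taken from this report:

  # hypothetical host name and brick path, shown only as an example
  gluster volume remove-brick vol_dis_1_rep_2 replica 2 server1:/rhs/bricks/brick1 force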

With this operation, the afr extended attributes should be changed as follows:

brick1 : "trusted.afr.vol_dis_1_rep_2-client-0" -> deleted

brick2 : "trusted.afr.vol_dis_1_rep_2-client-1" -> "trusted.afr.vol_dis_1_rep_2-client-0"

brick3 : "trusted.afr.vol_dis_1_rep_2-client-2" -> "trusted.afr.vol_dis_1_rep_2-client-1"

If these extended attributes are not changed and there are pending self-heals, healing can happen between the wrong bricks.

Currently the changes to the extended attributes are not handled (refer to bug https://bugzilla.redhat.com/show_bug.cgi?id=1028307). Until bug 1028307 is fixed, do not allow removal of replicas from the volume using the "remove-brick" command.
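One way to check whether the attributes were adjusted is to read them from the brick back-end after the remove-brick. A minimal sketch, assuming a file "f1" exists on the bricks (the brick paths and file name are placeholders, not from this report):

  # placeholder brick paths and file name; run on the brick servers
  getfattr -d -m trusted.afr -e hex /rhs/bricks/brick2/f1
  getfattr -d -m trusted.afr -e hex /rhs/bricks/brick3/f1

If the output on the remaining bricks still shows "trusted.afr.vol_dis_1_rep_2-client-0", or the client-1/client-2 attributes under their old names, the cleanup and renaming described above has not happened.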

Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.4.0.43.1u2rhs built on Nov 12 2013 07:38:20

Comment 2 Ravishankar N 2013-11-22 12:32:03 UTC
Patch review: https://code.engineering.redhat.com/gerrit/#/c/16184/
This patch has not been sent upstream. It is a temporary fix and must be reverted as described in the commit message.

Comment 3 spandura 2013-12-30 04:55:43 UTC
Verified the fix on the build "glusterfs 3.4.0.52rhs built on Dec 19 2013 12:20:16". The bug is fixed. Moving the bug to the verified state.

Following are the cases tested:

1) Created a distributed-replicate volume (vol_dis_rep). Tried to remove one replica brick from each sub-volume.

2) Started vol_dis_rep. Removed one replica brick from each sub-volume with the "force" option.

3) Removed one replica brick from each sub-volume with the "commit" option.

4) Removed one replica brick from each sub-volume without any options.

Output for the above cases:
=============================
1)  root@rhs-client11 [Dec-30-2013- 4:46:35] >gluster v info vol_dis_rep
 
Volume Name: vol_dis_rep
Type: Distributed-Replicate
Volume ID: 17bf18c3-277c-4e57-90dc-ef9213efd1ae
Status: Created
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/bricks/b1
Brick2: rhs-client12:/rhs/bricks/b1-rep1
Brick3: rhs-client13:/rhs/bricks/b1-rep2
Brick4: rhs-client11:/rhs/bricks/b2
Brick5: rhs-client12:/rhs/bricks/b2-rep1
Brick6: rhs-client13:/rhs/bricks/b2-rep2
Brick7: rhs-client11:/rhs/bricks/b3
Brick8: rhs-client12:/rhs/bricks/b3-rep1
Brick9: rhs-client13:/rhs/bricks/b3-rep2

root@rhs-client11 [Dec-30-2013- 4:46:53] >gluster v remove-brick vol_dis_rep replica 2 rhs-client13:/rhs/bricks/b1-rep2 rhs-client12:/rhs/bricks/b2-rep1 rhs-client11:/rhs/bricks/b3 
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Reducing replica count of volume is disallowed in glusterfs 3.4.0.52rhs


2) root@rhs-client11 [Dec-30-2013- 4:48:22] >gluster v remove-brick vol_dis_rep replica 2 rhs-client13:/rhs/bricks/b1-rep2 rhs-client12:/rhs/bricks/b2-rep1 rhs-client11:/rhs/bricks/b3  commit 
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Reducing replica count of volume is disallowed in glusterfs 3.4.0.52rhs

root@rhs-client11 [Dec-30-2013- 4:48:25] >
root@rhs-client11 [Dec-30-2013- 4:48:27] >gluster v info
  
Volume Name: vol_dis_rep
Type: Distributed-Replicate
Volume ID: 17bf18c3-277c-4e57-90dc-ef9213efd1ae
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/bricks/b1
Brick2: rhs-client12:/rhs/bricks/b1-rep1
Brick3: rhs-client13:/rhs/bricks/b1-rep2
Brick4: rhs-client11:/rhs/bricks/b2
Brick5: rhs-client12:/rhs/bricks/b2-rep1
Brick6: rhs-client13:/rhs/bricks/b2-rep2
Brick7: rhs-client11:/rhs/bricks/b3
Brick8: rhs-client12:/rhs/bricks/b3-rep1
Brick9: rhs-client13:/rhs/bricks/b3-rep2

3) root@rhs-client11 [Dec-30-2013- 4:48:08] >gluster v remove-brick vol_dis_rep replica 2 rhs-client13:/rhs/bricks/b1-rep2 rhs-client12:/rhs/bricks/b2-rep1 rhs-client11:/rhs/bricks/b3  force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Reducing replica count of volume is disallowed in glusterfs 3.4.0.52rhs

4) root@rhs-client11 [Dec-30-2013- 4:47:37] >gluster v remove-brick vol_dis_rep replica 2 rhs-client13:/rhs/bricks/b1-rep2 rhs-client12:/rhs/bricks/b2-rep1 rhs-client11:/rhs/bricks/b3 
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Reducing replica count of volume is disallowed in glusterfs 3.4.0.52rhs

Comment 5 errata-xmlrpc 2014-02-25 08:05:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

