Bug 996026 - Performing 'remove-brick commit' on a brick without performing 'remove-brick start' removes the brick
Summary: Performing 'remove-brick commit' on a brick without performing 'remove-bric...
Keywords:
Status: CLOSED DUPLICATE of bug 1046568
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-12 09:02 UTC by SATHEESARAN
Modified: 2014-03-19 11:36 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-03-19 11:34:54 UTC
Embargoed:



Description SATHEESARAN 2013-08-12 09:02:19 UTC
Description of problem:
Performing 'remove-brick commit' on a brick before doing 'remove-brick start' removes the brick

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.18rhs-1

How reproducible:
Always

Steps to Reproduce:
1. Perform 'remove-brick commit' on a brick, i.e.
   gluster volume remove-brick <vol-name> <brick-path> commit
   (a full reproduction sketch follows below)
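
A minimal reproduction sketch, reusing the volume and brick names from the transcript further below; any started distribute volume should behave the same way:

gluster volume create dvol 10.70.37.54:/rhs/brick3/dir1 10.70.37.205:/rhs/brick3/dir2 10.70.37.61:/rhs/brick3/dir3 10.70.37.86:/rhs/brick3/dir4
gluster volume start dvol
# no 'remove-brick ... start' has been issued for this brick, yet the commit goes through
gluster volume remove-brick dvol 10.70.37.61:/rhs/brick3/dir3 commit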

Actual results:
The brick is removed

Expected results:
1. There should be an error message stating that 'remove-brick commit' is not allowed because 'remove-brick start' was not initiated on that brick (see the illustration below).

2. The brick should not be removed.
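
For illustration only, the expected behaviour would look roughly like the following; the error text is hypothetical, not actual glusterd output:

# gluster volume remove-brick dvol 10.70.37.61:/rhs/brick3/dir3 commit
volume remove-brick commit: failed: remove-brick was not started on brick 10.70.37.61:/rhs/brick3/dir3   (hypothetical message)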

Additional info:
An admin would not normally perform 'remove-brick commit' before doing 'remove-brick start'; doing so makes no sense.

But accidents can happen: after 'remove-brick start' on one brick, an admin may mistakenly provide some other brick to the commit operation, as below.

[Mon Aug 12 08:34:52 UTC 2013 root.37.54:~ ] # gluster volume create dvol 10.70.37.54:/rhs/brick3/dir1 10.70.37.205:/rhs/brick3/dir2 10.70.37.61:/rhs/brick3/dir3 10.70.37.86:/rhs/brick3/dir4
volume create: dvol: success: please start the volume to access data

[Mon Aug 12 08:36:24 UTC 2013 root.37.54:~ ] # gluster volume start dvol
volume start: dvol: success

[Mon Aug 12 08:36:35 UTC 2013 root.37.54:~ ] # gluster volume info dvol
 
Volume Name: dvol
Type: Distribute
Volume ID: 21df39a9-16ab-4ebf-a6cc-5bd4d15391f3
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.54:/rhs/brick3/dir1
Brick2: 10.70.37.205:/rhs/brick3/dir2
Brick3: 10.70.37.61:/rhs/brick3/dir3
Brick4: 10.70.37.86:/rhs/brick3/dir4

[Mon Aug 12 08:36:59 UTC 2013 root.37.54:~ ] # gluster volume status dvol
Status of volume: dvol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.54:/rhs/brick3/dir1			49154	Y	20298
Brick 10.70.37.205:/rhs/brick3/dir2			49156	Y	18915
Brick 10.70.37.61:/rhs/brick3/dir3			49154	Y	15804
Brick 10.70.37.86:/rhs/brick3/dir4			49153	Y	15617
NFS Server on localhost					2049	Y	20312
NFS Server on 10.70.37.205				2049	Y	18927
NFS Server on 10.70.37.61				2049	Y	15816
NFS Server on 10.70.37.86				2049	Y	15630
 
There are no active volume tasks

Here I am trying to remove -----> 10.70.37.86:/rhs/brick3/dir4 <-----
[Mon Aug 12 08:37:07 UTC 2013 root.37.54:~ ] # gluster volume remove-brick dvol 10.70.37.86:/rhs/brick3/dir4 start
volume remove-brick start: success
ID: 9d2e1b86-fa7c-4106-a83b-49fdbade18a4

Here I have committed another brick ---> 10.70.37.61:/rhs/brick3/dir3 <--- by mistake,
thinking that this was the brick previously used for 'remove-brick start'
[Mon Aug 12 08:37:48 UTC 2013 root.37.54:~ ] # gluster volume remove-brick dvol 10.70.37.61:/rhs/brick3/dir3 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success

[Mon Aug 12 08:39:31 UTC 2013 root.37.54:~ ] # gluster volume info dvol
 
Volume Name: dvol
Type: Distribute
Volume ID: 21df39a9-16ab-4ebf-a6cc-5bd4d15391f3
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.54:/rhs/brick3/dir1
Brick2: 10.70.37.205:/rhs/brick3/dir2
Brick3: 10.70.37.86:/rhs/brick3/dir4

[Mon Aug 12 08:39:44 UTC 2013 root.37.54:~ ] # gluster volume status dvol
Status of volume: dvol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.54:/rhs/brick3/dir1			49154	Y	20298
Brick 10.70.37.205:/rhs/brick3/dir2			49156	Y	18915
Brick 10.70.37.86:/rhs/brick3/dir4			49153	Y	15617
NFS Server on localhost					2049	Y	20414
NFS Server on 10.70.37.205				2049	Y	18995
NFS Server on 10.70.37.61				2049	Y	15884
NFS Server on 10.70.37.86				2049	Y	15691
 
There are no active volume tasks
[Mon Aug 12 08:40:05 UTC 2013 root.37.54:~ ] # gluster volume status dvol^C
[Mon Aug 12 08:41:25 UTC 2013 root.37.54:~ ] # gluster volume status dvol
Status of volume: dvol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.54:/rhs/brick3/dir1			49154	Y	20298
Brick 10.70.37.205:/rhs/brick3/dir2			49156	Y	18915
Brick 10.70.37.86:/rhs/brick3/dir4			49153	Y	15617
NFS Server on localhost					2049	Y	20414
NFS Server on 10.70.37.61				2049	Y	15884
NFS Server on 10.70.37.205				2049	Y	18995
NFS Server on 10.70.37.86				2049	Y	15691
 
There are no active volume tasks
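
For reference, a sketch of the intended decommissioning sequence for the brick that was actually started above; 'status' is polled until the data migration reports completed, and the commit names the same brick as the start:

gluster volume remove-brick dvol 10.70.37.86:/rhs/brick3/dir4 start
gluster volume remove-brick dvol 10.70.37.86:/rhs/brick3/dir4 status
gluster volume remove-brick dvol 10.70.37.86:/rhs/brick3/dir4 commit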

Comment 1 SATHEESARAN 2014-03-19 11:34:54 UTC

*** This bug has been marked as a duplicate of bug 1046568 ***

Comment 2 SATHEESARAN 2014-03-19 11:36:58 UTC
Marked this bug as a duplicate of bug 1046568.

The reason is that this issue was tracked as one of the issues in the bigger requirement BZ https://bugzilla.redhat.com/show_bug.cgi?id=1046568#c0

