Bug 1003914 - remove-brick: If the user executes 'gluster volume remove-brick <vol> <brick> commit' before 'gluster volume remove-brick <vol> <brick> start', the brick is removed, resulting in data loss
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Vijaikumar Mallikarjuna
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks: 1027171
 
Reported: 2013-09-03 13:59 UTC by Rachana Patel
Modified: 2016-05-11 22:47 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.6.0-3.0.el6rhs
Doc Type: Bug Fix
Doc Text:
Previously, when 'remove-brick commit' was executed before 'remove-brick start', no warning was displayed and the brick was removed, resulting in data loss. With this fix, executing 'remove-brick commit' before 'remove-brick start' fails with an error:
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Brick 10.70.35.172:/brick0 is not decommissioned. Use start or force option
Clone Of:
: 1027171
Environment:
Last Closed: 2014-09-22 19:28:48 UTC
Embargoed:


Links
System: Red Hat Product Errata
ID: RHEA-2014:1278
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage Server 3.0 bug fix and enhancement update
Last Updated: 2014-09-22 23:26:55 UTC

Description Rachana Patel 2013-09-03 13:59:54 UTC
Description of problem:
remove-brick: If the user executes 'gluster volume remove-brick <vol> <brick> commit' before 'gluster volume remove-brick <vol> <brick> start', the brick is removed, resulting in data loss.

Version-Release number of selected component (if applicable):
3.4.0.30rhs-2.el6_4.x86_64

How reproducible:
always

Steps to Reproduce:
1. Had a volume as below:
[root@DHT1 ~]# gluster v info dht
 
Volume Name: dht
Type: Distribute
Volume ID: 55e30768-af49-4ab9-8f2a-87fd0af87a69
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.195:/rhs/brick1/d2
Brick2: 10.70.37.66:/rhs/brick1/d1
Brick3: 10.70.37.66:/rhs/brick1/d2

2. Had data on it:
[u1@rhs-client22 dht]$ mount | grep dht
10.70.37.66:/dht on /mnt/dht type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[u1@rhs-client22 dht]$ ls
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

3. Execute remove-brick with the commit option before the start option:
[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d2 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success


4. Verify volume info and data on the mount point:

[root@DHT1 ~]# gluster v info dht
 
Volume Name: dht
Type: Distribute
Volume ID: 55e30768-af49-4ab9-8f2a-87fd0af87a69
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.66:/rhs/brick1/d1
Brick2: 10.70.37.66:/rhs/brick1/d2

[u1@rhs-client22 dht]$ ls
f11  f12  f13  f14  f15  f16  f17  f3  f4  f6  f7  f8  f9


Actual results:
remove-brick commit (without start) removes the brick without migrating its data, resulting in data loss.

Expected results:
remove-brick commit without a preceding remove-brick start should not remove the brick; it should fail and point the user to the start or force options, so that data is migrated off the brick before removal (see the sketch under Additional info below).

Additional info:
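For reference, a minimal sketch of the supported decommission sequence, using the volume and brick from the steps above (output omitted; the status step is repeated until the data migration reports completed):

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195:/rhs/brick1/d2 start
[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195:/rhs/brick1/d2 status
[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195:/rhs/brick1/d2 commit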

Comment 2 Vijaikumar Mallikarjuna 2014-05-13 09:03:49 UTC
Patch http://review.gluster.org/#/c/6233/ fixes the issue

Comment 3 Vivek Agarwal 2014-05-22 11:51:45 UTC
Merged as part of the rebase.

Comment 4 Rachana Patel 2014-06-17 11:33:15 UTC
Verified with 3.6.0.18-1.el6rhs.x86_64.
It now gives an error:

[root@OVM5 ~]# gluster volume remove-brick test1  10.70.35.172:/brick0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Brick 10.70.35.172:/brick0 is not decommissioned. Use start or force option

Hence, moving the bug to VERIFIED.
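For reference, the error above points to the start or force options: start migrates data off the brick before commit, while force removes the brick without migration and is only appropriate when losing that data is acceptable. A minimal sketch of the force variant on the same volume and brick (output omitted):

[root@OVM5 ~]# gluster volume remove-brick test1 10.70.35.172:/brick0 force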

Comment 6 errata-xmlrpc 2014-09-22 19:28:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

