Bug 1003914 - remove-brick: If a user executes 'gluster volume remove-brick <vol> <brick> commit' before 'gluster volume remove-brick <vol> <brick> start', the brick is removed and data is lost
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: x86_64  OS: Linux
Priority: high  Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assigned To: Vijaikumar Mallikarjuna
QA Contact: amainkar
Depends On:
Blocks: 1027171
 
Reported: 2013-09-03 09:59 EDT by Rachana Patel
Modified: 2016-05-11 18:47 EDT (History)
CC List: 6 users

See Also:
Fixed In Version: glusterfs-3.6.0-3.0.el6rhs
Doc Type: Bug Fix
Doc Text:
Previously, when 'remove-brick commit' was executed before 'remove-brick start', no warning was displayed and the brick was removed, resulting in data loss. With this fix, when 'remove-brick commit' is executed before 'remove-brick start', the following error is displayed: Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y volume remove-brick commit: failed: Brick 10.70.35.172:/brick0 is not decommissioned. Use start or force option
Story Points: ---
Clone Of:
: 1027171
Environment:
Last Closed: 2014-09-22 15:28:48 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rachana Patel 2013-09-03 09:59:54 EDT
Description of problem:
remove-brick: If a user executes 'gluster volume remove-brick <vol> <brick> commit' before 'gluster volume remove-brick <vol> <brick> start', the brick is removed and data is lost.

Version-Release number of selected component (if applicable):
3.4.0.30rhs-2.el6_4.x86_64

How reproducible:
always

Steps to Reproduce:
1. Had a volume configured as below:
[root@DHT1 ~]# gluster v info dht
 
Volume Name: dht
Type: Distribute
Volume ID: 55e30768-af49-4ab9-8f2a-87fd0af87a69
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.195:/rhs/brick1/d2
Brick2: 10.70.37.66:/rhs/brick1/d1
Brick3: 10.70.37.66:/rhs/brick1/d2

2. Had data on it:
[u1@rhs-client22 dht]$ mount | grep dht
10.70.37.66:/dht on /mnt/dht type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[u1@rhs-client22 dht]$ ls
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

3. Execute remove-brick with the commit option before the start option:
[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d2 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success


4. Verify volume info and data on the mount point:

[root@DHT1 ~]# gluster v info dht
 
Volume Name: dht
Type: Distribute
Volume ID: 55e30768-af49-4ab9-8f2a-87fd0af87a69
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.66:/rhs/brick1/d1
Brick2: 10.70.37.66:/rhs/brick1/d2

[u1@rhs-client22 dht]$ ls
f11  f12  f13  f14  f15  f16  f17  f3  f4  f6  f7  f8  f9


Actual results:
remove-brick commit (without start) removes the brick without data migration, resulting in data loss.

Expected results:
remove-brick commit without a preceding remove-brick start should fail instead of detaching the brick without data migration.

Additional info:
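For context, the supported sequence migrates the data off the brick before it is detached; a minimal sketch of that workflow, using the same volume and brick as in the reproduction above (output omitted):

gluster volume remove-brick dht 10.70.37.195:/rhs/brick1/d2 start
gluster volume remove-brick dht 10.70.37.195:/rhs/brick1/d2 status    # repeat until the migration shows "completed"
gluster volume remove-brick dht 10.70.37.195:/rhs/brick1/d2 commit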
Comment 2 Vijaikumar Mallikarjuna 2014-05-13 05:03:49 EDT
Patch http://review.gluster.org/#/c/6233/ fixes the issue
Comment 3 Vivek Agarwal 2014-05-22 07:51:45 EDT
Merged as part of the rebase.
Comment 4 Rachana Patel 2014-06-17 07:33:15 EDT
Verified with 3.6.0.18-1.el6rhs.x86_64.
It now gives an error:

[root@OVM5 ~]# gluster volume remove-brick test1  10.70.35.172:/brick0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Brick 10.70.35.172:/brick0 is not decommissioned. Use start or force option

Hence, moving the bug to VERIFIED.
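With the fix in place, a brick can only be removed without data migration by asking for it explicitly; as the error message above notes, the force option is the deliberate opt-in for that (data on the brick is still lost), for example:

gluster volume remove-brick test1 10.70.35.172:/brick0 force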
Comment 6 errata-xmlrpc 2014-09-22 15:28:48 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html
