Bug 1040408 - Reducing replica count using 'remove-brick' command fails when bricks are listed in random order.
Summary: Reducing replica count using 'remove-brick' command fails when bricks are listed in random order.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1039992
 
Reported: 2013-12-11 11:32 UTC by Ravishankar N
Modified: 2014-11-11 08:25 UTC (History)
2 users

Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-11 08:25:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2013-12-11 11:32:39 UTC
Description of problem:
Reducing the replica count of a volume with the remove-brick command works only when the bricks are given in the same order as they are listed in 'gluster volume info'.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. gluster v create testvol replica 3 10.70.42.203:/brick/brick{1..6}
2. gluster v start testvol

3.
[root@tuxvm4 glusterfs]# gluster v info

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 6088e685-ddfb-4e0a-887e-261ec0fa85f8
Status: Created
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.42.203:/brick/brick1
Brick2: 10.70.42.203:/brick/brick2
Brick3: 10.70.42.203:/brick/brick3
Brick4: 10.70.42.203:/brick/brick4
Brick5: 10.70.42.203:/brick/brick5
Brick6: 10.70.42.203:/brick/brick6


4. gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{6,3} --> This fails.

5. gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{3,6} --> This succeeds.


Actual results:
[root@tuxvm4 glusterfs]# gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{6,3}
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Bricks are from same subvol
[root@tuxvm4 glusterfs]#

Expected results:
[root@tuxvm4 glusterfs]# gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{6,3}
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
[root@tuxvm4 glusterfs]#

Additional info:

Comment 1 Anand Avati 2013-12-12 04:24:02 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Anand Avati 2013-12-12 04:41:42 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 3 Anand Avati 2013-12-13 05:06:53 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#3) for review on master by Ravishankar N (ravishankar)

Comment 4 Anand Avati 2013-12-13 09:55:37 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#4) for review on master by Ravishankar N (ravishankar)

Comment 5 Anand Avati 2013-12-13 16:50:30 UTC
COMMIT: http://review.gluster.org/6489 committed in master by Vijay Bellur (vbellur) 
------
commit 7fc2499db89e385332f09fb06c10cb524f761875
Author: Ravishankar N <ravishankar>
Date:   Wed Dec 11 17:30:13 2013 +0530

    glusterd: fix error in remove-brick-replica validation
    
    Problem:
    Reducing replica count of a volume using remove-brick command fails
    if bricks are specified in a random order.
    
    Fix: Modify subvol_matcher_verify() to permit order agnostic
    replica count reduction.
    
    
    Change-Id: I1f3d33e82a70d9b69c297f69c4c1b847937d1031
    BUG: 1040408
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/6489
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Gluster Build System <jenkins.com>

Comment 6 Niels de Vos 2014-09-22 12:33:38 UTC
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 7 Niels de Vos 2014-11-11 08:25:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

