Bug 1040408

Summary: Reducing the replica count using the 'remove-brick' command fails when bricks are listed in random order.
Product: [Community] GlusterFS
Component: glusterd
Reporter: Ravishankar N <ravishankar>
Assignee: Ravishankar N <ravishankar>
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Version: mainline
CC: bugs, gluster-bugs
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Last Closed: 2014-11-11 08:25:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Bug Blocks: 1039992

Description Ravishankar N 2013-12-11 11:32:39 UTC
Description of problem:
Reducing the replica count of a volume with the remove-brick command works only when the bricks are specified in the same order in which they appear in 'gluster volume info'.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. gluster v create testvol replica 3 10.70.42.203:/brick/brick{1..6}
2. gluster v start testvol

3. Check the volume layout:
[root@tuxvm4 glusterfs]# gluster v info

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 6088e685-ddfb-4e0a-887e-261ec0fa85f8
Status: Created
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.42.203:/brick/brick1
Brick2: 10.70.42.203:/brick/brick2
Brick3: 10.70.42.203:/brick/brick3
Brick4: 10.70.42.203:/brick/brick4
Brick5: 10.70.42.203:/brick/brick5
Brick6: 10.70.42.203:/brick/brick6


4. gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{6,3} --> This fails.

5. gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{3,6} --> This succeeds.


Actual results:
[root@tuxvm4 glusterfs]# gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{6,3}
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Bricks are from same subvol
[root@tuxvm4 glusterfs]#

Expected results:
[root@tuxvm4 glusterfs]# gluster v remove-brick testvol replica 2 10.70.42.203:/brick/brick{6,3}
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
[root@tuxvm4 glusterfs]#
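
Note: in this 2 x 3 distributed-replicate layout, glusterd groups the bricks in the order they are listed, so brick1-brick3 form the first replica set and brick4-brick6 the second. Reducing the replica count from 3 to 2 therefore means removing exactly one brick from each set, and brick3 plus brick6 satisfy that no matter which order they are given in; the "Bricks are from same subvol" error above is spurious. A minimal sketch of that mapping, with made-up names and not taken from the glusterd sources:

    /* Which replica set does each requested brick belong to?
     * Illustrative only; not glusterd code. */
    #include <stdio.h>

    int main(void)
    {
        int replica_count = 3;            /* current replica count   */
        int removed[] = { 6, 3 };         /* bricks named on the CLI */

        for (int i = 0; i < 2; i++) {
            int pos = removed[i] - 1;     /* brick1 sits at position 0 */
            printf("brick%d -> replica set %d\n",
                   removed[i], pos / replica_count);
        }
        /* Prints "brick6 -> replica set 1" and "brick3 -> replica set 0":
         * one brick from each set, so the removal should be allowed. */
        return 0;
    }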

Additional info:

Comment 1 Anand Avati 2013-12-12 04:24:02 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Anand Avati 2013-12-12 04:41:42 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 3 Anand Avati 2013-12-13 05:06:53 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#3) for review on master by Ravishankar N (ravishankar)

Comment 4 Anand Avati 2013-12-13 09:55:37 UTC
REVIEW: http://review.gluster.org/6489 (glusterd: fix error in remove-brick-replica validation) posted (#4) for review on master by Ravishankar N (ravishankar)

Comment 5 Anand Avati 2013-12-13 16:50:30 UTC
COMMIT: http://review.gluster.org/6489 committed in master by Vijay Bellur (vbellur) 
------
commit 7fc2499db89e385332f09fb06c10cb524f761875
Author: Ravishankar N <ravishankar>
Date:   Wed Dec 11 17:30:13 2013 +0530

    glusterd: fix error in remove-brick-replica validation
    
    Problem:
    Reducing replica count of a volume using remove-brick command fails
    if bricks are specified in a random order.
    
    Fix: Modify subvol_matcher_verify() to permit order agnostic
    replica count reduction.
    
    
    Change-Id: I1f3d33e82a70d9b69c297f69c4c1b847937d1031
    BUG: 1040408
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/6489
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Gluster Build System <jenkins.com>
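
The fix described in the commit message amounts to counting how many of the bricks named on the command line fall into each replica set, rather than requiring them to appear in the same order as 'gluster volume info'. A rough sketch of that idea follows; the function and variable names are invented for illustration and do not reflect the actual subvol_matcher_verify() implementation:

    /* Order-agnostic replica-reduction check (sketch, not glusterd code).
     * Returns 0 when the requested bricks reduce every replica set by
     * exactly (old_replica - new_replica) bricks, in any order; -1 otherwise. */
    #include <stdio.h>
    #include <string.h>

    static int
    verify_replica_reduction(const char **vol_bricks, int brick_count,
                             int old_replica, int new_replica,
                             const char **removed, int removed_count)
    {
        int subvols = brick_count / old_replica;
        int per_subvol = old_replica - new_replica;
        int hits[64] = {0};                 /* assume <= 64 replica sets */

        if (subvols > 64 || removed_count != subvols * per_subvol)
            return -1;

        for (int r = 0; r < removed_count; r++) {
            int found = -1;
            for (int b = 0; b < brick_count; b++) {
                if (strcmp(removed[r], vol_bricks[b]) == 0) {
                    found = b;
                    break;
                }
            }
            if (found < 0)
                return -1;                  /* brick is not part of the volume */
            hits[found / old_replica]++;    /* replica set the brick belongs to */
        }

        for (int s = 0; s < subvols; s++)
            if (hits[s] != per_subvol)
                return -1;                  /* too many/few bricks from one set */

        return 0;
    }

    int main(void)
    {
        const char *bricks[] = {
            "10.70.42.203:/brick/brick1", "10.70.42.203:/brick/brick2",
            "10.70.42.203:/brick/brick3", "10.70.42.203:/brick/brick4",
            "10.70.42.203:/brick/brick5", "10.70.42.203:/brick/brick6",
        };
        const char *reversed[] = { "10.70.42.203:/brick/brick6",
                                   "10.70.42.203:/brick/brick3" };

        /* 0 means the reduction from replica 3 to replica 2 is accepted. */
        printf("brick{6,3}: %d\n",
               verify_replica_reduction(bricks, 6, 3, 2, reversed, 2));
        return 0;
    }

With the bricks from this report, {3,6} and {6,3} produce the same per-set counts (one brick from each set), so either ordering passes the check, matching the expected results in the description.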

Comment 6 Niels de Vos 2014-09-22 12:33:38 UTC
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves the issue reported here. If glusterfs-3.6.0beta1 does not resolve it, leave a comment on this bug and move the status to ASSIGNED. If the release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 7 Niels de Vos 2014-11-11 08:25:35 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users