Bug 1507466 - reset-brick commit force failed with glusterd_volume_brickinfo_get Returning -1
Summary: reset-brick commit force failed with glusterd_volume_brickinfo_get Returning -1
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1507172 1507877 1507880
 
Reported: 2017-10-30 10:19 UTC by Atin Mukherjee
Modified: 2017-12-08 17:45 UTC (History)
CC List: 8 users

Fixed In Version: glusterfs-3.13.0
Clone Of: 1507172
: 1507877 1507880 (view as bug list)
Environment:
Last Closed: 2017-12-08 17:45:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Atin Mukherjee 2017-10-30 10:23:48 UTC
Problem Description:
On a two-node cluster, reset-brick commit force fails.


Version:
Mainline

Reproducer steps:

1. Create a two-node cluster.
2. Create a replica 2 volume, with each brick hosted on a different node:
gluster v create test-vol replica 2 172.17.0.2:/tmp/b1 172.17.0.3:/tmp/b1 force
3. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1 start
4. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1  172.17.0.2:/tmp/b1 commit force

Step 4 fails.

Comment 2 Atin Mukherjee 2017-10-30 11:18:07 UTC
upstream patch :

Comment 3 Atin Mukherjee 2017-10-30 11:20:20 UTC
patch : https://review.gluster.org/#/c/18581

Comment 4 Worker Ant 2017-10-31 11:14:53 UTC
COMMIT: https://review.gluster.org/18581 committed in master by  

------------- glusterd: delete source brick only once in reset-brick commit force

While stopping the brick which is to be reset and replaced, the delete_brick
flag was passed as true, which caused glusterd to free the source
brickinfo before the actual operation. As a result, commit force failed
because it could no longer find the source brickinfo.

Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
BUG: 1507466
Signed-off-by: Atin Mukherjee <amukherj>
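
To make the failure mode concrete, below is a minimal, self-contained C sketch of the logic the commit message describes. This is not the glusterd source: brick_stop(), brick_lookup(), struct volinfo/brickinfo and the del_brick flag are illustrative stand-ins, not the real glusterd helpers. The sketch only shows why removing the source brickinfo during the stop phase makes the later lookup in commit force return -1, and why keeping it around (deleting the source brick exactly once, at the end) avoids the failure.

/* Illustrative model only -- not glusterd code. A volume holds a list of
 * brickinfo entries; if the stop phase removes the source entry
 * (del_brick = true), the commit-force phase cannot find it and the
 * lookup returns -1, mirroring "glusterd_volume_brickinfo_get Returning -1". */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct brickinfo {
    char path[64];
    struct brickinfo *next;
};

struct volinfo {
    struct brickinfo *bricks;
};

/* Hypothetical lookup: 0 on success, -1 if the brick is not in the list. */
static int brick_lookup(struct volinfo *vol, const char *path,
                        struct brickinfo **out)
{
    for (struct brickinfo *b = vol->bricks; b; b = b->next) {
        if (strcmp(b->path, path) == 0) {
            *out = b;
            return 0;
        }
    }
    return -1;
}

/* Hypothetical stop: optionally unlinks the source brickinfo. */
static void brick_stop(struct volinfo *vol, const char *path, bool del_brick)
{
    if (!del_brick)
        return; /* patched behaviour: stop the brick, keep its brickinfo */

    /* buggy behaviour: unlink (and, in the real code, free) the source entry */
    struct brickinfo **pp = &vol->bricks;
    while (*pp) {
        if (strcmp((*pp)->path, path) == 0) {
            *pp = (*pp)->next;
            return;
        }
        pp = &(*pp)->next;
    }
}

static int reset_brick_commit_force(struct volinfo *vol, const char *src,
                                    bool del_brick_on_stop)
{
    struct brickinfo *b = NULL;

    brick_stop(vol, src, del_brick_on_stop);

    /* commit force still needs the source brickinfo at this point */
    if (brick_lookup(vol, src, &b) != 0) {
        fprintf(stderr, "brickinfo lookup for %s returned -1\n", src);
        return -1;
    }
    /* ... reset/replace the brick, then delete the old entry exactly once ... */
    return 0;
}

int main(void)
{
    struct brickinfo b1 = { "172.17.0.2:/tmp/b1", NULL };
    struct volinfo buggy = { &b1 };
    struct volinfo fixed = { &b1 };

    /* del_brick = true models the pre-patch stop path and fails (-1). */
    printf("pre-patch:  %d\n",
           reset_brick_commit_force(&buggy, "172.17.0.2:/tmp/b1", true));

    /* del_brick = false models the patched stop path and succeeds (0). */
    printf("post-patch: %d\n",
           reset_brick_commit_force(&fixed, "172.17.0.2:/tmp/b1", false));
    return 0;
}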

Comment 5 Ravishankar N 2017-11-02 08:58:48 UTC
(In reply to Atin Mukherjee from comment #1)
> Problem Description:
> On a two node cluster, reset-brick commit force fails.
> 
> 
> Version:
> Mainline
> 
> Reproducer steps:
> 
> 1. Create a two node cluster.
> 2. Create a replica 2 volume, each brick hosted in different nodes
> gluster v create test-vol replica 2 172.17.0.2:/tmp/b1 172.17.0.3:/tmp/b1
> force
> 3. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1 start
> 4. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1  172.17.0.2:/tmp/b1
> commit force
> 
> Step 4 fails.

Step 4 succeeds for me with or without the patch in comment #3, provided we start the volume before step 3.

Comment 6 Karthik U S 2017-11-02 09:13:14 UTC
(In reply to Ravishankar N from comment #5)
> (In reply to Atin Mukherjee from comment #1)
> > Problem Description:
> > On a two node cluster, reset-brick commit force fails.
> > 
> > 
> > Version:
> > Mainline
> > 
> > Reproducer steps:
> > 
> > 1. Create a two node cluster.
> > 2. Create a replica 2 volume, each brick hosted in different nodes
> > gluster v create test-vol replica 2 172.17.0.2:/tmp/b1 172.17.0.3:/tmp/b1
> > force
> > 3. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1 start
> > 4. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1  172.17.0.2:/tmp/b1
> > commit force
> > 
> > Step 4 fails.
> 
> Step4 succeeds for me with or without the patch in comment #3, provided we
> start the volume before step 3.

I guess you are testing it on a single-node setup. Try this on a multi-node setup and, from node1, try resetting the brick on node2 without this patch. It will fail.

Comment 7 Ravishankar N 2017-11-02 09:30:19 UTC
(In reply to Karthik U S from comment #6)
> > Step4 succeeds for me with or without the patch in comment #3, provided we
> > start the volume before step 3.
> 
> I guess you are testing it on a single node setup. Try to do this on a multi
> node setup & from node1 try resetting the brick on node2, without this
> patch. It will fail.

Ah, makes sense now. Thanks!

Comment 8 Shyamsundar 2017-12-08 17:45:00 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

