Problem Description:
On a two node cluster, reset-brick commit force fails.

Version:
Mainline

Reproducer steps:

1. Create a two node cluster.
2. Create a replica 2 volume, with each brick hosted on a different node:
   gluster v create test-vol replica 2 172.17.0.2:/tmp/b1 172.17.0.3:/tmp/b1 force
3. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1 start
4. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1 172.17.0.2:/tmp/b1 commit force

Step 4 fails.
upstream patch:
patch: https://review.gluster.org/#/c/18581
COMMIT: https://review.gluster.org/18581 committed in master by -------------

glusterd: delete source brick only once in reset-brick commit force

While stopping the brick which is to be reset and replaced, the delete_brick flag was passed as true, which caused glusterd to free the source brick before the actual operation. As a result, commit force failed because it could not find the source brickinfo.

Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
BUG: 1507466
Signed-off-by: Atin Mukherjee <amukherj>
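To make the flag's effect concrete, here is a small self-contained C sketch modelling the failure mode described above. This is not glusterd code: the names (brickinfo, stop_brick, find_brick, del_brick) are illustrative assumptions, and the sketch only shows why freeing the source brick's record while stopping it makes the later commit-force lookup fail.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for a volume's list of bricks. */
struct brickinfo {
        char path[64];
        struct brickinfo *next;
};

static struct brickinfo *bricks;

static void
add_brick (const char *path)
{
        struct brickinfo *b = calloc (1, sizeof (*b));
        snprintf (b->path, sizeof (b->path), "%s", path);
        b->next = bricks;
        bricks = b;
}

static struct brickinfo *
find_brick (const char *path)
{
        for (struct brickinfo *b = bricks; b; b = b->next)
                if (strcmp (b->path, path) == 0)
                        return b;
        return NULL;
}

/* Stop the brick; additionally delete its record only if del_brick is set. */
static void
stop_brick (struct brickinfo *b, bool del_brick)
{
        printf ("stopping brick %s\n", b->path);
        if (!del_brick)
                return;
        for (struct brickinfo **pp = &bricks; *pp; pp = &(*pp)->next) {
                if (*pp == b) {
                        *pp = b->next;
                        free (b);
                        break;
                }
        }
}

int
main (void)
{
        add_brick ("172.17.0.3:/tmp/b1");
        add_brick ("172.17.0.2:/tmp/b1");

        struct brickinfo *src = find_brick ("172.17.0.2:/tmp/b1");

        /* Pre-fix behaviour: the source brick's record is deleted while it is
         * merely being stopped.  Pass false here (so it is deleted only once,
         * later, during the commit) and the lookup below succeeds. */
        stop_brick (src, true);

        if (find_brick ("172.17.0.2:/tmp/b1") == NULL)
                printf ("commit force: source brickinfo not found -> step 4 fails\n");
        else
                printf ("commit force: source brickinfo found -> commit can proceed\n");

        return 0;
}

With the early deletion, the record for 172.17.0.2:/tmp/b1 is gone by the time the commit phase looks it up, which is exactly the "failing to find the source brickinfo" symptom; the patch defers the deletion so it happens only once, during the commit itself.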
(In reply to Atin Mukherjee from comment #1)
> Problem Description:
> On a two node cluster, reset-brick commit force fails.
>
> Version:
> Mainline
>
> Reproducer steps:
>
> 1. Create a two node cluster.
> 2. Create a replica 2 volume, each brick hosted in different nodes
> gluster v create test-vol replica 2 172.17.0.2:/tmp/b1 172.17.0.3:/tmp/b1
> force
> 3. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1 start
> 4. gluster v reset-brick test-vol 172.17.0.2:/tmp/b1 172.17.0.2:/tmp/b1
> commit force
>
> Step 4 fails.

Step 4 succeeds for me with or without the patch in comment #3, provided we start the volume before step 3.
(In reply to Ravishankar N from comment #5)
> Step 4 succeeds for me with or without the patch in comment #3, provided we
> start the volume before step 3.

I guess you are testing it on a single-node setup. Try this on a multi-node setup and, from node1, try resetting the brick on node2 without this patch. It will fail.
(In reply to Karthik U S from comment #6)
> I guess you are testing it on a single-node setup. Try this on a multi-node
> setup and, from node1, try resetting the brick on node2 without this patch.
> It will fail.

Ah, makes sense now. Thanks!
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/