Description of problem:
Say we have a 100-node Gluster pool on which we created a volume 'NEWVOL'. We have used the volume and it is now time to delete NEWVOL, but unfortunately one of the 100 nodes goes down. The volume can still be deleted from any of the remaining 99 nodes, and the deletion succeeds.

Current behaviour:
Some time later, the node that was down while the volume deletion was performed comes back up, and the volume NEWVOL is re-spawned/synced to the other 99 nodes. This is not expected and may create problems.

Expected behaviour:
Once the node that was down comes back up, it should delete the volume by comparing its configuration with that of its peers.

Version-Release number of selected component (if applicable):
mainline
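The scenario above can be sketched with the gluster CLI (a minimal sketch; node names, brick paths, and the replica count are illustrative assumptions, not taken from the report):

```shell
# Illustrative reproduction sketch; node names and brick paths are assumptions.
# On any node in the pool, create and start the volume:
gluster volume create NEWVOL replica 2 node1:/bricks/b1 node2:/bricks/b2
gluster volume start NEWVOL

# Assume one peer (say node3) goes down. From a node that is still up:
gluster volume stop NEWVOL
gluster volume delete NEWVOL

# When node3 rejoins, its stale copy of NEWVOL's configuration is synced
# back to the other peers instead of being discarded, so the volume
# reappears in the output of:
gluster volume info
```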
REVIEW: http://review.gluster.org/12963 (glusterd: fix gluster volume sync after successful deletion) posted (#1) for review on master by Prasanna Kumar Kalever (pkalever)
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.
Looks like these issues are now resolved. Would be good to clarify and mark this CLOSED.
Commit 0b450b8b35 has fixed this issue, so closing this bug as a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1605077 (which was used to track the bug while the patch was posted for review). Thanks, Sanju
*** This bug has been marked as a duplicate of bug 1605077 ***