Bug 1291262 - glusterd: fix gluster volume sync after successful deletion
Status: POST
Product: GlusterFS
Classification: Community
Component: glusterd
Hardware: All
OS: All
Priority: medium
Severity: medium
Assigned To: Prasanna Kumar Kalever
Keywords: Triaged
Reported: 2015-12-14 08:00 EST by Prasanna Kumar Kalever
Modified: 2018-02-06 18:32 EST (History)
CC: 1 user

Doc Type: Bug Fix
Type: Bug

Attachments: None
Description Prasanna Kumar Kalever 2015-12-14 08:00:58 EST
Description of problem:
Say we have a 100-node Gluster pool on which we created a volume 'NEWVOL'. We have used the volume and it is now time to delete NEWVOL. Suppose one of the 100 nodes goes down; the volume can still be deleted from any of the remaining 99 nodes, and the deletion succeeds.

Current Behaviour:
Some time later, the node that was down during the volume deletion comes back up, and volume NEWVOL is re-spawned/synced back to the other 99 nodes. This is not expected and may create problems.
Expected Behaviour:
When the node that was down comes back up, it should delete the stale volume by comparing its local configuration against its peers, instead of re-importing it into the pool.
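The expected reconciliation step can be sketched as follows. This is a minimal illustration in Python, not glusterd's actual C implementation; the function name, the set-based representation, and the "live if any reachable peer still has it" rule are all assumptions made for the sketch.

```python
# Hypothetical sketch of the expected reconciliation: when a node that was
# offline rejoins the pool, it compares its local volume list against its
# reachable peers instead of blindly re-importing its stale volumes.

def reconcile_volumes(local_volumes, peer_volume_lists):
    """Return (keep, delete) sets of volume names.

    local_volumes     -- set of volume names known to the rejoining node
    peer_volume_lists -- list of sets, one per reachable peer
    """
    # A volume is treated as live if any reachable peer still has it.
    live = set().union(*peer_volume_lists) if peer_volume_lists else set()
    keep = local_volumes & live
    # Volumes only this node still knows about were deleted cluster-wide
    # while it was down, so they should be removed locally, not re-synced.
    delete = local_volumes - live
    return keep, delete
```

With `local_volumes = {'NEWVOL', 'data'}` and every peer reporting only `{'data'}`, the rejoining node would flag NEWVOL for local deletion rather than re-spawning it across the pool.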

Version-Release number of selected component (if applicable):
Comment 1 Vijay Bellur 2015-12-14 08:32:55 EST
REVIEW: http://review.gluster.org/12963 (glusterd: fix gluster volume sync after successful deletion) posted (#1) for review on master by Prasanna Kumar Kalever (pkalever@redhat.com)
Comment 2 Mike McCune 2016-03-28 19:22:56 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune@redhat.com with any questions
