Bug 1344631 - Fail the volume delete operation if one of the glusterd instances in the cluster is down
Summary: Fail the volume delete operation if one of the glusterd instances in the cluster is down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard:
Depends On: 1344407 1344634
Blocks: 1344625
 
Reported: 2016-06-10 08:33 UTC by Atin Mukherjee
Modified: 2016-06-16 12:34 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.8.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1344407
Environment:
Last Closed: 2016-06-16 12:34:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Atin Mukherjee 2016-06-10 08:33:38 UTC
+++ This bug was initially created as a clone of Bug #1344407 +++

Description of problem:

If a volume is deleted while the glusterd instance on one of the nodes in the cluster is down, then once that glusterd comes back up it re-syncs the same volume to all of the nodes, and users are left puzzled to see the deleted volume back in the namespace.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Consistently, whenever a volume is deleted while any peer's glusterd is down.

Steps to Reproduce:
1. Form a trusted storage pool of two or more nodes and create a volume spanning them.
2. Stop glusterd on one of the nodes.
3. Delete the volume from another node, then restart the stopped glusterd (a concrete CLI sketch is given under Additional info below).

Actual results:
The volume delete succeeds, and once the downed glusterd comes back up it re-syncs the deleted volume to all nodes, so the volume reappears in the cluster.

Expected results:
The volume delete operation should fail when any glusterd instance in the cluster is down, at least until the soft-delete feature tracked in http://review.gluster.org/12963 is available.

Additional info:
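
A minimal CLI sketch of the problem scenario is below; the hostnames, volume name, and brick paths are illustrative placeholders, and any pool with two or more peers will do.

# from node1: create a volume spanning both peers ("force" only skips the
# warning about creating bricks on the root partition)
gluster volume create demo-vol node1:/bricks/demo node2:/bricks/demo force

# take glusterd down on node2, then delete the volume from node1
ssh node2 'systemctl stop glusterd'
gluster --mode=script volume delete demo-vol    # succeeds without this fix

# when node2's glusterd comes back, it re-syncs the deleted volume to the pool
ssh node2 'systemctl start glusterd'
gluster volume info demo-vol                    # the volume is visible again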

--- Additional comment from Vijay Bellur on 2016-06-09 11:38:08 EDT ---

REVIEW: http://review.gluster.org/14681 (glusterd: fail volume delete if one of the node is down) posted (#2) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Vijay Bellur on 2016-06-10 03:31:02 EDT ---

COMMIT: http://review.gluster.org/14681 committed in master by Kaushal M (kaushal) 
------
commit 5016cc548d4368b1c180459d6fa8ae012bb21d6e
Author: Atin Mukherjee <amukherj>
Date:   Thu Jun 9 18:22:43 2016 +0530

    glusterd: fail volume delete if one of the node is down
    
    Deleting a volume in a cluster while one of its nodes is down is buggy,
    since once that node comes back up it re-syncs the same volume to the
    cluster. Until the soft-delete feature tracked in
    http://review.gluster.org/12963 is available, this change is a safeguard
    that blocks the volume deletion.
    
    Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
    BUG: 1344407
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/14681
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Kaushal M <kaushal>
    NetBSD-regression: NetBSD Build System <jenkins.org>
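
With this guard in place, the delete attempt from the scenario above is expected to be rejected at the CLI; a rough sketch follows (the volume name is a placeholder and the error text is indicative, not verbatim):

# glusterd on one peer is down; try the delete from a healthy node
gluster --mode=script volume delete demo-vol
# expected post-fix outcome (approximate wording):
#   volume delete: demo-vol: failed: Some of the peers are down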

Comment 1 Vijay Bellur 2016-06-10 08:35:05 UTC
REVIEW: http://review.gluster.org/14691 (glusterd: fail volume delete if one of the node is down) posted (#1) for review on release-3.8 by Atin Mukherjee (amukherj)

Comment 2 Vijay Bellur 2016-06-10 09:37:40 UTC
REVIEW: http://review.gluster.org/14691 (glusterd: fail volume delete if one of the node is down) posted (#2) for review on release-3.8 by Atin Mukherjee (amukherj)

Comment 3 Vijay Bellur 2016-06-13 10:42:18 UTC
REVIEW: http://review.gluster.org/14691 (glusterd: fail volume delete if one of the node is down) posted (#3) for review on release-3.8 by Atin Mukherjee (amukherj)

Comment 4 Vijay Bellur 2016-06-13 11:18:11 UTC
REVIEW: http://review.gluster.org/14691 (glusterd: fail volume delete if one of the node is down) posted (#4) for review on release-3.8 by Atin Mukherjee (amukherj)

Comment 5 Vijay Bellur 2016-06-13 14:39:50 UTC
COMMIT: http://review.gluster.org/14691 committed in release-3.8 by Niels de Vos (ndevos) 
------
commit a238ad371c32feddb5af8a48642870bc6b9ee767
Author: Atin Mukherjee <amukherj>
Date:   Thu Jun 9 18:22:43 2016 +0530

    glusterd: fail volume delete if one of the node is down
    
    Backport of http://review.gluster.org/14681
    
    Deleting a volume in a cluster while one of its nodes is down is buggy,
    since once that node comes back up it re-syncs the same volume to the
    cluster. Until the soft-delete feature tracked in
    http://review.gluster.org/12963 is available, this change is a safeguard
    that blocks the volume deletion.

    Please note that the test file backported with this commit had an issue:
    it started the volume and then tried to delete it, which fails regardless
    of this change, so the test did not actually validate the fix.
    http://review.gluster.org/#/c/14693/ fixed that problem in master, and the
    same fix is ported as part of this commit as well.
    
    Cherry picked from commit 5016cc548d4368b1c180459d6fa8ae012bb21d6e:
    > Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
    > BUG: 1344407
    > Signed-off-by: Atin Mukherjee <amukherj>
    > Reviewed-on: http://review.gluster.org/14681
    > Smoke: Gluster Build System <jenkins.com>
    > CentOS-regression: Gluster Build System <jenkins.com>
    > Reviewed-by: Kaushal M <kaushal>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    
    Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
    BUG: 1344631
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/14691
    Reviewed-by: Niels de Vos <ndevos>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
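
To make the test-ordering point concrete, a minimal prove-style test in the corrected order might look like the sketch below. It assumes the stock GlusterFS regression-test helpers from include.rc/cluster.rc (TEST, EXPECT_WITHIN, launch_cluster, kill_glusterd, $CLI_1, $H1/$H2, $B1, $V0); it is a sketch of the idea, not the actual backported test file.

#!/bin/bash
. $(dirname $0)/../../include.rc
. $(dirname $0)/../../cluster.rc

function peer_count {
    $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
}

cleanup;

TEST launch_cluster 2;                    # two glusterd instances
TEST $CLI_1 peer probe $H2;
EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

# create the volume but do NOT start it, so a failed delete can only be
# caused by the down peer, not by the volume being in the Started state
TEST $CLI_1 volume create $V0 $H1:$B1/$V0

TEST kill_glusterd 2                      # take the second glusterd down
TEST ! $CLI_1 volume delete $V0           # delete must now be rejected

cleanup;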

Comment 6 Niels de Vos 2016-06-16 12:34:04 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

