Bug 1605077 - If a node disconnects during volume delete, it assumes deleted volume as a freshly created volume when it is back online
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Duplicates: 1291262 (view as bug list)
Depends On:
Blocks: 1618221 1631248
 
Reported: 2018-07-20 06:39 UTC by Sanju
Modified: 2019-05-14 09:56 UTC
CC List: 5 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1618221 (view as bug list)
Environment:
Last Closed: 2018-10-23 15:14:52 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Sanju 2018-07-20 06:39:21 UTC
Description of problem:
In a cluster of n nodes, if a node goes down during a volume delete operation, it will still have the information about the deleted volume when it comes back online. The node treats this volume as a freshly created volume and displays its name when the volume list command is run, while all the remaining nodes in the cluster have no information about this volume.
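
For illustration only (these are not the reporter's steps), the scenario might be exercised roughly as follows on a 3-node cluster. The host names n1/n2/n3, the brick paths and the volume name are placeholders, glusterd is assumed to be managed by systemd, and the exact timing of the outage relative to the delete matters (see the race described in comment 2):

    # Create a test volume from n1; brick paths and names are placeholders.
    gluster volume create testvol replica 3 \
        n1:/bricks/testvol n2:/bricks/testvol n3:/bricks/testvol force

    # Take n3 down around the time the delete is issued.
    ssh n3 systemctl stop glusterd

    # Delete the volume while n3 is offline (--mode=script skips the prompt).
    gluster --mode=script volume delete testvol

    # Bring n3 back online and compare the views.
    ssh n3 systemctl start glusterd
    gluster volume list           # n1/n2: the volume is gone
    ssh n3 gluster volume list    # per this report, n3 still lists testvol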

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:
When the disconnected node is back online, the deleted volume's info should be removed from that node, and the volume list command should not display the name of the deleted volume.

Additional info:

Comment 1 Worker Ant 2018-07-31 07:29:32 UTC
REVIEW: https://review.gluster.org/20592 (glusterd: ignore importing volume which is undergoing a delete operation) posted (#1) for review on master by Atin Mukherjee

Comment 2 Worker Ant 2018-08-16 12:37:20 UTC
COMMIT: https://review.gluster.org/20592 committed in master by "Atin Mukherjee" <amukherj> with a commit message- glusterd: ignore importing volume which is undergoing a delete operation

Problem explanation:

Assume a 3-node cluster. If N1 originates a delete operation and, while
N1's commit phase completes, the glusterd service of N2 or N3 gets
disconnected from N1 (before completing its own commit phase), N1 will
end up importing the volume, which is still in-flight for deletion on
the other nodes, as a fresh volume, resulting in an incorrect
configuration state.

Fix:

Mark a volume as stage-deleted once a volume delete operation passes
its staging phase, and reset this flag during the unlock phase. Now, if
the same volume gets imported to other peers during this intermediate
phase, it should not be considered to be recreated.

An automated .t is quite tough to implement with the current infra.

Test Case:

1. Keep creating and deleting volumes in a loop on a 3-node cluster.
2. Simulate a network failure between the peers (ifdown followed by ifup).
3. Check that the output of 'gluster v list | wc -l' is the same across
   all 3 nodes during steps 1 & 2 (a rough shell sketch follows below).

Change-Id: Ifdd5dc39699120258d7fdd42fe2deb9de25c6246
Fixes: bz#1605077
Signed-off-by: Atin Mukherjee <amukherj>
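
A rough shell sketch of the above test case follows; it is not part of the commit. The host names n1/n2/n3, the brick paths, the interface name eth1 and the iteration count are placeholders, passwordless SSH between the peers is assumed, and eth1 is assumed to carry only the Gluster peer traffic (not the ssh session used to take it down):

    #!/bin/bash
    # 1. Keep creating and deleting volumes in a loop (run from n1).
    for i in $(seq 1 50); do
        gluster volume create "vol$i" replica 3 \
            n1:/bricks/vol$i n2:/bricks/vol$i n3:/bricks/vol$i force
        gluster --mode=script volume delete "vol$i"
    done &

    # 2. While the loop runs, simulate a network failure between the peers
    #    (ifdown followed by ifup, as in the test case above).
    ssh n2 'ifdown eth1; sleep 10; ifup eth1'
    wait

    # 3. The volume count should stay identical across all three nodes.
    for host in n1 n2 n3; do
        printf '%s: ' "$host"
        ssh "$host" 'gluster volume list | wc -l'
    done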

Comment 3 Worker Ant 2018-10-23 06:25:56 UTC
REVIEW: https://review.gluster.org/21463 (glusterd: improve logging for stage_deleted flag) posted (#1) for review on master by Sanju Rakonde

Comment 4 Worker Ant 2018-10-23 10:55:23 UTC
COMMIT: https://review.gluster.org/21463 committed in master by "Amar Tumballi" <amarts> with a commit message- glusterd: improve logging for stage_deleted flag

Change-Id: I5f0667a47ddd24cb00949c875c19f3d1dbd8d603
fixes: bz#1605077
Signed-off-by: Sanju Rakonde <srakonde>

Comment 5 Shyamsundar 2018-10-23 15:14:52 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 6 Shyamsundar 2019-03-25 16:30:33 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 7 Sanju 2019-05-14 09:56:53 UTC
*** Bug 1291262 has been marked as a duplicate of this bug. ***

