Bug 1605077
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | If a node disconnects during volume delete, it treats the deleted volume as a freshly created volume when it comes back online | | |
| Product: | [Community] GlusterFS | Reporter: | Sanju <srakonde> |
| Component: | glusterd | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | amukherj, bugs, prasanna.kalever, rtalur, srakonde |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-6.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1618221 (view as bug list) | Environment: | |
| Last Closed: | 2018-10-23 15:14:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1618221, 1631248 | | |
Description (Sanju, 2018-07-20 06:39:21 UTC)
REVIEW: https://review.gluster.org/20592 (glusterd: ignore importing volume which is undergoing a delete operation) posted (#1) for review on master by Atin Mukherjee

COMMIT: https://review.gluster.org/20592 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

glusterd: ignore importing volume which is undergoing a delete operation

Problem explanation: In a 3-node cluster, if N1 originates a delete operation and the glusterd service of N2 or N3 gets disconnected from N1 before completing its commit phase (while N1's commit phase completes), N1 will end up importing the volume, which is still in flight for a delete on the other nodes, as a fresh volume, resulting in an incorrect configuration state.

Fix: Mark a volume as stage-deleted once a volume delete operation passes its staging phase, and reset this flag during the unlock phase. If the same volume gets imported from other peers during this intermediate phase, it should not be considered to be recreated (a minimal sketch of this mechanism appears at the end of this report).

An automated .t test is quite tough to implement with the current infrastructure.

Test Case:
1. Keep creating and deleting volumes in a loop on a 3-node cluster.
2. Simulate network failure between the peers (ifdown followed by ifup).
3. Check that the output of 'gluster v list | wc -l' is the same across all 3 nodes during steps 1 and 2.

Change-Id: Ifdd5dc39699120258d7fdd42fe2deb9de25c6246
Fixes: bz#1605077
Signed-off-by: Atin Mukherjee <amukherj>

REVIEW: https://review.gluster.org/21463 (glusterd: improve logging for stage_deleted flag) posted (#1) for review on master by Sanju Rakonde

COMMIT: https://review.gluster.org/21463 committed in master by "Amar Tumballi" <amarts> with a commit message:

glusterd: improve logging for stage_deleted flag

Change-Id: I5f0667a47ddd24cb00949c875c19f3d1dbd8d603
fixes: bz#1605077
Signed-off-by: Sanju Rakonde <srakonde>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

*** Bug 1291262 has been marked as a duplicate of this bug. ***
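To make the fix concrete, here is a minimal, self-contained C sketch of the set/reset/check sequence the commit message describes. Only the `stage_deleted` flag name comes from the commits above; the structure and function names are simplified stand-ins for illustration, not glusterd's real API.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for glusterd's per-volume record; the real
 * structure carries far more state. */
typedef struct {
    char volname[256];
    bool stage_deleted; /* set once a delete passes its staging phase */
} volinfo_t;

/* Staging phase of a volume delete: mark the volume so that peers'
 * configuration imports do not treat it as a live volume. */
static void stage_delete_volume(volinfo_t *vol)
{
    vol->stage_deleted = true;
    printf("staging delete: %s marked stage_deleted\n", vol->volname);
}

/* Unlock phase: the delete either committed (volume is gone) or was
 * aborted, so the in-flight marker is reset. */
static void unlock_volume(volinfo_t *vol)
{
    vol->stage_deleted = false;
}

/* Import path: when a peer's configuration arrives, skip any volume
 * that is in flight for a delete instead of recreating it. */
static bool should_import_volume(const volinfo_t *vol)
{
    if (vol->stage_deleted) {
        printf("import: %s is undergoing a delete, ignoring\n",
               vol->volname);
        return false;
    }
    return true;
}

int main(void)
{
    volinfo_t vol = { .stage_deleted = false };
    strncpy(vol.volname, "testvol", sizeof(vol.volname) - 1);

    /* N1 stages a delete; a peer disconnects before the commit phase
     * finishes, then reconnects and sends its view of the volume. */
    stage_delete_volume(&vol);
    should_import_volume(&vol); /* ignored: delete is in flight */

    unlock_volume(&vol);        /* delete completed or aborted */
    should_import_volume(&vol); /* imported normally again */
    return 0;
}
```

In the actual patch the flag would be carried on glusterd's volume info structure and consulted on the peer-import path; this sketch only models the intermediate state that keeps a half-deleted volume from being reimported as a fresh one.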