Bug 1344634
Summary: | Fail delete volume operation if one of the glusterd instances is down in the cluster | |
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj>
Component: | glusterd | Assignee: | Atin Mukherjee <amukherj>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 3.7.12 | CC: | bugs, kaushal
Target Milestone: | --- | Keywords: | Triaged
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.7.13 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1344407 | Environment: |
Last Closed: | 2016-07-20 13:55:16 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1344407 | |
Bug Blocks: | 1344625, 1344631 | |
Description
Atin Mukherjee
2016-06-10 08:36:21 UTC
REVIEW: http://review.gluster.org/14692 (glusterd: fail volume delete if one of the node is down) posted (#1) for review on release-3.7 by Atin Mukherjee (amukherj)

REVIEW: http://review.gluster.org/14692 (glusterd: fail volume delete if one of the node is down) posted (#2) for review on release-3.7 by Atin Mukherjee (amukherj)

REVIEW: http://review.gluster.org/14692 (glusterd: fail volume delete if one of the node is down) posted (#3) for review on release-3.7 by Atin Mukherjee (amukherj)

COMMIT: http://review.gluster.org/14692 committed in release-3.7 by Atin Mukherjee (amukherj)

------

commit 1a21cfba8e7a5f4ac1b8a8c3b8e06574b237420d
Author: Atin Mukherjee <amukherj>
Date: Thu Jun 9 18:22:43 2016 +0530

glusterd: fail volume delete if one of the node is down

Backport of http://review.gluster.org/14681

Deleting a volume while one of the cluster's nodes is down is buggy: once that node comes back up, it resyncs the deleted volume into the cluster. Until the soft-delete feature tracked in http://review.gluster.org/12963 is available, this change acts as a safeguard that blocks the volume deletion.

Please note that the test file backported with this commit has an issue: it starts the volume and then tries to delete it, which fails regardless, so the test does not actually validate the fix. http://review.gluster.org/#/c/14693/ fixed the problem in master, and the same fix is ported as part of this commit as well.

Change-Id: I9c13869c4a7e7a947f88842c6dc6f231c0eeda6c
BUG: 1344634
Signed-off-by: Atin Mukherjee <amukherj>
Reviewed-on: http://review.gluster.org/14681
Smoke: Gluster Build System <jenkins.com>
CentOS-regression: Gluster Build System <jenkins.com>
Reviewed-by: Kaushal M <kaushal>
NetBSD-regression: NetBSD Build System <jenkins.org>
Reviewed-on: http://review.gluster.org/14692
Smoke: Gluster Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Prashanth Pai <ppai>

------

This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.7.13, please open a new bug report.

glusterfs-3.7.13 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-July/027604.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
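For illustration, here is a minimal C sketch of the safeguard the commit describes: a staging (pre-validation) step that refuses `volume delete` while any peer is disconnected. The names used here (`struct peer`, `all_peers_up`, `stage_delete_volume`) are simplified stand-ins, not the actual glusterd source.

```c
/* Sketch of the "fail volume delete if a node is down" guard.
 * All type and function names are hypothetical, simplified from
 * the idea in the commit; this is not the glusterd implementation. */

#include <stdbool.h>
#include <stdio.h>

struct peer {
    const char  *hostname;
    bool         connected;   /* false while that peer's glusterd is down */
    struct peer *next;
};

/* Return true only when every peer in the cluster is reachable. */
static bool
all_peers_up(struct peer *peers)
{
    for (struct peer *p = peers; p != NULL; p = p->next) {
        if (!p->connected)
            return false;
    }
    return true;
}

/* Staging of "volume delete": reject the operation outright if any
 * peer is down, so that peer can never resync the deleted volume
 * back into the cluster when it rejoins. */
static int
stage_delete_volume(struct peer *peers, const char *volname)
{
    if (!all_peers_up(peers)) {
        fprintf(stderr,
                "volume delete %s failed: one or more peers are down\n",
                volname);
        return -1;
    }
    /* ... remaining staging checks: volume exists, is stopped, etc. */
    printf("volume %s passed delete staging\n", volname);
    return 0;
}

int
main(void)
{
    struct peer node2 = { "node2", false, NULL };  /* glusterd down */
    struct peer node1 = { "node1", true, &node2 };

    /* With node2 down, staging rejects the delete and we exit non-zero. */
    return stage_delete_volume(&node1, "test-vol") == 0 ? 0 : 1;
}
```

The key design point, per the commit message, is that the check runs in the pre-validation phase, before the delete is committed anywhere, which is why it works as a stopgap until the soft-delete feature (http://review.gluster.org/12963) lands.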