Bug 976902
| Field | Value |
| --- | --- |
| Summary | gluster peer detach force should fail if the peer has bricks as part of a distributed volume |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | glusterfs |
| Version | 2.1 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Hardware | Unspecified |
| OS | Unspecified |
| Target Release | RHGS 3.0.0 |
| Fixed In Version | glusterfs-3.6.0.3-1.el6rhs |
| Reporter | Tushar Katarki <tkatarki> |
| Assignee | Atin Mukherjee <amukherj> |
| QA Contact | spandura |
| CC | amukherj, kaushal, nsathyan, pprakash, psriniva, rhs-bugs, sasundar, sdharane, spandura, ssaha, ssamanta, vbellur |
| Type | Bug |
| Last Closed | 2014-09-22 19:28:30 UTC |
| Bug Depends On | 983590 |
| Doc Type | Bug Fix |
| Doc Text | peer detach force should fail if the peer (to be detached) has bricks as part of a distributed volume. However, if the peer holds all of the bricks of that volume and no other bricks, peer detach should succeed. |
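A minimal illustration of the behavior described in the Doc Text, assuming a peer `server2` that hosts one brick of a distribute volume `distvol`; the hostnames, brick paths, and expected outcomes are illustrative assumptions, not captured CLI output, and only the command syntax is real gluster CLI:

```
# With the fix, a peer that still hosts bricks of a volume cannot be
# detached, with or without 'force':
gluster peer detach server2
gluster peer detach server2 force
# expected: both commands fail because server2 still hosts a brick

# Supported workflow: migrate the brick off the peer first, then detach.
gluster volume remove-brick distvol server2:/rhs/brick1 start
gluster volume remove-brick distvol server2:/rhs/brick1 commit
gluster peer detach server2
# expected: detach now succeeds
```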
Description
Tushar Katarki
2013-06-21 20:09:46 UTC
(In reply to Tushar Katarki from comment #0)

> Description of problem:
>
> In a recent case at Barclay's that was reported on the Gluster-users list,
> it appears that the user used "gluster peer detach force" on a node that
> had bricks that were still part of a volume. This caused issues as follows.
>
> "Since bricks from a detached node are part of the volume configuration,
> further volume configuration changes are now failing. The normal use case
> for "detach force" is to remove a dead node that does not have any bricks
> from it in any volume. If a dead node has bricks, it is recommended to
> remove bricks from it before issuing a detach force.

This is incorrect. The 'force' option was introduced specifically to remove dead peers that still have bricks on them. A normal detach checks for bricks on the peer being detached; the 'force' option bypasses that check. Neither variant of detach cares whether the peer being detached is online or not.

> To avoid this situation, it is best if gluster peer detach force fails when
> used on a node that still has bricks that are part of a volume. It should
> further tell the user to remove bricks from the volume before using detach
> force.

Going ahead with removing the 'force' option for 'peer detach', as it causes more problems than it solves. We will need to create correct documentation for the steps to be followed to remove a downed and unrecoverable peer.

Patch submitted, review comments awaited.
Patch review link: http://review.gluster.org/#/c/5325/

RCA
---
Peer detach force was not validating whether the peer to be detached holds any associated bricks, so forcibly detaching such a peer could cause data loss, which is never a good thing.

Setting flags required to add BZs to RHS 3.0 Errata.

Verified the fix on the build "glusterfs 3.6.0.15 built on Jun 9 2014 11:03:54".

Cases verified:
==============
1. "peer detach" of an online peer should fail when the peer has online bricks on it.
2. "peer detach" of an online peer should fail when the peer has offline bricks on it.
3. "peer detach force" of an offline peer should fail when the peer has online bricks on it.
4. "peer detach force" of an offline peer should fail when the peer has offline bricks on it.

Bug is fixed. Moving the bug to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html
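A hedged sketch of the verification matrix above, assuming a two-node cluster (`node1`, `node2`) with a distribute volume `testvol` that has a brick on `node2`; the node names and outcome comments are assumptions paraphrasing the verification notes, not recorded output:

```
# Run from node1.

# Cases 1 and 2: node2 online, its brick either online or offline.
# A plain detach must fail while node2 still hosts a brick of testvol:
gluster peer detach node2            # expected: fails

# Cases 3 and 4: take node2 offline (e.g. stop glusterd on node2), with
# its brick online or offline. 'force' no longer bypasses the brick check:
gluster peer detach node2 force      # expected: fails

# Sanity check that node2 still appears in the cluster afterwards:
gluster peer status
```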