Bug 1696334

Summary: Improve gluster-cli error message when node detach fails due to existing bricks
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: glusterd
Version: rhgs-3.5
Reporter: Raghavendra Talur <rtalur>
Assignee: Sanju <srakonde>
QA Contact: Kshithij Iyer <kiyer>
CC: amukherj, kiyer, rhinduja, rhs-bugs, storage-qa-internal, vbellur
Status: CLOSED ERRATA
Severity: low
Priority: low
Target Release: RHGS 3.5.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-6.0-2
Clones: 1697866 (view as bug list)
Last Closed: 2019-10-30 12:20:50 UTC
Type: Bug
Bug Depends On: 1697866    
Bug Blocks: 1696806    

Description Raghavendra Talur 2019-04-04 14:47:57 UTC
Description of problem:

When a gluster peer node has failed due to hardware issues, it should be possible to detach it.

Currently, the peer detach command fails because the peer hosts one or more bricks.
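For illustration (the host name "failed-node" below is made up), the attempt looks roughly like this:

  # Try to drop the dead peer from the trusted storage pool.
  gluster peer detach failed-node
  # Rejected, because bricks belonging to this peer still exist in the cluster.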

If we instead try to delete the volume that contains that brick, the volume delete also fails, with a "Not all peers are up" error.
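Roughly (the volume name "myvol" is made up; the error text is the one we saw):

  # A volume has to be stopped before it can be deleted.
  gluster volume stop myvol
  gluster volume delete myvol
  # Fails with: "Not all peers are up"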

One way out is to use the replace-brick command to move the brick to some other node.

However, replace-brick is not always an option (for example, when there is no other node with spare capacity to host the brick).
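When it is possible, the replace-brick route looks roughly like this (volume, host, and brick names are made up):

  # Move the brick from the failed node to a healthy node that has space for it.
  gluster volume replace-brick myvol failed-node:/bricks/brick1 healthy-node:/bricks/brick1 commit force
  # Once the failed node no longer hosts any bricks, the detach goes through.
  gluster peer detach failed-node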

A trick that worked for us was to use remove-brick to convert the replica 3 volume to replica 2 and then peer detach the node.
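Roughly (again with made-up names, assuming the failed node hosts exactly one brick of a replica 3 volume):

  # Drop the brick on the dead node, reducing the replica count from 3 to 2.
  gluster volume remove-brick myvol replica 2 failed-node:/bricks/brick1 force
  # The peer no longer hosts any bricks, so the detach now succeeds.
  gluster peer detach failed-node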


Maybe the peer detach command can suggest this workaround in its output, something along the lines of:


"This peer has one or more bricks. If the peer is lost and is not recoverable then you should use either replace-brick or remove-brick procedure to remove all bricks from the peer and attempt the peer detach again"

Comment 2 Sanju 2019-04-09 08:49:41 UTC
upstream patch: https://review.gluster.org/#/c/glusterfs/+/22534/

Comment 10 errata-xmlrpc 2019-10-30 12:20:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249