Bug 1696334 - Improve gluster-cli error message when node detach fails due to existing bricks
Summary: Improve gluster-cli error message when node detach fails due to existing bricks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Sanju
QA Contact: Kshithij Iyer
URL:
Whiteboard:
Depends On: 1697866
Blocks: 1696806
 
Reported: 2019-04-04 14:47 UTC by Raghavendra Talur
Modified: 2019-10-30 12:21 UTC
CC: 6 users

Fixed In Version: glusterfs-6.0-2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1697866 (view as bug list)
Environment:
Last Closed: 2019-10-30 12:20:50 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2019:3249 (last updated 2019-10-30 12:21:14 UTC)

Description Raghavendra Talur 2019-04-04 14:47:57 UTC
Description of problem:

When a gluster peer node has failed due to hardware issues, it should be possible to detach it.

Currently, the peer detach command fails because the peer hosts one or more bricks.

If deleting the volume that contains that brick is attempted, the volume delete fails with a "Not all peers are up" error.
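
For illustration, the failing sequence looks roughly like this (the volume name "repvol" and host name "node3" are hypothetical):

# node3 has failed and still hosts a brick of the replica 3 volume "repvol"
gluster peer detach node3
# rejected, because node3 still hosts one or more bricks
gluster volume delete repvol
# fails with "Not all peers are up", since node3 is unreachable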

One way out is to use the replace-brick command to move the brick to some other node.
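
If a spare node is available, a rough sketch of that approach (host and brick path names are hypothetical):

gluster volume replace-brick repvol node3:/bricks/brick1 node4:/bricks/brick1 commit force
# once node3 no longer hosts any bricks, the detach can be retried
gluster peer detach node3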

However, using replace-brick might not always be possible.

A trick that worked for us was to use remove-brick to convert the replica 3 volume to replica 2 and then peer detach the node.
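
A rough sketch of that workaround, again with hypothetical names:

# drop the brick on the failed node, reducing the volume from replica 3 to replica 2
gluster volume remove-brick repvol replica 2 node3:/bricks/brick1 force
# node3 no longer hosts any bricks, so the detach succeeds
gluster peer detach node3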


Maybe the peer detach command could suggest this workaround in its output. Something along the lines of:


"This peer has one or more bricks. If the peer is lost and is not recoverable then you should use either replace-brick or remove-brick procedure to remove all bricks from the peer and attempt the peer detach again"

Comment 2 Sanju 2019-04-09 08:49:41 UTC
upstream patch: https://review.gluster.org/#/c/glusterfs/+/22534/

Comment 10 errata-xmlrpc 2019-10-30 12:20:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

