Bug 1527302 - [RFE] Implement reset-brick usage on the gluster volume
Summary: [RFE] Implement reset-brick usage on the gluster volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhi-1.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHHI-V 1.5
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1396993 1555200
Blocks: 1520833 1548985
 
Reported: 2017-12-19 07:24 UTC by Sahina Bose
Modified: 2018-11-08 05:38 UTC
CC: 6 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
The brick reset operation can now be performed from the Administration Portal.
Clone Of: 1396993
: 1590639
Environment:
Last Closed: 2018-11-08 05:37:25 UTC
Embargoed:


Attachments


Links
Red Hat Product Errata RHEA-2018:3523 (last updated 2018-11-08 05:38:50 UTC)

Description Sahina Bose 2017-12-19 07:24:11 UTC
+++ This bug was initially created as a clone of Bug #1396993 +++

Description of problem:
-----------------------
For a Gluster volume, 'gluster volume replace-brick' is useful when a brick needs to be replaced with a different one. However, it does not help when the source and destination brick paths are the same.

To address this case, gluster-3.9 introduced the reset-brick command, which can reset a brick in place.

There are two typical cases:
1. The server hosting a brick is reprovisioned and the user wants to reuse the same brick in the volume (the brick still contains data).

2. The server hosting a brick is reprovisioned and the brick is also formatted (the brick contains no data).

For example, take three servers (server1, server2, server3). A Gluster replica 3 volume is created with one brick on each server (brick1-server1, brick2-server2, brick3-server3).

Server3 goes bad and is reprovisioned; its brick (brick3-server3) may be intact in some cases, or formatted in others.

For the case 1 scenario:
1. Start the reset-brick operation:
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> start

2. Perform the reset-brick commit:
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> commit force

Healing then takes place for any new data created while server3 was down.

For case 2, where the brick is also formatted, the same steps apply, but the commit does not need to be forced. In this case, the new brick is healed completely.
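
For reference, a sketch of the complete CLI sequence for both cases, using the same placeholders as above. The explicit new-brick argument in the commit step follows the upstream reset-brick syntax ('reset-brick <vol> <src-brick> <new-brick> commit [force]') and is an assumption here, since this bug does not quote the full command form.

Case 1 (brick data intact, reuse it):
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> start
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> <server3>:/<brick3-server3> commit force

Case 2 (brick formatted; recreate the empty brick directory on server3, then commit without force):
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> start
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> <server3>:/<brick3-server3> commit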

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
NA

How reproducible:
-----------------
NA

Steps to Reproduce:
-------------------
NA

Actual results:
---------------
NA

Expected results:
-----------------
NA

Additional info:
----------------
https://bugzilla.redhat.com/show_bug.cgi?id=1266876

--- Additional comment from Laura Bailey on 2017-06-28 23:29:11 EDT ---

Is this going to be a UI option for users to implement this, or will they need to perform CLI reset-brick commands as already documented for RHGS?

--- Additional comment from Sahina Bose on 2017-07-14 05:25:48 EDT ---

(In reply to Laura Bailey from comment #1)
> Is this going to be a UI option for users to implement this, or will they
> need to perform CLI reset-brick commands as already documented for RHGS?

This is a UI option, and will only be used to replace the network used by the brick.
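
As an illustration of that network-replacement use case, a hedged sketch: the hostnames host1.front.example.com (management network) and host1.storage.example.com (storage network) and the brick path are hypothetical, and it assumes the documented commit form accepts a new hostname for the same brick path.

# gluster volume reset-brick <vol> host1.front.example.com:/gluster_bricks/data/data start
# gluster volume reset-brick <vol> host1.front.example.com:/gluster_bricks/data/data host1.storage.example.com:/gluster_bricks/data/data commit force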

Sas, in the use case you mention:
Server3 has gone bad and been reprovisioned, but its brick (brick3-server3) may be intact in some cases, or formatted in others.

Does server3 have a new gluster UUID? In that case, will reset-brick work?

--- Additional comment from Laura Bailey on 2017-07-16 21:23:43 EDT ---

Thanks Sahina; added this to the spreadsheet of bugs to be cloned for docs work.

--- Additional comment from SATHEESARAN on 2017-09-26 05:21:11 EDT ---

(In reply to Sahina Bose from comment #2)
> (In reply to Laura Bailey from comment #1)
> > Is this going to be a UI option for users to implement this, or will they
> > need to perform CLI reset-brick commands as already documented for RHGS?
> 
> This is a UI option, and will only be used to replace the network used by
> the brick.
> 
> Sas, in the usecase you mention :
> Server3 has gone bad and got reprovisioned, but it may got its brick (
> brick3-server3 ) intact in some cases or bricks formatted in some other case.
> 
> Does server3 have a new gluster UUID? In this case will reset-brick work?

Sorry for the missing information. If the gluster UUID has changed, then reset-brick will not work.
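
A quick way to check whether the reprovisioned server kept its gluster UUID (a sketch using the default glusterd paths, not commands taken from this bug):

On server3, the locally stored UUID:
# cat /var/lib/glusterd/glusterd.info

On any other peer, the UUID recorded for server3:
# gluster peer status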

Comment 3 SATHEESARAN 2018-05-12 11:20:06 UTC
Verified with RHV 4.2.3-6 and glusterfs-3.12 with the following steps.

1. Create a new volume in a hyperconverged setup
2. Select a brick and perform a brick reset

Observed that the brick reset was successful and the contents of the brick were healed successfully.
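
For completeness, heal completion can also be confirmed from the CLI (a sketch; <vol> stands for the volume created in step 1):

# gluster volume heal <vol> info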

Comment 7 errata-xmlrpc 2018-11-08 05:37:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3523

