Bug 1396993 - [RFE] Implement reset-brick usage on the gluster volume
Summary: [RFE] Implement reset-brick usage on the gluster volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Frontend.WebAdmin
Version: 4.1.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.2.2
Target Release: 4.2.2.4
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-3 1527302 1555200
 
Reported: 2016-11-21 11:00 UTC by SATHEESARAN
Modified: 2018-06-13 06:03 UTC
CC List: 4 users

Fixed In Version: ovirt-engine-4.2.2.4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1527302 1555200
Environment:
Last Closed: 2018-05-14 15:10:57 UTC
oVirt Team: Gluster
Embargoed:
sabose: ovirt-4.2?
rule-engine: planning_ack?
rule-engine: devel_ack+
sasundar: testing_ack+




Links
System ID Private Branch Status Summary Last Updated
oVirt gerrit 83979 0 master MERGED gluster: api for reset-brick 2020-05-25 07:15:47 UTC
oVirt gerrit 83988 0 master ABANDONED engine: Support for reset brick 2020-05-25 07:15:48 UTC
oVirt gerrit 85880 0 master MERGED webadmin: Support for reset brick 2020-05-25 07:15:48 UTC
oVirt gerrit 85884 0 master ABANDONED common: Common code to support reset brick feature 2020-05-25 07:15:48 UTC
oVirt gerrit 85885 0 master MERGED engine: backend code to support reset-brick feature 2020-05-25 07:15:48 UTC
oVirt gerrit 88894 0 ovirt-4.2 MERGED gluster: api for reset-brick 2020-05-25 07:15:47 UTC
oVirt gerrit 88897 0 ovirt-engine-4.2 MERGED engine: backend code to support reset-brick feature 2020-05-25 07:15:48 UTC
oVirt gerrit 88898 0 ovirt-engine-4.2 MERGED webadmin: Support for reset brick 2020-05-25 07:15:48 UTC

Description SATHEESARAN 2016-11-21 11:00:09 UTC
Description of problem:
-----------------------
For a gluster volume, when a brick needs to be replaced with a different one, 'gluster volume replace-brick' is useful. However, it does not help when the source and destination brick paths are the same.

To address this case, gluster-3.9 introduced the reset-brick command, which resets a brick in place.
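
For contrast, a minimal sketch of the replace-brick form referred to above, using the same placeholder style as the steps below; it requires a destination brick different from the source, which is exactly the limitation reset-brick removes:

# gluster volume replace-brick <vol> <server>:/<old-brick> <server>:/<new-brick> commit force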

There are two typical cases:
1. The server hosting the brick is reprovisioned and the user wants to reuse that same brick in the volume (the brick still has its data).

2. The server hosting the brick is reprovisioned and the brick is also formatted (the brick has no data).

Take for example three servers (server1, server2, server3). A gluster replica 3 volume is created with one brick on each of these servers (brick1-server1, brick2-server2, brick3-server3).

Server3 goes bad and is reprovisioned; its brick (brick3-server3) may be intact in some cases, or formatted in others.

For the case 1 scenario:
1. Start reset-brick operation:
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> start

2. Perform reset-brick commit:
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> <server3>:/<brick3-server3> commit force

Healing then happens for whatever new data was created while server3 was down.

For case 2, where the brick has also been formatted, the same steps apply, but the commit does not need to be forced. In this case, the new brick is healed completely (see the sketch below).
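
Putting both cases together, a sketch of the full sequence (placeholders as above; note that per the gluster documentation the commit form takes the brick spec twice, as source and new brick):

Case 1 (brick data intact):
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> start
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> <server3>:/<brick3-server3> commit force
# gluster volume heal <vol> info

Case 2 (brick formatted, empty):
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> start
# gluster volume reset-brick <vol> <server3>:/<brick3-server3> <server3>:/<brick3-server3> commit
# gluster volume heal <vol> info

In case 1 only the delta written while server3 was down is healed; in case 2 a full heal repopulates the empty brick.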

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
NA

How reproducible:
-----------------
NA

Steps to Reproduce:
-------------------
NA

Actual results:
---------------
NA

Expected results:
-----------------
NA

Additional info:
----------------
https://bugzilla.redhat.com/show_bug.cgi?id=1266876

Comment 2 Sahina Bose 2017-07-14 09:25:48 UTC
(In reply to Laura Bailey from comment #1)
> Is this going to be a UI option for users to implement this, or will they
> need to perform CLI reset-brick commands as already documented for RHGS?

This is a UI option, and will only be used to replace the network used by the brick.
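
As a sketch of that network-replacement use (hostnames here are hypothetical), the brick path stays the same and only the host part of the brick spec moves to the FQDN on the new gluster network:

# gluster volume reset-brick <vol> server3-mgmt:/<brick3> start
# gluster volume reset-brick <vol> server3-mgmt:/<brick3> server3-glusternet:/<brick3> commit force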

Sas, in the use case you mention:
Server3 goes bad and is reprovisioned; its brick (brick3-server3) may be intact in some cases, or formatted in others.

Does server3 have a new gluster UUID? In this case will reset-brick work?

Comment 4 SATHEESARAN 2017-09-26 09:21:11 UTC
(In reply to Sahina Bose from comment #2)
> (In reply to Laura Bailey from comment #1)
> > Is this going to be a UI option for users to implement this, or will they
> > need to perform CLI reset-brick commands as already documented for RHGS?
> 
> This is a UI option, and will only be used to replace the network used by
> the brick.
> 
> Sas, in the use case you mention:
> Server3 goes bad and is reprovisioned; its brick (brick3-server3) may be
> intact in some cases, or formatted in others.
> 
> Does server3 have a new gluster UUID? In this case will reset-brick work?

Sorry for the missing information. If the gluster UUID has changed, then reset-brick will not work.
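
A quick way to check whether a reprovisioned server kept its gluster UUID (the glusterd.info path below is the standard glusterd location; verify on your build) is to compare the local daemon UUID against what the other peers still expect:

# grep UUID /var/lib/glusterd/glusterd.info

and on any other peer:

# gluster peer status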

Comment 5 SATHEESARAN 2018-05-12 11:19:25 UTC
Verified with RHV 4.2.3-6 and glusterfs-3.12 using the following steps.

1. Create a new volume in a hyperconverged setup
2. Select a brick and perform a reset brick

Observed that the reset-brick operation succeeded and the contents of the brick were healed.
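
For reference, the kind of CLI check that confirms this (volume name is a placeholder):

# gluster volume status <vol>
# gluster volume heal <vol> info

All bricks should report online, and heal info should eventually show zero entries per brick.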

Comment 6 Sandro Bonazzola 2018-05-14 15:10:57 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

