Description of problem:
-----------------------
For a Gluster volume, 'gluster volume replace-brick' is useful when a brick needs to be replaced with a different one. It does not help, however, when the source and destination brick paths are the same. To address this case, gluster-3.9 introduced the reset-brick operation, which resets a brick in place.

There are two typical cases:

1. The server hosting a brick is reprovisioned and the user wants to reuse the same brick in the volume (the brick still has data).
2. The server hosting a brick is reprovisioned and the brick is also formatted (the brick has no data).

Take for example three servers (server1, server2, server3). A Gluster replica 3 volume is created with one brick on each of these servers (brick1-server1, brick2-server2, brick3-server3). Server3 goes bad and is reprovisioned; its brick (brick3-server3) may be intact in some cases, or formatted in others.

For the case 1 scenario:

1. Start the reset-brick operation:
   # gluster volume reset-brick <vol> <server3>:/<brick3-server3> start
2. Perform the reset-brick commit:
   # gluster volume reset-brick <vol> <server3>:/<brick3-server3> commit force

Healing then takes care of whatever new data was created while server3 was down.

For case 2, where the brick is also formatted, the same steps apply, but the commit does not need to be forced (see the sketch below). In this case, the new brick is healed completely.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
NA

How reproducible:
-----------------
NA

Steps to Reproduce:
-------------------
NA

Actual results:
---------------
NA

Expected results:
-----------------
NA

Additional info:
----------------
https://bugzilla.redhat.com/show_bug.cgi?id=1266876
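A minimal sketch of the case 2 sequence, using the same placeholder volume and brick names as above. The only assumption, taken from the description, is that the commit step is simply run without 'force'; exact syntax may vary between glusterfs releases:

   # gluster volume reset-brick <vol> <server3>:/<brick3-server3> start
   # gluster volume reset-brick <vol> <server3>:/<brick3-server3> commit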
(In reply to Laura Bailey from comment #1)
> Is this going to be a UI option for users to implement this, or will they
> need to perform CLI reset-brick commands as already documented for RHGS?

This is a UI option, and will only be used to replace the network used by the brick.

Sas, in the use case you mention: "Server3 has gone bad and got reprovisioned, but it may have its brick (brick3-server3) intact in some cases or formatted in others."

Does server3 have a new gluster UUID? If so, will reset-brick still work?
(In reply to Sahina Bose from comment #2)
> (In reply to Laura Bailey from comment #1)
> > Is this going to be a UI option for users to implement this, or will they
> > need to perform CLI reset-brick commands as already documented for RHGS?
>
> This is a UI option, and will only be used to replace the network used by
> the brick.
>
> Sas, in the usecase you mention :
> Server3 has gone bad and got reprovisioned, but it may got its brick (
> brick3-server3 ) intact in some cases or bricks formatted in some other case.
>
> Does server3 have a new gluster UUID? In this case will reset-brick work?

Sorry for the missing information. If the gluster UUID has changed (i.e. server3 came back with a new UUID after reprovisioning), then reset-brick will not work.
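For reference, one way to check whether the reprovisioned server came back with a new gluster UUID; these are standard glusterfs commands and paths, not something specified in this bug report, so treat them as an assumption about a typical deployment:

   # gluster peer status                     (from another node; shows the UUID recorded for server3)
   # cat /var/lib/glusterd/glusterd.info     (on server3; shows its current UUID)

If the UUID in glusterd.info no longer matches the one the other peers have recorded, the server has a new UUID.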
Verified with RHV 4.2.3-6 and glusterfs-3.12 with the following steps:

1. Create a new volume in a hyperconverged setup.
2. Select a brick and perform a reset brick.

Observed that the reset brick was successful and the contents of the brick healed successfully.
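As a side note, heal progress after a reset brick can also be confirmed from the CLI; this is the standard glusterfs heal-info command with a placeholder volume name, not a step from the verification above:

   # gluster volume heal <vol> info

An empty entry list for every brick indicates that healing has completed.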
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.