Bug 763924 (GLUSTER-2192)

Summary: add-brick and remove-brick changes order of subvolumes
Product: [Community] GlusterFS
Reporter: Harshavardhana <fharshav>
Component: glusterd
Assignee: tcp
Status: CLOSED NOTABUG
QA Contact:
Severity: low
Docs Contact:
Priority: low
Version: 3.1.1
CC: amarts, cww, gluster-bugs, krishna, vijay
Target Milestone: ---
Target Release: ---
Hardware: All
OS: Linux
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: ---
Regression: RTNR
Mount Type: ---
Documentation: DNR
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Description Harshavardhana 2010-12-05 02:11:00 EST
For example, with a volume of 48 bricks:

Remove 5 bricks and then add 5 bricks: the added bricks are not placed back at their original positions, so the proper topology is never maintained.

This topology, i.e. the order of sub-volumes, is important in the case of distribute.

Such a change in topology causes live I/O to fail. Because the order of sub-volumes has changed, distribute tries to rehash everything and writes a new layout.
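For illustration, a minimal Python sketch of why this hurts. This is a simplified model, not gluster's actual DHT algorithm (real DHT uses per-directory hash ranges stored in xattrs), but it is equally sensitive to sub-volume order; the brick names are hypothetical:

```python
import zlib

def place(filename, subvols):
    # Simplified stand-in for distribute's name -> sub-volume mapping;
    # real DHT uses per-directory hash ranges, but the mapping is
    # equally dependent on the order of the sub-volume list.
    return subvols[zlib.crc32(filename.encode()) % len(subvols)]

before = ["brick1", "brick2", "brick3", "brick4", "brick5"]
# remove-brick brick2, then add-brick a replacement: the new brick is
# appended at the end, so every position after brick1 shifts.
after = ["brick1", "brick3", "brick4", "brick5", "brick2-new"]

files = [f"file{i}" for i in range(1000)]
moved = sum(place(f, before) != place(f, after) for f in files)
print(f"{moved} of {len(files)} files now hash to a different sub-volume")
```

Most files end up mapped to a different sub-volume even though only one brick actually changed, which is why distribute has to rewrite the layout.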

This case was hit at a customer site during a LUN migration.

The current workaround is to use replace-brick on a stopped volume to do an in-place brick replacement.
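Continuing the simplified model above, a sketch of why the replace-brick workaround is safe: the new brick takes the old brick's slot instead of being appended, so the ordered sub-volume list is unchanged and no rehash is needed (brick names are again hypothetical):

```python
import zlib

def place(filename, subvols):
    # Simplified stand-in for distribute's name -> sub-volume mapping;
    # real DHT consults per-directory layout ranges in xattrs.
    return zlib.crc32(filename.encode()) % len(subvols)

vols = ["brick1", "brick2", "brick3", "brick4"]

# replace-brick style: the new brick is swapped in at the old brick's
# position, so the list keeps the same length and ordering.
replaced = list(vols)
replaced[1] = "brick2-migrated"

files = [f"file{i}" for i in range(100)]
# Every file still resolves to the same slot, so the layout is intact;
# only the data on the replaced brick itself has to be migrated.
unchanged = all(place(f, vols) == place(f, replaced) for f in files)
print("layout preserved:", unchanged)
```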
Comment 1 tcp 2011-01-20 05:58:05 EST
I am not clear about what the actual problem is.

Are we adding a new brick, or are we adding the old ones back (with no change in data but with a change in the brick parameters, such as hostname or path)?

Can you please elaborate on the LUN migration setup, and clarify what the actual problem is? Do you mean that the problem in such a scenario is that the rehashing will cause the creation of a lot of link files?
Comment 2 tcp 2011-01-25 01:16:07 EST
Can you please respond to the update?
Comment 3 Harshavardhana 2011-01-26 17:16:13 EST
(In reply to comment #2)
> Can you please respond to the update?

Resolving this, since replace-brick fixes the issue.