Bug 1408418 - Rebalance process is behaving differently for AFR and EC volume.
Summary: Rebalance process is behaving differently for AFR and EC volume.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Ravishankar N
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks: 1536024
 
Reported: 2016-12-23 10:27 UTC by Byreddy
Modified: 2018-09-10 12:07 UTC (History)
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1536024
Environment:
Last Closed: 2018-09-10 12:07:22 UTC
Embargoed:


Attachments

Description Byreddy 2016-12-23 10:27:04 UTC
Description of problem:
=======================
The rebalance process behaves differently for AFR and EC volumes: for an EC volume, heal operations are performed during rebalance, but for an AFR volume no heal happens during rebalance because heal is disabled there. The fix that disabled heal during rebalance for the AFR volume type is:

https://bugzilla.redhat.com/show_bug.cgi?id=808977
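
A quick way to compare the two behaviours after a rebalance run is to grep the rebalance logs for heal messages. This is only a sketch: it assumes the default glusterfs log location, and the volume names are placeholders.

    # EC volume: heal related messages show up during rebalance
    grep -i heal /var/log/glusterfs/<ec-volname>-rebalance.log

    # AFR (replicate) volume: no heal related messages are expected
    grep -i heal /var/log/glusterfs/<afr-volname>-rebalance.log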


Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.8.4-9.el6rhs.x86_64


How reproducible:
=================
Always


Steps to Reproduce:
===================
ON EC volume:

1. Create an EC volume and fuse mount it.
2. Bring one brick down.
3. Write enough data from the fuse mount.
4. Add one more subvolume to the volume.
5. Bring up the offline brick using the volume start force option.
6. Trigger the rebalance manually.
7. Check the rebalance logs // you will see heal related entries in the rebalance log (a CLI sketch of these steps follows)
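
A rough CLI sketch of the above steps. The hostnames, brick paths, and the volume name ecvol are placeholders for illustration, not taken from this report:

    gluster volume create ecvol disperse 6 redundancy 2 server{1..6}:/bricks/ec/b1
    gluster volume start ecvol
    mount -t glusterfs server1:/ecvol /mnt/ecvol

    # step 2: bring one brick down (find its PID in volume status and kill it)
    gluster volume status ecvol
    kill <brick-pid>

    # step 3: write enough data from the fuse mount
    for i in {1..1000}; do dd if=/dev/urandom of=/mnt/ecvol/file.$i bs=1M count=1; done

    # step 4: add one more disperse subvolume (six more bricks)
    gluster volume add-brick ecvol server{1..6}:/bricks/ec/b2

    # step 5: bring the offline brick back up
    gluster volume start ecvol force

    # step 6: trigger the rebalance manually
    gluster volume rebalance ecvol start
    gluster volume rebalance ecvol status

    # step 7: look for heal activity in the rebalance log
    grep -i heal /var/log/glusterfs/ecvol-rebalance.log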


ON AFR (Dis-Rep - 2x2) volume:

1. Create an AFR distributed-replicate (Dis-Rep) volume and fuse mount it.
2. Bring one brick down.
3. Write enough data from the fuse mount.
4. Add one more subvolume to the volume.
5. Bring up the offline brick using the volume start force option.
6. Trigger the rebalance manually.
7. Check the rebalance logs // you won't see any heal related info in the rebalance log (see the sketch after this list)
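
The same flow for the 2x2 distributed-replicate case, again with placeholder names (afrvol, hosts, brick paths) rather than anything from this report:

    gluster volume create afrvol replica 2 server{1..4}:/bricks/afr/b1
    gluster volume start afrvol
    mount -t glusterfs server1:/afrvol /mnt/afrvol

    # bring one brick down, then write data from the fuse mount
    gluster volume status afrvol
    kill <brick-pid>
    for i in {1..1000}; do dd if=/dev/urandom of=/mnt/afrvol/file.$i bs=1M count=1; done

    # add one more replica pair, bring the offline brick back, rebalance
    gluster volume add-brick afrvol server1:/bricks/afr/b2 server2:/bricks/afr/b2
    gluster volume start afrvol force
    gluster volume rebalance afrvol start

    # no heal related entries are expected here
    grep -i heal /var/log/glusterfs/afrvol-rebalance.log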



Actual results:
===============
The rebalance process behaves differently for AFR and EC volumes: heal happens during rebalance for EC but not for AFR.



Expected results:
=================
Rebalance behaviour should be the same for the EC and AFR volume types.
We also need to find out why heal was disabled during rebalance for AFR volumes by this fix:
https://bugzilla.redhat.com/show_bug.cgi?id=808977



Additional info:

Comment 2 Nag Pavan Chilakam 2016-12-23 12:16:02 UTC
Also, one example of the difference in behaviour:
1) In a distributed-replicate volume, say 2x2, if a brick is down and the user tries to add a new set of bricks, the add-brick fails, saying bricks are down.
But the same operation passes on an EC volume.
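
For reference, a minimal command-level sketch of that check; the volume and brick names are placeholders, and the outcomes noted in the comments are simply what is reported above:

    # with one brick of the 2x2 distributed-replicate volume down:
    gluster volume add-brick afrvol server1:/bricks/afr/new server2:/bricks/afr/new   # reported to fail, complaining that bricks are down

    # with one brick of the EC volume down:
    gluster volume add-brick ecvol server{1..6}:/bricks/ec/new                        # reported to succeed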

Comment 3 Nag Pavan Chilakam 2016-12-26 10:14:10 UTC
Not a regression or blocker; can be deferred from 3.2.

