Description of problem:
=======================
The rebalance process behaves differently for AFR and EC volumes: for an EC volume, rebalance performs heal operations, but for an AFR volume no heal happens during rebalance because heal is disabled there. The fix that disabled heal during rebalance for the AFR volume type is https://bugzilla.redhat.com/show_bug.cgi?id=808977

Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.8.4-9.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
ON EC volume (a hedged CLI sketch of these steps is included under Additional info below):
1. Create an EC volume and fuse mount it
2. Bring one brick down
3. Write enough data from the fuse mount
4. Add one more subvolume to the volume
5. Bring up the offline brick using the volume start force option
6. Trigger the rebalance manually
7. Check the rebalance logs // you will see heal-related messages in the rebalance log

ON AFR (Dis-Rep - 2*2) volume:
1. Create an AFR Dis-Rep volume and fuse mount it
2. Bring one brick down
3. Write enough data from the fuse mount
4. Add one more subvolume to the volume
5. Bring up the offline brick using the volume start force option
6. Trigger the rebalance manually
7. Check the rebalance logs // you won't see any heal-related info in the rebalance log

Actual results:
===============
The rebalance process behaves differently for AFR and EC volumes.

Expected results:
=================
We expect the same rebalance behaviour for the EC and AFR volume types, and we need to find out why heal was disabled during rebalance for the AFR volume type in the fix for https://bugzilla.redhat.com/show_bug.cgi?id=808977

Additional info:
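For reference, a minimal sketch of the CLI commands behind the EC-volume steps above. The volume name (ecvol), hostnames (server1..server6), brick paths, mount point, and 4+2 disperse layout are all placeholders, not taken from the setup used here; the AFR case is analogous with a 2x2 distributed-replicate create/add-brick.

# Step 1: create and fuse mount a 4+2 EC volume (placeholder names/paths)
gluster volume create ecvol disperse 6 redundancy 2 server{1..6}:/bricks/ecvol/b1
gluster volume start ecvol
mount -t glusterfs server1:/ecvol /mnt/ecvol

# Step 2: bring one brick down by killing its brick process
# (get the brick PID from 'gluster volume status ecvol')
kill -9 <pid-of-one-brick-process>

# Step 3: write enough data from the fuse mount
for i in {1..1000}; do dd if=/dev/zero of=/mnt/ecvol/file.$i bs=1M count=1; done

# Step 4: add one more subvolume (another set of 6 bricks)
gluster volume add-brick ecvol server{1..6}:/bricks/ecvol/b2

# Step 5: bring the offline brick back up
gluster volume start ecvol force

# Step 6: trigger rebalance manually
gluster volume rebalance ecvol start
gluster volume rebalance ecvol status

# Step 7: check the rebalance log for heal messages (log path may vary)
grep -i heal /var/log/glusterfs/ecvol-rebalance.log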
Also, one example of the difference in behavior: in a dist-rep volume, say 2x2, if a brick is down and the user tries to add a new set of bricks, the add-brick fails, saying bricks are down. The same operation passes on an EC volume (see the sketch below).
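A hedged illustration of that add-brick difference, again with placeholder volume/host/brick names, based on the behavior described above (the exact CLI error text may differ by version):

# 2x2 distributed-replicate volume with one brick process killed:
# adding a new replica pair is rejected because a brick is down
gluster volume add-brick distrep server1:/bricks/distrep/b3 server2:/bricks/distrep/b3

# equivalent add-brick on the EC volume with one brick down succeeds
gluster volume add-brick ecvol server{1..6}:/bricks/ecvol/b3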
Not a regression or blocker; can be deferred from 3.2.