Description of problem:
=======================
If the migration process reads data that has not yet been healed because the source brick is down, it can lead to data loss.

Snippet:
=======
This is the same old AP vs. CP trade-off. A 2-way replicated volume is an AP system, and the user knows this. If the user copies a file while a brick is down, the stale data ends up in the new copy, but the source file itself is intact. With rebalance/tiering, however, the problem becomes severe because the source file is removed after migration, so the stale copy is all that remains. One way to fix this is to add an option in AFR so that reads do not succeed unless all bricks are up; rebalance and tiering should use it. This problem does not occur with 3-way replica and arbiter volumes.

Version-Release number of selected component (if applicable):
=============================================================
3.7.5-19

How reproducible:
================

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
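The proposed AFR behaviour above can be illustrated with a minimal sketch. This is hypothetical Python, not GlusterFS source: `Replica`, `afr_read`, and the `require_all_up` flag are invented names that model the idea of refusing to serve a read when any replica brick is down, since a surviving brick may hold stale, not-yet-healed data.

```python
class StaleReadError(Exception):
    """Raised when a read cannot be served safely."""

class Replica:
    def __init__(self, data, up=True):
        self.data = data   # file content stored on this brick
        self.up = up       # whether the brick is reachable

def afr_read(replicas, require_all_up=False):
    """Read from the first available replica.

    With require_all_up=True (the behaviour proposed for rebalance and
    tiering), refuse the read if any brick is down, because a surviving
    replica may hold stale data that has not yet been healed.
    """
    up = [r for r in replicas if r.up]
    if not up:
        raise StaleReadError("no brick is up")
    if require_all_up and len(up) != len(replicas):
        raise StaleReadError("a brick is down; data may not be healed yet")
    return up[0].data

# 2-way replica where the brick holding the latest write went down
# before self-heal could copy it to the other brick:
bricks = [Replica("new-data", up=False), Replica("old-data", up=True)]

# A plain client read happily returns the stale copy:
assert afr_read(bricks) == "old-data"

# Rebalance/tiering with the proposed option refuses the read instead of
# migrating stale data and then deleting the only good source copy:
try:
    afr_read(bricks, require_all_up=True)
except StaleReadError:
    pass
```

With 3-way replica or arbiter volumes, client-side quorum already prevents the stale brick from serving such reads, which is why the report notes those configurations are unaffected.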
Steps to reproduce will be updated.
This bug was accidentally moved from POST to MODIFIED by an automation error; please contact mmccune with any questions.
The latest releases (glusterfs-3.10 and later) include this fix.