Bug 1306241

Summary: Tiering and AFR may result in data loss
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: replicate
Version: rhgs-3.1
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: high
Reporter: Bhaskarakiran <byarlaga>
Assignee: Pranith Kumar K <pkarampu>
QA Contact: storage-qa-internal <storage-qa-internal>
Docs Contact:
CC: amainkar, atumball, mzywusko, ravishankar, rcyriac, rhs-bugs, sheggodu
Keywords: ZStream
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1306398 (view as bug list)
Environment:
Last Closed: 2017-10-26 09:48:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1289852, 1306398

Description Bhaskarakiran 2016-02-10 11:52:53 UTC
Description of problem:
=======================

If the migration process reads data that has not yet been healed because the source brick holding the good copy is down, it may lead to data loss.
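
The failure mode is easiest to see in a small sketch. The following toy model (plain Python, not Gluster internals; all class, brick, and file names are made up for illustration) walks through the sequence: a write lands on only one brick, the stale brick serves the subsequent read, and migration then removes the only good copy.

# Toy model (not Gluster code): shows how migrating an un-healed file from a
# 2-way replica can destroy the only good copy. All names are illustrative.

class Brick:
    def __init__(self, name):
        self.name = name
        self.files = {}        # path -> contents
        self.online = True

class TwoWayReplica:
    """Writes go to every online brick; reads are served by any online brick
    that has the file, which may be a stale, un-healed copy."""
    def __init__(self, bricks):
        self.bricks = bricks

    def write(self, path, data):
        for brick in self.bricks:
            if brick.online:
                brick.files[path] = data   # a down brick silently misses the write

    def read(self, path):
        for brick in self.bricks:
            if brick.online and path in brick.files:
                return brick.files[path]
        raise FileNotFoundError(path)

    def unlink(self, path):
        # Toy simplification: the file is removed from every brick.
        for brick in self.bricks:
            brick.files.pop(path, None)

def migrate(source, dest_tier, path):
    """Toy rebalance/tiering: copy the file to the destination, then
    remove it from the source."""
    dest_tier[path] = source.read(path)
    source.unlink(path)

brick_a, brick_b = Brick("brick-a"), Brick("brick-b")
volume = TwoWayReplica([brick_a, brick_b])
volume.write("file.txt", "v1")

brick_b.online = False
volume.write("file.txt", "v2")    # only brick-a has v2 now
brick_b.online = True             # back up, but self-heal has not run yet

brick_a.online = False            # the brick holding the good copy goes down
hot_tier = {}
migrate(volume, hot_tier, "file.txt")
brick_a.online = True

print(hot_tier["file.txt"])       # "v1" - the stale copy was migrated
try:
    volume.read("file.txt")
except FileNotFoundError:
    print("source gone: the good 'v2' copy has been deleted")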


Snippet:
=======

This is the same old AP vs. CP trade-off. 2-way replication is an AP system, and the user knows this. If a user copies a file while a brick is down, they may lose data in the new copy, but not in the source itself. With rebalance/tiering, however, the problem becomes severe because the source file is removed after migration, so the stale copy is all that remains.

One way to fix this is to add an option in AFR under which reads do not succeed unless all the bricks are up, and have rebalance and tiering use it. We don't have this problem with 3-way replica and arbiter volumes.
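
As a rough illustration of the proposed option, the sketch below extends the toy model from the description above; the class name and error message are invented and do not correspond to the real AFR option or its implementation.

# Hypothetical guard built on the toy model above (not an actual AFR option
# or API): refuse reads unless every brick in the replica set is up, so
# migration can never pick up an un-healed copy.

class StrictReadReplica(TwoWayReplica):
    def read(self, path):
        if not all(brick.online for brick in self.bricks):
            raise OSError("EIO: not all bricks are up; read could return stale data")
        return super().read(path)

# With this policy, the migrate() call above would fail and leave the source
# file in place, rather than copying stale data and deleting the good copy.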


Version-Release number of selected component (if applicable):
=============================================================
3.7.5-19

How reproducible:
================ 

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Bhaskarakiran 2016-02-10 11:53:25 UTC
Steps to reproduce will be updated.

Comment 4 Mike McCune 2016-03-28 23:19:36 UTC
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune with any questions.

Comment 6 Amar Tumballi 2017-10-26 09:48:19 UTC
The latest releases (glusterfs-3.10+) have this fix.