Bug 1306241 - Tiering and AFR may result in data loss
Summary: Tiering and AFR may result in data loss
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1289852 1306398
 
Reported: 2016-02-10 11:52 UTC by Bhaskarakiran
Modified: 2017-10-26 09:48 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1306398 (view as bug list)
Environment:
Last Closed: 2017-10-26 09:48:19 UTC
Embargoed:


Attachments: none

Description Bhaskarakiran 2016-02-10 11:52:53 UTC
Description of problem:
=======================

If the migration process reads data that has not yet been healed because the source brick is down, it may lead to data loss.
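
To make the failure concrete before the quoted discussion below, here is a minimal, self-contained C model of the unsafe migration path (illustrative only; none of these names come from the GlusterFS sources). Brick 0 holds the latest write but is down; brick 1 is up but has not been healed yet, so migration copies the stale replica and then removes the source:

#include <stdio.h>
#include <string.h>

#define CHILD_COUNT 2                        /* 2-way replica */

struct replica {
    int  child_up[CHILD_COUNT];              /* 1 = brick reachable */
    char data[CHILD_COUNT][16];              /* per-brick file contents */
};

/* Unsafe migration: read from the first brick that is up, copy the
 * contents to the destination tier, then remove the source file. */
static void migrate(struct replica *src, char *dst, size_t dstlen)
{
    for (int i = 0; i < CHILD_COUNT; i++)
        if (src->child_up[i]) {
            snprintf(dst, dstlen, "%s", src->data[i]); /* may be stale */
            break;
        }
    memset(src->data, 0, sizeof(src->data)); /* source deleted */
}

int main(void)
{
    /* Brick 0 has the latest write but is down; brick 1 is unhealed. */
    struct replica r = { .child_up = { 0, 1 },
                         .data     = { "new-data", "old-data" } };
    char tier_copy[16];

    migrate(&r, tier_copy, sizeof(tier_copy));
    printf("migrated \"%s\"; the latest data is gone\n", tier_copy);
    return 0;
}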


Snippet:
=======

This is the same old AP vs. CP systems trade-off. 2-way replication is
an AP system, and the user knows this. If the user copies a file while a
brick is down, the data may be missing from the new copy, but the source
itself is intact. With rebalance/tiering this problem becomes severe,
because the source file is removed after migration and the data is lost
outright.

One way to fix this is to add an option in AFR under which reads won't
succeed unless all the bricks are up, and have rebalance and tiering use
it. We don't have this problem with 3-way replica and arbiter volumes.
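
A minimal sketch of that proposed gate, under assumed names (this is not the actual AFR implementation): the check fails whenever any child brick is down, so in the model above migrate() would refuse the file and leave the source intact until heal completes.

#include <errno.h>

/* Hypothetical all-bricks-up read gate for AFR: allow a read only when
 * every child brick is reachable, so a possibly unhealed replica is
 * never trusted by rebalance/tiering. */
static int read_allowed(const int *child_up, int child_count,
                        int require_all_up)
{
    if (!require_all_up)
        return 1;                /* today's behaviour: read what is up */
    for (int i = 0; i < child_count; i++)
        if (!child_up[i]) {
            errno = ENOTCONN;    /* a brick is down; data may be stale */
            return 0;
        }
    return 1;
}

Only the internal mounts used by rebalance and tiering would enable such an option; regular client mounts would keep the existing availability-first behaviour.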


Version-Release number of selected component (if applicable):
=============================================================
3.7.5-19

How reproducible:
================ 

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Bhaskarakiran 2016-02-10 11:53:25 UTC
Steps to reproduce will be updated.

Comment 4 Mike McCune 2016-03-28 23:19:36 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.

Comment 6 Amar Tumballi 2017-10-26 09:48:19 UTC
The latest releases (glusterfs-3.10+) have this fix.

