Bug 1306398 - Tiering and AFR may result in data loss
Summary: Tiering and AFR may result in data loss
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On: 1306241
Blocks:
Reported: 2016-02-10 17:30 UTC by Pranith Kumar K
Modified: 2017-03-27 18:27 UTC
CC: 4 users

Fixed In Version: glusterfs-3.9.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1306241
Environment:
Last Closed: 2017-03-27 18:27:08 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Vijay Bellur 2016-02-10 17:33:51 UTC
REVIEW: http://review.gluster.org/13425 (cluster/afr: Give option to do consistent-io) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Mike McCune 2016-03-28 23:19:36 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.

Comment 3 Vijay Bellur 2016-08-05 12:37:07 UTC
REVIEW: http://review.gluster.org/13425 (cluster/afr: Give option to do consistent-io) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Vijay Bellur 2016-08-16 10:41:17 UTC
REVIEW: http://review.gluster.org/13425 (cluster/afr: Give option to do consistent-io) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 5 Vijay Bellur 2016-08-16 10:41:21 UTC
REVIEW: http://review.gluster.org/15177 (glusterd: Use consistent-io for rebalance) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 6 Worker Ant 2016-08-22 15:46:02 UTC
REVIEW: http://review.gluster.org/15177 (glusterd: Use consistent-io for rebalance) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 7 Worker Ant 2016-08-22 15:46:05 UTC
REVIEW: http://review.gluster.org/13425 (cluster/afr: Give option to do consistent-io) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 8 Worker Ant 2016-08-22 16:06:47 UTC
REVIEW: http://review.gluster.org/15177 (glusterd: Use consistent-io for rebalance) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 9 Worker Ant 2016-08-22 16:06:50 UTC
REVIEW: http://review.gluster.org/13425 (cluster/afr: Give option to do consistent-io) posted (#5) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 10 Worker Ant 2016-08-22 20:55:46 UTC
COMMIT: http://review.gluster.org/13425 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 413594ed647400f1b39e05d4f1b12ad846e48800
Author: Pranith Kumar K <pkarampu>
Date:   Tue Aug 16 16:04:37 2016 +0530

    cluster/afr: Give option to do consistent-io
    
    Problem:
    When tiering/rebalance migrates a file on a 2-way AFR replica, the
    migration can read stale data if the source brick holding the good copy
    goes down, and it then writes that stale data to the destination. The
    subsequent deletion of the source file after migration makes the data
    loss permanent.
    
    Fix:
    Rebalance/tiering should migrate only when the data is definitely not stale. So
    introduce an option in afr called consistent-io which will be enabled in
    migration daemons.
    
    BUG: 1306398
    Change-Id: I750f65091cc70a3ed4bf3c12f83d0949af43920a
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/13425
    Reviewed-by: Anuradha Talur <atalur>
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
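The Problem/Fix description above can be illustrated with a small simulation. This is a hedged sketch, not GlusterFS code: the `Replica`, `TwoWayReplicaFile`, and `migrate` names are hypothetical, and real AFR tracks staleness with pending xattrs and self-heal rather than this simplified "first live replica" read. It only shows why a read that succeeds with a single surviving replica can hand stale data to the migration daemon, and how a consistent-io style check (fail the read unless every replica responds) aborts the migration instead.

```python
class Replica:
    """One brick's copy of a file; may be offline."""
    def __init__(self, data):
        self.data = data
        self.up = True

class TwoWayReplicaFile:
    """A file replicated across two bricks (AFR 2-way replica, simplified)."""
    def __init__(self, data):
        self.replicas = [Replica(data), Replica(data)]

    def write(self, data):
        # Writes land only on bricks that are up; a down brick keeps stale data.
        for r in self.replicas:
            if r.up:
                r.data = data

    def read(self, consistent_io=False):
        live = [r for r in self.replicas if r.up]
        if not live:
            raise IOError("no replica available")
        if consistent_io and len(live) < len(self.replicas):
            # consistent-io behaviour: refuse to serve a possibly stale copy.
            raise IOError("consistent-io: not all replicas available")
        return live[0].data

def migrate(source, consistent_io):
    """Rebalance/tier migration: read the source copy that would be written
    to the destination (after which the source gets deleted)."""
    return source.read(consistent_io=consistent_io)

# Reproduce the bug: a client writes v2 while brick 1 is down, then brick 0
# (the only brick with v2) goes down and brick 1 returns with stale v1.
f = TwoWayReplicaFile("v1")
f.replicas[1].up = False
f.write("v2")                  # only brick 0 sees v2
f.replicas[1].up = True
f.replicas[0].up = False       # the good copy is now offline

print(migrate(f, consistent_io=False))   # migrates stale "v1": data loss

try:
    migrate(f, consistent_io=True)       # aborts instead of migrating stale data
except IOError as e:
    print("migration aborted:", e)
```

With consistent-io enabled in the migration daemons, the stale read fails and the file is simply not migrated until both replicas are available, which matches the Fix described in the commit message.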

Comment 11 Shyamsundar 2017-03-27 18:27:08 UTC
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/
