Bug 1335836 - Heal info shows split-brain for .shard directory though only one brick was down
Summary: Heal info shows split-brain for .shard directory though only one brick was down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.11
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Assignee: Anuradha
QA Contact:
URL:
Whiteboard:
Depends On: 1332949 1335652
Blocks: 1335829
 
Reported: 2016-05-13 10:37 UTC by Anuradha
Modified: 2016-09-20 02:00 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.7.12
Doc Type: Bug Fix
Doc Text:
Clone Of: 1335652
Environment:
Last Closed: 2016-06-28 12:17:59 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Vijay Bellur 2016-05-13 10:42:11 UTC
REVIEW: http://review.gluster.org/14332 (cluster/afr : Do post-op in case of symmetric errors) posted (#1) for review on release-3.7 by Anuradha Talur (atalur)

Comment 2 Vijay Bellur 2016-05-24 05:21:38 UTC
COMMIT: http://review.gluster.org/14332 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit 324329deee862ba28873172b3124031b5783471e
Author: Anuradha Talur <atalur>
Date:   Fri May 13 15:34:06 2016 +0530

    cluster/afr : Do post-op in case of symmetric errors
    
            Backport of: http://review.gluster.org/#/c/14310/
    
    In afr_changelog_post_op_now(), if there was any error,
    meaning op_ret < 0, post-op was not being done even when
    the errors were symmetric and there were no "failed
    subvols".
    
    Fix:
    When the errors are symmetric, perform post-op.
    
    How the bug was found:
    In a 1 x 3 volume with shard and write-behind enabled,
    when writes were done to a file with one brick down,
    the trusted.afr.dirty xattr's value for the .shard directory
    would keep increasing, as post-op was not done but pre-op was.
    This incorrectly showed .shard to be in split-brain.
    
    RCA:
    When WB is on, due to multiple writes being sent on
    offsets lying in the same shard, chances are that the
    same shard file will be created more than once, with
    the second create failing with op_ret < 0 and
    op_errno = EEXIST.

    As op_ret was negative, afr wouldn't do post-op,
    leading to no decrement of the trusted.afr.dirty xattr,
    thus showing the .shard directory to be in split-brain.
    
            >Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
            >BUG: 1335652
            >Signed-off-by: Anuradha Talur <atalur>
    
    Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
    BUG: 1335836
    Signed-off-by: Anuradha Talur <atalur>
    Reviewed-on: http://review.gluster.org/14332
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Ravishankar N <ravishankar>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
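
For illustration, a minimal C sketch of the decision the fix describes -- not the actual AFR source; the struct, field, and function names below (reply, child_count, should_do_post_op, etc.) are hypothetical. The idea: when every brick returns the same op_ret/op_errno (e.g. EEXIST from the duplicate shard create) and no subvolume is actually marked failed, post-op must still run so the trusted.afr.dirty changelog set during pre-op gets decremented.

#include <stdbool.h>

/* One per-subvolume result, as collected after the write/create fop. */
struct reply {
    int op_ret;
    int op_errno;
};

/* True when every child returned the identical result (all succeeded
 * or all failed with the same errno) -- a "symmetric" outcome. */
static bool
errors_are_symmetric(const struct reply *replies, int child_count)
{
    for (int i = 1; i < child_count; i++) {
        if (replies[i].op_ret != replies[0].op_ret ||
            replies[i].op_errno != replies[0].op_errno)
            return false;
    }
    return true;
}

/* Decide whether post-op (clearing the dirty changelog) should run. */
static bool
should_do_post_op(const struct reply *replies, int child_count,
                  int failed_subvols)
{
    if (replies[0].op_ret >= 0)
        return true;            /* success: post-op as usual */
    if (failed_subvols == 0 && errors_are_symmetric(replies, child_count))
        return true;            /* the fix: symmetric error, still post-op */
    return false;               /* asymmetric failure: leave dirty for heal */
}

Without the symmetric-error branch, every EEXIST round-trip left the pre-op increment of trusted.afr.dirty on .shard in place, which is why heal info eventually reported a split-brain even though no brick had diverged.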

Comment 3 Kaushal 2016-06-28 12:17:59 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

