Bug 1335829 - Heal info shows split-brain for .shard directory though only one brick was down
Summary: Heal info shows split-brain for .shard directory though only one brick was down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Assignee: Anuradha
QA Contact:
URL:
Whiteboard:
Depends On: 1332949 1335652 1335836
Blocks:
 
Reported: 2016-05-13 10:30 UTC by Anuradha
Modified: 2016-09-20 02:00 UTC
CC: 7 users

Fixed In Version: glusterfs-3.8rc2
Clone Of: 1335652
Environment:
Last Closed: 2016-06-16 14:06:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments: none

Comment 1 Vijay Bellur 2016-05-13 10:35:33 UTC
REVIEW: http://review.gluster.org/14331 (cluster/afr : Do post-op in case of symmetric errors) posted (#1) for review on release-3.8 by Anuradha Talur (atalur)

Comment 2 Vijay Bellur 2016-05-24 08:37:13 UTC
COMMIT: http://review.gluster.org/14331 committed in release-3.8 by Niels de Vos (ndevos) 
------
commit ae53e70543efdad9164667469f8dfad8dc7dac86
Author: Anuradha Talur <atalur>
Date:   Fri May 13 15:34:06 2016 +0530

    cluster/afr : Do post-op in case of symmetric errors
    
            Backport of: http://review.gluster.org/#/c/14310/
    
    In afr_changelog_post_op_now(), if there was any error
    (op_ret < 0), post-op was not being done even when
    the errors were symmetric and there were no failed
    subvols.
    
    Fix:
    When the errors are symmetric, perform post-op.
    
    How the bug was found:
    In a 1x3 volume with shard and write-behind enabled,
    when writes were done to a file with one brick down,
    the trusted.afr.dirty xattr's value for the .shard directory
    kept increasing because pre-op was done but post-op was not.
    This incorrectly showed .shard to be in split-brain.
    
    RCA:
    When write-behind is on, multiple writes sent on
    offsets lying in the same shard can cause the
    same shard file to be created more than once,
    with the second creation failing with op_ret < 0
    and op_errno = EEXIST.
    
    As op_ret was negative, afr wouldn't do post-op,
    so the trusted.afr.dirty xattr was never decremented,
    leaving the .shard directory shown as in split-brain.
    
            >Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
            >BUG: 1335652
            >Signed-off-by: Anuradha Talur <atalur>
    
    Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
    BUG: 1335829
    Signed-off-by: Anuradha Talur <atalur>
    Reviewed-on: http://review.gluster.org/14331
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Ravishankar N <ravishankar>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
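
The fix described in the commit above can be sketched as follows. This is a simplified, hypothetical illustration, not the actual AFR code: the names `errors_are_symmetric` and `should_do_post_op` and the `reply` struct are assumptions; the real logic lives in afr_changelog_post_op_now() in xlators/cluster/afr.

```c
/* Hypothetical sketch of the symmetric-error check behind the fix.
 * Not GlusterFS source; names and types are invented for illustration. */
#include <errno.h>
#include <stdbool.h>

struct reply {
    int op_ret;   /* < 0 on failure */
    int op_errno; /* e.g. EEXIST */
};

/* Errors are "symmetric" when every subvolume reported the same
 * outcome: either all succeeded, or all failed with the same errno.
 * No brick diverged from the others, so doing post-op (decrementing
 * the trusted.afr.dirty xattr) is safe. */
static bool
errors_are_symmetric(const struct reply *replies, int count)
{
    for (int i = 1; i < count; i++) {
        if (replies[i].op_ret != replies[0].op_ret)
            return false;
        if (replies[i].op_ret < 0 &&
            replies[i].op_errno != replies[0].op_errno)
            return false;
    }
    return true;
}

/* Before the fix: post-op was skipped whenever op_ret < 0.
 * After the fix: post-op also runs when the failure was symmetric,
 * e.g. every brick returned EEXIST for a duplicate shard create. */
static bool
should_do_post_op(const struct reply *replies, int count, int op_ret)
{
    if (op_ret >= 0)
        return true;
    return errors_are_symmetric(replies, count);
}
```

In the bug scenario this models, all three bricks of the 1x3 volume fail the duplicate shard create with EEXIST: a symmetric failure, so post-op now proceeds and the dirty xattr is decremented instead of accumulating.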

Comment 3 Niels de Vos 2016-06-16 14:06:29 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

