Bug 1142614 - files with open fds getting into split-brain when bricks go offline and come back online
Summary: files with open fds getting into split-brain when bricks go offline and come back online
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard: Regression
Depends On: 1131466 1142601 1142612
Blocks: glusterfs-3.5.3
 
Reported: 2014-09-17 06:18 UTC by Pranith Kumar K
Modified: 2014-11-21 16:14 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.5.3
Doc Type: Bug Fix
Doc Text:
Clone Of: 1142612
Environment:
Last Closed: 2014-11-21 16:02:57 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Anand Avati 2014-09-17 06:20:11 UTC
REVIEW: http://review.gluster.org/8757 (cluster/afr: Launch self-heal only when all the brick status is known) posted (#1) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2014-09-30 09:52:27 UTC
REVIEW: http://review.gluster.org/8757 (cluster/afr: Launch self-heal only when all the brick status is known) posted (#2) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)

Comment 3 Anand Avati 2014-10-01 07:14:31 UTC
COMMIT: http://review.gluster.org/8757 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit bee0c740b54669a8be11acea405d021bb50d3c54
Author: Pranith Kumar K <pkarampu>
Date:   Wed Sep 17 11:48:24 2014 +0530

    cluster/afr: Launch self-heal only when all the brick status is known
    
    Problem:
    The file goes into split-brain because xattrs are erased incorrectly.
    
    RCA:
    The issue happens because index self-heal is triggered even before the status
    of all the bricks is known. While erasing the xattrs, the heal erases, on the
    sink brick, only the xattr for the brick it believes is up, which leads to
    split-brain.
    
    Example:
    Let's say the xattrs before the heal started are:
    brick 2:
    trusted.afr.vol1-client-2=0x000000020000000000000000
    trusted.afr.vol1-client-3=0x000000020000000000000000
    
    brick 3:
    trusted.afr.vol1-client-2=0x000010040000000000000000
    trusted.afr.vol1-client-3=0x000000000000000000000000
    
    If only brick-2 came up at the time the self-heal was triggered, only
    'trusted.afr.vol1-client-2' is erased, leading to the following xattrs:
    
    brick 2:
    trusted.afr.vol1-client-2=0x000000000000000000000000
    trusted.afr.vol1-client-3=0x000000020000000000000000
    
    brick 3:
    trusted.afr.vol1-client-2=0x000010040000000000000000
    trusted.afr.vol1-client-3=0x000000000000000000000000
    
    So the file goes into split-brain.
    
    Change-Id: I79f9a289d2118a715d262398221037b684a53d2a
    BUG: 1142614
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/8757
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Reviewed-by: Niels de Vos <ndevos>
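
The trusted.afr.<volname>-client-<N> values quoted in the example above follow the AFR changelog layout: 12 bytes holding three big-endian 32-bit pending-operation counters (data, metadata, entry), where a non-zero counter means the brick carrying the xattr accuses client-N of pending operations. The stand-alone C sketch below is illustrative only, not code from the GlusterFS tree; names such as afr_pending and decode_afr_xattr are made up. It decodes such a value, e.g. 0x000000020000000000000000 yields data=2, metadata=0, entry=0.

/* Illustrative decoder for an AFR changelog xattr value such as
 * trusted.afr.vol1-client-2 = 0x000000020000000000000000.
 * Sketch only; not taken from the GlusterFS sources. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohl */

struct afr_pending {
    uint32_t data;
    uint32_t metadata;
    uint32_t entry;
};

static struct afr_pending
decode_afr_xattr(const unsigned char value[12])
{
    uint32_t raw[3];
    struct afr_pending p;

    memcpy(raw, value, sizeof(raw));
    p.data     = ntohl(raw[0]);   /* bytes 0-3  : pending data ops     */
    p.metadata = ntohl(raw[1]);   /* bytes 4-7  : pending metadata ops */
    p.entry    = ntohl(raw[2]);   /* bytes 8-11 : pending entry ops    */
    return p;
}

int main(void)
{
    /* trusted.afr.vol1-client-2 on brick 2 before the heal */
    const unsigned char v[12] = {0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0};
    struct afr_pending p = decode_afr_xattr(v);

    printf("data=%u metadata=%u entry=%u\n",
           (unsigned)p.data, (unsigned)p.metadata, (unsigned)p.entry);
    return 0;
}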

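The fix in the commit above defers index self-heal until the status of every brick is known. Below is a minimal sketch of that guard, using hypothetical names (brick_status, all_brick_status_known, maybe_launch_index_selfheal) rather than the actual identifiers in cluster/afr.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

enum brick_status {
    BRICK_STATUS_UNKNOWN = 0,   /* no CHILD_UP/CHILD_DOWN event received yet */
    BRICK_STATUS_UP,
    BRICK_STATUS_DOWN,
};

/* True only when every brick has reported either UP or DOWN. */
static bool
all_brick_status_known(const enum brick_status *status, size_t child_count)
{
    for (size_t i = 0; i < child_count; i++) {
        if (status[i] == BRICK_STATUS_UNKNOWN)
            return false;
    }
    return true;
}

static void
launch_index_selfheal(void)
{
    printf("launching index self-heal\n");
}

static void
maybe_launch_index_selfheal(const enum brick_status *status, size_t child_count)
{
    /* Before the fix, heal could start while some statuses were still
     * unknown, so xattrs were cleared only for the bricks believed to be
     * up and the file ended up in split-brain. */
    if (!all_brick_status_known(status, child_count))
        return;
    launch_index_selfheal();
}

int main(void)
{
    enum brick_status status[4] = {
        BRICK_STATUS_UP, BRICK_STATUS_UNKNOWN,
        BRICK_STATUS_UP, BRICK_STATUS_DOWN,
    };

    maybe_launch_index_selfheal(status, 4);   /* skipped: brick 1 still unknown */
    status[1] = BRICK_STATUS_UP;
    maybe_launch_index_selfheal(status, 4);   /* all statuses known, heal runs */
    return 0;
}

With such a check in place, a brick whose CHILD_UP/CHILD_DOWN event has not yet arrived keeps the heal from starting, so xattrs are never erased for only a subset of bricks.
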
Comment 4 Niels de Vos 2014-10-05 13:00:19 UTC
The first (and last?) Beta for GlusterFS 3.5.3 has been released [1]. Please verify if the release resolves this bug for you. In case the glusterfs-3.5.3beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-October/018990.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 5 Niels de Vos 2014-11-05 09:24:56 UTC
The second Beta for GlusterFS 3.5.3 has been released [1]. Please verify if the release resolves this bug for you. In case the glusterfs-3.5.3beta2 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions have been made available on [2] to make testing easier.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019359.html
[2] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/

Comment 6 Niels de Vos 2014-11-21 16:02:57 UTC
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.3, please reopen this bug report.

glusterfs-3.5.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/announce/2014-November/000042.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

