Bug 1541458 - Changes to self-heal logic w.r.t. detection of split-brains
Summary: Changes to self-heal logic w.r.t. detection of split-brains
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1539358 1542380 1597123
Blocks: 1384983
 
Reported: 2018-02-02 15:59 UTC by Ravishankar N
Modified: 2018-07-02 06:21 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1539358
Environment:
Last Closed: 2018-06-20 18:26:24 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2018-02-02 15:59:45 UTC
+++ This bug was initially created as a clone of Bug #1539358 +++

Description of problem:
    We currently do not roll back or undo post-ops when quorum is not met.
    Although the FOP is still unwound with a failure, the xattrs remain on
    disk. Because of these partial post-ops and partial heals (healing only
    when 2 bricks are up), we can end up in split-brain purely from the AFR
    xattrs' point of view, i.e. each brick is blamed by at least one of the
    others. These scenarios are hit when the client/shd frequently
    connects to and disconnects from the bricks.
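
To make the failure mode concrete, here is a minimal standalone sketch (illustrative only, not GlusterFS source; the brick count, matrix values, and the function name every_brick_blamed are assumptions for this example). pending[i][j] != 0 stands in for brick i holding a non-zero AFR pending xattr that blames brick j, and the check mirrors the old "everyone is blamed by someone" condition that was reported as split-brain:

/* Illustrative sketch only (not GlusterFS source): pending[i][j] != 0 models
 * brick i holding a non-zero AFR pending xattr that blames brick j. */
#include <stdio.h>

#define BRICKS 3

/* Returns 1 if every brick is blamed by at least one other brick, the
 * situation the old logic flagged as split-brain. */
static int
every_brick_blamed(const int pending[BRICKS][BRICKS])
{
        for (int j = 0; j < BRICKS; j++) {
                int blamed = 0;
                for (int i = 0; i < BRICKS; i++) {
                        if (i != j && pending[i][j])
                                blamed = 1;
                }
                if (!blamed)
                        return 0; /* an unblamed brick still exists */
        }
        return 1;
}

int
main(void)
{
        /* Partial post-ops left a blame cycle: brick 0 blames 1, 1 blames 2,
         * 2 blames 0. No brick is unblamed, yet this need not be a genuine
         * split-brain. */
        const int pending[BRICKS][BRICKS] = {
                { 0, 1, 0 },
                { 0, 0, 1 },
                { 1, 0, 0 },
        };
        printf("all bricks blamed: %d\n", every_brick_blamed(pending));
        return 0;
}

For the blame cycle in main(), no brick is left unblamed even though the data could still be healed; that is exactly the false split-brain this bug is about.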

--- Additional comment from Worker Ant on 2018-01-28 03:40:54 EST ---

REVIEW: https://review.gluster.org/19349 (afr: don't treat false cases of split-brain as genuine) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-02-01 09:18:21 EST ---

COMMIT: https://review.gluster.org/19349 committed in master by "Ravishankar N" <ravishankar> with commit message: afr: don't treat all cases all bricks being blamed as split-brain

Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not
met. Though the FOP is still unwound with failure, the xattrs remain on
the disk. Due to these partial post-ops and partial heals (healing only when
2 bricks are up), we can end up in split-brain purely from the afr
xattrs' point of view, i.e. each brick is blamed by at least one of the
others. These scenarios are hit when there is frequent
connect/disconnect of the client/shd to the bricks while I/O or heal
is in progress.

Fix:
Instead of undoing the post-op, pick a source based on the xattr values.
If 2 bricks blame one, the blamed one must be treated as a sink.
If there is no majority, all are sources. Once we pick a source,
self-heal will then do the heal instead of erroring out due to
split-brain.

Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1539358
Signed-off-by: Ravishankar N <ravishankar>
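
The Fix section above describes the new source-picking rule in prose. Below is a hedged, self-contained sketch of that rule for a replica-3 volume (again illustrative only, not the actual AFR self-heal code; the helper name pick_sources is made up): a brick blamed by both of its peers is demoted to a sink, and if no brick collects that majority of blamers, every brick stays a source so heal proceeds instead of returning a split-brain error.

/* Illustrative sketch of the source-picking rule from the Fix above
 * (not the actual AFR self-heal implementation). */
#include <stdbool.h>

#define BRICKS 3

/* Fills sources[]: true means the brick may serve as a heal source.
 * Returns true if at least one brick was demoted to a sink. */
static bool
pick_sources(const int pending[BRICKS][BRICKS], bool sources[BRICKS])
{
        bool found_sink = false;

        for (int j = 0; j < BRICKS; j++) {
                int blames = 0;
                for (int i = 0; i < BRICKS; i++) {
                        if (i != j && pending[i][j])
                                blames++;
                }
                /* Blamed by all of its peers (2 of 2 here): treat as a sink. */
                sources[j] = (blames < BRICKS - 1);
                if (!sources[j])
                        found_sink = true;
        }

        /* If no brick was blamed by a majority of its peers, sources[] is
         * already all true: every brick stays a source and self-heal runs
         * instead of erroring out with a split-brain. */
        return found_sink;
}

Applied to the blame cycle from the earlier sketch, no brick collects two blames, so pick_sources() keeps all three bricks as sources and self-heal can proceed.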

Comment 1 Worker Ant 2018-02-02 16:00:56 UTC
REVIEW: https://review.gluster.org/19479 (afr: don't treat all cases all bricks being blamed as split-brain) posted (#1) for review on release-3.13 by Ravishankar N

Comment 2 Worker Ant 2018-02-06 14:28:26 UTC
COMMIT: https://review.gluster.org/19479 committed in release-3.13 by "Shyamsundar Ranganathan" <srangana> with commit message: afr: don't treat all cases all bricks being blamed as split-brain

Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not
met. Though the FOP is still unwound with failure, the xattrs remain on
the disk. Due to these partial post-ops and partial heals (healing only when
2 bricks are up), we can end up in split-brain purely from the afr
xattrs' point of view, i.e. each brick is blamed by at least one of the
others. These scenarios are hit when there is frequent
connect/disconnect of the client/shd to the bricks while I/O or heal
is in progress.

Fix:
Instead of undoing the post-op, pick a source based on the xattr values.
If 2 bricks blame one, the blamed one must be treated as a sink.
If there is no majority, all are sources. Once we pick a source,
self-heal will then do the heal instead of erroring out due to
split-brain.

Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1541458
Signed-off-by: Ravishankar N <ravishankar>
(cherry picked from commit 0e6e8216823c2d9dafb81aae0f6ee3497c23d140)

Comment 3 Shyamsundar 2018-06-20 18:26:24 UTC
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained.

As a result, this bug is being closed.

If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, please request that it be reopened and mark the Version field appropriately.

