Bug 1468279 - metadata heal not happening despite having an active sink
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Ravishankar N
Keywords: Triaged
Depends On:
Blocks: 1471611 1471612 1471613
Reported: 2017-07-06 10:33 EDT by Ravishankar N
Modified: 2017-09-05 13:36 EDT (History)
2 users

See Also:
Fixed In Version: glusterfs-3.12.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1471611 1471612 1471613
Environment:
Last Closed: 2017-08-23 06:07:02 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ravishankar N 2017-07-06 10:33:23 EDT
Description of problem:

In a replica 3 volume, the AFR pending metadata xattrs on the 3 bricks are as follows (C0/C1 are the pending counters blaming brick 1/brick 2 respectively, i.e. trusted.afr.<volname>-client-0/-client-1):

Brick1- C1=1
Brick2- C0=1
Brick3- C0=0, C1=0

With these xattrs, __afr_selfheal_metadata_prepare() returned zero sinks and heal was not happening.

How to recreate:
1. Kill B1, then perform a metadata modification operation on a file (say, setfattr).
2. Kill B2, bring up B1, and let metadata heal happen from B3 to B1.
3. Perform another metadata modification operation on the file.
4. Kill B1, bring up B2, and let metadata heal happen from B3 to B2.
5. Bring up B1 again. All bricks are now up.
6. Heal info never comes down to zero.
Comment 1 Worker Ant 2017-07-06 10:33:59 EDT
REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#1) for review on master by Ravishankar N (ravishankar@redhat.com)
Comment 2 Worker Ant 2017-07-06 13:00:59 EDT
REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#2) for review on master by Ravishankar N (ravishankar@redhat.com)
Comment 3 Worker Ant 2017-07-07 01:58:16 EDT
REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#3) for review on master by Ravishankar N (ravishankar@redhat.com)
Comment 4 Worker Ant 2017-07-13 03:19:17 EDT
REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#4) for review on master by Ravishankar N (ravishankar@redhat.com)
Comment 5 Worker Ant 2017-07-13 06:45:57 EDT
REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#5) for review on master by Ravishankar N (ravishankar@redhat.com)
Comment 6 Worker Ant 2017-07-13 07:53:39 EDT
REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#6) for review on master by Ravishankar N (ravishankar@redhat.com)
Comment 7 Worker Ant 2017-07-13 13:23:02 EDT
COMMIT: https://review.gluster.org/17717 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 77c1ed5fd299914e91ff034d78ef6e3600b9151c
Author: Ravishankar N <ravishankar@redhat.com>
Date:   Thu Jul 6 19:49:47 2017 +0530

    afr: mark non sources as sinks in metadata heal
    
    Problem:
    In a 3 way replica, when the source brick does not have pending xattrs
    for the sinks, but the 2 sinks blame each other, metadata heal was not
    happening because we were not setting all non-sources as sinks.
    
    Fix: Mark all non-sources as sinks, like it is done in data and entry
    heal.
    
    Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
    BUG: 1468279
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: https://review.gluster.org/17717
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Comment 8 Shyamsundar 2017-09-05 13:36:29 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/
