+++ This bug was initially created as a clone of Bug #1468279 +++

Description of problem:
In a replica 3 volume, the afr pending xattrs for metadata on the 3 bricks are as follows:

Brick1 - C1=1
Brick2 - C0=1
Brick3 - C0=0, C1=0

With these xattrs, __afr_selfheal_metadata_prepare() returned zero sinks and heal was not happening.

How to recreate:
1. Kill B1, do a metadata modification operation on a file (say setfattr).
2. Kill B2, bring up B1, let metadata heal happen from B3 to B1.
3. Again do a metadata modification operation on the file.
4. Kill B1, bring up B2, let metadata heal happen from B3 to B2.
5. Bring up B1 again. All bricks are up now.
6. Heal info never comes to zero.

--- Additional comment from Worker Ant on 2017-07-06 10:33:59 EDT ---

REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#1) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Worker Ant on 2017-07-06 13:00:59 EDT ---

REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#2) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Worker Ant on 2017-07-07 01:58:16 EDT ---

REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#3) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Worker Ant on 2017-07-13 03:19:17 EDT ---

REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#4) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Worker Ant on 2017-07-13 06:45:57 EDT ---

REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#5) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Worker Ant on 2017-07-13 07:53:39 EDT ---

REVIEW: https://review.gluster.org/17717 (afr: mark non sources as sinks in metadata heal) posted (#6) for review on master by
Ravishankar N (ravishankar)

--- Additional comment from Worker Ant on 2017-07-13 13:23:02 EDT ---

COMMIT: https://review.gluster.org/17717 committed in master by Pranith Kumar Karampuri (pkarampu)

------

commit 77c1ed5fd299914e91ff034d78ef6e3600b9151c
Author: Ravishankar N <ravishankar>
Date: Thu Jul 6 19:49:47 2017 +0530

afr: mark non sources as sinks in metadata heal

Problem: In a 3-way replica, when the source brick does not have pending xattrs for the sinks, but the 2 sinks blame each other, metadata heal was not happening because we were not setting all non-sources as sinks.

Fix: Mark all non-sources as sinks, like it is done in data and entry heal.

Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
BUG: 1468279
Signed-off-by: Ravishankar N <ravishankar>
Reviewed-on: https://review.gluster.org/17717
Smoke: Gluster Build System <jenkins.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
CentOS-regression: Gluster Build System <jenkins.org>
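The xattr state from the report and the behaviour the commit describes can be illustrated with a small model. This is a hypothetical Python sketch, not the actual C code in GlusterFS's afr self-heal path; the function names are invented, and the pre-fix sink selection is an approximation of the old behaviour as described in the commit message ("we were not setting all non-sources as sinks"):

```python
# Model of the AFR pending-xattr state from the bug report.
# pending[i][j] == 1 means brick i blames brick j for metadata changes
# (Brick1 carries C1=1, Brick2 carries C0=1, Brick3 carries C0=0, C1=0).
pending = [
    [0, 1, 0],  # Brick1 blames Brick2
    [1, 0, 0],  # Brick2 blames Brick1
    [0, 0, 0],  # Brick3 blames no one
]

def sources(pending):
    """A brick is a source if no other brick blames it."""
    n = len(pending)
    return [j for j in range(n)
            if not any(pending[i][j] for i in range(n) if i != j)]

def sinks_before_fix(pending):
    """Approximate old behaviour: only bricks blamed by a source become sinks."""
    srcs = sources(pending)
    n = len(pending)
    return sorted({j for i in srcs for j in range(n) if pending[i][j]})

def sinks_after_fix(pending):
    """Fixed behaviour: every non-source is a sink, as in data/entry heal."""
    srcs = set(sources(pending))
    return [j for j in range(len(pending)) if j not in srcs]

print(sources(pending))           # [2]    -> only Brick3 is a source
print(sinks_before_fix(pending))  # []     -> Brick3 blames nobody: zero sinks, heal stalls
print(sinks_after_fix(pending))   # [0, 1] -> Brick1 and Brick2 heal from Brick3
```

The sketch shows why the reproduction steps dead-end: Brick1 and Brick2 blame only each other, so the sole unblamed brick (Brick3) is the source, yet it carries no pending xattrs pointing at anyone. Deriving sinks only from what the source blames therefore yields an empty sink set, and heal info never drains; marking all non-sources as sinks resolves it.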
REVIEW: https://review.gluster.org/17784 (afr: mark non sources as sinks in metadata heal) posted (#1) for review on release-3.8 by Ravishankar N (ravishankar)
COMMIT: https://review.gluster.org/17784 committed in release-3.8 by Jiffin Tony Thottan (jthottan)

------

commit 117daf0c792f52b4c3fbc685b2f6b15841c81772
Author: Ravishankar N <>
Date: Mon Jul 17 11:23:43 2017 +0530

afr: mark non sources as sinks in metadata heal

Backport of https://review.gluster.org/#/c/17717/

Problem: In a 3-way replica, when the source brick does not have pending xattrs for the sinks, but the 2 sinks blame each other, metadata heal was not happening because we were not setting all non-sources as sinks.

Fix: Mark all non-sources as sinks, like it is done in data and entry heal.

Change-Id: I534978940f5087302e307fcc810a48ffe898ce08
BUG: 1471613
Signed-off-by: Ravishankar N <ravishankar>
Reviewed-on: https://review.gluster.org/17784
Smoke: Gluster Build System <jenkins.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
CentOS-regression: Gluster Build System <jenkins.org>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.15, please open a new bug report. glusterfs-3.8.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-August/000080.html
[2] https://www.gluster.org/pipermail/gluster-users/