+++ This bug was initially created as a clone of Bug #1318751 +++

Description of problem:
When there are 2 sources and one sink, and two self-heal daemons try to acquire locks at the same time, there is a chance that one daemon gets a lock on only one source and the sink, leading to a partial heal. Completing the self-heal then needs one more heal from the remaining source to the sink. This is not optimal.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Vijay Bellur on 2016-03-17 12:57:55 EDT ---

REVIEW: http://review.gluster.org/13766 (cluster/afr: Fix partial heals in 3-way replication) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Vijay Bellur on 2016-04-12 12:06:17 EDT ---

REVIEW: http://review.gluster.org/13766 (cluster/afr: Fix partial heals in 3-way replication) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Vijay Bellur on 2016-04-15 05:51:10 EDT ---

COMMIT: http://review.gluster.org/13766 committed in master by Pranith Kumar Karampuri (pkarampu)

------

commit 8deedef565df49def75083678f8d1558c7b1f7d3
Author: Pranith Kumar K <pkarampu>
Date:   Thu Mar 17 19:42:00 2016 +0530

    cluster/afr: Fix partial heals in 3-way replication

    Problem:
    When there are 2 sources and one sink, and two self-heal daemons try
    to acquire locks at the same time, there is a chance that one daemon
    gets a lock on only one source and the sink, leading to a partial
    heal. Completing the self-heal then needs one more heal from the
    remaining source to the sink. This is not optimal.

    Fix:
    Upgrade non-blocking locks to blocking locks on all the subvolumes
    if the number of locks acquired is a majority and there were EAGAINs.
    BUG: 1318751
    Change-Id: Iae10b8d3402756c4164b98cc49876056ff7a61e5
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/13766
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Ravishankar N <ravishankar>
REVIEW: http://review.gluster.org/14008 (cluster/afr: Fix partial heals in 3-way replication) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/14008 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)

------

commit 6b88d97c4a9e999180d77463e38ad14fc9d944cf
Author: Pranith Kumar K <pkarampu>
Date:   Thu Mar 17 19:42:00 2016 +0530

    cluster/afr: Fix partial heals in 3-way replication

    Problem:
    When there are 2 sources and one sink, and two self-heal daemons try
    to acquire locks at the same time, there is a chance that one daemon
    gets a lock on only one source and the sink, leading to a partial
    heal. Completing the self-heal then needs one more heal from the
    remaining source to the sink. This is not optimal.

    Fix:
    Upgrade non-blocking locks to blocking locks on all the subvolumes
    if the number of locks acquired is a majority and there were EAGAINs.

    >BUG: 1318751
    >Change-Id: Iae10b8d3402756c4164b98cc49876056ff7a61e5
    >Signed-off-by: Pranith Kumar K <pkarampu>
    >Reviewed-on: http://review.gluster.org/13766
    >Smoke: Gluster Build System <jenkins.com>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: Ravishankar N <ravishankar>
    >(cherry picked from commit 8deedef565df49def75083678f8d1558c7b1f7d3)

    Change-Id: Ia164360dc1474a717f63633f5deb2c39cc15017c
    BUG: 1327863
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14008
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user