+++ This bug was initially created as a clone of Bug #1226507 +++

--- Additional comment from Anand Avati on 2015-05-30 01:07:39 EDT ---

REVIEW: http://review.gluster.org/11012 (afr: honour selfheal enable/disable volume set options) posted (#1) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Krutika Dhananjay on 2015-05-30 01:22:52 EDT ---

--- Additional comment from Anand Avati on 2015-06-01 07:03:27 EDT ---

REVIEW: http://review.gluster.org/11012 (afr: honour selfheal enable/disable volume set options) posted (#2) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/11062 (afr: honour selfheal enable/disable volume set options) posted (#1) for review on release-3.7 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/11062 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)
------
commit 261f2cd3d03b76248c446d047086032ce18ad1c5
Author: Ravishankar N <ravishankar>
Date:   Wed Jun 3 15:45:02 2015 +0530

    afr: honour selfheal enable/disable volume set options

    afr-v1 had the following volume set options that are used to enable/disable
    self-heals from happening in the AFR xlator when loaded in the client graph:
    cluster.metadata-self-heal
    cluster.data-self-heal
    cluster.entry-self-heal

    In afr-v2, these 3 heals can happen from the client if there is an inode
    refresh. This patch allows such heals to proceed only if the corresponding
    volume set options are set to true.

    Change-Id: I8d97d6020611152e73a269f3fdb607652c66cc86
    BUG: 1227674
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/11012
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    (cherry picked from commit da111ae21429d33179cd11409bc171fae9d55194)
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/11062
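For context, the three options named in the commit message are regular volume set options, so with this fix an administrator can switch client-side self-heals off or back on per volume through the standard gluster CLI. A minimal sketch, assuming a hypothetical volume named "testvol":

    # disable client-side self-heals on the volume (hypothetical volume name "testvol")
    gluster volume set testvol cluster.data-self-heal off
    gluster volume set testvol cluster.metadata-self-heal off
    gluster volume set testvol cluster.entry-self-heal off

    # re-enable them later
    gluster volume set testvol cluster.data-self-heal on
    gluster volume set testvol cluster.metadata-self-heal on
    gluster volume set testvol cluster.entry-self-heal on

With the patch applied, heals triggered from the client on inode refresh are skipped unless the corresponding option is enabled; heals driven by the self-heal daemon are governed separately.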
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user