Description of problem:
It is observed that self-heal is doing fsync even after setting the ensure-durability option to off.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
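For context, the option in question is set per volume with the gluster CLI; `VOLNAME` below is a placeholder for the affected volume:

```shell
# Disable durability for self-heal writes (skips fsync at the cost of
# weaker crash guarantees). VOLNAME is a placeholder.
gluster volume set VOLNAME cluster.ensure-durability off

# Verify the current value
gluster volume get VOLNAME cluster.ensure-durability
```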
REVIEW: http://review.gluster.org/14048 (cluster/afr: Do not fsync when durability is off) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/14048 (cluster/afr: Do not fsync when durability is off) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/14048 (cluster/afr: Do not fsync when durability is off) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/14048 (cluster/afr: Do not fsync when durability is off) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/14048 (cluster/afr: Do not fsync when durability is off) posted (#5) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/14048 (cluster/afr: Do not fsync when durability is off) posted (#6) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/14048 (cluster/afr: Do not fsync when durability is off) posted (#7) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/14048 committed in master by Jeff Darcy (jdarcy)
------
commit 302e218f68ef5edab6b369411d6f06cafea08ce1
Author: Pranith Kumar K <pkarampu>
Date:   Fri Apr 22 11:43:45 2016 +0530

    cluster/afr: Do not fsync when durability is off

    BUG: 1329501
    Change-Id: Id402c20f2fa19b22bc402295e03e7a0ea96b0c40
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14048
    Reviewed-by: Ravishankar N <ravishankar>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user