+++ This bug was initially created as a clone of Bug #1286017 +++

Description of problem:
The arbiter brick stores only zero-byte files anyway. There is no point in sending writes to the arbiter brick, only for it to unwind them without performing any action.

--- Additional comment from Vijay Bellur on 2015-11-27 04:05:48 EST ---

REVIEW: http://review.gluster.org/12777 (afr: skip healing data blocks for arbiter) posted (#1) for review on master by Ravishankar N (ravishankar)
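To illustrate the idea in the description above, here is a minimal, self-contained C sketch of a heal-write dispatcher that skips the arbiter brick. This is not the actual AFR code; all type and function names here are hypothetical stand-ins. It assumes the common arbiter convention that the arbiter is the last brick of the replica set.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the replica configuration. */
    typedef struct {
        int child_count;   /* number of bricks in the replica set      */
        int arbiter_count; /* 1 if the last brick is an arbiter, else 0 */
    } replica_info_t;

    /* In arbiter volumes the arbiter is conventionally the last brick. */
    static bool
    is_arbiter_brick(const replica_info_t *priv, int brick_idx)
    {
        return priv->arbiter_count && brick_idx == priv->child_count - 1;
    }

    /* Stub standing in for the actual brick write path. */
    static void
    dispatch_write(int brick_idx, const char *buf, size_t len)
    {
        (void)buf;
        printf("write %zu bytes to brick %d\n", len, brick_idx);
    }

    /* Dispatch a healing write to every data brick, skipping the
     * arbiter: it holds zero-byte files, so a data write to it would
     * only be unwound without any action being performed. */
    static void
    heal_write_to_sinks(const replica_info_t *priv, const char *buf,
                        size_t len)
    {
        for (int i = 0; i < priv->child_count; i++) {
            if (is_arbiter_brick(priv, i))
                continue; /* metadata/entry heals still reach the arbiter */
            dispatch_write(i, buf, len);
        }
    }

    int main(void)
    {
        replica_info_t priv = { .child_count = 3, .arbiter_count = 1 };
        heal_write_to_sinks(&priv, "data", 4); /* bricks 0 and 1 only */
        return 0;
    }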
REVIEW: http://review.gluster.org/12778 (afr: skip healing data blocks for arbiter) posted (#1) for review on release-3.7 by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/12778 (afr: skip healing data blocks for arbiter) posted (#2) for review on release-3.7 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/12778 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)

------

commit 11d91f4e5fb66596addd8906b2f65a4137bd580a
Author: Ravishankar N <ravishankar>
Date: Mon Jan 11 12:58:16 2016 +0000

afr: skip healing data blocks for arbiter

Backport of http://review.gluster.org/12777

1. ...but still do other parts of data-self-heal like restoring the time and undo pending xattrs.
2. Perform undo_pending inside inodelks.
3. If arbiter is the only sink, do these other parts of data-self-heal inside a single lock-unlock sequence.

Change-Id: I64c9d5b594375f852bfb73dee02c66a9a67a7176
BUG: 1286169
Signed-off-by: Ravishankar N <ravishankar>
Reviewed-on: http://review.gluster.org/12778
Smoke: Gluster Build System <jenkins.com>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.com>
Reviewed-by: Pranith Kumar Karampuri (pkarampu)
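As a rough illustration of points 1-3 in the commit message above, the sketch below shows the arbiter-only-sink path: no data blocks are copied, but the remaining parts of data-self-heal still run inside a single lock-unlock sequence. This is not the real AFR self-heal flow; every name here is a hypothetical placeholder.

    #include <stdio.h>

    /* Illustrative stand-in; not a GlusterFS type. */
    typedef struct { const char *path; } heal_ctx_t;

    static void acquire_inodelk(heal_ctx_t *c)     { printf("inodelk(%s)\n", c->path); }
    static void release_inodelk(heal_ctx_t *c)     { printf("unlock(%s)\n", c->path); }
    static void restore_timestamps(heal_ctx_t *c)  { printf("restore times on %s\n", c->path); }
    static void undo_pending_xattrs(heal_ctx_t *c) { printf("undo pending xattrs on %s\n", c->path); }

    /* Arbiter-only-sink path: the arbiter stores zero-byte files, so
     * data blocks are skipped, yet the other parts of data-self-heal
     * still happen, all under one inodelk/unlock pair. */
    static int
    heal_data_arbiter_sink(heal_ctx_t *ctx)
    {
        acquire_inodelk(ctx);      /* 3. a single lock covers everything  */
        restore_timestamps(ctx);   /* 1. still restore the file times     */
        undo_pending_xattrs(ctx);  /* 2. undo_pending done under inodelks */
        release_inodelk(ctx);
        return 0;
    }

    int main(void)
    {
        heal_ctx_t ctx = { "/some/healed/file" };
        return heal_data_arbiter_sink(&ctx);
    }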
glusterfs-3.7.7 contains a fix.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user