Description of problem: The arbiter brick stores only zero-byte files in any case. There is no point in sending data writes to the arbiter brick only for it to unwind them without performing any action.
REVIEW: http://review.gluster.org/12777 (afr: skip healing data blocks for arbiter) posted (#1) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/12777 (afr: skip healing data blocks for arbiter) posted (#2 through #7) for review on master by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/12777 committed in master by Pranith Kumar Karampuri (pkarampu) ------

commit b95ad51e00d6076d37809bcc50b89fee1cf248ef
Author: Ravishankar N <ravishankar>
Date: Mon Jan 11 12:58:16 2016 +0000

    afr: skip healing data blocks for arbiter

    1. ...but still do other parts of data-self-heal like restoring the
       time and undoing pending xattrs.
    2. Perform undo_pending inside inodelks.
    3. If the arbiter is the only sink, do these other parts of
       data-self-heal inside a single lock-unlock sequence.

    Change-Id: I64c9d5b594375f852bfb73dee02c66a9a67a7176
    BUG: 1286017
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/12777
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user