+++ This bug was initially created as a clone of Bug #1361249 +++

posix_zerofill() implements zerofilling of a given (offset, length) range by doing a writev in a loop, followed by an optional fsync on the file. fallocate(2) has a FALLOC_FL_ZERO_RANGE flag which does away with all this and provides the same result (from a userspace application's point of view) with a single syscall. The zerofill fop should attempt the latter and fall back to the former if it fails.
REVIEW: http://review.gluster.org/15044 (posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop) posted (#1) for review on release-3.8 by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/15044 (posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop) posted (#2) for review on release-3.8 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/15044 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu)
------
commit fe1054110ac54750ca0333a727d83b14a98e165e
Author: Ravishankar N <ravishankar>
Date:   Thu Jul 28 20:42:45 2016 +0530

    posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop

    Backport of http://review.gluster.org/#/c/15037/

    posix_zerofill() implements zerofilling of a given (offset, length)
    range by doing a writev in a loop, followed by an optional fsync on
    the file. fallocate(2) has a FALLOC_FL_ZERO_RANGE flag which does
    away with all this and provides the same result (from a userspace
    application's point of view) with a single syscall. This patch
    attempts the zerofill with the latter and falls back to the former
    if it fails.

    Tested using a libgfapi-based C program on XFS, and observed with
    gdb that posix_zerofill()'s call to fallocate with
    FALLOC_FL_ZERO_RANGE succeeded.

    Change-Id: Iceaf0cbc57c52dac63540872e8538d79e8dee631
    BUG: 1361483
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/15044
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.2, please open a new bug report.

glusterfs-3.8.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-August/000058.html
[2] https://www.gluster.org/pipermail/gluster-users/