Backport of https://bugzilla.redhat.com/show_bug.cgi?id=1361249 to 3.7.
REVIEW: http://review.gluster.org/15082 (posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop) posted (#1) for review on release-3.7 by Oleksandr Natalenko (oleksandr)
REVIEW: http://review.gluster.org/15082 (posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop) posted (#2) for review on release-3.7 by Oleksandr Natalenko (oleksandr)
COMMIT: http://review.gluster.org/15082 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)

commit 0f2c1fdee27cf6c35dee129d14f7226a20464c23
Author: Ravishankar N <ravishankar>
Date: Thu Jul 28 20:42:45 2016 +0530

    posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop

    posix_zerofill() implements zerofilling of a given (offset, length)
    by doing a writev in a loop, followed by an optional fsync on the
    file. fallocate(2) has a FALLOC_FL_ZERO_RANGE flag which does away
    with all of this and provides the same result (from a userspace
    application's point of view) with a single syscall. This patch
    attempts the zerofill with the latter and falls back to the former
    if it fails.

    Tested using a libgfapi-based C program on XFS, and observed using
    gdb that posix_zerofill()'s call to fallocate with
    FALLOC_FL_ZERO_RANGE was a success.

    > Reviewed-on: http://review.gluster.org/15037
    > Reviewed-on: http://review.gluster.org/15100
    > Smoke: Gluster Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > Reviewed-by: Pranith Kumar Karampuri <pkarampu>

    BUG: 1363750
    Change-Id: I77e9b7de0d59c255f06b0c39c43a276990081727
    Signed-off-by: Ravishankar N <ravishankar>
    Signed-off-by: Oleksandr Natalenko <oleksandr>
    Reviewed-on: http://review.gluster.org/15082
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Prashanth Pai <ppai>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.15, please open a new bug report. glusterfs-3.7.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://www.gluster.org/pipermail/gluster-devel/2016-September/050714.html [2] https://www.gluster.org/pipermail/gluster-users/