REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#1) for review on release-3.7 by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#2) for review on release-3.7 by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#3) for review on release-3.7 by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#4) for review on release-3.7 by Raghavendra Talur (rtalur)
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#5) for review on release-3.7 by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#6) for review on release-3.7 by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#7) for review on release-3.7 by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#8) for review on release-3.7 by Raghavendra G (rgowdapp)
COMMIT: http://review.gluster.org/13057 committed in release-3.7 by Raghavendra G (rgowdapp)
------
commit e424283c1f40386e5e3323b44df1a591ca62a7e8
Author: Raghavendra G <rgowdapp>
Date: Tue Nov 17 12:57:54 2015 +0530

performance/write-behind: retry "failed syncs to backend"

1. When a sync fails, the cached-write is still preserved unless there is a flush/fsync waiting on it.

2. When a sync fails and there is a flush/fsync waiting on the cached-write, the cache is thrown away and no further retries are made. In other words, flush/fsync act as barriers for all the previous writes: all previous writes are either successfully synced to the backend or forgotten on error. (Whether fsync acts as a barrier is controlled by an option; see below for details.) Without such a barrier fop (especially flush, which is issued prior to a close), we would end up retrying forever, even after the fd is closed.

3. If a fop is waiting on a cached-write and syncing it to the backend fails, the waiting fop is failed.

4. Sync failures with no fop waiting are ignored and are not propagated to the application. For example:
   a. the first attempt to sync a cached-write w1 fails
   b. the second attempt to sync w1 succeeds
   If no fops dependent on w1 are issued between a and b, the application never learns about the failure encountered in a.

5. The effect of repeated sync failures is that there is no cache left for future writes, so they cannot be written behind.

fsync as a barrier and resync of cached writes post fsync failure:
==================================================================

Whether failed syncs keep being retried after an fsync is controlled by the option "resync-failed-syncs-after-fsync", which defaults to "off". If syncing the cached-writes issued before an fsync fails, this option configures whether to retry syncing them after the fsync or to forget them. If set to "on", on sync failures cached-writes are retried until a "flush" fop arrives (or a sync succeeds).

fsync itself is failed irrespective of the value of this option whenever any cached-write issued before the fsync fails to sync.

Change-Id: I6097c0257bfb9ee5b1f616fbe6a0576ae9af369a
Signed-off-by: Raghavendra G <rgowdapp>
BUG: 1293534
Signed-off-by: Raghavendra Talur <rtalur>
Reviewed-on: http://review.gluster.org/13057
Smoke: Gluster Build System <jenkins.com>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.com>
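The retry/barrier rules in the commit message can be sketched as a small model. This is a hypothetical Python illustration, not GlusterFS source; the names FlakyBackend and sync_cached_write are made up for the example. It shows rule 4 (a transient sync failure with no dependent fop is retried silently, so the application never sees it) and rules 2-3 (a waiting flush/fsync acts as a barrier: the cache is dropped and the waiting fop is failed instead of retrying forever).

```python
class FlakyBackend:
    """Backend whose first write attempt fails, then succeeds
    (models a transient sync failure)."""
    def __init__(self):
        self.calls = 0
        self.stored = []

    def write(self, data):
        self.calls += 1
        if self.calls == 1:
            raise IOError("transient backend failure")
        self.stored.append(data)


def sync_cached_write(backend, data, has_waiting_fop):
    """Sync one cached-write to the backend.

    Returns (synced, error): on success the application sees nothing;
    if a flush/fsync is waiting, a failure is propagated to that fop
    and the cached-write is dropped rather than retried.
    """
    while True:
        try:
            backend.write(data)
            return True, None        # rule 4: silent success, error in
                                     # the earlier attempt never surfaces
        except IOError as err:
            if has_waiting_fop:
                # rules 2-3: barrier fop waiting -> throw the cache
                # away and fail the waiting fop, no further retries
                return False, err
            # rule 1: no dependent fop -> keep the cached-write, retry


backend = FlakyBackend()
synced, err = sync_cached_write(backend, b"w1", has_waiting_fop=False)
print(synced, err, backend.calls)   # prints: True None 2
```

In a real deployment the retry-after-fsync behaviour would presumably be toggled with something like `gluster volume set <volname> performance.resync-failed-syncs-after-fsync on` (the CLI key is assumed here from the option name in the commit message).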
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report. glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user