Bug 1293534

Summary: guest paused due to IO error from gluster based storage doesn't resume automatically or manually
Product: [Community] GlusterFS
Component: write-behind
Version: 3.7.6
Hardware: All
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Reporter: Raghavendra G <rgowdapp>
Assignee: Raghavendra G <rgowdapp>
CC: amureini, bmcclain, bugs, chayang, danken, dfediuck, ebenahar, gklein, jcody, jraju, juzhang, knoel, meverett, mkenneth, pagupta, rbalakri, rcyriac, rpacheco, rtalur, sankarshan, sasundar, tlavigne, virt-maint
Keywords: Triaged
Whiteboard: gluster
Target Milestone: ---
Target Release: ---
Fixed In Version: glusterfs-3.7.9
Doc Type: Bug Fix
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Story Points: ---
Clone Of: 1279730
Bug Depends On: 1279730
Bug Blocks: 1279240
Last Closed: 2016-04-19 07:25:16 UTC

Comment 1 Vijay Bellur 2015-12-22 04:49:54 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#1) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 2 Vijay Bellur 2015-12-22 09:58:33 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#2) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 3 Vijay Bellur 2015-12-29 16:34:19 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#3) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 4 Vijay Bellur 2015-12-30 11:09:03 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#4) for review on release-3.7 by Raghavendra Talur (rtalur)

Comment 5 Vijay Bellur 2016-01-06 09:53:37 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#5) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 6 Vijay Bellur 2016-01-13 12:27:45 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#6) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 7 Vijay Bellur 2016-01-13 13:46:50 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#7) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 8 Vijay Bellur 2016-02-11 08:36:14 UTC
REVIEW: http://review.gluster.org/13057 (performance/write-behind: retry "failed syncs to backend") posted (#8) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 9 Vijay Bellur 2016-02-16 09:13:50 UTC
COMMIT: http://review.gluster.org/13057 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit e424283c1f40386e5e3323b44df1a591ca62a7e8
Author: Raghavendra G <rgowdapp>
Date:   Tue Nov 17 12:57:54 2015 +0530

    performance/write-behind: retry "failed syncs to backend"
    
    1. When sync fails, the cached-write is still preserved unless there
       is a flush/fsync waiting on it.
    
    2. When a sync fails and there is a flush/fsync waiting on the
       cached-write, the cache is thrown away and no further retries will
       be made. In other words flush/fsync act as barriers for all the
       previous writes. The behaviour of fsync acting as a barrier is
       controlled by an option (see below for details). All previous
       writes are either successfully synced to backend or forgotten in
       case of an error. Without such a barrier fop (especially flush,
       which is issued prior to a close), we would end up retrying forever
       even after the fd is closed.
    
    3. If a fop is waiting on cached-write and syncing to backend fails,
       the waiting fop is failed.
    
    4. sync failures when no fop is waiting are ignored and are not
       propagated to the application. For example,
       a. first attempt of sync of a cached-write w1 fails
       b. second attempt of sync of w1 succeeds
    
       If no fops dependent on w1 are issued between a and b, the
       application won't know about the failure encountered in a.
    
    5. The effect of repeated sync failures is that there will be no
       cache for future writes, and they cannot be written behind.
    
    fsync as a barrier and resync of cached writes post fsync failure:
    ==================================================================
    Whether to keep retrying failed syncs post fsync is controlled by an
    option "resync-failed-syncs-after-fsync". By default, this option is
    set to "off".
    
    If sync of "cached-writes issued before fsync" (to backend) fails,
    this option configures whether to retry syncing them after fsync or
    forget them. If set to on, cached-writes whose sync fails are retried
    until a "flush" fop arrives (or a later sync succeeds). fsync itself is
    failed irrespective of the value of this option whenever the sync of any
    cached-write issued before the fsync fails.
    
    Change-Id: I6097c0257bfb9ee5b1f616fbe6a0576ae9af369a
    Signed-off-by: Raghavendra G <rgowdapp>
    BUG: 1293534
    Signed-off-by: Raghavendra Talur <rtalur>
    Reviewed-on: http://review.gluster.org/13057
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
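
To make the behaviour described in the commit message concrete, the following is a minimal, self-contained C sketch of the sync-failure handling it outlines. It is not the actual write-behind translator code; every name in it (cached_write_t, handle_sync_failure, the resync_failed_syncs_after_fsync flag, and so on) is a hypothetical stand-in for the structures and the option the commit refers to.

/*
 * Illustrative sketch only (not GlusterFS source): models the
 * sync-failure rules described in the commit message above.
 */
#include <stdio.h>
#include <stdbool.h>

typedef enum { FOP_NONE, FOP_FLUSH, FOP_FSYNC } waiting_fop_t;

typedef struct {
    const char   *name;      /* label for the cached write            */
    waiting_fop_t waiting;   /* fop (if any) waiting on this write    */
    bool          dropped;   /* cache thrown away, no further retries */
} cached_write_t;

/* Stand-in for the "resync-failed-syncs-after-fsync" option; per the
 * commit it defaults to off, i.e. fsync acts as a barrier by default. */
static bool resync_failed_syncs_after_fsync = false;

/* Apply points 1-4 of the commit message when syncing a cached write
 * to the backend fails. Returns true if the write is kept for retry. */
static bool handle_sync_failure(cached_write_t *w)
{
    bool barrier = (w->waiting == FOP_FLUSH) ||
                   (w->waiting == FOP_FSYNC &&
                    !resync_failed_syncs_after_fsync);

    if (barrier) {
        /* Points 2/3: a barrier fop is waiting, so the cache is
         * dropped and the waiting fop is failed; no more retries.    */
        w->dropped = true;
        printf("%s: sync failed, dropping cache, failing waiting fop\n",
               w->name);
        return false;
    }

    /* Points 1/4: nothing depends on this write yet, so keep it
     * cached and retry later; the application never sees this error. */
    printf("%s: sync failed, keeping cached write for retry\n", w->name);
    return true;
}

int main(void)
{
    cached_write_t w1 = { .name = "w1", .waiting = FOP_NONE  };
    cached_write_t w2 = { .name = "w2", .waiting = FOP_FLUSH };
    cached_write_t w3 = { .name = "w3", .waiting = FOP_FSYNC };

    handle_sync_failure(&w1);  /* retained: a later sync may succeed   */
    handle_sync_failure(&w2);  /* dropped: flush always acts as barrier */
    handle_sync_failure(&w3);  /* dropped by default; the cached data
                                  would be retained for retry if the
                                  resync option were on, although the
                                  fsync fop itself still fails          */
    return 0;
}

In a deployment the option would presumably be toggled per volume through the gluster CLI; the commit only names it "resync-failed-syncs-after-fsync", so the exact volume-set key (for example a "performance." prefix) is an assumption and not confirmed here.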

Comment 10 Mike McCune 2016-03-28 23:25:37 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

Comment 11 Kaushal 2016-04-19 07:25:16 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user