Bug 1337831 - One of the VMs goes into a paused state when the network goes down and comes back up
Summary: One of the VMs goes into a paused state when the network goes down and comes back up
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.11
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On: 1330044 1336612 1337822
Blocks: 1311817
 
Reported: 2016-05-20 08:02 UTC by Pranith Kumar K
Modified: 2016-06-28 12:18 UTC
CC List: 14 users

Fixed In Version: glusterfs-3.7.12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1337822
Environment:
Last Closed: 2016-06-28 12:18:31 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Vijay Bellur 2016-05-20 08:18:45 UTC
REVIEW: http://review.gluster.org/14452 (cluster/afr: If possible give errno received from lower xlators) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 2 Vijay Bellur 2016-05-20 08:18:52 UTC
REVIEW: http://review.gluster.org/14453 (cluster/afr: Refresh inode for inode-write fops in need) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 3 Vijay Bellur 2016-05-20 11:39:46 UTC
COMMIT: http://review.gluster.org/14452 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit ccb463647eaba1798943e1eb9ce6e6b3fa2e71c2
Author: Pranith Kumar K <pkarampu>
Date:   Tue May 17 06:38:57 2016 +0530

    cluster/afr: If possible give errno received from lower xlators
    
    In the case of 3-way replication with quorum enabled along with
    sharding, if one brick is brought down and brought back up, fops
    sometimes fail with EROFS because the mknod of the shard file fails
    on the two good nodes with EEXIST. So even when quorum is not met,
    it makes sense to unwind with the errno returned by the lower
    xlators whenever possible.
    
     >Change-Id: Iabd91cd7c270f5dfe6cbd18c50e59c299a331552
     >BUG: 1336612
     >Signed-off-by: Pranith Kumar K <pkarampu>
     >Reviewed-on: http://review.gluster.org/14369
     >Smoke: Gluster Build System <jenkins.com>
     >NetBSD-regression: NetBSD Build System <jenkins.org>
     >CentOS-regression: Gluster Build System <jenkins.com>
     >Reviewed-by: Ravishankar N <ravishankar>
    
    BUG: 1337831
    Change-Id: I18979db118911e588da318094b2d22f5d426efd5
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14452
    Reviewed-by: Ravishankar N <ravishankar>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
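
To illustrate the idea behind this change, here is a minimal C sketch of the errno-selection logic. The reply struct, pick_unwind_errno() helper, and main() are hypothetical stand-ins, not the actual afr implementation; they only show why a concrete errno from a lower xlator (EEXIST from the shard-file mknod) is preferable to a generic quorum error.

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical per-subvolume reply, standing in for afr's internal
 * replies array. */
struct reply {
    int valid;     /* did this brick answer at all?           */
    int op_ret;    /* >= 0 on success, -1 on failure          */
    int op_errno;  /* errno reported by the lower xlator      */
};

/* Pick the errno to unwind with when quorum is not met.  Before the
 * fix the caller would see a generic error such as EROFS; the fix
 * prefers a concrete errno supplied by the lower xlators when one is
 * available. */
static int
pick_unwind_errno(const struct reply *replies, int count, int quorum_met)
{
    int fallback = EROFS;  /* generic "quorum not met" answer */

    if (quorum_met)
        return 0;

    for (int i = 0; i < count; i++) {
        if (replies[i].valid && replies[i].op_ret < 0 &&
            replies[i].op_errno != 0)
            return replies[i].op_errno;  /* errno from a lower xlator */
    }
    return fallback;
}

int
main(void)
{
    /* Two good bricks answered the mknod with EEXIST, one brick is down. */
    struct reply replies[3] = {
        { .valid = 1, .op_ret = -1, .op_errno = EEXIST },
        { .valid = 1, .op_ret = -1, .op_errno = EEXIST },
        { .valid = 0 },
    };

    int err = pick_unwind_errno(replies, 3, /* quorum_met = */ 0);
    printf("unwind with errno %d (%s)\n", err, strerror(err));
    return 0;
}

On Linux this prints errno 17 (File exists), which is what the fop now unwinds with instead of the misleading EROFS.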

Comment 4 Vijay Bellur 2016-05-26 13:28:46 UTC
REVIEW: http://review.gluster.org/14453 (cluster/afr: Refresh inode for inode-write fops in need) posted (#2) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)

Comment 5 Vijay Bellur 2016-05-29 11:13:41 UTC
COMMIT: http://review.gluster.org/14453 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit 1d28634b9aab65b08c1c2e9a6f48619c9fa494dc
Author: Pranith Kumar K <pkarampu>
Date:   Mon May 16 15:05:36 2016 +0530

    cluster/afr: Refresh inode for inode-write fops in need
    
    Problem:
    If a named fresh-lookup is done on a loc and the fop either fails on
    one of the bricks or is not sent to one of the bricks, but that brick
    is up by the time the response reaches afr, 'can_interpret' is set to
    false in afr_lookup_done(). The inode-ctx for that inode is then never
    set, which can lead to EIO during a transaction because the
    transaction depends on the 'readable' array being available by that
    point.

    Fix:
    Refresh the inode for inode-write fops so that the ctx gets set if it
    was not already set at the time of the named fresh-lookup, or if the
    file is in split-brain, where one more refresh is needed before
    failing the fop to check whether the file is still in split-brain.
    
     >BUG: 1336612
     >Change-Id: I5c50b62c8de06129b8516039f7c252e5008c47a5
     >Signed-off-by: Pranith Kumar K <pkarampu>
     >Reviewed-on: http://review.gluster.org/14368
     >Smoke: Gluster Build System <jenkins.com>
     >NetBSD-regression: NetBSD Build System <jenkins.org>
     >Reviewed-by: Ravishankar N <ravishankar>
     >CentOS-regression: Gluster Build System <jenkins.com>
     >Backport of http://review.gluster.org/14545
    
    BUG: 1337831
    Change-Id: If4465ab8fc506e1f905b623b82a53bdab8f5cffd
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14453
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Ravishankar N <ravishankar>
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
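
As a rough illustration of the fix, here is a minimal C sketch of the "refresh before inode-write fops when the ctx is unusable" pattern. The inode_ctx struct, refresh_inode(), and inode_write_fop() are made-up stand-ins chosen for this sketch, not the real translator code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for afr's per-inode context.  In the real
 * translator this holds, among other things, the 'readable' array that
 * transactions rely on. */
struct inode_ctx {
    bool ctx_set;      /* was the ctx filled in during lookup?   */
    bool split_brain;  /* last known split-brain verdict          */
};

/* Pretend refresh: in afr this would re-read metadata from the bricks
 * and repopulate the ctx; here it simply marks the ctx as usable. */
static void
refresh_inode(struct inode_ctx *ctx)
{
    printf("refreshing inode before the write fop\n");
    ctx->ctx_set = true;
    ctx->split_brain = false;  /* assume the refresh resolved the doubt */
}

/* Gate for inode-write fops (writev, truncate, setattr, ...).  The fix
 * described above boils down to: refresh first if the ctx was never set
 * (e.g. the named lookup could not interpret the replies) or if the file
 * was last seen in split-brain, instead of failing straight away. */
static int
inode_write_fop(struct inode_ctx *ctx)
{
    if (!ctx->ctx_set || ctx->split_brain)
        refresh_inode(ctx);

    if (!ctx->ctx_set || ctx->split_brain) {
        printf("still unusable: fail the fop\n");
        return -1;
    }
    printf("ctx is usable: wind the write\n");
    return 0;
}

int
main(void)
{
    /* Lookup left the ctx unset; the write triggers a refresh first. */
    struct inode_ctx ctx = { .ctx_set = false, .split_brain = false };
    return inode_write_fop(&ctx) == 0 ? 0 : 1;
}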

Comment 6 Kaushal 2016-06-28 12:18:31 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

