Bug 1336612 - One of the VMs goes to a paused state when the network goes down and comes back up
Summary: One of the VMs goes to a paused state when the network goes down and comes back up
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On: 1330044
Blocks: 1337822 1337831
 
Reported: 2016-05-17 02:24 UTC by Pranith Kumar K
Modified: 2017-03-27 18:21 UTC
CC List: 13 users

Fixed In Version: glusterfs-3.9.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1330044
Clones: 1337822
Environment:
Last Closed: 2017-03-27 18:21:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
random-io generator (1.09 KB, text/plain), attached 2016-05-17 02:29 UTC by Pranith Kumar K, no flags

Comment 1 Pranith Kumar K 2016-05-17 02:29:10 UTC
Created attachment 1158145 [details]
random-io generator

Steps to recreate the issue in simple steps:
On the mount point run:
gcc read-write.c
touch {1..1000}
for i in {1..1000}; do ./a.out /mnt/r3/$i & done

In another terminal, run:
while true; do kill <pid-of-first-brick>; sleep 1m; gluster volume start r3 force; sleep 1m; done

This will lead to two failures:
1) Failures which say there is Input/Output error
2) Failures with EROFS

Pranith
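
[Editorial note: the attachment itself is not reproduced here. The following is only a minimal sketch, under assumptions, of the kind of random-io generator the steps above compile as read-write.c: a program that takes one file on the Gluster mount and keeps writing and reading a small buffer at random offsets, printing any error it hits. The file handling and I/O pattern are guesses, not the actual attachment.]

/* Hypothetical stand-in for the attached read-write.c (not the real file). */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[4096];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-gluster-mount>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(buf, 'a', sizeof(buf));
    srand(getpid());

    /* Keep issuing writes and reads at random offsets; on a gluster mount
     * that loses a brick, failures surface here as EROFS or EIO. */
    for (;;) {
        off_t off = (off_t)(rand() % 256) * (off_t)sizeof(buf);
        if (pwrite(fd, buf, sizeof(buf), off) < 0)
            fprintf(stderr, "%s: write failed: %s\n", argv[1], strerror(errno));
        if (pread(fd, buf, sizeof(buf), off) < 0)
            fprintf(stderr, "%s: read failed: %s\n", argv[1], strerror(errno));
    }
}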

Comment 2 Vijay Bellur 2016-05-17 02:41:57 UTC
REVIEW: http://review.gluster.org/14368 (cluster/afr: Refresh inode for inode-write fops in need) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Vijay Bellur 2016-05-17 02:42:04 UTC
REVIEW: http://review.gluster.org/14369 (cluster/afr: If possible give errno received from lower xlators) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Vijay Bellur 2016-05-19 05:14:38 UTC
COMMIT: http://review.gluster.org/14369 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 0660aa47b5ea782a5f7051544110cf0da73d598d
Author: Pranith Kumar K <pkarampu>
Date:   Tue May 17 06:38:57 2016 +0530

    cluster/afr: If possible give errno received from lower xlators
    
    In case of 3-way replication with quorum enabled along with sharding,
    if one brick is brought down and brought back up, fops sometimes
    fail with EROFS because the mknod of the shard file fails on the
    two good nodes with EEXIST. So even when quorum is not met, it
    makes sense to unwind with the errno returned by the lower xlators
    as much as possible.
    
    Change-Id: Iabd91cd7c270f5dfe6cbd18c50e59c299a331552
    BUG: 1336612
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14369
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Ravishankar N <ravishankar>
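
[Editorial note: a minimal sketch of the unwind-errno decision this commit describes, using invented names and none of the real AFR structures or APIs: when client-quorum is not met but the bricks that did respond returned a concrete errno (e.g. EEXIST from the mknod of an already-created shard), prefer that errno over the generic EROFS.]

/* Illustrative sketch only; pick_unwind_errno() and its arguments are
 * invented for this example and are not part of the GlusterFS source. */
#include <errno.h>

static int pick_unwind_errno(int quorum_met, int errno_from_bricks)
{
    if (quorum_met)
        return 0;                 /* quorum ok, nothing to override */
    if (errno_from_bricks != 0)
        return errno_from_bricks; /* propagate e.g. EEXIST upwards */
    return EROFS;                 /* fall back to the quorum error */
}

int main(void)
{
    /* With quorum lost and EEXIST reported by the good bricks,
     * the fop is unwound with EEXIST rather than EROFS. */
    return pick_unwind_errno(0, EEXIST) == EEXIST ? 0 : 1;
}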

Comment 5 Vijay Bellur 2016-05-19 15:45:44 UTC
REVIEW: http://review.gluster.org/14368 (cluster/afr: Refresh inode for inode-write fops in need) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 6 Vijay Bellur 2016-05-19 19:23:59 UTC
REVIEW: http://review.gluster.org/14439 (cluster/afr: Refresh inode for inode-write fops in need) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 7 Vijay Bellur 2016-05-19 19:25:03 UTC
REVIEW: http://review.gluster.org/14440 (cluster/afr: Refresh inode for inode-write fops in need) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 8 Vijay Bellur 2016-05-20 04:28:02 UTC
COMMIT: http://review.gluster.org/14368 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 8a71e498fdcedacd1a32e121b3e081c61ee57a2e
Author: Pranith Kumar K <pkarampu>
Date:   Mon May 16 15:05:36 2016 +0530

    cluster/afr: Refresh inode for inode-write fops in need
    
    Problem:
    If a named fresh-lookup is done on a loc and the fop fails on one of the
    bricks, or is not sent to one of the bricks, but that brick is up again by
    the time the response reaches AFR, 'can_interpret' will be set to false in
    afr_lookup_done(). The inode-ctx for that inode is then never set, which
    can lead to EIO in a transaction, since the transaction depends on the
    'readable' array being available by that point.
    
    Fix:
    Refresh the inode for inode-write fops so that the ctx gets set if it was
    not already set at the time of the named fresh-lookup, or if the file is
    in split-brain, in which case one more refresh is performed before failing
    the fop, to check whether the file is still in split-brain.
    
    BUG: 1336612
    Change-Id: I5c50b62c8de06129b8516039f7c252e5008c47a5
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14368
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Ravishankar N <ravishankar>
    CentOS-regression: Gluster Build System <jenkins.com>
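
[Editorial note: a minimal sketch of the decision described above, using hypothetical names rather than the real afr_* symbols: an inode-write fop triggers an inode refresh when the ctx ('readable' array) was never populated during the named fresh-lookup, or when the file last looked split-brained and one more refresh is warranted before failing the fop.]

/* Illustrative sketch only; the struct and function below are hypothetical
 * and do not correspond to actual GlusterFS/AFR symbols. */
#include <stdbool.h>
#include <stdio.h>

struct afr_like_inode_ctx {
    bool readable_set; /* was the 'readable' array filled in at lookup time? */
    bool split_brain;  /* did the last refresh classify the file as split-brain? */
};

/* Refresh before the write transaction if the ctx is missing, or do one
 * last refresh before failing the fop when split-brain was seen earlier. */
static bool needs_inode_refresh(const struct afr_like_inode_ctx *ctx)
{
    return !ctx->readable_set || ctx->split_brain;
}

int main(void)
{
    struct afr_like_inode_ctx ctx = { .readable_set = false, .split_brain = false };
    printf("refresh needed: %s\n", needs_inode_refresh(&ctx) ? "yes" : "no");
    return 0;
}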

Comment 9 Vijay Bellur 2016-05-26 13:25:39 UTC
REVIEW: http://review.gluster.org/14545 (cluster/afr: Fix warning about unused variable) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 10 Vijay Bellur 2016-05-30 05:12:33 UTC
COMMIT: http://review.gluster.org/14545 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit bab6bf418bd2be8210135c32f349a5a8d7d7bb91
Author: Pranith Kumar K <pkarampu>
Date:   Thu May 26 18:45:59 2016 +0530

    cluster/afr: Fix warning about unused variable
    
    BUG: 1336612
    Change-Id: Ife1ce4b11776a303df04321b4a8fc5de745389d6
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14545
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Ravishankar N <ravishankar>

Comment 11 Shyamsundar 2017-03-27 18:21:34 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/

