Bug 1428670 - Disconnects in NFS mount lead to I/O hang and leave the mount inaccessible
Summary: Disconnects in NFS mount lead to I/O hang and leave the mount inaccessible
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: 3.10
Hardware: All
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1427012
Blocks: 1409135 1425740
 
Reported: 2017-03-03 06:06 UTC by Raghavendra G
Modified: 2017-04-05 00:01 UTC

Fixed In Version: glusterfs-3.10.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1427012
Environment:
Last Closed: 2017-04-05 00:01:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Worker Ant 2017-03-03 06:09:59 UTC
REVIEW: https://review.gluster.org/16835 (rpc/clnt: remove locks while notifying CONNECT/DISCONNECT) posted (#1) for review on release-3.10 by Raghavendra G (rgowdapp)

Comment 2 Worker Ant 2017-03-06 16:20:36 UTC
COMMIT: https://review.gluster.org/16835 committed in release-3.10 by Shyamsundar Ranganathan (srangana) 
------
commit fab2c6d574742e6c356d6b364d720540fc354fe8
Author: Raghavendra G <rgowdapp>
Date:   Tue Feb 28 13:13:59 2017 +0530

    rpc/clnt: remove locks while notifying CONNECT/DISCONNECT
    
    Locking during notify was introduced as part of commit
    aa22f24f5db7659387704998ae01520708869873 [1], to fix out-of-order
    CONNECT/DISCONNECT events from rpc-clnt to parent xlators [2].
    However, as part of handling a DISCONNECT, protocol/client unwinds
    (with failure) the saved frames that are waiting for responses.
    This saved_frames_unwind can be a costly operation and hence
    ideally shouldn't run inside the critical section of notifylock,
    as it unnecessarily delays reconnection to the same brick. Also,
    it is not good practice to pass control to other xlators while
    holding a lock, as this can lead to deadlocks. So, this patch
    removes the locking in rpc-clnt while notifying parent xlators.
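    
    Schematically, the change moves the notify call out of the
    critical section. The following minimal C sketch illustrates the
    before/after pattern; the names (struct conn, notifylock,
    notify_parents) are hypothetical stand-ins, not the actual
    rpc-clnt code:
    
        #include <pthread.h>
    
        struct conn {
            pthread_mutex_t notifylock;
            /* ... connection state ... */
        };
    
        /* Stand-in for the parent notification, which may run a costly
         * saved_frames_unwind while handling DISCONNECT. */
        static void notify_parents(struct conn *c) { (void)c; }
    
        /* Before: parents are notified while notifylock is held, so slow
         * work in the handler delays reconnection and risks deadlock if
         * the handler re-enters rpc-clnt. */
        static void notify_disconnect_old(struct conn *c)
        {
            pthread_mutex_lock(&c->notifylock);
            notify_parents(c); /* costly work inside the critical section */
            pthread_mutex_unlock(&c->notifylock);
        }
    
        /* After: control leaves rpc-clnt with no lock held. */
        static void notify_disconnect_new(struct conn *c)
        {
            notify_parents(c);
        }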
    
    To fix [2], two changes are present in this patch (sketched below):
    
    * notify DISCONNECT before cleaning up the rpc connection (same as
      commit a6b63e11b7758cf1bfcb6798, patch [3]).
    * protocol/client uses rpc_clnt_cleanup_and_start, which cleans up
      the rpc connection and starts it again while handling a
      DISCONNECT event from rpc. Note that patch [3] had been reverted
      because rpc_clnt_start, as called in the quick_reconnect path of
      protocol/client, did not invoke connect on the transport, since
      the connection had not been cleaned up _yet_ (cleanup having been
      moved to after notification in rpc-clnt). This resulted in
      clients never attempting to connect to bricks.
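    
    Putting the two changes together, the DISCONNECT path notifies
    parent xlators first and only then tears down and restarts the
    connection. A simplified C sketch of that ordering follows; every
    function name here is hypothetical, not the actual GlusterFS API:
    
        struct rpc_conn; /* opaque connection handle (illustrative) */
    
        /* Stand-ins for the real operations: */
        static void notify_parents_disconnect(struct rpc_conn *c) { (void)c; }
        static void cleanup_connection(struct rpc_conn *c) { (void)c; }
        static void start_reconnect(struct rpc_conn *c) { (void)c; }
    
        static void on_transport_disconnect(struct rpc_conn *conn)
        {
            /* 1. Notify parent xlators first, so saved frames are
             *    unwound before the connection state is torn down. */
            notify_parents_disconnect(conn);
    
            /* 2. Only then clean up the rpc connection and start it
             *    again, so that the new connect actually reaches the
             *    transport (the issue that forced the revert of
             *    patch [3]). */
            cleanup_connection(conn);
            start_reconnect(conn);
        }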
    
    Note that one of the neater ways to fix [2] (without using locks)
    is to introduce generation numbers that map CONNECTs and
    DISCONNECTs to epochs, and to ignore DISCONNECT events that don't
    belong to the current epoch. However, this approach is a bit
    complex to implement and requires time, so the current patch is a
    hacky stop-gap fix till we come up with a cleaner solution.
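    
    For reference, that generation-number scheme could look roughly
    like the following (a purely hypothetical sketch; this is not what
    the patch implements):
    
        #include <stdbool.h>
        #include <stdint.h>
    
        struct conn_state {
            uint64_t epoch; /* bumped on every successful CONNECT */
        };
    
        /* Each CONNECT starts a new epoch; events generated by that
         * connection carry the epoch they were born in. */
        static uint64_t on_connect(struct conn_state *s)
        {
            return ++s->epoch;
        }
    
        /* A DISCONNECT from an older epoch is stale and is dropped,
         * so out-of-order delivery across epochs becomes harmless. */
        static bool deliver_disconnect(struct conn_state *s,
                                       uint64_t event_epoch)
        {
            return event_epoch == s->epoch;
        }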
    
    [1] http://review.gluster.org/15916
    [2] https://bugzilla.redhat.com/show_bug.cgi?id=1386626
    [3] http://review.gluster.org/15681
    
    >Change-Id: I62daeee8bb1430004e28558f6eb133efd4ccf418
    >Signed-off-by: Raghavendra G <rgowdapp>
    >BUG: 1427012
    >Reviewed-on: https://review.gluster.org/16784
    >Smoke: Gluster Build System <jenkins.org>
    >Reviewed-by: Milind Changire <mchangir>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    (cherry picked from commit 773f32caf190af4ee48818279b6e6d3c9f2ecc79)
    
    Change-Id: I62daeee8bb1430004e28558f6eb133efd4ccf418
    Signed-off-by: Raghavendra G <rgowdapp>
    BUG: 1428670
    Reviewed-on: https://review.gluster.org/16835
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>

Comment 3 Shyamsundar 2017-04-05 00:01:42 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.10.1, please open a new bug report.

glusterfs-3.10.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-April/030494.html
[2] https://www.gluster.org/pipermail/gluster-users/

