Bug 1338969 - common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
Summary: common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: common-ha
Version: 3.7.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kaleb KEITHLEY
QA Contact:
URL:
Whiteboard:
Depends On: 1338967 1338968
Blocks:
 
Reported: 2016-05-23 18:33 UTC by Kaleb KEITHLEY
Modified: 2016-06-28 12:18 UTC
CC List: 1 user

Fixed In Version: glusterfs-3.7.12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1338968
Environment:
Last Closed: 2016-06-28 11:42:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kaleb KEITHLEY 2016-05-23 18:33:51 UTC
+++ This bug was initially created as a clone of Bug #1338968 +++

+++ This bug was initially created as a clone of Bug #1338967 +++

Description of problem:

The surviving ganesha.nfsds are put into NFS-GRACE after a fail-over (triggered when one of the ganesha.nfsds dies).

When the failed ganesha.nfsd is restarted and the floating IP (VIP) fails back, the surviving ganesha.nfsds should be put into NFS-GRACE again for post-fail-back lock recovery and related cleanup; currently they are not.
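
For context, "put into NFS-GRACE" here means sending the running ganesha.nfsd a message on its admin DBus interface. A minimal sketch of that call as the HA scripts issue it (the DBus destination, object path, and method are NFS-Ganesha's admin interface; the VIP shown is a placeholder and the exact argument string varies by ganesha version):

    # Ask the local ganesha.nfsd to enter its grace period so it can
    # recover (or release) locks held by clients of the failed node.
    # 10.70.40.101 is a placeholder VIP.
    dbus-send --print-reply --system \
        --dest=org.ganesha.nfsd /org/ganesha/nfsd/admin \
        org.ganesha.nfsd.admin.grace string:"10.70.40.101"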


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Vijay Bellur 2016-05-23 19:33:16 UTC
REVIEW: http://review.gluster.org/14506 (common-ha: post fail-back, ganesha.nfsds are not put into NFS-GRACE) posted (#2) for review on master by Kaleb KEITHLEY (kkeithle)

Comment 2 Vijay Bellur 2016-05-23 19:45:59 UTC
REVIEW: http://review.gluster.org/14508 (common-ha: post fail-back, ganesha.nfsds are not put into NFS-GRACE) posted (#1) for review on release-3.7 by Kaleb KEITHLEY (kkeithle)

Comment 3 Vijay Bellur 2016-05-24 09:36:48 UTC
COMMIT: http://review.gluster.org/14508 committed in release-3.7 by Kaleb KEITHLEY (kkeithle) 
------
commit 2de43f41b0d9a4e6b08447e86cc83ac3f4bc7684
Author: Kaleb S KEITHLEY <kkeithle>
Date:   Mon May 23 15:41:51 2016 -0400

    common-ha: post fail-back, ganesha.nfsds are not put into NFS-GRACE
    
    A little known, rarely used feature of pacemaker called
    "notification" is used to follow the status of the ganesha.nfsds
    in the cluster. This is done with location constraints and other
    Black Magick.
    
    When a nfsd dies, the ganesha-active attribute is cleared, the
    associated floating IP (VIP) fails over to another node, and the
    ganesha_grace notify method is invoked with post-stop on all the
    nodes where the ganesha.nfsd is still running. The notify methods
    send dbus msgs to put their nfsds into NFS-GRACE, and the nfsds
    perform their grace processing, e.g. taking over locks from the
    failed nfsd.
    
    N.B. Fail-back was originally not planned to be a feature for
    glusterfs-3.7, but we sorta got it for free.
    
    For fail-back, the opposite occurs. The ganesha-active attribute
    is recreated, the floating IP fails back, and the notify method is
    invoked with pre-start on all the nodes where the surviving
    ganesha.nfsds continue to run. The notify methods send dbus msgs
    again to put their nfsds into NFS-GRACE again, and the nfsds clean
    up their locks.
    
    backport mainline
    > http://review.gluster.org/14506
    > BUG: 1338967
    release-3.8
    > http://review.gluster.org/14507
    > BUG: 1338968
    
    Change-Id: I3fc64afa20ae3a928143d69aa533a8df68dd680e
    BUG: 1338969
    Signed-off-by: Kaleb S KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/14508
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: soumya k <skoduri>
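
To make the commit's notify mechanism concrete, here is a minimal sketch of the pre-start/post-stop handling in an OCF resource agent, using pacemaker's standard clone-notification environment variables (OCF_RESKEY_CRM_meta_notify_type and OCF_RESKEY_CRM_meta_notify_operation). This is illustrative rather than the actual ganesha_grace agent code; FAILED_VIP is a hypothetical placeholder for the address whose clients need recovery, and the clone resource must be created with meta notify=true for pacemaker to invoke the handler at all.

    #!/bin/sh
    # Sketch of a clone "notify" action: when pacemaker reports that a
    # peer's nfsd has stopped (post-stop, fail-over) or is about to
    # start again (pre-start, fail-back), put the local surviving nfsd
    # into its grace period via the same DBus call the HA scripts use.
    notify() {
        mode="${OCF_RESKEY_CRM_meta_notify_type}"       # "pre" or "post"
        op="${OCF_RESKEY_CRM_meta_notify_operation}"    # "start" or "stop"

        case "${mode}-${op}" in
        post-stop|pre-start)
            dbus-send --print-reply --system \
                --dest=org.ganesha.nfsd /org/ganesha/nfsd/admin \
                org.ganesha.nfsd.admin.grace string:"${FAILED_VIP:-}"
            ;;
        esac
        return 0   # OCF_SUCCESS
    }

The fail-over/fail-back itself is driven separately: clearing and recreating the ganesha-active node attribute moves the VIP via a location-constraint rule, and the resulting resource stops/starts are what trigger the notifications above.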

Comment 4 Kaushal 2016-06-28 12:18:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

