Bug 1278332 - nfs-ganesha server does not enter grace period during failover/failback
Summary: nfs-ganesha server does not enter grace period during failover/failback
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-nfs
Version: rhgs-3.1
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Kaleb KEITHLEY
QA Contact: Shashank Raj
URL:
Whiteboard:
Depends On: 1290865 1317424
Blocks: 1299184
 
Reported: 2015-11-05 09:57 UTC by Soumya Koduri
Modified: 2019-11-14 07:06 UTC
CC List: 19 users

Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Doc Text:
NFS-ganesha servers were not always able to fail over gracefully when a node was shut down. NFS clients connected to those nodes could not recover their state after the shutdown, because that state had not been gracefully handed off to another node, and the client mount point would hang. The failover and failback processes have been updated so that NFS-ganesha servers now enter a grace period during these events, allowing clients to continue accessing data and to reclaim any lost state.
Clone Of:
: 1290865 (view as bug list)
Environment:
Last Closed: 2016-06-23 04:56:24 UTC
Embargoed:
sankarshan: needinfo+




Links
System                  ID              Private  Priority  Status        Summary                               Last Updated
Red Hat Product Errata  RHBA-2016:1240  0        normal    SHIPPED_LIVE  Red Hat Gluster Storage 3.1 Update 3  2016-06-23 08:51:28 UTC

Description Soumya Koduri 2015-11-05 09:57:56 UTC
Description of problem:
While working on one of the issues raised by GSS, I found that nfs-ganesha servers do not always enter the grace period during failover and failback.

Below are my observations while debugging the issue:

* If the system is rebooted, nfs-mon on that system should create the dead_ip resource, but this does not happen.

* ganesha_grace compares the pcs status collected during its monitor() and start() calls:
                # dead_ip-1 entries visible now, at start() time
                pcs status | grep dead_ip-1 | sort > /tmp/.pcs_status

                logger "ganesha_grace_start(), comparing"
                # compare against the snapshot written earlier by monitor()
                result=$(diff /var/run/ganesha/pcs_status1 /tmp/.pcs_status | grep '^>')
                if [[ ${result} ]]; then

Sometimes, even though dead_ip is present (when the nfs service goes down), monitor() kicks in first and copies the current status to /var/run/ganesha/pcs_status, so start() captures the same status again in /tmp/.pcs_status and the diff comes out empty.

* During failback too, the race between monitor() and start() results in the NFS server not entering grace (a sketch of one way around the race follows below).
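
For illustration only: the following is a sketch of one way to sidestep the race, not the patch posted upstream for review. The function name is hypothetical; only the pcs and dbus-send invocations mirror what the existing HA scripts use (verify the D-Bus interface on your build before relying on it). The idea is that start() looks at the current dead_ip-1 entries itself instead of diffing against the snapshot written by monitor(), so the relative ordering of monitor() and start() no longer matters.

    # Hypothetical sketch -- not the actual ganesha_grace fix under review.
    ganesha_grace_start_sketch()
    {
        local dead_nodes

        # Look at the dead_ip-1 entries that exist right now, rather than
        # diffing against a snapshot that monitor() may already have refreshed.
        dead_nodes=$(pcs status | grep dead_ip-1 | sort)

        if [[ -n "${dead_nodes}" ]]; then
            logger "ganesha_grace_start(): dead_ip-1 present, requesting grace"
            # D-Bus grace call used by the ganesha HA scripts; confirm the
            # interface name on your build before relying on it.
            dbus-send --print-reply --system --dest=org.ganesha.nfsd \
                /org/ganesha/nfsd/admin org.ganesha.nfsd.admin.grace \
                string:"${dead_nodes}"
        fi
    }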

Version-Release number of selected component (if applicable):
RHGS 3.1

How reproducible:
Almost consistently reproducible, especially in the reboot and VIP failback cases.

Comment 3 Soumya Koduri 2016-01-27 11:49:31 UTC
The fix is posted upstream for review - 
   http://review.gluster.org/13275

Comment 4 Dustin Black 2016-03-03 19:44:42 UTC
Correcting a backwards dependency chain.

Comment 6 Soumya Koduri 2016-03-10 09:42:47 UTC
(In reply to Soumya Koduri from comment #3)
> The fix is posted upstream for review - 
>    http://review.gluster.org/13275

Sorry. Had given wrong link. The fix provided upstream is -
 http://review.gluster.org/12964

Comment 7 Mike McCune 2016-03-28 22:53:35 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions

Comment 8 Dustin Black 2016-03-31 22:58:20 UTC
FWIW, I've tested RHGS 3.1.2 with Kaleb's patches, and they correct the VIP failover problem for me. In my VM lab, I can pause one of the nodes in a 2-node ganesha-ha configuration and the VIP quickly fails over to the other node. Prior to these patches, the VIP would not fail over, and in fact the VIP on the remaining 'up' node would quickly disappear, causing a complete failure of the HA setup.
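
For anyone repeating this check, a rough lab procedure (commands are illustrative; the grep pattern and resource names vary by setup, and <ganesha-node-vm> is a placeholder):

    # On a surviving node, watch where the cluster VIPs are hosted:
    watch -n 2 'pcs status | grep -E "cluster_ip|IPaddr"'

    # Pause the other node, e.g. from the hypervisor in a VM lab:
    #   virsh suspend <ganesha-node-vm>
    # With the patches applied, its VIP should move to the surviving node
    # instead of both VIPs disappearing.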

Comment 19 Shashank Raj 2016-05-02 07:01:40 UTC
Verified this bug with the latest 3.1.3 build; the original issue, where nfs-ganesha was not entering the grace period during failover/failback, cannot be reproduced.

However, other grace-related bugs were observed during verification, as listed below; these can be tracked separately:

>>>> Bug 1329887 - Unexpected behavior observed when nfs-ganesha enters grace period. (https://bugzilla.redhat.com/show_bug.cgi?id=1329887)

Description: During failover/failback, nfs-ganesha enters the grace period for only 60 seconds, and I/O stops for around 70-75 seconds. (The grace period length is a ganesha.conf tunable; see the note after this list.)

>>>> Bug 1330218 - Shutting down I/O serving node, takes 15-20 mins for IO to resume from failed over node. (https://bugzilla.redhat.com/show_bug.cgi?id=1330218)

Description: After shutting down the I/O-serving node, it takes 15-20 minutes for I/O to resume from the failed-over node.
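
For reference only (not a fix for either bug above): the grace period length is a ganesha.conf tunable. The parameter names below follow upstream nfs-ganesha documentation; the values are illustrative and defaults may differ between builds.

    # /etc/ganesha/ganesha.conf -- illustrative values only
    NFSv4 {
        Grace_Period = 90;     # seconds the server stays in grace after restart/failover
        Lease_Lifetime = 60;   # client lease length; grace should cover at least one lease
    }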

Since the originally reported issue is no longer reproducible and things work fine with the latest ganesha builds, marking this bug as Verified.

Comment 21 Alok 2016-05-02 09:02:25 UTC
Providing PM approval for the accelerated fix.

Comment 27 Anoop 2016-05-16 04:06:04 UTC
Do we have a build with both of the required fixes? Kindly post the brew link on the bug so that we can pick it up for verification.

Comment 30 Shashank Raj 2016-05-24 14:32:26 UTC
We are seeing a lot of regressions related to failover/failback with the 3.1.3 build, and there are a couple of open bugs for 3.1.3 as of now.

To verify this bug for the hotfix we would need to address those other existing/open bugs, which does not look like a good idea as of now.
 
So, if everyone agrees, can we drop this bug (and its related patches) from the hotfix build and provide a new build that contains only the fixes for the two bugs below:

https://review.gerrithub.io/#/c/263358/ (BZ#1306691 crash fix)
http://review.gluster.org/13459 (BZ#1301542)

Comment 37 Soumya Koduri 2016-06-14 06:52:47 UTC
doc_text looks good to me.

Comment 39 errata-xmlrpc 2016-06-23 04:56:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

