Bug 1278336 - nfs client I/O stuck post IP failover
Summary: nfs client I/O stuck post IP failover
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: common-ha
Version: rhgs-3.1
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Soumya Koduri
QA Contact: Arthy Loganathan
URL:
Whiteboard:
Depends On: 1302545 1303037 1354439 1363722 1389293
Blocks: 1330218 1351515 1351530
 
Reported: 2015-11-05 10:03 UTC by Soumya Koduri
Modified: 2020-07-16 08:38 UTC
CC: 13 users

Fixed In Version: glusterfs-3.8.4-5
Doc Type: Bug Fix
Doc Text:
Previously, during virtual IP failover, the TCP packets sent by a client and received by the server could be out of sequence because of earlier failures to close the TCP socket. This could result in mount points becoming unresponsive. Portblock resource agents now 'tickle' TCP connections to ensure that packets are in sequence after failover.
Clone Of:
: 1354439 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:24:06 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1330218 0 medium CLOSED Shutting down I/O serving node, takes around ~9mins for IO to resume from failed over node in heterogeneous client scena... 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHSA-2017:0486 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Internal Links: 1330218

Description Soumya Koduri 2015-11-05 10:03:53 UTC
Description of problem:

While testing nfs-ganesha HA IP failover/failback cases, we have noticed that the client I/O gets stuck sometimes.

Version-Release number of selected component (if applicable):
RHGS 3.1

How reproducible:
Not always


Actual results:

Client I/O gets stuck

Expected results:

Client I/O should resume post IP failover.

Additional info:
I am attaching a packet trace taken from the client side. I see many TCP retransmissions post failover; this needs further debugging.
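For reference, a client-side trace like the attached one can be captured with tcpdump along these lines (the interface name, output path and VIP below are placeholders):

# capture the NFS traffic between the client and the virtual IP
tcpdump -i eth0 -s 0 -w /tmp/nfs-failover.pcap host <VIP> and port 2049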

Comment 2 Soumya Koduri 2015-11-05 10:12:58 UTC
The I/O does resume post failback, though. I shall attach packet traces for both cases.

Comment 4 Soumya Koduri 2015-11-06 07:43:56 UTC
Root-caused the problem. I can now consistently reproduce this issue. It happens during the second consecutive failover of a VIP to the same node.

Say:
* server1 has VIP1, server2 has VIP2.
* The client is connected to VIP1 on server1.
* server1 goes down; VIP1 moves to server2.
* The client is now connected to VIP1 on server2.
* server1 comes back online; VIP1 moves back to server1.
* Now suppose server1 goes down again; VIP1 fails over back to server2.

This is when the client I/O gets stuck. The issue is with the TCP connection being reset by server2 during the VIP failback. Still working out how/where the fix should go; I shall update the bug.
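One way to drive this sequence on a pacemaker-managed ganesha cluster is to put the node owning the VIP into standby and back; the node name below is a placeholder:

# first failover: move VIP1 off server1
pcs cluster standby server1
# failback: bring server1 back so VIP1 returns to it
pcs cluster unstandby server1
# second failover to server2 - this is where the client I/O gets stuck
pcs cluster standby server1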

Comment 5 Soumya Koduri 2015-11-06 07:47:52 UTC
The workaround for this issue is to restart the nfs-ganesha server on server2. That resets the TCP connections.
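On a systemd-based node that amounts to (assuming the service name is nfs-ganesha):

systemctl restart nfs-ganesha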

Comment 6 Soumya Koduri 2015-11-06 09:42:47 UTC
Correction to my comment #4 above: this issue seems to happen after a couple of failovers and failbacks to the same node. A couple of times I have seen the node that has taken over the VIP send PSH,ACK or SYN,ACK packets when the client tries to re-establish the TCP connection, but after a couple of failover iterations that stops happening.
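When looking at the client-side capture, the server's replies to the reconnect attempts can be isolated with tshark display filters, for example (the capture file name matches the tcpdump sketch in the description):

# SYN,ACK replies from the node holding the VIP
tshark -r /tmp/nfs-failover.pcap -Y "tcp.port == 2049 && tcp.flags.syn == 1 && tcp.flags.ack == 1"
# PSH,ACK packets on the NFS connection
tshark -r /tmp/nfs-failover.pcap -Y "tcp.port == 2049 && tcp.flags.push == 1"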

Comment 7 Soumya Koduri 2015-11-10 10:34:34 UTC
Have posted a question to a few technical mailing lists to understand this TCP behaviour. Meanwhile, as suggested by Niels, I tried out the pacemaker portblock resource agent, which tickles the connection with a few invalid TCP packets from the server; this forces the client to reset its connection and thus allows I/O to continue.

Now we need to check how we can plug this new resource agent into the existing scripts.

Meanwhile, as a workaround, whenever the client appears to be stuck post failover, create the below portblock resource on the server machine hosting the VIP:

pcs resource create ganesha_portblock ocf:heartbeat:portblock protocol=tcp portno=2049 action=unblock ip=VIP reset_local_on_unblock_stop=on tickle_dir=/run/gluster/shared_storage/tickle_dir/
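While the workaround is in place, the resource's state on that node can be checked with, for example:

pcs status resources
pcs resource show ganesha_portblock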

Once the I/O resumes, delete it:

pcs resource delete ganesha_portblock

Comment 8 Soumya Koduri 2015-11-19 05:30:35 UTC
We are checking with Networking experts internally on this peculiar TCP behaviour.

mail thread: http://post-office.corp.redhat.com/archives/tech-list/2015-November/msg00173.html

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=369991#c16, this seems to be a well-known issue with repetitive failovers of NFS servers in a cluster. CTDB uses TCP tickle ACKs to work around/overcome it. As mentioned in that note, we shall try to use pacemaker portblock to achieve similar behaviour.
Note: this resource agent is not yet packaged in RHEL downstream, so it may take some time to package it separately. We shall discuss this with the Cluster Suite team and update the bug.
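Once a resource-agents build that includes portblock is installed, its availability and parameter list can be confirmed with, for example:

pcs resource describe ocf:heartbeat:portblock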

Comment 9 Niels de Vos 2016-01-27 11:44:56 UTC
Soumya, please open a bug against the resource-agents package to get portblock included.

Comment 10 Soumya Koduri 2016-01-28 06:42:40 UTC
Done. I have opened bug 1302545.

Comment 11 Jiffin 2016-03-07 09:22:49 UTC
The fix for https://bugzilla.redhat.com/show_bug.cgi?id=1302545 has been merged.

Comment 24 Arthy Loganathan 2016-11-23 12:40:56 UTC
Created a 4-node Ganesha cluster and performed failover/failback multiple times on the same node; the I/O did not hang.

Verified the fix in the following builds:
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64
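The installed builds on the cluster nodes can be cross-checked with rpm, e.g.:

rpm -q glusterfs-ganesha nfs-ganesha-gluster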

Comment 31 Soumya Koduri 2017-03-08 06:57:07 UTC
Doc text looks good to me. Thanks!

Comment 33 errata-xmlrpc 2017-03-23 05:24:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

