Bug 1278336 - nfs client I/O stuck post IP failover
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: common-ha
Version: 3.1
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.2.0
Assigned To: Soumya Koduri
QA Contact: Arthy Loganathan
Depends On: 1363722 1302545 1303037 1354439 1389293
Blocks: 1330218 1351515 1351530
Reported: 2015-11-05 05:03 EST by Soumya Koduri
Modified: 2017-03-23 01:24 EDT (History)
13 users

See Also:
Fixed In Version: glusterfs-3.8.4-5
Doc Type: Bug Fix
Doc Text:
Previously, during virtual IP failover, the TCP packets sent by a client and received by the server may be out of sequence because of previous failures to close the TCP socket. This could result in mount points becoming unresponsive. Portblock resource agents now 'tickle' TCP connections to ensure that packets are in sequence after failover.
Story Points: ---
Clone Of:
Clones: 1354439
Environment:
Last Closed: 2017-03-23 01:24:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Soumya Koduri 2015-11-05 05:03:53 EST
Description of problem:

While testing nfs-ganesha HA IP failover/failback cases, we have noticed that the client I/O gets stuck sometimes.

Version-Release number of selected component (if applicable):
RHGS 3.1

How reproducible:
Not always


Actual results:

Client I/O gets stuck

Expected results:

Client I/O should resume post IP failover.

Additional info:
I am attaching a packet trace taken from the client side. I see many TCP retransmission requests post failover. Need to debug that.
Comment 2 Soumya Koduri 2015-11-05 05:12:58 EST
The I/O resumes post failback, though. Shall attach packet traces for both cases.
Comment 4 Soumya Koduri 2015-11-06 02:43:56 EST
Root-caused the problem. I can now consistently reproduce this issue. This problem happens during the second consecutive failover of a VIP to the same node -

say 
* server1 has VIP1, server2 has VIP2
* client connected to VIP1/server1.
* Server1 has gone down, VIP1 moved to server2
* Client is now connected to VIP1/server2 
* Server1 comes back online. VIP1 moved back to server1
* Now suppose server1 goes down again, VIP1 is failed over back to server2.

Here is when the client I/O gets stuck. The issue is with how server2 resets the TCP connection during VIP failback. Still figuring out how/where to fix this. Shall update the bug.
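The stuck state above can be illustrated with a minimal Python sketch (hypothetical ports and a short timeout; the real failure involves dropped and out-of-sequence TCP segments, which plain sockets cannot reproduce exactly). A peer that holds the connection open but never answers leaves the client blocked until its own timeout, which is effectively what the NFS client experiences after the second failover:

```python
import socket

# Hypothetical stand-in for the failed-over server: it accepts the
# connection but never responds, like a takeover node that silently
# ignores segments for a TCP connection it has no state for.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

client = socket.socket()
client.settimeout(1.0)          # real NFS clients wait far longer
client.connect(("127.0.0.1", port))
conn, _ = server.accept()       # keep the server end open but silent

client.send(b"nfs-request")     # the write itself succeeds
stuck = False
try:
    client.recv(1024)           # ...but no reply ever comes
except socket.timeout:
    stuck = True                # without an RST, the client just waits
print("stuck:", stuck)
```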
Comment 5 Soumya Koduri 2015-11-06 02:47:52 EST
The workaround for this issue is to restart the nfs-ganesha server on server2, which resets the TCP connections.
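Why restarting the server unsticks the client can be sketched with plain sockets (a simplified analogy, not the nfs-ganesha code path; addresses and payloads are hypothetical). Closing the server socket with SO_LINGER set to a zero timeout emits a TCP RST, so the client's next operation fails fast instead of hanging, letting it reconnect:

```python
import socket
import struct
import time

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

client = socket.socket()
client.connect(("127.0.0.1", port))
conn, _ = server.accept()

# SO_LINGER with a zero timeout makes close() send an immediate RST,
# standing in for the nfs-ganesha restart tearing down its connections.
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
conn.close()
server.close()

time.sleep(0.2)                 # let the RST reach the client

reset = False
try:
    client.send(b"retransmitted-request")
    client.recv(1024)
except (ConnectionResetError, BrokenPipeError):
    reset = True                # the client errors out fast and can reconnect
print("reset:", reset)
```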
Comment 6 Soumya Koduri 2015-11-06 04:42:47 EST
Correction to my comment #4 above. This issue seems to happen only after a couple of failovers and failbacks to the same node. A couple of times I have seen the node that has taken over the VIP send PSH,ACK or SYN,ACK packets when the client tries to re-establish the TCP connection, but after a couple of failover cycles that no longer happens.
Comment 7 Soumya Koduri 2015-11-10 05:34:34 EST
Have posted questions to a few technical mailing lists to understand this TCP behaviour. Meanwhile, as suggested by Niels, tried out the pacemaker portblock resource agent to tickle a few invalid TCP packets from the server, which forces the client to reset its connection and thus allows I/O to continue.

Now need to check how we can plug this new resource agent into the existing scripts.

Meanwhile, as a workaround, whenever the client seems to be stuck post failover, create the below resource agent on the server machine hosting the VIP -

pcs resource create ganesha_portblock ocf:heartbeat:portblock protocol=tcp portno=2049 action=unblock ip=VIP reset_local_on_unblock_stop=on tickle_dir=/run/gluster/shared_storage/tickle_dir/

Once the I/O resumes, delete it -

pcs resource delete ganesha_portblock
Comment 8 Soumya Koduri 2015-11-19 00:30:35 EST
We are checking with Networking experts internally on this peculiar TCP behaviour.

mail thread: http://post-office.corp.redhat.com/archives/tech-list/2015-November/msg00173.html

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=369991#c16, this seems to be a well-known issue with repetitive failovers of NFS servers in a cluster. CTDB uses TCP tickle ACKs as a workaround to overcome this issue. As mentioned in the above note, we shall try to use pacemaker portblock to achieve similar behaviour.
Note: this resource agent is not yet packaged in RHEL downstream, so it may take some time to package it separately. We shall discuss this with the Cluster-suite team and update.
Comment 9 Niels de Vos 2016-01-27 06:44:56 EST
Soumya, please open a bug against the resource-agents package to get portblock included.
Comment 10 Soumya Koduri 2016-01-28 01:42:40 EST
Done. I have opened bug 1302545.
Comment 11 Jiffin 2016-03-07 04:22:49 EST
The fix for https://bugzilla.redhat.com/show_bug.cgi?id=1302545 has been merged.
Comment 24 Arthy Loganathan 2016-11-23 07:40:56 EST
Created a 4-node Ganesha cluster and performed failover/failback multiple times on the same node; the I/Os no longer hang.

Verified the fix in builds:
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64
Comment 31 Soumya Koduri 2017-03-08 01:57:07 EST
Doc text looks good to me. Thanks!
Comment 33 errata-xmlrpc 2017-03-23 01:24:06 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
