Bug 1278336
| Field | Value | Field | Value |
|---|---|---|---|
| Summary | nfs client I/O stuck post IP failover | | |
| Product | [Red Hat Storage] Red Hat Gluster Storage | Reporter | Soumya Koduri <skoduri> |
| Component | common-ha | Assignee | Soumya Koduri <skoduri> |
| Status | CLOSED ERRATA | QA Contact | Arthy Loganathan <aloganat> |
| Severity | medium | Docs Contact | |
| Priority | unspecified | | |
| Version | rhgs-3.1 | CC | akhakhar, amukherj, jthottan, kkeithle, mzywusko, ndevos, nlevinki, rcyriac, rhinduja, rhs-bugs, rnalakka, sankarshan, skoduri |
| Target Milestone | --- | | |
| Target Release | RHGS 3.2.0 | | |
| Hardware | All | | |
| OS | All | | |
| Whiteboard | | | |
| Fixed In Version | glusterfs-3.8.4-5 | Doc Type | Bug Fix |
| Doc Text | Previously, during virtual IP failover, the TCP packets sent by a client and received by the server may be out of sequence because of previous failures to close the TCP socket. This could result in mount points becoming unresponsive. Portblock resource agents now 'tickle' TCP connections to ensure that packets are in sequence after failover. | | |
| Story Points | --- | | |
| Clone Of | | | |
| | 1354439 (view as bug list) | Environment | |
| Last Closed | 2017-03-23 05:24:06 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Bug Depends On | 1302545, 1303037, 1354439, 1363722, 1389293 | | |
| Bug Blocks | 1330218, 1351515, 1351530 |
Description
Soumya Koduri
2015-11-05 10:03:53 UTC
The I/O resumes post failback, though. Shall attach packet traces for both cases.

Root-caused the problem; I can now consistently reproduce this issue. It happens during the second consecutive failover of a VIP to the same node. Say:

* server1 has VIP1, server2 has VIP2
* the client is connected to VIP1/server1
* server1 goes down; VIP1 moves to server2
* the client is now connected to VIP1/server2
* server1 comes back online; VIP1 moves back to server1
* now suppose server1 goes down again; VIP1 fails over back to server2

Here is when the client I/O gets stuck. The issue is that the TCP connection is now being reset by server2 during the VIP failback. Still finding out how/where to fix this; shall update the bug. The workaround for this issue is to restart the nfs-ganesha server on server2, which resets the TCP connections.

Correction to my comment #4 above: this issue seems to happen after a couple of failovers and failbacks to the same node. A couple of times I have seen the node that has taken over the VIP send PSH-ACK or SYN-ACK packets when the client tries to re-establish the TCP connection, but after a couple of failover scenarios that no longer happens. Have posted questions to a few technical mailing lists to understand this TCP behaviour. Meanwhile, as suggested by Niels, tried out the pacemaker portblock resource agent to send ('tickle') a few invalid TCP packets from the server, which forces the client to reset its connection and thus allows I/O to continue. Now need to check how we can plug this new resource agent into the existing scripts.
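The effect the portblock agent relies on can be illustrated with a minimal, self-contained sketch (this is not the actual agent code, just a localhost demonstration): forcing the server side to emit a TCP RST, here via SO_LINGER with a zero timeout, makes the client's next I/O fail immediately instead of hanging in retransmission, so the client can re-establish its connection.

```python
import socket
import struct
import time

# Server side: accept one connection on localhost.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, _ = srv.accept()

# SO_LINGER with linger=on, timeout=0 makes close() send an RST
# instead of a normal FIN, aborting the connection. This mimics the
# effect of resetting stale connections during VIP failover.
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                struct.pack("ii", 1, 0))
conn.close()
time.sleep(0.1)  # let the RST reach the client socket

# The client's next I/O on the dead connection fails at once with a
# connection-reset error rather than blocking, so it can reconnect.
try:
    cli.send(b"x")
    cli.recv(1)
    reset = False
except OSError:
    reset = True
print("connection reset:", reset)

cli.close()
srv.close()
```

In the real failover case, the client's TCP state references a connection the new VIP holder knows nothing about; the tickle/reset from portblock plays the same role the forced RST plays here.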
Meanwhile, as a workaround, whenever the client seems to be stuck post failover, create the below resource agent on the server machine hosting the VIP:

pcs resource create ganesha_portblock ocf:heartbeat:portblock protocol=tcp portno=2049 action=unblock ip=VIP reset_local_on_unblock_stop=on tickle_dir=/run/gluster/shared_storage/tickle_dir/

Once the I/O resumes, delete it:

pcs resource delete ganesha_portblock

We are checking with networking experts internally on this peculiar TCP behaviour. Mail thread: http://post-office.corp.redhat.com/archives/tech-list/2015-November/msg00173.html

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=369991#c16, this seems to be a well-known issue with repetitive failovers of NFS servers in a cluster. CTDB uses TCP tickle ACKs as a workaround to overcome this issue. As mentioned in the above note, we shall try to use pacemaker portblock to achieve similar behaviour. Note: this resource agent is not yet packaged in RHEL downstream, so it may take some time to package it separately. We shall discuss this with the Cluster-suite team and update.

Soumya, please open a bug against the resource-agents package to get portblock included.

Done. I have opened bug 1302545.

The fix for https://bugzilla.redhat.com/show_bug.cgi?id=1302545 got merged.

Created a 4-node Ganesha cluster and did failover/failback multiple times on the same node, and the I/Os are not getting hung. Verified the fix in builds glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64 and nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64.

Doc text looks good to me. Thanks!

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html
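The manual workaround described above can be collected into a small helper script. This is only a sketch: the VIP value is a placeholder for the actual virtual IP hosted on the node, and it assumes a configured pacemaker/pcs cluster with the shared-storage tickle directory in place.

```shell
#!/bin/sh
# Sketch of the manual workaround: tickle/reset stale TCP connections
# on the node currently hosting the VIP, then clean up.
# VIP is a placeholder; substitute the virtual IP for this node.
VIP=10.0.0.1

# Create the temporary portblock resource to reset stuck connections
# on the NFS port (2049).
pcs resource create ganesha_portblock ocf:heartbeat:portblock \
    protocol=tcp portno=2049 action=unblock ip=$VIP \
    reset_local_on_unblock_stop=on \
    tickle_dir=/run/gluster/shared_storage/tickle_dir/

# Once client I/O resumes, remove the temporary resource.
pcs resource delete ganesha_portblock
```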