Bug 1448170 - RHEL6.9: sunrpc reconnect logic now may trigger a SYN storm when a TCP connection drops and a burst of RPC commands hit the transport
Summary: RHEL6.9: sunrpc reconnect logic now may trigger a SYN storm when a TCP connection drops and a burst of RPC commands hit the transport
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.9
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Dave Wysochanski
QA Contact: Yongcheng Yang
URL:
Whiteboard:
Depends On: 1374441
Blocks: 1450850
 
Reported: 2017-05-04 16:40 UTC by Dave Wysochanski
Modified: 2021-09-09 12:17 UTC (History)
25 users

Fixed In Version: kernel-2.6.32-704.el6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1450850 (view as bug list)
Environment:
Last Closed: 2018-06-19 04:56:31 UTC
Target Upstream Version:
Embargoed:


Attachments
Sample test case which tests NFSv3, v4.0, and v4.1 and checks for count of SYN packets (4.55 KB, text/plain)
2017-05-18 00:01 UTC, Dave Wysochanski
no flags Details
Sample output from test on patched kernel; note that NFSv4.1 fails because the TCP connection is not dropped after 10 minutes for some reason, which may be a separate bug. (4.65 KB, application/octet-stream)
2017-05-18 00:02 UTC, Dave Wysochanski
no flags Details
Sample output from test on unpatched kernel; note the failure on NFSv4.0 due to a SYN packet count of 8, more than the expected 3. However, NFSv3 passed the test for reasons I do not understand - there was no burst of SYNs - and I did not see this often, so I am not sure about it. (4.68 KB, application/octet-stream)
2017-05-18 00:31 UTC, Dave Wysochanski
no flags Details
Sample test case which tests NFSv3, v4.0, and v4.1 and checks for count of SYN packets, v2 (4.96 KB, text/plain)
2017-05-19 20:44 UTC, Dave Wysochanski
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3018371 0 None None None 2017-05-04 16:42:27 UTC
Red Hat Product Errata RHSA-2018:1854 0 normal SHIPPED_LIVE Important: kernel security and bug fix update 2018-06-19 08:58:56 UTC

Description Dave Wysochanski 2017-05-04 16:40:48 UTC
Description of problem:
There is a definite over-the-wire change in RHEL6.9 when an RPC TCP transport goes idle, the connection drops, and the connection is then re-established. If the connection drops and a burst of RPC commands is then triggered, they all seem to be able to initiate the TCP connect logic, which results in a burst of SYN packets over the wire. This does not occur in RHEL6.8.


Version-Release number of selected component (if applicable):


How reproducible:
Easy from what I can tell

Steps to Reproduce:
0. On NFS client, start tcpdump to NFS server
tcpdump -i eth0 -w /tmp/tcpdump2.pcap host 192.168.122.35 &

1. On NFS client, mount NFS share (I used RHEL6.8 NFS server)
mount -t nfs -overs=3 192.168.122.35:/exports/test /mnt/test

2. On NFS client, let the share go idle so the TCP connection drops
sleep 600

3. On NFS client, do something that generates multiple RPC tasks in parallel
for i in $(seq 0 50); do touch /mnt/test/test-file-$i.bin & done

4. Kill tcpdump
kill %1

5. Check tcpdump for presence of multiple SYN packets from the NFS client in less than one second.
tshark -ntad -r /tmp/tcpdump2.pcap -R 'tcp.flags.syn == 1'
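The steps above can be rolled into a single script, along the lines of the attached test case. Below is a minimal sketch only, reusing the names from the steps above (eth0, server 192.168.122.35, export /exports/test, mount point /mnt/test); it is not the attached test case.

~~~
#!/bin/bash
# Minimal reproducer sketch (assumed names: eth0, 192.168.122.35, /exports/test, /mnt/test).
SERVER=192.168.122.35
PCAP=/tmp/tcpdump2.pcap

tcpdump -i eth0 -w $PCAP host $SERVER &                  # step 0: capture traffic to the NFS server
TCPDUMP_PID=$!

mount -t nfs -o vers=3 $SERVER:/exports/test /mnt/test   # step 1: mount the share
sleep 600                                                # step 2: let the transport go idle so the TCP connection drops

for i in $(seq 0 50); do                                 # step 3: burst of parallel RPC tasks
    touch /mnt/test/test-file-$i.bin &
done
wait
sleep 5

kill $TCPDUMP_PID                                        # step 4: stop the capture
sleep 1

# step 5: count client SYNs (exclude SYN,ACK); more than one or two in under a second indicates the bug
tshark -ntad -r $PCAP -R 'tcp.flags.syn == 1 && tcp.flags.ack == 0' | wc -l
~~~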

Actual results:
A burst of TCP SYN packets from the NFS client

[root@rhel6u9-node1 ~]# uname -r
2.6.32-696.el6.x86_64
[root@rhel6u9-node1 ~]# tshark -ntad -r /tmp/tcpdump2.pcap -R 'tcp.flags.syn == 1'
Running as user "root" and group "root". This could be dangerous.
  3 2017-05-04 11:49:16.428344 192.168.122.18 -> 192.168.122.35 TCP 74 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601699 TSecr=0 WS=128
  4 2017-05-04 11:49:16.428591 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601699 TSecr=0 WS=128
  5 2017-05-04 11:49:16.428832 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601699 TSecr=0 WS=128
  6 2017-05-04 11:49:16.429069 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601699 TSecr=0 WS=128
  7 2017-05-04 11:49:16.429108 192.168.122.35 -> 192.168.122.18 TCP 74 2049 > 854 [SYN, ACK] Seq=0 Ack=4294955894 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=4098697564 TSecr=681601699 WS=64
 10 2017-05-04 11:49:16.429262 192.168.122.35 -> 192.168.122.18 TCP 74 [TCP Previous segment not captured] 2049 > 854 [SYN, ACK] Seq=3930 Ack=4294963811 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=4098697564 TSecr=681601699 WS=64
 13 2017-05-04 11:49:16.429492 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601700 TSecr=0 WS=128
 14 2017-05-04 11:49:16.429626 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601700 TSecr=0 WS=128
 15 2017-05-04 11:49:16.429762 192.168.122.35 -> 192.168.122.18 TCP 74 2049 > 854 [SYN, ACK] Seq=0 Ack=4294965198 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=4098697565 TSecr=681601700 WS=64
 18 2017-05-04 11:49:16.429950 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601700 TSecr=0 WS=128
 19 2017-05-04 11:49:16.430042 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601700 TSecr=0 WS=128
 20 2017-05-04 11:49:16.430208 192.168.122.35 -> 192.168.122.18 TCP 74 2049 > 854 [SYN, ACK] Seq=0 Ack=4294965840 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=4098697565 TSecr=681601700 WS=64
 23 2017-05-04 11:49:16.431216 192.168.122.18 -> 192.168.122.35 TCP 74 [TCP Port numbers reused] 854 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=681601701 TSecr=0 WS=128
 24 2017-05-04 11:49:16.431773 192.168.122.35 -> 192.168.122.18 TCP 74 2049 > 854 [SYN, ACK] Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=4098697566 TSecr=681601701 WS=64


Expected results:
Only one SYN packet from the NFS client as on RHEL6.8

[root@rhel6u8-node1 01582181]# uname -r
2.6.32-642.el6.x86_64

[root@rhel6u8-node1 01582181]# tshark -ntad -r /tmp/tcpdump2.pcap | grep SYN
Running as user "root" and group "root". This could be dangerous.
  1 2017-05-04 12:01:13.520492 192.168.122.36 -> 192.168.122.35 TCP 74 957 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=3616371340 TSecr=0 WS=64
  2 2017-05-04 12:01:13.521670 192.168.122.35 -> 192.168.122.36 TCP 74 2049 > 957 [SYN, ACK] Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=4099414666 TSecr=3616371340 WS=64
[root@rhel6u8-node1 01582181]#

Additional info:
I think this may be due to the series of patches that went in for https://bugzilla.redhat.com/show_bug.cgi?id=1321366, but I don't have a full explanation yet.

The following patch is believed to fix this problem, but I have not confirmed it yet.

commit 0fdea1e8a2853f79d39b8555cc9de16a7e0ab26f
Author: Trond Myklebust <trond.myklebust>
Date:   Wed Sep 16 23:43:17 2015 -0400

    SUNRPC: Ensure that we wait for connections to complete before retrying
    
    Commit 718ba5b87343, moved the responsibility for unlocking the socket to
    xs_tcp_setup_socket, meaning that the socket will be unlocked before we
    know that it has finished trying to connect. The following patch is based on
    an initial patch by Russell King to ensure that we delay clearing the
    XPRT_CONNECTING flag until we either know that we failed to initiate
    a connection attempt, or the connection attempt itself failed.
    
    Fixes: 718ba5b87343 ("SUNRPC: Add helpers to prevent socket create from racing")
    Reported-by: Russell King <linux.org.uk>
    Reported-by: Russell King <rmk+kernel.org.uk>
    Tested-by: Russell King <rmk+kernel.org.uk>
    Tested-by: Benjamin Coddington <bcodding>
    Signed-off-by: Trond Myklebust <trond.myklebust>


In addition, this may have caused a much more severe NFS DoS side-effect when iptables is enabled and the NFS server delays responding to a SYN, leaving the NFS client unable to reconnect to the NFS share.  I've not reproduced this effect yet, but I'm associating this bug with the customer case for now.  In the customer's tcpdump we see the SYN burst, and right afterwards the SYN,ACK packets from the NFS server cause the NFS client to respond with "ICMP 102 Destination unreachable (Host administratively prohibited)", which indicates the iptables rules are catching the SYN,ACK coming back from the NFS server; this bug is believed to be at least a contributing factor.  The customer can work around the problem by booting back to a 6.8 kernel or by disabling iptables on the NFS client during the TCP reconnect sequence.
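For context, "Host administratively prohibited" rejects are what the stock RHEL6 firewall produces when a packet is not classified as part of an existing connection.  A rough sketch of the default INPUT chain is below; this layout is assumed here and the customer's actual ruleset has not been confirmed.

~~~
# Sketch of the typical default RHEL6 INPUT chain (assumed; not confirmed against the customer).
# A SYN,ACK that conntrack no longer associates with a known connection is not matched by the
# ESTABLISHED,RELATED rule and falls through to the final REJECT, producing the
# "ICMP Destination unreachable (Host administratively prohibited)" seen in the customer's capture.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
~~~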

Comment 2 Dave Wysochanski 2017-05-04 17:11:07 UTC
Confirmed 0fdea1e8a2853f79d39b8555cc9de16a7e0ab26f returns the reconnect behavior back to RHEL6.8 with only one SYN packet sent.

[root@rhel6u9-node1 ~]# uname -r
2.6.32-696.1.1.el6.sf01836153.1.x86_64
[root@rhel6u9-node1 ~]# tshark -ntad -r /tmp/tcpdump3.pcap -R 'tcp.flags.syn == 1 || tcp.flags.fin == 1'
Running as user "root" and group "root". This could be dangerous.
  1 2017-05-04 13:09:04.253093 192.168.122.18 -> 192.168.122.35 TCP 74 887 > 2049 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=1168942 TSecr=0 WS=128
  2 2017-05-04 13:09:04.254084 192.168.122.35 -> 192.168.122.18 TCP 74 2049 > 887 [SYN, ACK] Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=4103485175 TSecr=1168942 WS=64
[root@rhel6u9-node1 ~]#

Comment 15 Dave Wysochanski 2017-05-08 11:54:02 UTC
We might need to take this patch too:
commit 8b71798c0d389d4cadc884fc7d68c61ee8cd4f45
Author: Trond Myklebust <Trond.Myklebust>
Date:   Thu Sep 26 10:18:04 2013 -0400

    SUNRPC: Only update the TCP connect cookie on a successful connect
    
    Signed-off-by: Trond Myklebust <Trond.Myklebust>

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 208a763..9928ba1 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1511,6 +1511,7 @@ static void xs_tcp_state_change(struct sock *sk)
                        transport->tcp_copied = 0;
                        transport->tcp_flags =
                                TCP_RCV_COPY_FRAGHDR | TCP_RCV_COPY_XID;
+                       xprt->connect_cookie++;
 
                        xprt_wake_pending_tasks(xprt, -EAGAIN);
                }
@@ -2164,7 +2165,6 @@ static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
        case 0:
        case -EINPROGRESS:
                /* SYN_SENT! */
-               xprt->connect_cookie++;
                if (xprt->reestablish_timeout < XS_TCP_INIT_REEST_TO)
                        xprt->reestablish_timeout = XS_TCP_INIT_REEST_TO;
        }

Comment 16 Dave Wysochanski 2017-05-08 15:59:56 UTC
There are multiple missing patches (circa 2013) involving connect_cookie (at least commits 8b71798c0d389d4cadc884fc7d68c61ee8cd4f45 and 0a6605213040dd2fb479f0d1a9a87a1d7fa70904).  At this point I don't think we should derail this bug, as it has a clear test case and a commit which fixes it.  As a separate effort we should consider backports of the other connect_cookie patches for RHEL6, as other issues may be present due to the omissions.

Comment 22 Phillip Lougher 2017-05-12 23:14:11 UTC
Patch(es) committed on kernel repository and kernel is undergoing testing

Comment 26 Phillip Lougher 2017-05-16 01:11:48 UTC
Patch(es) available on kernel-2.6.32-704.el6

Comment 32 Dave Wysochanski 2017-05-18 00:01:29 UTC
Created attachment 1279835 [details]
Sample test case which tests NFSv3, v4.0, and v4.1 and checks for count of SYN packets

Comment 33 Dave Wysochanski 2017-05-18 00:02:35 UTC
Created attachment 1279836 [details]
Sample output from test on patched kernel; note that NFSv4.1 fails because the TCP connection is not dropped after 10 minutes for some reason, which may be a separate bug.

Comment 35 Dave Wysochanski 2017-05-18 00:31:54 UTC
Created attachment 1279846 [details]
Sample output from test on unpatched kernel; note the failure on NFSv4.0 due to a SYN packet count of 8, more than the expected 3.  However, NFSv3 passed the test for reasons I do not understand - there was no burst of SYNs - and I did not see this often, so I am not sure about it.

Comment 40 Dave Wysochanski 2017-05-19 19:07:30 UTC
This is all in our kbase https://access.redhat.com/solutions/3018371, but FWIW, this bug can trigger DoS of an NFS mount point in multiple ways, and we don't need iptables to be enabled for that to happen.  In one of my reproduction environments I saw a partial DoS, described as follows.  The NFS transport's TCP 3-way handshake runs into problems due to the multiple SYNs from the NFS client.  In the trace below, the NFS server's response to one of the duplicate SYNs confuses the NFS client's TCP stack.  The following sequence occurs:
    1) Frames 48-49: NFS client sends a duplicate SYN, the first one has Seq=3677241340, and the second one has Seq=3677245016
    2) Frame 50: NFS server responds with Ack=3677241341, which is a response to the first SYN from the NFS client
    3) Frame 51: NFS client responds with RST and Seq=3677241341, indicating the Ack packet in frame 50 is not understood, and the connection should be reset
    4) Frame 52: NFS server responds with RST, ACK, indicating it has reset the connection
    5) Frames 53-57: The sequence in 1-4 repeats
    6) Frames 58-62: The sequence in 1-4 repeats
~~~
 48 2017-05-17 20:24:50.684597 192.168.122.18 -> 192.168.122.16 TCP 74 [TCP Port numbers reused] 815 > 2049 [SYN] Seq=3677241340 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=966741 TSecr=0 WS=128
 49 2017-05-17 20:24:50.684833 192.168.122.18 -> 192.168.122.16 TCP 74 [TCP Port numbers reused] 815 > 2049 [SYN] Seq=3677245016 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=966742 TSecr=0 WS=128
 50 2017-05-17 20:24:50.685595 192.168.122.16 -> 192.168.122.18 TCP 74 2049 > 815 [SYN, ACK] Seq=1365940280 Ack=3677241341 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=1741462612 TSecr=966741 WS=128
 51 2017-05-17 20:24:50.685625 192.168.122.18 -> 192.168.122.16 TCP 54 815 > 2049 [RST] Seq=3677241341 Win=0 Len=0
 52 2017-05-17 20:24:50.685654 192.168.122.16 -> 192.168.122.18 TCP 54 2049 > 815 [RST, ACK] Seq=0 Ack=3677245017 Win=0 Len=0
 53 2017-05-17 20:24:50.689371 192.168.122.18 -> 192.168.122.16 TCP 74 [TCP Port numbers reused] 815 > 2049 [SYN] Seq=3677316102 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=966746 TSecr=0 WS=128
 54 2017-05-17 20:24:50.689452 192.168.122.18 -> 192.168.122.16 TCP 74 [TCP Port numbers reused] 815 > 2049 [SYN] Seq=3677317394 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=966746 TSecr=0 WS=128
 55 2017-05-17 20:24:50.689666 192.168.122.16 -> 192.168.122.18 TCP 74 2049 > 815 [SYN, ACK] Seq=1366005876 Ack=3677316103 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=1741462616 TSecr=966746 WS=128
 56 2017-05-17 20:24:50.689688 192.168.122.18 -> 192.168.122.16 TCP 54 815 > 2049 [RST] Seq=3677316103 Win=0 Len=0
 57 2017-05-17 20:24:50.689711 192.168.122.16 -> 192.168.122.18 TCP 54 2049 > 815 [RST, ACK] Seq=0 Ack=3677317395 Win=0 Len=0
 58 2017-05-17 20:24:50.689766 192.168.122.18 -> 192.168.122.16 TCP 74 [TCP Port numbers reused] 815 > 2049 [SYN] Seq=3677322298 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=966746 TSecr=0 WS=128
 59 2017-05-17 20:24:50.689826 192.168.122.18 -> 192.168.122.16 TCP 74 [TCP Port numbers reused] 815 > 2049 [SYN] Seq=3677323273 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=966747 TSecr=0 WS=128
 60 2017-05-17 20:24:50.689968 192.168.122.16 -> 192.168.122.18 TCP 74 2049 > 815 [SYN, ACK] Seq=1366010498 Ack=3677322299 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=1741462617 TSecr=966746 WS=128
 61 2017-05-17 20:24:50.689980 192.168.122.18 -> 192.168.122.16 TCP 54 815 > 2049 [RST] Seq=3677322299 Win=0 Len=0
 62 2017-05-17 20:24:50.690001 192.168.122.16 -> 192.168.122.18 TCP 54 2049 > 815 [RST, ACK] Seq=0 Ack=3677323274 Win=0 Len=0
~~~
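To scan a capture for this signature, it is enough to list SYNs and RSTs on the NFS port and look for pairs of client SYNs followed by a client RST, as in frames 48-52 above.  A minimal sketch, using the same old-style tshark read filter used elsewhere in this bug:

~~~
# Sketch: list the packets that make up the signature above - duplicate client SYNs,
# the server's SYN,ACK, and the client RST that follows.
tshark -ntad -r /tmp/tcpdump2.pcap \
    -R 'tcp.port == 2049 && (tcp.flags.syn == 1 || tcp.flags.reset == 1)'
~~~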

Comment 41 Dave Wysochanski 2017-05-19 20:44:46 UTC
Created attachment 1280516 [details]
Sample test case which tests NFSv3, v4.0, and v4.1 and checks for count of SYN packets, v2

Changes from the previous version:
- only a 5-minute sleep is necessary for the idle disconnect (see XS_IDLE_DISC_TO)
- state which NFS version is being tested

The test still fails on NFSv4.1 because it expects a disconnect after idle.  However, in 4.1 we never go idle after mount, since SEQUENCE ops are sent every second to renew the clientid (this still needs a good code-level explanation of the difference between 4.0 and 4.1 in this regard).
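One way to see why a 4.1 transport never goes idle is to watch NFS traffic on an otherwise idle v4.1 mount; the periodic SEQUENCE calls keep the connection busy.  A minimal sketch, assuming the same client interface and server as in the reproducer above:

~~~
# Sketch: watch NFS traffic on an otherwise idle NFSv4.1 mount. The client's SEQUENCE calls
# show up roughly once per second, so the transport never reaches the idle disconnect timeout;
# an idle v3 or v4.0 mount stays silent by comparison.
tshark -i eth0 -f 'host 192.168.122.35 and port 2049' -R 'nfs'
~~~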

Comment 42 Dan Pritts 2017-05-19 20:56:53 UTC
If you are looking for another test case, we're affected by this.

The NFS server is an Isilon.  We have seen the issue on at least two RHEL6.9 clients.
 
It is relatively easy to reproduce; the client is LAN-connected to the Isilon, and disabling the iptables INVALID blocking worked around the problem.

Another troubled client traverses a Check Point firewall to reach the Isilon.  It's the only desktop Linux user we have, so it's the only client that traverses this firewall.  It's not entirely clear to me what's going on here, since the firewall is dropping packets from the Isilon to the client - not the other way around.  Or, at least, it's not logging those drops; it's possible that it's dropping them for "invalid" reasons but not logging it.  I don't manage the firewall, so I can't look closely.

Anyway, I don't need a response, but if you want some tcpdumps let me know.

Comment 50 Dan Pritts 2017-06-13 14:51:17 UTC
One of my affected clients has now had the problem without the DROP INVALID rule in place.

Comment 52 Dave Wysochanski 2017-06-13 15:59:26 UTC
(In reply to Dan Pritts from comment #50)
> One of my affected clients has now had the problem without the DROP INVALID
> rule in place.

Yes, it's possible this bug can occur without any iptables.  The main 'signature' is that the second part of the TCP handshake (the SYN,ACK coming from the NFS server) is rejected by the NFS client, either by an iptables rule or by the TCP stack itself, which thinks the packet is invalid.  As a result the 3-way handshake won't complete and the NFS TCP connection remains down, possibly indefinitely.  See https://bugzilla.redhat.com/show_bug.cgi?id=1448170#c40 for more info on a typical signature without iptables.
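A quick way to check whether a client is currently stuck in this state is to look at its TCP connections to the server's NFS port: a healthy mount shows an ESTABLISHED connection, while an affected client may show only SYN_SENT entries (or nothing at all) that never progress.  A minimal sketch, assuming the standard NFS port 2049:

~~~
# Sketch: show the state of TCP connections to the NFS server port on the client.
# A mount hung by this problem typically has no ESTABLISHED entry here.
netstat -tn | awk '$5 ~ /:2049$/ {print $4, $5, $6}'
~~~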

Comment 54 Yongcheng Yang 2017-09-27 06:35:44 UTC
Moving to VERIFIED according to test logs of comment #47.

Will include this case as regression test in the future.

Comment 57 jinjian.1 2017-11-14 16:31:03 UTC
Does Linux V3.X also have this issue?

I took a look at https://elixir.free-electrons.com/linux/v3.19.8/source/net/sunrpc/xprtsock.c.

It seems it also needs the patch.

Comment 58 Dave Wysochanski 2017-11-17 12:16:11 UTC
(In reply to jinjian.1 from comment #57)
> Does Linux V3.X also have this issue?
> 
> I took a look at
> https://elixir.free-electrons.com/linux/v3.19.8/source/net/sunrpc/xprtsock.c.
> 
> Seems also need to patch.

As far as I know 3.19.8 is not a supported Red Hat kernel.  If you have a question about whether this bug applies to a specific Red Hat kernel, please open a support case.

Comment 61 Andrew Lau 2018-06-15 05:27:25 UTC
Was this bug tagged "Fixed In Version: kernel-2.6.32-704.el6" by mistake?

The RPM changelog has this entry instead:

* Wed May 17 2017 Denys Vlasenko <dvlasenk> [2.6.32-696.5.1.el6]
- [fs] sunrpc: Ensure that we wait for connections to complete before retrying (Dave Wysochanski) [1450850 1448170]

and RHEL 6.9 only appears to be up to 2.6.32-696.30.1.el6 right now.

Comment 62 Dave Wysochanski 2018-06-15 10:48:53 UTC
(In reply to Andrew Lau from comment #61)
> Was this bug tagged "Fixed In Version: kernel-2.6.32-704.el6" by mistake?
> 

No, it's not a mistake.


> The RPM changelog has this entry instead:
> 
> * Wed May 17 2017 Denys Vlasenko <dvlasenk> [2.6.32-696.5.1.el6]
> - [fs] sunrpc: Ensure that we wait for connections to complete before
> retrying (Dave Wysochanski) [1450850 1448170]
> 
> and RHEL 6.9 only appears to be up to 2.6.32-696.30.1.el6 right now.

That is the RHEL6.9.z kernel (backport of same patch).

This bug is for the RHEL6.10 kernel (Y-stream).

The Y-stream receives much more testing but takes longer to release.  For critical bugs that affect many customers, we do z-stream backports, which are released much faster but receive less testing.  You're seeing the difference between the two release streams here.

Comment 64 errata-xmlrpc 2018-06-19 04:56:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:1854

