Bug 1578889

Summary: TCP retransmissions on host-VM connections using OVS bridge leading to timeouts
Product: Red Hat Enterprise Linux 7
Reporter: Daniel Alvarez Sanchez <dalvarez>
Component: openvswitch
Assignee: Eric Garver <egarver>
Status: CLOSED DUPLICATE
QA Contact: ovs-qe
Severity: high
Priority: high
Version: 7.5
CC: atragler, dalvarez, lmartins, tredaelli
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Docs Contact:
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-24 13:25:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Daniel Alvarez Sanchez 2018-05-16 14:42:26 UTC
Description of problem:
While running OpenStack tests we see that a connection from the host to a guest VM on the same node is very slow due to a large number of TCP retransmissions, apparently caused by wrong checksums.

The layout is as follows:

host -> br-ex (OVS bridge) -> tap device -> VM

The 3-way handshake is very fast and the connection establishes normally.
However, when data is sent from the VM to the host (using netcat in my tests), we can see that data packets get retransmitted by the VM and reach the host with a wrong checksum. The host only ACKs a given packet once it arrives with a correct checksum, which eventually happens after a few retransmissions.
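
For reference, the test is roughly the following (the VM address 172.24.4.10, the port, and the amount of data are just examples from my setup; nc here is nmap-ncat as shipped in RHEL 7):

# inside the VM: serve some data to the first client that connects
$ dd if=/dev/zero bs=1M count=100 | nc -l 12345

# on the host: connect to the VM through br-ex and read the data
$ nc 172.24.4.10 12345 > /dev/null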

Version-Release number of selected component (if applicable):
Kernel 3.10.0-862.2.3.el7.x86_64
OVS: openvswitch-2.9.0-3.el7.x86_64

This did not happen with the CentOS 7.4 kernel.

How reproducible:
100%


Additional info:

This doesn't happen if, instead of trying to reach the VM from the host, we create an OVS internal port, move it into a namespace, and repeat the process from there. In this case, the checksums are correct and the connection works as expected.
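
Roughly, the working internal-port variant is something like this (port, namespace, and addresses are just examples from my setup):

$ sudo ovs-vsctl add-port br-ex test0 -- set Interface test0 type=internal
$ sudo ip netns add testns
$ sudo ip link set test0 netns testns
$ sudo ip netns exec testns ip addr add 172.24.4.2/24 dev test0
$ sudo ip netns exec testns ip link set test0 up
# from inside the namespace the same transfer works fine, e.g.:
$ sudo ip netns exec testns nc 172.24.4.10 12345 > /dev/null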

More details at https://bugs.launchpad.net/packstack/+bug/1771500

Comment 2 Daniel Alvarez Sanchez 2018-05-16 14:51:53 UTC
Looks like this is failing neither on D/S CI nor on TripleO CI jobs, where they seem to attach a physical interface to br-ex. In packstack/devstack, jobs simply do the following:

$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex

And this is where we see the wrong checksums and retransmissions.
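
The wrong checksums themselves can be seen by letting tcpdump verify them on br-ex, e.g. (interface and port are from my setup):

$ sudo tcpdump -i br-ex -nn -vvv tcp port 12345

With -vvv, tcpdump prints "incorrect" next to the TCP checksum of the offending packets.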

Comment 6 Daniel Alvarez Sanchez 2018-05-17 16:41:19 UTC
(In reply to Daniel Alvarez Sanchez from comment #2)
> Looks like this is failing neither on D/S CI nor on TripleO CI jobs, where
> they seem to attach a physical interface to br-ex. In packstack/devstack,
> jobs simply do the following:
> 
> $ sudo ip link set br-ex up
> $ sudo ip route add 172.24.4.0/24 dev br-ex
> $ sudo ip addr add 172.24.4.1/24 dev br-ex
> 
> And this is where we see the wrong checksums and retransmissions.

I've repeated the tests on a failing setup with a physical interface attached to br-ex, and it still fails. My guess is that this is not failing on TripleO CI / D/S because tempest is run from a separate node (i.e. connections to the FIP are started from a different node), so the checksums are probably getting fixed somewhere along the way?

Comment 10 Eric Garver 2018-05-23 15:47:56 UTC
@fwestpha pointed me to bug 1572983. A new kernel with the nf_reset() fix works with my reproducer [0]. Basically, the skb's ctinfo was not being scrubbed properly.

Please retest after a kernel with the fix for bug 1572983 is available.

[0] http://git.engineering.redhat.com/git/users/egarver/ovs.git/commit/?h=bz1578889&id=364cecd5ef101ad5fd512c83e2ef686f192419d6

Comment 12 Daniel Alvarez Sanchez 2018-05-24 10:09:06 UTC
(In reply to Eric Garver from comment #10)
> @fwestpha pointed me to bug 1572983. A new kernel with the nf_reset() fix
> works with my reproducer [0]. Basically, the skb's ctinfo was not being
> scrubbed properly.
> 
> Please retest after a kernel with the fix for bug 1572983 is available.
> 
> [0]
> http://git.engineering.redhat.com/git/users/egarver/ovs.git/commit/
> ?h=bz1578889&id=364cecd5ef101ad5fd512c83e2ef686f192419d6

Timothy and I have tried 3.10.0-891.el7.test.x86_64 on OpenStack and I can confirm that it fixes the issue. However, we're not using network namespaces at all in OVN, so I'm not sure whether it's this fix or some other patch that resolves it.

Comment 13 Eric Garver 2018-05-24 13:25:31 UTC
(In reply to Daniel Alvarez Sanchez from comment #12)
> (In reply to Eric Garver from comment #10)
> > @fwestpha pointed me to bug 1572983. A new kernel with the nf_reset() fix
> > works with my reproducer [0]. Basically, the skb's ctinfo was not being
> > scrubbed properly.
> > 
> > Please retest after a kernel with the fix for bug 1572983 is available.
> > 
> > [0]
> > http://git.engineering.redhat.com/git/users/egarver/ovs.git/commit/
> > ?h=bz1578889&id=364cecd5ef101ad5fd512c83e2ef686f192419d6
> 
> Timothy and I have tried 3.10.0-891.el7.test.x86_64 on OpenStack and I can
> confirm that it fixes the issue. However, we're not using network namespaces
> at all in OVN, so I'm not sure whether it's this fix or some other patch that
> resolves it.

Thanks for confirming. I will mark this as a duplicate then.

FWIW, nf_reset() is called for various reasons, one of which occurs on OVS internal port RX.

*** This bug has been marked as a duplicate of bug 1572983 ***