
Bug 1383504

Summary: VM to VM netperf/TCP_RR over OVS-dpdk tunnel only delivered single digit transaction rate per second
Product: Red Hat Enterprise Linux 7
Reporter: Jean-Tsung Hsiao <jhsiao>
Component: openvswitch
Assignee: Eelco Chaudron <echaudro>
Status: CLOSED NEXTRELEASE
QA Contact: Jean-Tsung Hsiao <jhsiao>
Severity: medium
Priority: medium
Version: 7.3
CC: aconole, aloughla, atragler, ctrautma, echaudro, jhsiao, kzhang, rcain
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Last Closed: 2016-12-21 19:26:39 UTC
Type: Bug

Description Jean-Tsung Hsiao 2016-10-10 20:19:43 UTC
Description of problem: VM to VM netperf/TCP_RR over OVS-dpdk tunnel only delivered single digit transaction rate per second

*** test results ***
[root@localhost ~]# for i in {1..3}
> do
> echo Test $i
> !net
netperf -H 172.16.3.120 -t TCP_RR -l 60
> done
Test 1
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate         
bytes  Bytes  bytes    bytes   secs.    per sec   

16384  87380  1        1       60.00       9.05   
16384  87380 
Test 2
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate         
bytes  Bytes  bytes    bytes   secs.    per sec   

16384  87380  1        1       60.00       9.03   
16384  87380 
Test 3
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate         
bytes  Bytes  bytes    bytes   secs.    per sec   

16384  87380  1        1       60.00       9.62   
16384  87380 

*** OVS-dpdk vxlan tunnel ***

f09d57f0-e921-44b8-a544-8e39b387a7be
    Bridge "ovsbr0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
    Bridge "ovsbr1"
        Port "ovsbr1"
            Interface "ovsbr1"
                type: internal
        Port "vhost0"
            Interface "vhost0"
                type: dpdkvhostuser
        Port "vxlan0"
            Interface "vxlan0"
                type: vxlan
                options: {dst_port="8472", key="1000", remote_ip="192.168.9.110"}
    ovs_version: "2.5.0"

Version-Release number of selected component (if applicable):


How reproducible: Reproducible


Steps to Reproduce:
Two hosts are needed, each running a vhostuser guest.
1. Configure an OVS-dpdk vxlan tunnel with a vhostuser port on each host.
2. Run the netperf/TCP_RR test between the two guests.
3. See the test results and the OVS-dpdk tunnel configuration from the test bed above.
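The steps above can be sketched with ovs-vsctl, mirroring the `ovs-vsctl show` output earlier in this report. This is a minimal sketch for one host only, assuming a netdev datapath (not shown in the output above); the bridge/port names, remote_ip, key, and dst_port come from this test bed and would need adjusting on the peer host.

```shell
# Sketch of the OVS-dpdk vxlan setup used in this test bed (one host).
# Assumption: bridges use the userspace (netdev) datapath, as required
# for dpdk/dpdkvhostuser ports.

# Bridge carrying the physical dpdk port (tunnel underlay)
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk

# Bridge carrying the vhostuser guest port and the vxlan tunnel (overlay)
ovs-vsctl add-br ovsbr1 -- set bridge ovsbr1 datapath_type=netdev
ovs-vsctl add-port ovsbr1 vhost0 -- set Interface vhost0 type=dpdkvhostuser
ovs-vsctl add-port ovsbr1 vxlan0 -- set Interface vxlan0 type=vxlan \
    options:remote_ip=192.168.9.110 options:key=1000 options:dst_port=8472

# Then, from one guest, run the benchmark against the other guest:
# netperf -H 172.16.3.120 -t TCP_RR -l 60
```

The peer host needs the mirror-image configuration, with remote_ip pointing back at this host's underlay address.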

Actual results:
Single-digit TCP_RR transaction rates per second (~9 trans/sec) between the guests.

Expected results:
TCP_RR transaction rates several orders of magnitude higher, in line with UDP_STREAM and TCP_STREAM results over the same tunnel (see comment 1).

Comment 1 Jean-Tsung Hsiao 2016-10-10 20:32:01 UTC
For your reference listed below are test results for 64 bytes UDP_STREAM and TCP_STREAM.

These two results seem to be normal.
 
*** 64 bytes UDP_STREAM ***

[root@localhost ~]# for i in {1..3}; do echo Test $i; netperf -H 172.16.3.120 -t UDP_STREAM -l 60 -- -m 64; done
Test 1
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992      64   60.00     27101250      0     231.26
212992           60.00     15965386            136.24

Test 2
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992      64   60.00     27134502      0     231.55
212992           60.00     15451765            131.85

Test 3
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992      64   60.00     27125907      0     231.47
212992           60.00     16206556            138.30

 
*** TCP_STREAM ***

[root@localhost ~]# for i in {1..3}; do echo Test $i; netperf -H 172.16.3.120 -l 60; done
Test 1
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    60.00    3163.56   
Test 2
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    60.00    3270.43   
Test 3
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.3.120 () port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    60.01    3336.79

Comment 2 Aaron Conole 2016-10-10 20:36:54 UTC
Please attach sosreports from the host and guests. This report doesn't have any RHEL or OVS versions attached.

Comment 3 Jean-Tsung Hsiao 2016-10-10 21:53:11 UTC
*** vhostuser guest of Host netqe9 ***

[root@localhost ~]# rpm -qa | grep dpdk
dpdk-16.04-4.el7fdb.x86_64
dpdk-tools-16.04-4.el7fdb.x86_64
 
[root@localhost ~]# rpm -qa | grep openvswitch
openvswitch-2.5.0-3.el7.x86_64

[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# 

*** Host netqe9 ***

[root@netqe9 ovs-dpdk-tunneling]# rpm -qa | grep dpdk
dpdk-16.04-4.el7fdb.x86_64
dpdk-tools-16.04-4.el7fdb.x86_64

[root@netqe9 ovs-dpdk-tunneling]# rpm -qa | grep openvswitch
openvswitch-2.5.0-14.git20160727.el7fdb.x86_64

[root@netqe9 ovs-dpdk-tunneling]# uname -a
Linux netqe9.knqe.lab.eng.bos.redhat.com 3.10.0-506.el7.x86_64 #1 SMP Mon Sep 12 23:31:02 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@netqe9 ovs-dpdk-tunneling]#

*** vhostuser guest of Host netqe10 ***

[root@localhost jhsiao]# rpm -qa | grep dpdk
dpdk-tools-16.04-4.el7fdb.x86_64
dpdk-16.04-4.el7fdb.x86_64
 
[root@localhost jhsiao]# rpm -qa | grep openvswitch
openvswitch-2.5.0-3.el7.x86_64

[root@localhost jhsiao]# uname -a
Linux localhost.localdomain 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38 EST 2015 x86_64 x86_64 x86_64 GNU/Linux

*** Host netqe10 ***

[root@netqe10 sos]# rpm -qa | grep dpdk
dpdk-tools-16.04-4.el7fdb.x86_64
dpdk-16.04-4.el7fdb.x86_64

[root@netqe10 sos]# rpm -qa | grep openvswitch
openvswitch-2.5.0-14.git20160727.el7fdp.x86_64

[root@netqe10 sos]# uname -a
Linux netqe10.knqe.lab.eng.bos.redhat.com 3.10.0-506.el7.x86_64 #1 SMP Mon Sep 12 23:31:02 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux

Comment 12 Jean-Tsung Hsiao 2016-12-21 16:07:47 UTC
Looks like the issue is gone with ovs-2.6.1-2 & dpdk-16.11-2.

[root@localhost dpdk-vxlan-tunnel-4Q-ovs-2.6.1-dpdk-16.11]# netperf -H 172.16.63.2 -t TCP_RR -l 60
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.63.2 () port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate         
bytes  Bytes  bytes    bytes   secs.    per sec   

16384  87380  1        1       60.00    21651.47   
16384  87380

Comment 13 Eelco Chaudron 2016-12-21 19:26:39 UTC
Closing BZ, as the fix in the latest release is confirmed.