Bug 1889788 - [E810][ovs-dpdk] OVS dpdk datapath pvp throughput test getting low results
Summary: [E810][ovs-dpdk] OVS dpdk datapath pvp throughput test getting low results
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: DPDK
Version: RHEL 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Amnon Ilan
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-20 15:00 UTC by Zhiqiang Fang
Modified: 2020-10-21 16:59 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-21 16:59:07 UTC
Target Upstream Version:



Description Zhiqiang Fang 2020-10-20 15:00:59 UTC
Description of problem:

The results of the OVS-DPDK datapath PVP throughput test (per RFC2544) on the E810 NIC are lower than expected.

The test method is described here:
https://github.com/ctrautma/RHEL_NIC_QUALIFICATION/tree/ansible#2-throughput-test

The affected tests are 1Q2PMD and 2Q4PMD:
 - OVS-DPDK 1Q --ovs dpdk datapath pvp 64/1500 bytes throughput test (topo #2)
 - OVS-DPDK 2Q --ovs dpdk datapath pvp 64/1500 bytes throughput test (topo #2)
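The 1Q2PMD and 2Q4PMD variants differ in the number of queues and PMD threads. Inside the guest, the PVP loop is closed by testpmd forwarding between the two virtio NICs; a minimal sketch for the 2-queue case (core list, memory, and descriptor counts are illustrative, not the exact values used in this run):

```shell
# Guest-side sketch: forward frames between the two virtio NICs with testpmd
# (DPDK 19.11 binary name). Core list, queue/descriptor counts are illustrative.
testpmd -l 0,1,2 --socket-mem 1024 -- \
    --forward-mode=mac --burst=64 \
    --rxq=2 --txq=2 --rxd=2048 --txd=2048 \
    --nb-cores=2 --auto-start
```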

The topology is as follows:

ovs dpdk datapath pvp
+DUT-----------------------------------------+       +----------------------+
|VM----------------|                         |       |                 Trex |
|     |-----NIC1(vhostuserclient)--|(vf)----NIC1-----|TRAFFICGEN_TREX_PORT1 |          
| testpmd          |         ovs-bridge       |      |                      |
|     |-----NIC2(vhostuserclient)--|(vf)----NIC2-----|TRAFFICGEN_TREX_PORT2 |            
|                  |                         |       |                      |
|-------------------                         |       +----------------------+
+--------------------------------------------+
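The DUT side of the topology above is typically wired up with commands along these lines (a hedged sketch: the bridge/port names, PCI addresses, and vhost socket paths are placeholders; the actual test uses the ansible playbooks linked above):

```shell
# Sketch of the OVS-DPDK PVP bridge setup. PCI addresses, names, and paths
# are placeholders, not the exact values used in this run.
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x140000140000
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:3b:00.0
ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk \
    options:dpdk-devargs=0000:3b:00.1
ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost0
ovs-vsctl add-port ovsbr0 vhost1 -- set Interface vhost1 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost1
```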



Version-Release number of selected component (if applicable):

E810 info:
# ethtool -i ens1f0
driver: ice
version: 0.8.2-k
firmware-version: 1.40 0x80003ab8 1.2735.0
expansion-rom-version: 
bus-info: 0000:3b:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

# rpm -qa | grep openv
openvswitch-selinux-extra-policy-1.0-23.el8fdp.noarch
openvswitch2.13-2.13.0-41.el8fdb.x86_64


# ovs-vsctl list Open_vSwitch
_uuid               : b5c7a3ac-2d92-4327-8ff6-e44f718b07ad
bridges             : [11894e3b-d760-4e9b-95c5-e3fe76394040]
cur_cfg             : 804
datapath_types      : [netdev, system]
datapaths           : {}
db_version          : "8.2.0"
dpdk_initialized    : true
dpdk_version        : "DPDK 19.11.1"
...
iface_types         : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, erspan, geneve, gre, internal, ip6erspan, ip6gre, lisp, patch, stt, system, tap, vxlan]
manager_options     : []
next_cfg            : 804
other_config        : {dpdk-init="true", dpdk-socket-mem="4096,4096", pmd-cpu-mask="0x140000140000", vhost-iommu-support="true"}
ovs_version         : "2.13.0"
ssl                 : []
statistics          : {}
system_type         : rhel
system_version      : "8.3"
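The pmd-cpu-mask above is a CPU bitmask; decoding it shows which host cores run PMD threads. A quick sketch (the sibling-pair interpretation is an assumption about this host's topology):

```python
# Decode the pmd-cpu-mask from the ovs-vsctl output above into core numbers.
mask = 0x140000140000
cores = [i for i in range(mask.bit_length()) if mask >> i & 1]
# Four PMD threads on cores 18, 20, 42, 44 (plausibly two physical cores
# plus their hyper-thread siblings -- an assumption, not confirmed here).
```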

# uname -r
4.18.0-240.el8.x86_64
# cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.3 (Ootpa)


How reproducible:

Please refer to the procedure described on the GitHub page linked above.


Steps to Reproduce:
Follow the throughput test procedure from the GitHub page linked above (tests 1Q2PMD and 2Q4PMD).

Actual results:

Our test results:
 - 1q 2pmd, 64 bytes: unsteady, ranging from 3.0 to 3.5 Mpps and sometimes dropping to 1.0 Mpps
 - 2q 4pmd, 64 bytes: low, in the 3.1 to 3.6 Mpps range

Expected results:
 - 1q 2pmd 64 bytes: 3.7+ Mpps
 - 2q 4pmd 64 bytes: 7+ Mpps
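For context, the theoretical maximum packet rate for 64-byte frames follows from the on-wire frame size (frame plus 20 bytes of preamble, SFD, and inter-frame gap). The 25 GbE link speed below is an assumption about this E810 variant, not a value taken from the report:

```python
def line_rate_mpps(link_gbps: float, frame_bytes: int) -> float:
    """Theoretical max packet rate in Mpps: each frame occupies the frame
    itself plus 20 bytes of preamble/SFD/inter-frame gap on the wire."""
    wire_bits = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits / 1e6

# Assuming 25 GbE: ~37.2 Mpps line rate for 64-byte frames, so the expected
# 3.7 Mpps (1q 2pmd) is roughly 10% of line rate.
rate = line_rate_mpps(25, 64)
```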

Additional info:

Comment 1 Christian Trautman 2020-10-21 16:59:07 UTC
After discussion today with Intel we will wait for 20.11 DPDK and re-open this bug if needed after trying that version.

