Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2210319

Summary: [17.1][OVS-DPDK][RHEL9] perf regression with retbleed mitigation on skylake CPUs
Product: Red Hat OpenStack Reporter: Miguel Angel Nieto <mnietoji>
Component: openvswitch Assignee: Robin Jarry <rjarry>
Status: CLOSED COMPLETED QA Contact: Eran Kuris <ekuris>
Severity: medium Docs Contact:
Priority: medium    
Version: 17.1 (Wallaby) CC: apevec, chrisw, dmarchan, ekuris, erpeters, eshulman, fbaudin, fleitner, gregraka, hakhande, jamsmith, jmario, llong, lsvaty, mburns, pasik, rjarry
Target Milestone: z3 Keywords: Performance, Triaged
Target Release: 17.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: kernel-5.14.0-284.44.1.el9_2 Doc Type: Known Issue
Doc Text:
Currently, the Retbleed vulnerability mitigation in RHEL 9.2 can cause a performance drop for Open vSwitch with Data Plane Development Kit (OVS-DPDK) on Intel Skylake CPUs. This performance regression happens only if C-states are disabled in the BIOS, Hyper-Threading Technology is enabled, and OVS-DPDK uses only one logical core of a given physical core. *Workaround:* Assign both logical cores to OVS-DPDK, or to SR-IOV guests that have DPDK running, as recommended in _link:{defaultURL}/configuring_network_functions_virtualization/index[Configuring network functions virtualization]_.
Story Points: ---
Clone Of:
: 2216242 Environment:
Last Closed: 2024-07-11 10:32:09 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 2216242    
Bug Blocks:    
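The workaround in the Doc Text above (assign both logical cores of a physical core to OVS-DPDK) amounts to building a pmd-cpu-mask that covers both hyperthread siblings. A minimal sketch, assuming sibling logical CPUs 2 and 26 (hypothetical IDs; check the actual topology on the host):

```shell
# Hypothetical sibling pair: verify on the real host with
#   cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list
core_a=2
core_b=26

# Build a CPU mask with both hyperthread siblings set.
mask=$(( (1 << core_a) | (1 << core_b) ))
printf 'pmd-cpu-mask=0x%x\n' "$mask"    # pmd-cpu-mask=0x4000004

# Apply on the compute node (run where ovs-vsctl is available):
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4000004
```

With both siblings in the mask, OVS-DPDK polls from both logical cores of the physical core, which avoids the single-sibling case in which the regression appears.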

Description Miguel Angel Nieto 2023-05-26 15:03:30 UTC
Description of problem:
Performance is very low when using an E810 NIC in an OVS-DPDK scenario. This happens with both ML2/OVS and OVN.

Test conditions:

       PF ----------------- ovs bridge e810 -------  
TREX                                                Testpmd
       PF ----------------- ovs bridge e810 -------

Using 1 pmd core, 1 flow

X710          ---> 2.8 Mpps
E810 in 16.2  ---> 2.5 Mpps
E810 in 17.1  ---> 2.1 Mpps (16% less than in 16.2, 25% less than X710)
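The percentage deltas quoted above can be sanity-checked with plain arithmetic:

```shell
# Regression relative to each baseline: (baseline - measured) / baseline
awk 'BEGIN {
  printf "vs 16.2: %.0f%% lower\n", (2.5 - 2.1) / 2.5 * 100
  printf "vs X710: %.0f%% lower\n", (2.8 - 2.1) / 2.8 * 100
}'
# vs 16.2: 16% lower
# vs X710: 25% lower
```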

Some time ago I already opened a bug for this setup, but at that point the performance was only 0.5 Mpps. After solving those issues we get 2.1 Mpps, which is still low:
https://bugzilla.redhat.com/show_bug.cgi?id=2179366

***********************
OVS configuration
***********************
    Bridge br-dpdk1
        fail_mode: standalone
        datapath_type: netdev
        Port dpdk3
            Interface dpdk3
                type: dpdk
                options: {dpdk-devargs="0000:3b:00.1"}
        Port patch-provnet-f6ce6a34-0212-4c3d-b69c-7eed2b8f056e-to-br-int
            Interface patch-provnet-f6ce6a34-0212-4c3d-b69c-7eed2b8f056e-to-br-int
                type: patch
                options: {peer=patch-br-int-to-provnet-f6ce6a34-0212-4c3d-b69c-7eed2b8f056e}
        Port br-dpdk1
            Interface br-dpdk1
                type: internal
    Bridge br-dpdk0
        fail_mode: standalone
        datapath_type: netdev
        Port patch-provnet-c2fd3e23-f6db-427b-94ee-6b3749cd1863-to-br-int
            Interface patch-provnet-c2fd3e23-f6db-427b-94ee-6b3749cd1863-to-br-int
                type: patch
                options: {peer=patch-br-int-to-provnet-c2fd3e23-f6db-427b-94ee-6b3749cd1863}
        Port dpdk2
            Interface dpdk2
                type: dpdk
                options: {dpdk-devargs="0000:3b:00.0"}
        Port br-dpdk0
            Interface br-dpdk0
                type: internal

[root@computeovsdpdksriov-r740 tripleo-admin]# lspci | grep 3b
3b:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
3b:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)

Version-Release number of selected component (if applicable):
RHOS-17.1-RHEL-9-20230517.n.1


How reproducible:
Run the OVS-DPDK performance test case.
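For reference, a minimal sketch of the Testpmd side of the diagram in the description, run inside the guest. The core list and PCI addresses are placeholders and the actual QE test harness may differ; dpdk-testpmd and the options shown are standard DPDK:

```shell
# Inside the guest: bind the DPDK ports to vfio-pci first, e.g.
#   dpdk-devbind.py --bind=vfio-pci 0000:00:04.0 0000:00:05.0   # placeholder addresses
# Then forward between the two ports so TRex can measure round-trip throughput:
dpdk-testpmd -l 0,1 -n 4 -a 0000:00:04.0 -a 0000:00:05.0 -- \
    --forward-mode=macswap --auto-start
```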


Steps to Reproduce:
1. Deploy RHOS-17.1 with OVS-DPDK on a compute node with an E810 NIC.
2. Run the TRex -> testpmd scenario from the description with 1 PMD core and 1 flow.
3. Measure throughput.

Actual results:
2.1 Mpps


Expected results:
I think I should get at least 2.5 Mpps (the 16.2 result).


Additional info: