Bug 2230215 - Enic card: OVS-DPDK PVP multi-queue case does not show increased performance
Summary: Enic card: OVS-DPDK PVP multi-queue case does not show increased performance
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch2.17
Version: FDP 23.F
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: OVS Triage
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-09 01:56 UTC by liting
Modified: 2023-08-11 06:01 UTC
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Issue Tracker FD-3088: Private: 0, Priority: None, Status: None, Summary: None, Last Updated: 2023-08-09 01:57:39 UTC

Description liting 2023-08-09 01:56:16 UTC
Description of problem:
Enic card: the OVS-DPDK PVP multi-queue case does not show increased performance as queues and PMDs are added.

Version-Release number of selected component (if applicable):
openvswitch2.17-2.17.0-88.el8fdp

How reproducible:


Steps to Reproduce:
Run the OVS-DPDK vhost-user PVP case with 1 queue/2 PMDs, 2 queues/4 PMDs, and 4 queues/8 PMDs (a host-side setup sketch follows).
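
For reference, a minimal host-side setup sketch, not taken from the actual test harness: the bridge name br0, port names dpdk0/vhost0, PCI address 0000:3b:00.0, socket path and pmd-cpu-mask value are placeholder assumptions; only the ovs-vsctl options themselves are standard openvswitch ones.

#!/usr/bin/env python3
"""Sketch of the OVS-DPDK multi-queue vhost-user PVP host setup.
Assumptions (not from this bug): bridge br0, physical port dpdk0 at PCI
0000:3b:00.0, vhost-user client port vhost0 with socket /tmp/vhost0."""
import subprocess

def vsctl(*args):
    # Thin wrapper so each configuration step is a single ovs-vsctl call.
    subprocess.run(["ovs-vsctl"] + list(args), check=True)

def setup(n_rxq=2, pmd_cpu_mask="0xaa", rx_desc=1024, tx_desc=1024):
    # PMD threads are selected via the CPU mask; for the 2q4pmd case the
    # mask should cover 4 cores, for 4q8pmd it should cover 8.
    vsctl("set", "Open_vSwitch", ".",
          f"other_config:pmd-cpu-mask={pmd_cpu_mask}")
    # Userspace (netdev) datapath bridge.
    vsctl("--may-exist", "add-br", "br0",
          "--", "set", "bridge", "br0", "datapath_type=netdev")
    # Physical DPDK port with the requested queue count and ring sizes.
    vsctl("--may-exist", "add-port", "br0", "dpdk0",
          "--", "set", "Interface", "dpdk0", "type=dpdk",
          "options:dpdk-devargs=0000:3b:00.0",
          f"options:n_rxq={n_rxq}",
          f"options:n_rxq_desc={rx_desc}",
          f"options:n_txq_desc={tx_desc}")
    # vhost-user client port toward the guest; QEMU owns the server socket.
    vsctl("--may-exist", "add-port", "br0", "vhost0",
          "--", "set", "Interface", "vhost0", "type=dpdkvhostuserclient",
          "options:vhost-server-path=/tmp/vhost0")

if __name__ == "__main__":
    setup()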

Actual results:
netqe37 25g enic:
https://beaker.engineering.redhat.com/jobs/7818637
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/05/78186/7818637/13852406/159783260/enic_10.html
rx_desc/tx_desc =1024/1024 setting: 
   1q 2pmd viommu novlan 64byte case: 2.3mpps
   1q 4pmd viommu novlan 64byte case: 7.7mpps
   2q 4pmd viommu novlan 64byte case: 4.6mpps
   4q 8pmd viommu novlan 64byte case: 5.7mpps
   1q 2pmd noviommu vlan 64byte case: 3.4mpps
   1q 4pmd noviommu vlan 64byte case: 5.6mpps
   2q 4pmd noviommu vlan 64byte case: 7.5mpps
   4q 8pmd noviommu vlan 64byte case: 5.7mpps

For the netqe26 10G enic, the previous result of ovs2.17 on rhel8.6:
https://beaker.engineering.redhat.com/jobs/7643468
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/03/76434/7643468/13581553/157786151/enic_10.html
 rx_desc/tx_desc =1024/1024 setting:
   1q 2pmd viommu novlan 64byte case: 4.0mpps
   1q 4pmd viommu novlan 64byte case: 6.2mpps
   2q 4pmd viommu novlan 64byte case: 4.0mpps
   4q 8pmd viommu novlan 64byte case: 5.8mpps
   1q 2pmd noviommu vlan 64byte case: 3.0mpps
   1q 4pmd noviommu vlan 64byte case: 4.4mpps
   2q 4pmd noviommu vlan 64byte case: 2.6mpps
   4q 8pmd noviommu vlan 64byte case: 3.3mpps

Expected results:
The OVS-DPDK PVP multi-queue case should show increased performance as the number of queues and PMDs increases.

Additional info:
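One quick sanity check (a generic sketch, not part of the original test run): dump the rxq-to-PMD assignment and per-PMD stats on the host while traffic is flowing, to confirm the extra queues are actually polled by separate PMD threads rather than piling up on one core.

import subprocess

# Both appctl commands are standard openvswitch 2.x ones; output is only
# printed here because the exact layout varies between versions.
for cmd in ("dpif-netdev/pmd-rxq-show", "dpif-netdev/pmd-stats-show"):
    result = subprocess.run(["ovs-appctl", cmd],
                            capture_output=True, text=True, check=True)
    print(f"### {cmd}")
    print(result.stdout)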

Comment 1 liting 2023-08-10 01:48:25 UTC
I also ran the performance test on rhel9.2, and it has the same issue. I have attached the sosreport.
rx_desc/tx_desc =2048/2048 setting: 
https://beaker.engineering.redhat.com/jobs/8168398
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/08/81683/8168398/14390433/164343661/enic_10.html
   1q 2pmd viommu novlan 64byte case: 3.0mpps
   1q 4pmd viommu novlan 64byte case: 6.3mpps
   2q 4pmd viommu novlan 64byte case: 6.1mpps
   4q 8pmd viommu novlan 64byte case: 6.5mpps
   1q 2pmd noviommu vlan 64byte case: 4.5mpps
   1q 4pmd noviommu vlan 64byte case: 7.5mpps
   2q 4pmd noviommu vlan 64byte case: 7.1mpps
   4q 8pmd noviommu vlan 64byte case: 7.0mpps
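
For reference, a sketch of how the rx_desc/tx_desc = 2048/2048 setting above maps onto the openvswitch interface options; the port name dpdk0 is an assumption, while n_rxq_desc/n_txq_desc are the standard options behind those numbers.

import subprocess

# Assumed physical port name dpdk0; bump the Rx/Tx descriptor ring sizes
# that the result tables refer to as rx_desc/tx_desc.
subprocess.run(["ovs-vsctl", "set", "Interface", "dpdk0",
                "options:n_rxq_desc=2048", "options:n_txq_desc=2048"],
               check=True)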

Comment 3 liting 2023-08-11 06:01:18 UTC
I changed to rx_desc/tx_desc = 2048/2048 and ran the job on rhel8.6.
https://beaker.engineering.redhat.com/jobs/8172688
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/08/81726/8172688/14398161/164413367/enic_10.html
   1q 2pmd viommu novlan 64byte case: 3.4mpps
   1q 4pmd viommu novlan 64byte case: 5.5mpps
   2q 4pmd viommu novlan 64byte case: 7.0mpps
   4q 8pmd viommu novlan 64byte case: 7.1mpps
   1q 2pmd noviommu vlan 64byte case: 2.3mpps
   1q 4pmd noviommu vlan 64byte case: 7.5mpps
   2q 4pmd noviommu vlan 64byte case: 4.6mpps
   4q 8pmd noviommu vlan 64byte case: 6.9mpps

