Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
The FDP team is no longer accepting new bugs in Bugzilla; please report new issues under the FDP project in Jira. Thanks.

Bug 2230215

Summary: Enic card: ovs dpdk pvp multi-queue case does not show increased performance
Product: Red Hat Enterprise Linux Fast Datapath
Component: openvswitch2.17
Version: FDP 23.F
Status: CLOSED EOL
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Reporter: liting <tli>
Assignee: Aaron Conole <aconole>
QA Contact: liting <tli>
Docs Contact:
CC: ctrautma, fleitner, jhsiao, ralongi
Target Milestone: ---
Target Release: ---
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2024-10-08 17:49:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description liting 2023-08-09 01:56:16 UTC
Description of problem:
Enic card: the ovs dpdk pvp multi-queue case does not show increased performance as queues and PMD threads are added.

Version-Release number of selected component (if applicable):
openvswitch2.17-2.17.0-88.el8fdp

How reproducible:


Steps to Reproduce:
Run the ovs dpdk vhostuser pvp cases: 1q2pmd, 2q4pmd, 4q8pmd.
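For context, a minimal sketch of how such a multi-queue PVP setup is typically configured in OVS-DPDK. The port names, PCI address, and CPU mask below are illustrative assumptions, not the exact commands of the test harness:

```shell
# Pin PMD threads: mask 0x154 selects cores 2,4,6,8 (adjust to host topology)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x154

# Physical DPDK port: request 2 rx queues so two PMDs can poll it
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:3b:00.0 options:n_rxq=2

# vhost-user port: the queue count is negotiated with the guest, so QEMU
# must start the virtio-net device with mq=on and a matching queues=2,
# and the guest enables the queues with:
#   ethtool -L eth0 combined 2
ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
```

Throughput is then expected to scale with the queue/PMD count, which is the behavior this bug reports as missing.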

Actual results:
netqe37 25g enic:
https://beaker.engineering.redhat.com/jobs/7818637
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/05/78186/7818637/13852406/159783260/enic_10.html
rx_desc/tx_desc =1024/1024 setting: 
   1q 2pmd viommu novlan 64byte case: 2.3mpps
   1q 4pmd viommu novlan 64byte case: 7.7mpps
   2q 4pmd viommu novlan 64byte case: 4.6mpps
   4q 8pmd viommu novlan 64byte case: 5.7mpps
   1q 2pmd noviommu vlan 64byte case: 3.4mpps
   1q 4pmd noviommu vlan 64byte case: 5.6mpps
   2q 4pmd noviommu vlan 64byte case: 7.5mpps
   4q 8pmd noviommu vlan 64byte case: 5.7mpps

For netqe26 10g enic, previous results of ovs2.17 on rhel8.6:
https://beaker.engineering.redhat.com/jobs/7643468
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/03/76434/7643468/13581553/157786151/enic_10.html
 rx_desc/tx_desc =1024/1024 setting:
   1q 2pmd viommu novlan 64byte case: 4.0mpps
   1q 4pmd viommu novlan 64byte case: 6.2mpps
   2q 4pmd viommu novlan 64byte case: 4.0mpps
   4q 8pmd viommu novlan 64byte case: 5.8mpps
   1q 2pmd noviommu vlan 64byte case: 3.0mpps
   1q 4pmd noviommu vlan 64byte case: 4.4mpps
   2q 4pmd noviommu vlan 64byte case: 2.6mpps
   4q 8pmd noviommu vlan 64byte case: 3.3mpps

Expected results:
The ovs dpdk pvp multi-queue cases should show increased performance as queues and PMD threads are added.

Additional info:

Comment 1 liting 2023-08-10 01:48:25 UTC
I also ran the performance test on rhel9.2, and it shows the same issue. The sosreport is attached.
rx_desc/tx_desc =2048/2048 setting: 
https://beaker.engineering.redhat.com/jobs/8168398
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/08/81683/8168398/14390433/164343661/enic_10.html
   1q 2pmd viommu novlan 64byte case: 3.0mpps
   1q 4pmd viommu novlan 64byte case: 6.3mpps
   2q 4pmd viommu novlan 64byte case: 6.1mpps
   4q 8pmd viommu novlan 64byte case: 6.5mpps
   1q 2pmd noviommu vlan 64byte case: 4.5mpps
   1q 4pmd noviommu vlan 64byte case: 7.5mpps
   2q 4pmd noviommu vlan 64byte case: 7.1mpps
   4q 8pmd noviommu vlan 64byte case: 7.0mpps

Comment 3 liting 2023-08-11 06:01:18 UTC
I changed to rx_desc/tx_desc = 2048/2048 and reran the job on rhel8.6.
https://beaker.engineering.redhat.com/jobs/8172688
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/08/81726/8172688/14398161/164413367/enic_10.html
   1q 2pmd viommu novlan 64byte case: 3.4mpps
   1q 4pmd viommu novlan 64byte case: 5.5mpps
   2q 4pmd viommu novlan 64byte case: 7mpps
   4q 8pmd viommu novlan 64byte case: 7.1mpps
   1q 2pmd noviommu vlan 64byte case: 2.3mpps
   1q 4pmd noviommu vlan 64byte case: 7.5mpps
   2q 4pmd noviommu vlan 64byte case: 4.6mpps
   4q 8pmd noviommu vlan 64byte case: 6.9mpps
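For reference, a descriptor-ring change like the one above is normally applied per DPDK port; the port name here is illustrative:

```shell
# Resize the rx/tx descriptor rings to 2048 on the physical DPDK port
ovs-vsctl set Interface dpdk0 options:n_rxq_desc=2048 options:n_txq_desc=2048
```

Larger rings can mask short PMD stalls, but as the numbers above show, they do not by themselves restore multi-queue scaling on this NIC.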

Comment 4 ovs-bot 2024-10-08 17:49:14 UTC
This bug did not meet the criteria for automatic migration and is being closed.
If the issue remains, please open a new ticket in https://issues.redhat.com/browse/FDP