Bug 1910196

Summary: qede nic: The pvp sriov case only got 14 Mpps throughput
Product: Red Hat Enterprise Linux Fast Datapath
Reporter: liting <tli>
Component: DPDK
Assignee: Flavio Leitner <fleitner>
DPDK sub component: ovs-dpdk
QA Contact: liting <tli>
Status: NEW
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: ctrautma, ktraynor, qding
Version: FDP 21.A
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description liting 2020-12-23 03:40:16 UTC
Description of problem:


Version-Release number of selected component (if applicable):
[root@dell-per730-52 ~]# rpm -qa|grep dpdk
dpdk-tools-19.11.3-1.el8.x86_64
dpdk-19.11.3-1.el8.x86_64

[root@dell-per730-52 ~]# uname -a
Linux dell-per730-52.rhts.eng.pek2.redhat.com 4.18.0-240.el8.x86_64 #1 SMP Wed Sep 23 05:13:10 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:


Steps to Reproduce:
1. Enable SR-IOV on the qede port and create a VF port.
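This step has no command in the report; a minimal sketch of the usual sysfs method follows. The PF interface name `ens4f0` is a hypothetical placeholder for the qede PF on the host.

```shell
# Hypothetical PF interface name -- substitute the qede PF on this host.
PF=ens4f0
VF_PATH="/sys/class/net/$PF/device/sriov_numvfs"

if [ -e "$VF_PATH" ]; then
    # Create two VFs on the PF; they appear as new PCI functions
    # (e.g. 0000:82:02.1) that step 2 then binds to vfio-pci.
    echo 2 > "$VF_PATH"
    cat "$VF_PATH"
else
    msg="no SR-IOV capable PF named $PF on this host"
    echo "$msg"
fi
```

`lspci | grep Virtual` can then be used to confirm the VF PCI addresses before binding.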

2. Bind the VFs to DPDK:
/usr/share/dpdk/usertools/dpdk-devbind.py --bind=vfio-pci 0000:82:02.1 0000:82:11.1

3. Start the guest:
sudo -E taskset -c 3,5,33 /usr/libexec/qemu-kvm -m 8192 -smp 3 -cpu host,migratable=off -drive if=ide,file=rhel8.3-vsperf-1Q-viommu.qcow2 -boot c --enable-kvm -monitor unix:/tmp/vm0monitor,server,nowait -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -nographic -vnc :0 -name Client0 -snapshot -net none -no-reboot -M q35,kernel-irqchip=split -device intel-iommu,device-iotlb=on,intremap,caching-mode=true -device pcie-root-port,id=root.1,slot=1 -device pcie-root-port,id=root.2,slot=2 -device pcie-root-port,id=root.3,slot=3 -device pcie-root-port,id=root.4,slot=4 -device vfio-pci,bus=root.2,host=0000:82:02.1 -device vfio-pci,bus=root.3,host=0000:82:11.1	

4. In the guest, use testpmd to forward packets:
 sysctl vm.nr_hugepages=1
 mkdir -p /dev/hugepages
 mount -t hugetlbfs hugetlbfs /dev/hugepages
 modprobe vfio-pci
/usr/share/dpdk/usertools/dpdk-devbind.py -b vfio-pci 02:00.0 03:00.0
/usr/bin/testpmd -l 0,1,2 -n 4 --socket-mem 1024 --legacy-mem -- --burst=64 -i --rxd=512 --txd=512 --nb-cores=2 --txq=1 --rxq=1 --auto-start --forward-mode=mac

5. Use TRex to send RFC 2544 traffic.
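The RFC 2544 throughput test is a zero-loss binary search over offered rate. A minimal sketch of that search logic follows; `send_and_count_loss` is a hypothetical stand-in for one TRex trial at a given percentage of line rate, not a real TRex command.

```shell
# Hypothetical stand-in for one trial: pretend loss starts above 37% of
# line rate (14 Mpps is roughly 37% of the ~37.2 Mpps 64-byte line rate
# of a 25G link). Prints 0 for zero loss, 1 for loss.
send_and_count_loss() {
    [ "$1" -le 37 ] && echo 0 || echo 1
}

lo=0; hi=100
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if [ "$(send_and_count_loss "$mid")" -eq 0 ]; then
        lo=$mid     # no loss: throughput is at least this rate
    else
        hi=$mid     # loss observed: back off
    fi
done
echo "zero-loss throughput: ${lo}% of line rate"
```

The reported Mpps figure is the highest rate the search finds with zero packet loss over the trial duration.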

Actual results:
The card is a 25G qede NIC. It only got about 14 Mpps throughput.

Expected results:
It should get at least about 29 Mpps.

Additional info:
beaker job:
https://beaker.engineering.redhat.com/jobs/4920472

Comment 3 liting 2021-11-01 02:52:06 UTC
Sorry, there is a mistake in comment 1; please ignore it.
The SR-IOV case got 16 Mpps on RHEL 8.2 (kernel 4.18.0-193.19.1.el8_2.x86_64).
job:
https://beaker.engineering.redhat.com/jobs/5919581
result:
https://beaker-archive.host.prod.eng.bos.redhat.com/beaker-logs/2021/10/59195/5919581/10841263/133620684/qede_25.html