Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
The FDP team is no longer accepting new bugs in Bugzilla; please report issues under the FDP project in Jira. Thanks.

Bug 2107069

Summary: bf-2 card: testpmd as switch case got low performance
Product: Red Hat Enterprise Linux Fast Datapath
Component: DPDK
DPDK sub component: other
Assignee: Balazs Nemeth <bnemeth>
Reporter: liting <tli>
QA Contact: liting <tli>
Status: CLOSED EOL
Severity: unspecified
Priority: unspecified
CC: ctrautma, fleitner, jhsiao, ktraynor
Version: FDP 22.E
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2024-10-08 17:49:14 UTC

Description liting 2022-07-14 09:17:48 UTC
Description of problem:
BF-2 card: the testpmd-as-switch case shows low performance.

Version-Release number of selected component (if applicable):
[root@netqe30 ~]# ethtool -i ens7f0np0
driver: mlx5_core
version: 5.14.0-70.17.1.el9_0.x86_64
firmware-version: 24.33.1048 (MT_0000000540)
expansion-rom-version: 
bus-info: 0000:86:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

[root@netqe30 ~]# lspci|grep BlueField-2
86:00.0 Ethernet controller: Mellanox Technologies MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (rev 01)
86:00.1 Ethernet controller: Mellanox Technologies MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (rev 01)
86:00.2 DMA controller: Mellanox Technologies MT42822 BlueField-2 SoC Management Interface (rev 01)

[root@netqe30 ~]# rpm -qa|grep dpdk
dpdk-21.11-1.el9_0.x86_64
[root@netqe30 ~]# uname -r
5.14.0-70.17.1.el9_0.x86_64

How reproducible:


Steps to Reproduce:
1. Start testpmd:
[root@netqe30 perf]# /usr/bin/dpdk-testpmd -l 55,53,51 -n 4 --socket-mem 1024,1024 --vdev net_vhost0,iface=/tmp/vhost0,client=1,iommu-support=1,queues=1 --vdev net_vhost1,iface=/tmp/vhost1,client=1,iommu-support=1,queues=1 -- -i --nb-cores=2 --txq=1 --rxq=1 --forward-mode=io
EAL: Detected CPU lcores: 56
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: 0000:86:00.0 (socket 1)
EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: 0000:86:00.1 (socket 1)
EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: 0000:af:00.0 (socket 1)
mlx5_net: No available register for sampler.
EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: 0000:af:00.1 (socket 1)
mlx5_net: No available register for sampler.
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_1>: n=163456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: B8:CE:F6:75:52:F8
Configuring Port 1 (socket 1)
Port 1: B8:CE:F6:75:52:F9
Configuring Port 2 (socket 1)
Port 2: 24:8A:07:87:22:CE
Configuring Port 3 (socket 1)
Port 3: 24:8A:07:87:22:CF
Configuring Port 4 (socket 1)
VHOST_CONFIG: vhost-user client: socket created, fd: 48
VHOST_CONFIG: new device, handle is 0, path is /tmp/vhost0
Port 4: 56:48:4F:53:54:04
Configuring Port 5 (socket 1)
VHOST_CONFIG: vhost-user client: socket created, fd: 51
VHOST_CONFIG: new device, handle is 1, path is /tmp/vhost1
Port 5: 56:48:4F:53:54:05
Checking link statuses...
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:53
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:54
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:56
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:57
Done
testpmd> set portlist 0,4,1,5
previous number of forwarding ports 6 - changed to number of configured ports 4
testpmd> start
io packet forwarding - ports=4 - cores=2 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 53 (socket 1) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 1) -> TX P=4/Q=0 (socket 1) peer=02:00:00:00:00:04
  RX P=4/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
Logical Core 55 (socket 1) forwards packets on 2 streams:
  RX P=1/Q=0 (socket 1) -> TX P=5/Q=0 (socket 1) peer=02:00:00:00:00:05
  RX P=5/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01

  io packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=4
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=0
  port 2: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=0
  port 3: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=0
  port 4: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 5: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> 


2. Define the guest with virsh; the guest XML is as follows:
[root@netqe30 ~]# virsh dumpxml g1
<domain type='kvm' id='1'>
  <name>g1</name>
  <uuid>ef05dda4-a19b-4d0f-aa8d-46671879b136</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
    <locked/>
    <access mode='shared'/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='7'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <emulatorpin cpuset='0'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel9.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pmu state='off'/>
    <vmport state='off'/>
    <ioapic driver='qemu'/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <feature policy='require' name='tsc-deadline'/>
    <numa>
      <cell id='0' cpus='0-2' memory='8388608' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/g1.qcow2' index='1'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='none'>
      <alias name='usb'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x8'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x9'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xa'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xb'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:01:02:03'/>
      <source bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='00:de:ad:00:00:01'/>
      <source type='unix' path='/tmp/vhost0' mode='server'/>
      <target dev=''/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024' iommu='on' ats='on'>
        <host mrg_rxbuf='off'/>
      </driver>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='00:de:ad:00:00:02'/>
      <source type='unix' path='/tmp/vhost1' mode='server'/>
      <target dev=''/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024' iommu='on' ats='on'>
        <host mrg_rxbuf='off'/>
      </driver>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on' iotlb='on'/>
    </iommu>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c599,c987</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c599,c987</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>

3. Start testpmd inside the guest:
[root@localhost ~]# dpdk-testpmd -l 0-2 -n 1 --socket-mem 1024 -- -i --forward-mode=io --burst=32 --rxd=8192 --txd=8192 --max-pkt-len=9600 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512  --auto-start
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket 0)
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=9728, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 25
Port 0: 00:DE:AD:00:00:01
Configuring Port 1 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 29
Port 1: 00:DE:AD:00:00:02
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=8192 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=8192 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=8192 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=8192 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> 

4. Use TRex to send traffic:
[root@netqe29 trafficgen]# ./binary-search.py --traffic-generator=trex-txrx --frame-size=64 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=10 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --rate=25 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=1 --use-dst-mac-flows=1

Actual results:
The testpmd-as-switch case reached only about 435322 pps (~0.4 Mpps).

TRex binary-search result info:
[2022-07-14 04:57:53.716154][BSO] Finished binary-search
[2022-07-14 04:57:53.716155][BSO] RESULT:
[2022-07-14 04:57:53.716407][BSO] [
[2022-07-14 04:57:53.716407][BSO] {
[2022-07-14 04:57:53.716407][BSO]     "rx_l1_bps": 146268366.005128,
[2022-07-14 04:57:53.716407][BSO]     "rx_l2_bps": 104477404.28937714,
[2022-07-14 04:57:53.716407][BSO]     "rx_packets": 2188407,
[2022-07-14 04:57:53.716407][BSO]     "rx_lost_packets": 0,
[2022-07-14 04:57:53.716407][BSO]     "rx_lost_packets_pct": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_pps": 217661.2589362024,
[2022-07-14 04:57:53.716407][BSO]     "rx_lost_pps": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_average": 32.03167724609375,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_packets": 10000,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_lost_packets": 0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_lost_packets_pct": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_maximum": 74.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_minimum": 10.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_l1_bps": 668378.2587294228,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_l2_bps": 477413.04194958776,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_pps": 994.6105040616411,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_lost_pps": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_active": true,
[2022-07-14 04:57:53.716407][BSO]     "tx_l1_bps": 146268366.005128,
[2022-07-14 04:57:53.716407][BSO]     "tx_l2_bps": 104477404.28937714,
[2022-07-14 04:57:53.716407][BSO]     "tx_packets": 2188407,
[2022-07-14 04:57:53.716407][BSO]     "tx_pps": 217661.2589362024,
[2022-07-14 04:57:53.716407][BSO]     "tx_pps_target": 217840.78507196336,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_packets": 10000,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_l1_bps": 668378.2587294228,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_l2_bps": 477413.04194958776,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_pps": 994.6105040616411,
[2022-07-14 04:57:53.716407][BSO]     "tx_active": true,
[2022-07-14 04:57:53.716407][BSO]     "tx_tolerance_min": 196056.70656476702,
[2022-07-14 04:57:53.716407][BSO]     "tx_tolerance_max": 239624.86357915972
[2022-07-14 04:57:53.716407][BSO] }
[2022-07-14 04:57:53.716407][BSO] ,
[2022-07-14 04:57:53.716407][BSO] {
[2022-07-14 04:57:53.716407][BSO]     "rx_l1_bps": 146268366.005128,
[2022-07-14 04:57:53.716407][BSO]     "rx_l2_bps": 104477404.28937714,
[2022-07-14 04:57:53.716407][BSO]     "rx_packets": 2188407,
[2022-07-14 04:57:53.716407][BSO]     "rx_lost_packets": 0,
[2022-07-14 04:57:53.716407][BSO]     "rx_lost_packets_pct": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_pps": 217661.2589362024,
[2022-07-14 04:57:53.716407][BSO]     "rx_lost_pps": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_average": 34.28121376037598,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_packets": 10000,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_lost_packets": 0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_lost_packets_pct": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_maximum": 64.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_minimum": 10.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_l1_bps": 668378.2587294228,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_l2_bps": 477413.04194958776,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_pps": 994.6105040616411,
[2022-07-14 04:57:53.716407][BSO]     "rx_latency_lost_pps": 0.0,
[2022-07-14 04:57:53.716407][BSO]     "rx_active": true,
[2022-07-14 04:57:53.716407][BSO]     "tx_l1_bps": 146268366.005128,
[2022-07-14 04:57:53.716407][BSO]     "tx_l2_bps": 104477404.28937714,
[2022-07-14 04:57:53.716407][BSO]     "tx_packets": 2188407,
[2022-07-14 04:57:53.716407][BSO]     "tx_pps": 217661.2589362024,
[2022-07-14 04:57:53.716407][BSO]     "tx_pps_target": 217840.78507196336,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_packets": 10000,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_l1_bps": 668378.2587294228,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_l2_bps": 477413.04194958776,
[2022-07-14 04:57:53.716407][BSO]     "tx_latency_pps": 994.6105040616411,
[2022-07-14 04:57:53.716407][BSO]     "tx_active": true,
[2022-07-14 04:57:53.716407][BSO]     "tx_tolerance_min": 196056.70656476702,
[2022-07-14 04:57:53.716407][BSO]     "tx_tolerance_max": 239624.86357915972
[2022-07-14 04:57:53.716407][BSO] }
[2022-07-14 04:57:53.716407][BSO] ]
[2022-07-14 04:57:53.716407][BSO] 
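The ~0.4 Mpps figure is just the sum of the two per-direction rx_pps values in the TRex result above; a quick sanity check:

```shell
# Sum the two per-direction rx_pps values reported by TRex
# (217661.2589362024 pps each way, bidirectional traffic).
awk 'BEGIN { printf "%d pps total\n", 217661.2589362024 * 2 }'
```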


The host testpmd output:
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 165860537      RX-dropped: 49361962      RX-total: 215222499
  TX-packets: 170940604      TX-dropped: 0             TX-total: 170940604
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 4  ----------------------
  RX-packets: 170940604      RX-dropped: 0             RX-total: 170940604
  TX-packets: 165859940      TX-dropped: 597           TX-total: 165860537
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 181839069      RX-dropped: 33378115      RX-total: 215217184
  TX-packets: 163092359      TX-dropped: 0             TX-total: 163092359
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 5  ----------------------
  RX-packets: 163092359      RX-dropped: 0             RX-total: 163092359
  TX-packets: 181838733      TX-dropped: 336           TX-total: 181839069
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 681732569      RX-dropped: 82740077      RX-total: 764472646
  TX-packets: 681731636      TX-dropped: 933           TX-total: 681732569
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
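The host-side accumulated statistics above imply a sizeable RX drop rate at the physical ports; computing it from the RX-dropped and RX-total figures:

```shell
# Host RX drop percentage: RX-dropped / RX-total from the
# accumulated forward statistics above.
awk 'BEGIN { printf "%.1f%% of RX packets dropped\n", 82740077 / 764472646 * 100 }'
```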

The testpmd output inside the guest:
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 165859940      RX-dropped: 0             RX-total: 165859940
  TX-packets: 170940604      TX-dropped: 10898129      TX-total: 181838733
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 181838733      RX-dropped: 0             RX-total: 181838733
  TX-packets: 163092359      TX-dropped: 2767581       TX-total: 165859940
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 347698673      RX-dropped: 0             RX-total: 347698673
  TX-packets: 334032963      TX-dropped: 13665710      TX-total: 347698673
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Expected results:
The same testpmd-as-switch case on i40e and mlx5 cards gets about 6-7 Mpps.

Additional info:
https://beaker.engineering.redhat.com/jobs/6812847

Comment 5 liting 2024-01-04 01:33:50 UTC
After adding --rxd=1024 --txd=1024 to the testpmd command, I got the normal result on the BF-2 card.
https://beaker.engineering.redhat.com/jobs/8746933
https://beaker-archive.host.prod.eng.bos.redhat.com/beaker-logs/2024/01/87469/8746933/15276529/171132951/mlx5_25.html
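For reference, a sketch of the adjusted invocation, assuming the flags are appended to the host testpmd command from step 1 (whose ring sizes otherwise default to 256 descriptors; the guest command already uses large rings):

```shell
# Host testpmd with explicit 1024-entry RX/TX descriptor rings.
/usr/bin/dpdk-testpmd -l 55,53,51 -n 4 --socket-mem 1024,1024 \
  --vdev net_vhost0,iface=/tmp/vhost0,client=1,iommu-support=1,queues=1 \
  --vdev net_vhost1,iface=/tmp/vhost1,client=1,iommu-support=1,queues=1 \
  -- -i --nb-cores=2 --txq=1 --rxq=1 --forward-mode=io --rxd=1024 --txd=1024
```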

Comment 6 ovs-bot 2024-10-08 17:49:14 UTC
This bug did not meet the criteria for automatic migration and is being closed.
If the issue remains, please open a new ticket in https://issues.redhat.com/browse/FDP