Bug 2229960 - testpmd max-pkt-len function does not work on mlx5_core driver
Summary: testpmd max-pkt-len function does not work on mlx5_core driver
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: DPDK
Version: FDP 23.F
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Open vSwitch development team
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-08 10:06 UTC by liting
Modified: 2023-08-08 10:09 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Issue Tracker FD-3086 (last updated 2023-08-08 10:09:18 UTC)

Description liting 2023-08-08 10:06:48 UTC
Description of problem:
The testpmd --max-pkt-len option has no effect on the mlx5_core driver: frames larger than the configured limit are still received and forwarded.
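For context, testpmd's --max-pkt-len value is passed to the PMD through the port configuration; in DPDK 22.11 the limit ends up in rte_eth_conf.rxmode.mtu (DPDK 20.11 used rxmode.max_rx_pkt_len instead). The following is a minimal sketch of that configuration path, written against the 22.11 API and not taken from this report:

/*
 * Minimal sketch (not from this report) of how an application asks the PMD to
 * cap the Rx frame size; testpmd's --max-pkt-len feeds the same configuration.
 * Field names follow DPDK 22.11; DPDK 20.11 used rxmode.max_rx_pkt_len.
 */
#include <rte_ethdev.h>

static int cap_rx_frame_size(uint16_t port_id, uint32_t max_pkt_len)
{
    struct rte_eth_conf conf = {0};

    /* The 22.11 API expresses the limit as an MTU, i.e. the maximum frame
     * length minus the 18 bytes of L2 overhead (14-byte header + 4-byte CRC). */
    conf.rxmode.mtu = max_pkt_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;

    /* One Rx and one Tx queue, matching the reproduction below. The PMD is
     * then expected to drop frames longer than max_pkt_len; this report shows
     * mlx5 accepting them anyway. */
    return rte_eth_dev_configure(port_id, 1, 1, &conf);
}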

Version-Release number of selected component (if applicable):
dpdk-20.11-4.el8_4.x86_64
dpdk-22.11-1.el9.x86_64
dpdk-21.11-3.el8.x86_64

How reproducible:


Steps to Reproduce:
Case 1: start testpmd with two PF ports
Start testpmd with the two PF ports and send traffic from the TRex sender.
[root@dell-per730-56 ~]# dpdk-testpmd -l 0,1,2,3,4  -n 4 -a 0000:04:00.0 -a 0000:04:00.1 --socket-mem 4096,4096   -- -i --numa --max-pkt-len=1500
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:04:00.0 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:04:00.1 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: EC:0D:9A:A0:1E:54
Configuring Port 1 (socket 0)
Port 1: EC:0D:9A:A0:1E:55
Checking link statuses...
Done
testpmd> 

testpmd> set verbose 1
  src=EC:00:00:02:42:24 - dst=EC:0D:9A:A0:1E:25 - type=0x0800 - length=1596 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group
  src=EC:00:00:02:4E:24 - dst=EC:0D:9A:A0:1E:25 - type=0x0800 - length=1596 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group


testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 142072847      RX-dropped: 0             RX-total: 142072847
  TX-packets: 142066183      TX-dropped: 32            TX-total: 142066215
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 142066215      RX-dropped: 0             RX-total: 142066215
  TX-packets: 142072847      TX-dropped: 0             TX-total: 142072847
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 284139062      RX-dropped: 0             RX-total: 284139062
  TX-packets: 284139030      TX-dropped: 32            TX-total: 284139062
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Send traffic with 1600-byte packets.
[root@dell-per740-03 trafficgen]# ./binary-search.py --traffic-generator=trex-txrx --frame-size=1600 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=10 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --warmup-trial --warmup-trial-runtime=10 --rate=25 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=0 --use-dst-mac-flows=0  --send-teaching-measurement --send-teaching-warmup --teaching-warmup-packet-type=generic --teaching-warmup-packet-rate=10000 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=1 --use-dst-mac-flows=0 --use-device-stats
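Note that the verbose output above already shows 1596-byte frames (presumably the 1600-byte test frames with the 4-byte CRC stripped) being delivered even though --max-pkt-len=1500 was set. One way to double-check what limit the port actually applied, a hypothetical verification step that was not part of the original reproduction, is to read the MTU back from the PMD; recent testpmd builds should also print this value in the output of "show port info 0".

/*
 * Hypothetical check (not part of the original steps): read back the MTU the
 * PMD reports after configuration, to compare against the requested
 * --max-pkt-len.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static void report_rx_limit(uint16_t port_id)
{
    uint16_t mtu = 0;

    if (rte_eth_dev_get_mtu(port_id, &mtu) == 0)
        printf("port %u: MTU reported by the PMD = %u\n", port_id, mtu);
}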


Case 2: start testpmd with two VF ports
[root@dell-per730-56 ~]# dpdk-testpmd -l 0,1,2,3,4  -n 4 -a 0000:04:00.2 -a 0000:04:04.2 --socket-mem 4096,4096   -- -i --numa --max-pkt-len=1500
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:00.2 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:04.2 (socket 0)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:DE:AD:01:01:01
Configuring Port 1 (socket 0)
Port 1: 00:DE:AD:02:02:02
Checking link statuses...
Done
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 1) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd>set verbose 1
testpmd>
  src=00:00:00:02:D1:02 - dst=00:DE:AD:02:02:02 - type=0x0800 - length=1596 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group
  src=00:00:00:02:DD:02 - dst=00:DE:AD:02:02:02 - type=0x0800 - length=1596 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group
  src=00:00:00:02:E9:02 - dst=00:DE:AD:02:02:02 - type=0x0800 - length=1596 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group

Send traffic with 1600-byte packets.
[root@dell-per740-03 trafficgen]# ./binary-search.py --traffic-generator=trex-txrx --frame-size=1600 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=10 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --warmup-trial --warmup-trial-runtime=10 --rate=25 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=0 --use-dst-mac-flows=0 --src-macs=00:00:00:00:00:01,00:00:00:00:00:02 --dst-macs=00:de:ad:01:01:01,00:de:ad:02:02:02 --send-teaching-measurement --send-teaching-warmup --teaching-warmup-packet-type=generic --teaching-warmup-packet-rate=10000 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=1 --use-dst-mac-flows=0 --use-device-stats

Case 3: create a VF for each PF, start a guest with the VFs, and start testpmd inside the guest
[root@localhost ~]# dpdk-testpmd -l 0-2 -n 1 --socket-mem 1024 -- -i --forward-mode=mac --burst=32 --rxd=4096 --txd=4096 --max-pkt-len=9000 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --eth-peer=0,00:00:00:00:00:01 --eth-peer=1,00:00:00:00:00:02 --mbcache=512 --auto-start
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:03:00.0 (socket 0)
[  774.088320] mlx5_core 0000:03:00.0: mlx5_destroy_flow_table:2141:(pid 2637): Flow table 262152 wasn't destroyed, refcount > 1
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:00.0 (socket 0)
[  774.195886] mlx5_core 0000:04:00.0: mlx5_destroy_flow_table:2141:(pid 2637): Flow table 262152 wasn't destroyed, refcount > 1
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set mac packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=9728, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
[  774.882714] device enp3s0 left promiscuous mode
Port 0: 00:DE:AD:01:01:01
Configuring Port 1 (socket 0)
[  775.245239] device enp4s0 left promiscuous mode
Port 1: 00:DE:AD:02:02:02
Checking link statuses...
Done
[  775.435685] device enp3s0 entered promiscuous mode
[  775.526781] device enp4s0 entered promiscuous mode
Start automatic packet forwarding
mac packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=00:00:00:00:00:02
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=00:00:00:00:00:01

  mac packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=0
testpmd> 
testpmd> set verbose 1
  src=00:00:00:03:E1:01 - dst=00:DE:AD:01:01:01 - type=0x0800 - length=9196 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group
  src=00:00:00:03:ED:01 - dst=00:DE:AD:01:01:01 - type=0x0800 - length=9196 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group
  src=00:00:00:03:F9:01 - dst=00:DE:AD:01:01:01 - type=0x0800 - length=9196 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN 
restore info: - no tunnel info - no outer header - no miss group

Send traffic with 9200-byte packets.
[root@dell-per740-03 trafficgen]# ./binary-search.py --traffic-generator=trex-txrx --frame-size=9200 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=10 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --warmup-trial --warmup-trial-runtime=10 --rate=25 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=0 --use-dst-mac-flows=0 --src-macs=00:00:00:00:00:01,00:00:00:00:00:02 --dst-macs=00:de:ad:01:01:01,00:de:ad:02:02:02 --send-teaching-measurement --send-teaching-warmup --teaching-warmup-packet-type=generic --teaching-warmup-packet-rate=10000 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=1 --use-dst-mac-flows=0 --use-device-stats
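The same mismatch is visible here: with --max-pkt-len=9000, the 9196-byte frames in the verbose output are well over the limit yet are still forwarded. As a rough rule (an assumption about the usual Ethernet accounting, not something stated in this report), the largest acceptable frame is the MTU plus 18 bytes of L2 overhead:

#include <rte_ether.h>

/* Hypothetical helper: the largest Ethernet frame (header + payload + CRC) a
 * port should accept for a given MTU, assuming the usual 18-byte L2 overhead. */
static inline uint32_t max_acceptable_frame(uint16_t mtu)
{
    return (uint32_t)mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
}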


Actual results:
If the size of the traffic packet is greater than max-pkt-len, testpmd still receives and forwards the packet, so the max-pkt-len function does not work. Other drivers (such as i40e, ice, and bnxt_en) do not have this issue.

Expected results:
If the size of the traffic packet is greater than max-pkt-len, testpmd should not receive or forward the packet.
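One way an automated test could confirm the expected behavior is to compare the port's received-packet and error counters while only oversized traffic is offered. This is a sketch under the assumption that a correctly behaving NIC drops the frames at the port and surfaces them as Rx errors or misses, which is how the other drivers mentioned above appear to behave; it is not taken from this report.

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Sketch of a pass/fail check: with frames larger than max-pkt-len, no packets
 * should be delivered (ipackets stays at zero) and the oversized frames should
 * show up only as errors or misses. */
static int oversized_frames_are_dropped(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) != 0)
        return -1;

    printf("port %u: ipackets=%" PRIu64 " ierrors=%" PRIu64 " imissed=%" PRIu64 "\n",
           port_id, stats.ipackets, stats.ierrors, stats.imissed);

    /* Expected on a correctly behaving PMD while only oversized traffic runs. */
    return stats.ipackets == 0 ? 1 : 0;
}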

Additional info:

