Bug 2230308 - mlx5_core driver: testpmd fails to start when max-pkt-len is larger than 2100
Summary: mlx5_core driver: testpmd fails to start when max-pkt-len is larger than 2100
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: DPDK
Version: FDP 23.F
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Open vSwitch development team
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-09 09:45 UTC by liting
Modified: 2023-08-10 07:59 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker FD-3089 (last updated 2023-08-09 09:47:14 UTC)

Description liting 2023-08-09 09:45:58 UTC
Description of problem:
mlx5_core driver: testpmd fails to start when max-pkt-len is larger than 2100

Version-Release number of selected component (if applicable):
[root@dell-per730-56 ~]# uname -r
5.14.0-284.26.1.el9_2.x86_64

[root@dell-per730-56 ~]# rpm -qa|grep dpdk
dpdk-22.11-1.el9.x86_64
dpdk-tools-22.11-1.el9.x86_64

How reproducible:


Steps to Reproduce:
1. Create one VF on each of the two PFs
[root@dell-per730-56 ~]# echo 1 > /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/sriov_numvfs
[root@dell-per730-56 ~]# echo 1 > /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/sriov_numvfs
[root@dell-per730-56 ~]# ip a
6: enp4s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9200 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether ec:0d:9a:a0:1e:54 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off, query_rss off
7: enp4s0f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9200 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether ec:0d:9a:a0:1e:55 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off, query_rss off
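
The VF PCI addresses passed to testpmd in step 3 (0000:04:00.2 and 0000:04:04.2) can be confirmed from sysfs; a quick check, assuming the PF netdev names shown above:

readlink /sys/class/net/enp4s0f0np0/device/virtfn0   # resolves to the first VF's PCI address
readlink /sys/class/net/enp4s0f1np1/device/virtfn0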

2. Configure the MTU of the PF and VF ports to 9200
[root@dell-per730-56 ~]# ip link set  enp4s0f0np0 mtu 9200
[root@dell-per730-56 ~]# ip link set  enp4s0f1np1 mtu 9200
[root@dell-per730-56 ~]# ip link set dev enp4s0f0v0 mtu 9200
[root@dell-per730-56 ~]# ip link set dev enp4s0f1v0 mtu 9200
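
Before starting testpmd, the new MTU can be verified (a sanity check, not part of the original report):

ip link show dev enp4s0f0np0 | grep -o 'mtu [0-9]*'   # expect "mtu 9200"
ip link show dev enp4s0f0v0  | grep -o 'mtu [0-9]*'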

3. Start testpmd with --max-pkt-len=2100, --max-pkt-len=9200, and --max-pkt-len=2000
[root@dell-per730-56 ~]# dpdk-testpmd -l 0,1,2,3,4  -n 4 -a 0000:04:00.2 -a 0000:04:04.2 --socket-mem 4096,4096   -- -i --numa --max-pkt-len=2100
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:00.2 (socket 0)
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:04.2 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
mlx5_net: port 0 Rx queue 0: Scatter offload is not configured and no enough mbuf space(2176) to contain the maximum RX packet length(2100) with head-room(128)
mlx5_net: port 0 unable to allocate rx queue index 0
Fail to configure port 0 rx queues
EAL: Error - exiting with code: 1
  Cause: Start ports failed
Port 0 is closed
Port 1 is closed
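
The failure follows directly from the numbers in the log: testpmd creates mbufs with a data room of 2176 bytes, of which 128 bytes are reserved as headroom (RTE_PKTMBUF_HEADROOM), leaving 2176 - 128 = 2048 bytes of packet space per mbuf. With scatter offload disabled, a received frame must fit in a single mbuf, so mlx5 rejects any max-pkt-len above 2048; 2100 and 9200 (next run) both fail, while 2000 (third run) fits. A quick check of the arithmetic:

echo $((2176 - 128))   # 2048, the largest max-pkt-len that fits one default mbuf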

[root@dell-per730-56 ~]# dpdk-testpmd -l 0,1,2,3,4  -n 4 -a 0000:04:00.2 -a 0000:04:04.2 --socket-mem 4096,4096   -- -i --numa --max-pkt-len=9200
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:00.2 (socket 0)
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:04.2 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
mlx5_net: port 0 Rx queue 0: Scatter offload is not configured and no enough mbuf space(2176) to contain the maximum RX packet length(9200) with head-room(128)
mlx5_net: port 0 unable to allocate rx queue index 0
Fail to configure port 0 rx queues
EAL: Error - exiting with code: 1
  Cause: Start ports failed
Port 0 is closed
Port 1 is closed

[root@dell-per730-56 ~]# dpdk-testpmd -l 0,1,2,3,4  -n 4 -a 0000:04:00.2 -a 0000:04:04.2 --socket-mem 4096,4096   -- -i --numa --max-pkt-len=2000
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:00.2 (socket 0)
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:04:04.2 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 16:D9:63:AD:01:4D
Configuring Port 1 (socket 0)
Port 1: C2:B5:2A:38:1F:BD
Checking link statuses...
Done
testpmd> 


Actual results:
testpmd fails to start with --max-pkt-len=2100 (and likewise with --max-pkt-len=9200).
testpmd starts successfully with --max-pkt-len=2000.

Expected results:
testpmd should start successfully with --max-pkt-len=2100.

Additional info:
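Two possible workarounds, sketched here but not verified on this setup: enlarge the mbuf data room so a full frame fits in one buffer, or enable the scatter (multi-segment RX) offload so large frames can span several mbufs. Both are standard testpmd options:

# Larger mbufs: 9200-byte frame + 128-byte headroom = 9328-byte data room
dpdk-testpmd -l 0,1,2,3,4 -n 4 -a 0000:04:00.2 -a 0000:04:04.2 --socket-mem 4096,4096 -- -i --numa --max-pkt-len=9200 --mbuf-size=9328

# Multi-segment RX: keep default mbufs, let frames span several of them
dpdk-testpmd -l 0,1,2,3,4 -n 4 -a 0000:04:00.2 -a 0000:04:04.2 --socket-mem 4096,4096 -- -i --numa --max-pkt-len=9200 --enable-scatter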

