
Bug 2058456

Summary: qede card: testpmd as switch case run failed with dpdk-21.11-1.el9
Product: Red Hat Enterprise Linux Fast Datapath
Component: DPDK
DPDK sub component: other
Reporter: liting <tli>
Assignee: Maxime Coquelin <maxime.coquelin>
QA Contact: liting <tli>
CC: ctrautma, fleitner, jhsiao, ktraynor
Status: CLOSED EOL
Severity: medium
Priority: medium
Version: FDP 22.A
Hardware: Unspecified
OS: Unspecified
Last Closed: 2024-10-08 17:49:14 UTC
Type: Bug

Description liting 2022-02-25 03:23:08 UTC
Description of problem:


Version-Release number of selected component (if applicable):
[root@dell-per730-52 ~]# rpm -qa|grep dpdk
dpdk-21.11-1.el9.x86_64
[root@dell-per730-52 ~]# uname -r
5.14.0-58.el9.x86_64


How reproducible:
Run testpmd as a switch (see steps below)

Steps to Reproduce:
1. Bind the qede card to DPDK (vfio-pci)
[root@dell-per730-52 ~]# driverctl -v list-overrides
0000:82:00.0 vfio-pci (FastLinQ QL45000 Series 25GbE Controller (FastLinQ QL45212H 25GbE Adapter))
0000:82:00.1 vfio-pci (FastLinQ QL45000 Series 25GbE Controller (FastLinQ QL45212H 25GbE Adapter))
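
(The exact binding commands are not captured above; assuming driverctl was used, the overrides were presumably set along these lines before step 2:)

# hypothetical reconstruction of the binding step, one override per PCI function
driverctl set-override 0000:82:00.0 vfio-pci
driverctl set-override 0000:82:00.1 vfio-pci
driverctl -v list-overrides    # confirm both functions now report vfio-pci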

2. Run the following testpmd command on the host
[root@dell-per730-52 bash_perf_result]# /usr/bin/dpdk-testpmd -l 55,27,53 -n 4 --socket-mem 1024,1024 --vdev net_vhost0,iface=/tmp/vhost0,client=1,iommu-support=1,queues=1 --vdev net_vhost1,iface=/tmp/vhost1,client=1,iommu-support=1,queues=1 -- -i --nb-cores=2 --txq=1 --rxq=1 --forward-mode=io
EAL: Detected CPU lcores: 56
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_qede (1077:1656) device: 0000:82:00.0 (socket 1)
EAL: Probe PCI driver: net_qede (1077:1656) device: 0000:82:00.1 (socket 1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_1>: n=163456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)

Port 0: link state change event
Port 0: 00:0E:1E:D3:F1:B2
Configuring Port 1 (socket 1)

Port 1: link state change event
Port 1: 00:0E:1E:D3:F1:B3
Configuring Port 2 (socket 1)

Port 1: link state change event
VHOST_CONFIG: vhost-user client: socket created, fd: 34
VHOST_CONFIG: new device, handle is 0, path is /tmp/vhost0
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 1)
VHOST_CONFIG: vhost-user client: socket created, fd: 38
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: new device, handle is 1, path is /tmp/vhost1
Port 3: 56:48:4F:53:54:03
Checking link statuses...
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:40
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:41
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x150200000
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region size: 0x80000000
	 guest physical addr: 0x0
	 guest virtual  addr: 0x7fc340000000
	 host  virtual  addr: 0x7fef00000000
	 mmap addr : 0x7fef00000000
	 mmap size : 0x80000000
	 mmap align: 0x40000000
	 mmap off  : 0x0
VHOST_CONFIG: guest memory region size: 0x180000000
	 guest physical addr: 0x100000000
	 guest virtual  addr: 0x7fc3c0000000
	 host  virtual  addr: 0x7fed80000000
	 mmap addr : 0x7fed00000000
	 mmap size : 0x200000000
	 mmap align: 0x40000000
	 mmap off  : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:44
VHOST_CONFIG: reallocated device on node 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:45
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:-1
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:40
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:45
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:-1
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0

Port 2: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1

Port 2: queue state event
VHOST_CONFIG: virtio is now ready for processing.

Port 2: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:45
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:46
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x150200000
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region size: 0x80000000
	 guest physical addr: 0x0
	 guest virtual  addr: 0x7fc340000000
	 host  virtual  addr: 0x7fec80000000
	 mmap addr : 0x7fec80000000
	 mmap size : 0x80000000
	 mmap align: 0x40000000
	 mmap off  : 0x0
VHOST_CONFIG: guest memory region size: 0x180000000
	 guest physical addr: 0x100000000
	 guest virtual  addr: 0x7fc3c0000000
	 host  virtual  addr: 0x7feb00000000
	 mmap addr : 0x7fea80000000
	 mmap size : 0x200000000
	 mmap align: 0x40000000
	 mmap off  : 0x80000000
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:49
VHOST_CONFIG: reallocated device on node 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:50
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:-1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:45
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:50
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:-1
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0

Port 3: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1

Port 3: queue state event
VHOST_CONFIG: virtio is now ready for processing.

Port 3: link state change event
Done
testpmd> set portlist 0,2,1,3
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd> set portlist 0,2,1,3
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 2  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 3  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
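
(Reading of the topology, not stated explicitly above: ports 0/1 are the qede physical ports and ports 2/3 are the vhost-user ports, so "set portlist 0,2,1,3" pairs 0<->2 and 1<->3 in io forwarding mode. A quick sanity check from the testpmd prompt, not part of the original log, would be:)

testpmd> show config fwd    # should list the forwarding streams 0<->2 and 1<->3
testpmd> start              # forwarding has to be started before the traffic run in step 4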

3. Start the guest and run testpmd inside it
[root@localhost ~]# dpdk-testpmd -l 0-2 -n 1 --socket-mem 1024 -- -i --forward-mode=io --burst=32 --rxd=8192 --txd=8192 --max-pkt-len=9600 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512  --auto-start
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket 0)
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=9728, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 25
Port 0: 00:DE:AD:00:00:01
Configuring Port 1 (socket 0)
EAL: Error disabling MSI-X interrupts for fd 29
Port 1: 00:DE:AD:00:00:02
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=8192 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=8192 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=8192 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=8192 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> 
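
(Side note: the "Failed to init PCI device" on 0000:02:00.0 and the MSI-X warnings may or may not be related to the failure; assuming dpdk-tools is installed in the guest, the virtio device bindings can be double-checked with:)

dpdk-devbind.py --status
lspci -nn | grep -i virtio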

4. Use TRex (binary-search.py) to send traffic
[root@dell-per730-53 trafficgen]# ./binary-search.py --traffic-generator=trex-txrx --frame-size=64 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=60 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --rate=25 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=1 --use-dst-mac-flows=1 --send-teaching-measurement --send-teaching-warmup --teaching-warmup-packet-type=generic --teaching-warmup-packet-rate=1000 --warmup-trial --warmup-trial-runtime=10
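
(For scale, a rough estimate of the offered load, assuming a 25GbE link and the 64-byte frame size above:)

# 64B frame + 20B preamble/IFG = 84B = 672 bits per frame on the wire
# 25 Gbit/s * 0.25 (--rate=25 --rate-unit=%) / 672 bits ~= 9.3 Mpps per direction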


Actual results:
The measured throughput is 0 pps.
The testpmd output on the host shows the physical ports dropping many packets:
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 17910      RX-missed: 104201048  RX-bytes:  1146240
  RX-errors: 0
  RX-nombuf:  89314377  
  TX-packets: 32760      TX-errors: 0          TX-bytes:  2096640

  Throughput (since last show)
  Rx-pps:           36          Rx-bps:        18672
  Tx-pps:           66          Tx-bps:        34152
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 32760      RX-missed: 104186198  RX-bytes:  2096640
  RX-errors: 0
  RX-nombuf:  89297145  
  TX-packets: 17910      TX-errors: 0          TX-bytes:  1146240

  Throughput (since last show)
  Rx-pps:           66          Rx-bps:        34152
  Tx-pps:           36          Tx-bps:        18672
  ############################################################################

  ######################## NIC statistics for port 2  ########################
  RX-packets: 32760      RX-missed: 0          RX-bytes:  1965600
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 17910      TX-errors: 0          TX-bytes:  1074600

  Throughput (since last show)
  Rx-pps:           66          Rx-bps:        32024
  Tx-pps:           36          Tx-bps:        17504
  ############################################################################

  ######################## NIC statistics for port 3  ########################
  RX-packets: 17910      RX-missed: 0          RX-bytes:  1074600
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 32760      TX-errors: 0          TX-bytes:  1965600

  Throughput (since last show)
  Rx-pps:           36          Rx-bps:        17504
  Tx-pps:           66          Tx-bps:        32024
  ############################################################################
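
(Interpretation, based only on the counters above: the qede ports 0 and 1 show very large RX-missed and RX-nombuf counts while the vhost ports 2 and 3 drop nothing, which suggests the host side is not draining the physical RX rings or replenishing mbufs fast enough. If this is re-tested, extended statistics may narrow it down, e.g.:)

testpmd> show port xstats all
testpmd> show fwd stats all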


Expected results:
The testpmd-as-switch case works, forwarding traffic without loss.

Additional info:

Comment 1 liting 2022-02-28 08:18:39 UTC
The case also fails with dpdk-21.11-1.el8.x86_64:
https://beaker.engineering.redhat.com/jobs/6349188

Comment 2 liting 2022-02-28 11:58:22 UTC
The SR-IOV case also fails:
https://beaker.engineering.redhat.com/jobs/6349912
It works with dpdk-20.11-3.el8.x86_64:
https://beaker.engineering.redhat.com/jobs/6349706

Comment 3 Flavio Leitner 2023-06-14 17:35:40 UTC
Maxime, could you please take a look at this issue? Thanks, fbl.

Comment 5 liting 2023-10-12 10:09:38 UTC
The issue still occurs with dpdk-21.11-3.el8.x86_64:
https://beaker.engineering.redhat.com/jobs/8418053

Comment 6 ovs-bot 2024-10-08 17:49:14 UTC
This bug did not meet the criteria for automatic migration and is being closed.
If the issue remains, please open a new ticket in https://issues.redhat.com/browse/FDP