
Bug 2057833

Summary: bnxt_en: testpmd as switch case run failed with dpdk-21.11
Product: Red Hat Enterprise Linux Fast Datapath
Reporter: liting <tli>
Component: DPDK
Assignee: David Marchand <dmarchan>
DPDK sub component: other
QA Contact: liting <tli>
Status: CLOSED CURRENTRELEASE
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: ctrautma, dmarchan, jhsiao, ktraynor
Version: FDP 22.A
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-10-12 04:22:23 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description liting 2022-02-24 06:45:12 UTC
Description of problem:
bnxt_en: testpmd as switch case run failed with dpdk-21.11

Version-Release number of selected component (if applicable):
[root@netqe22 perf]# rpm -qa|grep dpdk
dpdk-21.11-1.el8.x86_64
[root@netqe22 perf]# uname -r
4.18.0-365.el8.x86_64

[root@netqe22 perf]# ethtool -i enp130s0f0np0
driver: bnxt_en
version: 4.18.0-365.el8.x86_64
firmware-version: 20.6.143.0/pkg 20.06.04.06
expansion-rom-version: 
bus-info: 0000:82:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no


How reproducible:


Steps to Reproduce:
Run testpmd as the switch case.
1. Bind the two ports to DPDK (a sketch for double-checking the binding follows the log below):
[root@netqe22 perf]# driverctl -v set-override 0000:82:00.0 vfio-pci
[root@netqe22 perf]# driverctl -v set-override 0000:82:00.1 vfio-pci
2. Run the following testpmd command:
[root@netqe22 perf]# /usr/bin/dpdk-testpmd -l 47,23,45 -n 4 --socket-mem 1024,1024 --vdev net_vhost0,iface=/tmp/vhost0,client=1,iommu-support=1,queues=1 --vdev net_vhost1,iface=/tmp/vhost1,client=1,iommu-support=1,queues=1 -- -i --nb-cores=2 --txq=1 --rxq=1 --forward-mode=io
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_bnxt (14e4:16d7) device: 0000:82:00.0 (socket 1)
ethdev initialisation failed
EAL: Releasing PCI mapped resource for 0000:82:00.0
EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x2200000000
EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x2200010000
EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x2200110000
EAL: Requested device 0000:82:00.0 cannot be used
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_bnxt (14e4:16d7) device: 0000:82:00.1 (socket 1)
ethdev initialisation failed
EAL: Releasing PCI mapped resource for 0000:82:00.1
EAL: Calling pci_unmap_resource for 0000:82:00.1 at 0x2200112000
EAL: Calling pci_unmap_resource for 0000:82:00.1 at 0x2200122000
EAL: Calling pci_unmap_resource for 0000:82:00.1 at 0x2200222000
EAL: Requested device 0000:82:00.1 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_1>: n=163456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
VHOST_CONFIG: vhost-user client: socket created, fd: 27
VHOST_CONFIG: new device, handle is 0, path is /tmp/vhost0
Port 0: 56:48:4F:53:54:00
Configuring Port 1 (socket 1)
VHOST_CONFIG: vhost-user client: socket created, fd: 31
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: new device, handle is 1, path is /tmp/vhost1
Port 1: 56:48:4F:53:54:01
Checking link statuses...
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:33
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:34
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:36
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:37
Done
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
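
(Not part of the original report: before launching testpmd, the vfio-pci binding from step 1 can be double-checked with driverctl and sysfs. A minimal sketch, assuming the same PCI addresses as above.)

# Hedged check: both ports should report a vfio-pci override.
driverctl list-overrides
# expected: 0000:82:00.0 vfio-pci and 0000:82:00.1 vfio-pci
readlink /sys/bus/pci/devices/0000:82:00.0/driver
# the symlink should point at .../drivers/vfio-pci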



Actual results:
testpmd should come up with four ports, but the two physical bnxt ports fail to probe ("ethdev initialisation failed"), so only the two vhost ports are present.
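
(A possible next debugging step, not something run in this report: the probe failure prints no detail at the default log level, so re-running with bnxt PMD debug logging raised via EAL's --log-level option may show why ethdev initialisation fails. The exact logtype pattern is an assumption here.)

# Hedged sketch: same command as above, with bnxt driver logs raised to debug.
/usr/bin/dpdk-testpmd -l 47,23,45 -n 4 --socket-mem 1024,1024 \
    --log-level='pmd.net.bnxt.*:debug' \
    --vdev net_vhost0,iface=/tmp/vhost0,client=1,iommu-support=1,queues=1 \
    --vdev net_vhost1,iface=/tmp/vhost1,client=1,iommu-support=1,queues=1 \
    -- -i --nb-cores=2 --txq=1 --rxq=1 --forward-mode=io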

Expected results:
The same command works with dpdk-20.11-3.el8, where all four ports are probed:
[root@netqe22 perf]# /usr/bin/dpdk-testpmd -l 47,23,45 -n 4 --socket-mem 1024,1024 --vdev net_vhost0,iface=/tmp/vhost0,client=1,iommu-support=1,queues=1 --vdev net_vhost1,iface=/tmp/vhost1,client=1,iommu-support=1,queues=1 -- -i --nb-cores=2 --txq=1 --rxq=1 --forward-mode=io
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_bnxt (14e4:16d7) device: 0000:82:00.0 (socket 1)
EAL: Probe PCI driver: net_bnxt (14e4:16d7) device: 0000:82:00.1 (socket 1)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_1>: n=163456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)

Port 0: link state change event
Port 0: 00:0A:F7:B7:09:50
Configuring Port 1 (socket 1)

Port 1: link state change event
Port 1: 00:0A:F7:B7:09:51
Configuring Port 2 (socket 1)
VHOST_CONFIG: vhost-user client: socket created, fd: 38
VHOST_CONFIG: new device, handle is 0
Port 2: 56:48:4F:53:54:02
Configuring Port 3 (socket 1)
VHOST_CONFIG: vhost-user client: socket created, fd: 41
VHOST_CONFIG: new device, handle is 1
Port 3: 56:48:4F:53:54:03
Checking link statuses...
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:43
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:44
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:46
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:47
Done
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1          RX-missed: 0          RX-bytes:  110
  RX-errors: 12001282
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 1          RX-missed: 0          RX-bytes:  110
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 2  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 3  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> 


Additional info:
https://beaker.engineering.redhat.com/jobs/6335929

Comment 1 liting 2022-02-24 09:26:12 UTC
The SR-IOV case also fails.
testpmd command inside the guest:
dpdk-testpmd -l 0-2 -n 1 --socket-mem 1024 -- -i --forward-mode=mac --burst=32 --rxd=4096 --txd=4096 --max-pkt-len=9120 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --eth-peer=0,00:00:00:00:00:01 --eth-peer=1,00:00:00:00:00:02 --mbcache=512 --auto-start
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:02:00.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:02:00.0 cannot be used
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_bnxt (14e4:16dc) device: 0000:03:00.0 (socket 0)
ethdev initialisation failed
EAL: Releasing PCI mapped resource for 0000:03:00.0
EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x1180000000
EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x1180004000
EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x1180104000
EAL: Requested device 0000:03:00.0 cannot be used
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_bnxt (14e4:16dc) device: 0000:04:00.0 (socket 0)
ethdev initialisation failed
EAL: Releasing PCI mapped resource for 0000:04:00.0
EAL: Calling pci_unmap_resource for 0000:04:00.0 at 0x1180108000
EAL: Calling pci_unmap_resource for 0000:04:00.0 at 0x118010c000
EAL: Calling pci_unmap_resource for 0000:04:00.0 at 0x118020c000
EAL: Requested device 0000:04:00.0 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
testpmd: No probed ethernet devices
Fail: input rxq (1) can't be greater than max_rx_queues (0) of port 0
EAL: Error - exiting with code: 1
  Cause: rxq 1 invalid - must be >= 0 && <= 0
Interactive-mode selected
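
(An added sanity check, not from the original run: lspci inside the guest can confirm the devices that testpmd tried to probe, using the vendor:device IDs from the EAL log above.)

# Hedged sketch: list the guest NICs matching the probed PCI IDs.
lspci -nnk -d 1af4:1041   # virtio-net device (0000:02:00.0)
lspci -nnk -d 14e4:16dc   # bnxt VFs (0000:03:00.0, 0000:04:00.0)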
  

dpdk-21.11-1.el9 also has this issue:
https://beaker.engineering.redhat.com/jobs/6336885

Comment 2 David Marchand 2022-02-28 15:45:45 UTC
It seems to be the same issue as bz2055531.

Comment 4 liting 2023-10-11 08:07:56 UTC
Reviewed the bnxt_en jobs on anl154:
For dpdk-22.11-3.el9_2.x86_64, the testpmd-as-switch case works well.
https://beaker.engineering.redhat.com/jobs/8174594
For dpdk-22.11-1.el9.x86_64, the SR-IOV case works well.
https://beaker.engineering.redhat.com/jobs/8152617
For dpdk-20.11-4.el8_4.x86_64, the SR-IOV case works well.
https://beaker.engineering.redhat.com/jobs/8164165