Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
The FDP team is no longer accepting new bugs in Bugzilla. Please report your issues under the FDP project in Jira. Thanks.

Bug 2133956

Summary: [BCM57504] Adding a 25G bnxt_en driver card to an OVS-DPDK bridge fails
Product: Red Hat Enterprise Linux Fast Datapath
Component: openvswitch
Sub component: ovs-dpdk
Reporter: liting <tli>
Assignee: Timothy Redaelli <tredaelli>
QA Contact: qding
Status: CLOSED NOTABUG
Severity: unspecified
Priority: unspecified
CC: ctrautma, fleitner, jhsiao, ktraynor
Version: FDP 22.I
Hardware: Unspecified
OS: Unspecified
Last Closed: 2024-03-12 14:56:46 UTC
Type: Bug

Description liting 2022-10-12 00:45:26 UTC
Description of problem:
[BCM57504] Adding a 25G bnxt_en driver card to an OVS-DPDK bridge fails

Version-Release number of selected component (if applicable):
FDP 22.I
RHEL 8: OVS 2.15, OVS 2.16
RHEL 9: OVS 2.17

How reproducible:


Steps to Reproduce:
1. Run the OVS-DPDK PVP performance test case


Actual results:
56572fd1-3ba7-4232-b4f3-3e05905cba94
    Bridge ovsbr0
        datapath_type: netdev
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:ca:00.0", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
                error: "Error attaching device '0000:ca:00.0' to DPDK"
        Port vhost1
            Interface vhost1
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost1"}
        Port vhost0
            Interface vhost0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost0"}
        Port dpdk1
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:ca:00.1", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
                error: "Error attaching device '0000:ca:00.1' to DPDK"
    ovs_version: "2.15.6"

Adding the bnxt_en driver card to the OVS-DPDK bridge fails; the ovs-vswitchd log shows:
2022-10-11T14:18:13.847Z|00066|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/tmp/vhostuser/vhost1' changed to 'enabled'
2022-10-11T14:18:13.847Z|00067|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2022-10-11T14:18:13.847Z|00068|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2022-10-11T14:18:13.860Z|00343|dpdk|INFO|EAL: PCI device 0000:ca:00.0 on NUMA socket 1
2022-10-11T14:18:13.860Z|00344|dpdk|INFO|EAL:   probe driver: 14e4:1751 net_bnxt
2022-10-11T14:18:13.860Z|00345|dpdk|ERR|EAL:   0000:ca:00.0 VFIO group is not viable! Not all devices in IOMMU group bound to VFIO or unbound
2022-10-11T14:18:13.860Z|00346|dpdk|ERR|EAL: Driver cannot attach the device (0000:ca:00.0)
2022-10-11T14:18:13.860Z|00347|dpdk|ERR|EAL: Failed to attach device on primary process
2022-10-11T14:18:13.860Z|00348|netdev_dpdk|WARN|Error attaching device '0000:ca:00.0' to DPDK
2022-10-11T14:18:13.860Z|00349|netdev|WARN|dpdk0: could not set configuration (Invalid argument)
2022-10-11T14:18:13.860Z|00069|netdev_dpdk|INFO|State of queue 1 ( rx_qid 0 ) of vhost device '/tmp/vhostuser/vhost1' changed to 'enabled'
2022-10-11T14:18:13.860Z|00350|dpdk|ERR|Invalid port_id=128

Expected results:
The bnxt_en card is added to the OVS-DPDK bridge successfully.

Additional info:
https://beaker.engineering.redhat.com/jobs/7098337
https://beaker.engineering.redhat.com/jobs/7098336
https://beaker.engineering.redhat.com/jobs/7096390

[root@dell-per750-37 ~]# ethtool -i ens7f0np0
driver: bnxt_en
version: 5.14.0-70.26.1.el9_0.x86_64
firmware-version: 216.4.16.8/pkg 216.0.333.11
expansion-rom-version: 
bus-info: 0000:ca:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

[root@dell-per750-37 ~]# lspci|grep BCM57504
ca:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
ca:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
ca:00.2 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
ca:00.3 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)

Comment 1 liting 2022-10-15 01:03:34 UTC
It is a 4x25G Broadcom card, and I used two of its ports to run the DPDK test. The following is a workaround: bind all four ports to DPDK, not only the two ports under test. After that, the two DPDK ports can be added to the OVS-DPDK bridge successfully.
driverctl set-override 0000:ca:00.0 vfio-pci
driverctl set-override 0000:ca:00.1 vfio-pci
driverctl set-override 0000:ca:00.2 vfio-pci
driverctl set-override 0000:ca:00.3 vfio-pci
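A quick way to confirm which driver each function of the card currently uses is to read the `driver` symlink under sysfs. The sketch below is illustrative only (the function wrapper and the `sysfs` parameter are added here for testability; on a real host it is simply called against `/sys`):

```shell
#!/bin/sh
# Print the driver currently bound to each function of the BCM57504 card.
# The sysfs root is parameterized purely for illustration; it is /sys on a
# real system.
show_bindings() {
    sysfs="${1:-/sys}"
    for fn in 0 1 2 3; do
        dev="0000:ca:00.${fn}"
        drv="${sysfs}/bus/pci/devices/${dev}/driver"
        if [ -e "${drv}" ]; then
            # "driver" is a symlink into the bound driver's sysfs directory.
            echo "${dev}: $(basename "$(readlink -f "${drv}")")"
        else
            echo "${dev}: no driver bound"
        fi
    done
}

show_bindings /sys
```

After the four `driverctl set-override` commands above, all four functions should report `vfio-pci`.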

Comment 2 qding 2022-11-17 09:02:08 UTC
To attach only some of the ports to DPDK, the other devices in the same IOMMU group must be bound to vfio-pci too.

[root@dell-per750-37 perf]# ls /sys/bus/pci/devices/0000:ca:00.0/iommu_group/devices
0000:ca:00.0  0000:ca:00.1  0000:ca:00.2  0000:ca:00.3
[root@dell-per750-37 perf]#
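Based on the listing above, this check can be automated: enumerate the IOMMU group of a given function and emit the `driverctl` command for every member. This is a hedged sketch (the helper function name and the `sysfs` parameter are hypothetical, and the commands are echoed as a dry run rather than executed):

```shell
#!/bin/sh
# Dry run: for a given PCI function, print the driverctl command that would
# bind each device in its IOMMU group to vfio-pci. The sysfs root is
# parameterized purely for illustration; it is /sys on a real system.
bind_group_cmds() {
    sysfs="${1:-/sys}"
    pci="${2}"
    for dev in "${sysfs}/bus/pci/devices/${pci}/iommu_group/devices"/*; do
        # Skip the unexpanded glob pattern when the group does not exist.
        [ -e "${dev}" ] || continue
        echo "driverctl set-override $(basename "${dev}") vfio-pci"
    done
}

bind_group_cmds /sys 0000:ca:00.0
```

On the reporter's host this would emit exactly the four commands from the workaround in comment 1, since all four functions share one IOMMU group.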

Comment 3 Flavio Leitner 2024-03-12 14:56:46 UTC
I don't think we can split the ports of this card across different drivers.
It is a hardware limitation.
It may be possible with a newer architecture/system/driver.