Description of problem:

[BCM57504] Adding a 25G bnxt_en driver card to an OVS-DPDK bridge fails.

Version-Release number of selected component (if applicable):

fdp22.I
RHEL 8: OVS 2.15, OVS 2.16
RHEL 9: OVS 2.17

How reproducible:

Steps to Reproduce:
1. Run the OVS-DPDK PVP performance case (an assumed ovs-vsctl setup is sketched below, after the Additional info section).

Actual results:

56572fd1-3ba7-4232-b4f3-3e05905cba94
    Bridge ovsbr0
        datapath_type: netdev
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:ca:00.0", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
                error: "Error attaching device '0000:ca:00.0' to DPDK"
        Port vhost1
            Interface vhost1
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost1"}
        Port vhost0
            Interface vhost0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost0"}
        Port dpdk1
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:ca:00.1", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
                error: "Error attaching device '0000:ca:00.1' to DPDK"
    ovs_version: "2.15.6"

Adding the bnxt_en driver card to the OVS-DPDK bridge failed; ovs-vswitchd log excerpt:

2022-10-11T14:18:13.847Z|00066|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/tmp/vhostuser/vhost1' changed to 'enabled'
2022-10-11T14:18:13.847Z|00067|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2022-10-11T14:18:13.847Z|00068|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2022-10-11T14:18:13.860Z|00343|dpdk|INFO|EAL: PCI device 0000:ca:00.0 on NUMA socket 1
2022-10-11T14:18:13.860Z|00344|dpdk|INFO|EAL: probe driver: 14e4:1751 net_bnxt
2022-10-11T14:18:13.860Z|00345|dpdk|ERR|EAL: 0000:ca:00.0 VFIO group is not viable! Not all devices in IOMMU group bound to VFIO or unbound
2022-10-11T14:18:13.860Z|00346|dpdk|ERR|EAL: Driver cannot attach the device (0000:ca:00.0)
2022-10-11T14:18:13.860Z|00347|dpdk|ERR|EAL: Failed to attach device on primary process
2022-10-11T14:18:13.860Z|00348|netdev_dpdk|WARN|Error attaching device '0000:ca:00.0' to DPDK
2022-10-11T14:18:13.860Z|00349|netdev|WARN|dpdk0: could not set configuration (Invalid argument)
2022-10-11T14:18:13.860Z|00069|netdev_dpdk|INFO|State of queue 1 ( rx_qid 0 ) of vhost device '/tmp/vhostuser/vhost1' changed to 'enabled'
2022-10-11T14:18:13.860Z|00350|dpdk|ERR|Invalid port_id=128

Expected results:

The bnxt_en card is added to the OVS-DPDK bridge successfully.

Additional info:

https://beaker.engineering.redhat.com/jobs/7098337
https://beaker.engineering.redhat.com/jobs/7098336
https://beaker.engineering.redhat.com/jobs/7096390

[root@dell-per750-37 ~]# ethtool -i ens7f0np0
driver: bnxt_en
version: 5.14.0-70.26.1.el9_0.x86_64
firmware-version: 216.4.16.8/pkg 216.0.333.11
expansion-rom-version:
bus-info: 0000:ca:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

[root@dell-per750-37 ~]# lspci | grep BCM57504
ca:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
ca:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
ca:00.2 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
ca:00.3 Ethernet controller: Broadcom Inc. and subsidiaries BCM57504 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
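For reference, a minimal sketch of the ovs-vsctl commands that would produce the bridge layout shown in the Actual results above. The bridge, port names, and port options are taken from that output; enabling dpdk-init is an assumption about how the PVP test configures OVS and may differ from the actual test harness.

# Hedged reproduction sketch; dpdk-init and any EAL options are assumptions
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:ca:00.0 options:n_rxq=1 \
    options:n_rxq_desc=1024 options:n_txq_desc=1024
ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk \
    options:dpdk-devargs=0000:ca:00.1 options:n_rxq=1 \
    options:n_rxq_desc=1024 options:n_txq_desc=1024
ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhostuser/vhost0
ovs-vsctl add-port ovsbr0 vhost1 -- set Interface vhost1 type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhostuser/vhost1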
It is a 4x25G Broadcom card, and I used two of its ports for the DPDK test. The following is a workaround: bind all four ports to vfio-pci, not only the two ports used for DPDK. After that, the two dpdk ports can be added to the OVS-DPDK bridge successfully.

driverctl set-override 0000:ca:00.0 vfio-pci
driverctl set-override 0000:ca:00.1 vfio-pci
driverctl set-override 0000:ca:00.2 vfio-pci
driverctl set-override 0000:ca:00.3 vfio-pci
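To confirm the workaround took effect before rerunning the test, the bindings can be inspected with standard tools; a short sketch (exact output formatting varies by driverctl and pciutils version):

# List devices that currently have a driverctl override
driverctl list-overrides
# Show the kernel driver actually in use for each function of the card
lspci -nnk -s ca:00.0
lspci -nnk -s ca:00.1
lspci -nnk -s ca:00.2
lspci -nnk -s ca:00.3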
To attach only some of the ports to DPDK, the other devices in the same IOMMU group must also be bound to vfio-pci (or unbound). All four functions of this card share one IOMMU group:

[root@dell-per750-37 perf]# ls /sys/bus/pci/devices/0000:ca:00.0/iommu_group/devices
0000:ca:00.0  0000:ca:00.1  0000:ca:00.2  0000:ca:00.3
[root@dell-per750-37 perf]#
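More generally, the set of functions that must move together can be read from the iommu_group directory shown above. A minimal sketch of a loop that applies the workaround to every device in the group of 0000:ca:00.0 (assumes driverctl is installed; the loop itself is not taken from the original report):

# Bind every PCI function in the same IOMMU group as 0000:ca:00.0 to vfio-pci
for dev in /sys/bus/pci/devices/0000:ca:00.0/iommu_group/devices/*; do
    driverctl set-override "$(basename "$dev")" vfio-pci
done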