Bug 1471943 - Support for bnxt PMD for OVS-DPDK vswitch OVS 2.9 + DPDK 17.11 [NEEDINFO]
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch-dpdk
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: ---
Assigned To: Rashid Khan
QA Contact: Extras
Depends On: 1470370
Blocks: 1438583 1445812 1514088
 
Reported: 2017-07-17 14:02 EDT by Andy Gospodarek
Modified: 2018-01-12 16:17 EST
CC: 17 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1470370
Cloned To: 1514088
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
gospo: needinfo? (atelang)


Attachments

None
Description Andy Gospodarek 2017-07-17 14:02:47 EDT
+++ This bug was initially created as a clone of Bug #1470370 +++

DPDK has supported the bnxt PMD since 16.07.

Filing this to be sure that the DPDK included in RHEL7.5 has support for the bnxt PMD.
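
(Editor's hedged aside, not part of the original report: a quick way to confirm that a given DPDK tree or build enables the driver is to check the build-time option and, for a binary build, look for the PMD library. The package and library names below are assumptions and may differ on RHEL, where DPDK is typically linked statically into the Open vSwitch package.)

$ # In a DPDK source tree (legacy build system), the bnxt PMD is enabled
$ # when this option is set to 'y' (it defaults to 'y' upstream):
$ grep CONFIG_RTE_LIBRTE_BNXT_PMD config/common_base
$ # For an installed package (names assumed), look for the bnxt PMD object:
$ rpm -ql dpdk | grep -i bnxt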
Comment 3 Andy Gospodarek 2018-01-09 16:51:11 EST
Why was this moved from being an OpenStack bug to being just a RHEL7 bug?
Comment 4 Andy Gospodarek 2018-01-09 16:52:03 EST
I realized today that there is some confusion about DPDK and OVS+DPDK support for the bnxt PMD in RHEL7.

The latest DPDK Stable Tree for 16.11[1] contains support for bnxt PCI IDs[2] that should satisfy most mutual customer requests.  The bnxt PMD has also supported a VFIO backend since it was first included upstream as a supported PMD.  We are requesting that this be pulled into the Open vSwitch package for the RHEL7 Fast Datapath channel.

We are currently working to gather full PVP test results, with the expectation that those results will lead to support for the bnxt PMD in the OVS+DPDK packages in the Fast Datapath channel.  The team had some issues getting TRex running since it also did not include support for the bnxt PMD, but was able to use another vendor's adapter as the transmitter.  Tests show higher throughput with the L3 OVS+DPDK datapath than with the L3 OVS+kernel datapath, so this is a positive sign.

Thanks!

1. http://dpdk.org/browse/dpdk-stable/log/?h=16.11
2. http://dpdk.org/browse/dpdk-stable/commit/?h=16.11&id=6c2d431164f3d8374fc9ce4396746495b46cab95
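
(Editor's hedged sketch of the configuration this request targets; these are standard OVS 2.9 commands rather than anything from the original comment. The bridge and port names are illustrative, and the PCI address is the BCM57414 one that appears in the testpmd log in comment 8.)

$ # Enable DPDK in ovs-vswitchd and attach a bnxt device by PCI address
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
$ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
$ ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
      options:dpdk-devargs=0000:01:00.0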
Comment 5 Marcelo Ricardo Leitner 2018-01-10 07:35:28 EST
(In reply to Andy Gospodarek from comment #3)
> Why was this moved from being an OpenStack bug to being just a RHEL7 bug?

Hi Gospo. I won't clear the needinfo so atelang can confirm whether this is really the reason, but it's likely because OpenStack doesn't carry its own kernel patches; instead it just consumes what is in RHEL. So for something to get into OpenStack, it has to be added to RHEL first, and when OpenStack rebases to a newer set of packages it will pull in the fix. Hope that makes sense.
Comment 6 Anita Tragler 2018-01-12 14:30:22 EST
(In reply to Andy Gospodarek from comment #4)
> I realized today that there is some confusion about DPDK and OVS+DPDK
> support for the bnxt PMD in RHEL7.
> 
> The latest DPDK Stable Tree for 16.11[1] contains support for bnxt PCI
> IDs[2] that should satisfy most mutual customer requests.  It also has
> supported a VFIO backend since it was included upstream as a supported PMD. 
> We are requesting that this is pulled into the Open vSwitch package for
> RHEL7 Fast Datapath channel.
> 
> We are currently working to gather full PVP test results with the
> expectation that having those results will result in support for bnxt PMD in
> OVS+DPDK included in the Fast Datapath Channel.  The team had some issues
> getting TRex running since it also did not include support for the bnxt PMD,
> but the team was able to use it with another vendor's adapter as the
> transmitter.  Tests show an increase in throughput between the L3 OVS+DPDK
> datapath and L3 OVS+kernel datapath, so this is a positive sign.
> 
> Thanks!
> 
> 1. http://dpdk.org/browse/dpdk-stable/log/?h=16.11
> 2. http://dpdk.org/browse/dpdk-stable/commit/?h=16.11&id=6c2d431164f3d8374fc9ce4396746495b46cab95

Hi Andy,
It appears there are two bugzillas requesting bnxt in OVS-DPDK: Bug 1471943 and Bug 1518914. I'm using this older BZ for OVS 2.9 + DPDK 17.11 (RHEL 7.5, RHOSP13); perhaps you can modify the other one, Bug 1518914, to request OVS 2.6 + DPDK 16.11 (RHOSP10).
I believe your DPDK 16.11 patch for VFIO support ("support for PCI IDs", commit 6c2d431164f3d8374fc9ce4396746495b46cab95) missed the DPDK 16.11.4 build by a few days and hence missed the opportunity for OVS 2.7 inclusion in RHOSP12.
We do Fast Datapath patch builds every 6 weeks if the need arises. We can consider Bug 1518914 for the next patch in the March/April timeframe.
Thanks
-anita
Anita Tragler
Product Manager Networking/NFV Platform
Comment 7 Andy Gospodarek 2018-01-12 15:50:42 EST
(In reply to Anita Tragler from comment #6)
> 
> Hi Andy,
> It appears there are 2 bugzillas requesting bnxt in OVS-DPDK Bug 1471943,
> Bug 1518914. I'm using this older BZ for OVS 2.9 + DPDK 17.11 (RHEL 7.5,
> RHOSP13) and perhaps you can modify the other Bug 1518914 to request OVS 2.6
> +DPDK 16.11 (RHOSP10).

After the call yesterday, I realized I may have put an incorrect comment in at least one BZ.  I'll double-check all the ones I have opened to make sure they are correct.

> I believe your DPDK 16.11 patch for VFIO support "support for PCI IDs commit
> 6c2d431164f3d8374fc9ce4396746495b46cab95" missed DPDK 16.11.4 build by a few
> days and hence missed opportunity for OVS 2.7 inclusion for RHSOP12.
> We do Fast datapath patch builds every 6 weeks if the need arises. We can
> consider Bug 1518914 for next patch in March/April timeframe.
> Thanks
> -anita
> Anita Tragler
> Product Manager Networking/NFV Platform

Anita, just to offer clarification, VFIO support is already in DPDK 16.11.4.

Our initial request was only to have RH enable it in the Fast Datapath channel.  After realizing that 16.11-stable did not support as many PCI IDs as the latest upstream, we backported the PCI ID additions and also wanted to make sure RH knew we wanted those added as well.
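
(Editor's hedged aside: one way to see which PCI IDs a given DPDK tree's bnxt PMD actually claims is to grep its PCI ID table in the driver directory; the path follows the upstream source layout, and 0x16d7 is the BCM57414 device ID that appears in the testpmd log below.)

$ # List the Broadcom device IDs registered by the bnxt PMD in a source tree
$ grep -rn BROADCOM_DEV_ID drivers/net/bnxt/
$ # Or check for a specific device, e.g. BCM57414 (14e4:16d7)
$ grep -rn 0x16d7 drivers/net/bnxt/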
Comment 8 Andy Gospodarek 2018-01-12 16:17:35 EST
Just to confirm VFIO support for bnxt in DPDK 16.11-stable:

$ ./tools/dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:01:00.0 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller' drv=vfio-pci unused=bnxt_en
0000:01:00.1 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller' drv=vfio-pci unused=bnxt_en
[...]

$ sudo ./build/build/app/test-pmd/testpmd -c 0xfe -n 7 -- --total-num-mbufs=40960 -i --rxq=2 --txq=2 --nb-cores=2 --rxd=256 --txd=256   
EAL: Detected 8 lcore(s)
EAL: 1 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 14e4:16d7 net_bnxt
EAL:   using IOMMU type 8 (No-IOMMU)
PMD: Broadcom Cumulus driver bnxt
PMD: 1.8.3:210.1.5
PMD: Driver HWRM version: 1.5.1
PMD: BNXT Driver/HWRM API mismatch.
PMD: Firmware API version is newer than driver.
PMD: The driver may be missing features.
PMD: bnxt found at mem d1200000, node addr 0x7f070b600000M
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   probe driver: 14e4:16d7 net_bnxt
PMD: 1.8.3:210.1.5
PMD: Driver HWRM version: 1.5.1
PMD: BNXT Driver/HWRM API mismatch.
PMD: Firmware API version is newer than driver.
PMD: The driver may be missing features.
PMD: bnxt found at mem d1210000, node addr 0x7f070b712000M
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=40960, size=2176, socket=0
Configuring Port 0 (socket 0)
PMD: Port 0 Link Down
Port 0: 00:0A:F7:B7:03:E0
Configuring Port 1 (socket 0)
PMD: Port 1 Link Down
Port 1: 00:0A:F7:B7:03:E1
Checking link statuses...
Port 0 Link Down
Port 1 Link Down
Done
testpmd> 

Whether or not RH decides to add support for the newer PCI IDs that were added _just_ after 16.11.4 was cut, VFIO (rather than only igb_uio) should be working for the bnxt PMD as it exists upstream today.
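
(Editor's hedged note: the vfio-pci binding shown in the devbind output above would typically be set up along these lines. The PCI addresses are the ones from this log, and either an IOMMU-enabled host or the no-iommu mode seen in the EAL output is assumed.)

$ # Bind the bnxt ports to vfio-pci instead of igb_uio
$ sudo modprobe vfio-pci
$ sudo ./tools/dpdk-devbind.py --bind=vfio-pci 0000:01:00.0 0000:01:00.1
$ ./tools/dpdk-devbind.py --status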
