Bug 1471943 - [fdProd] Support for bnxt PMD for OVS-DPDK vswitch OVS 2.9 + DPDK 17.11
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: pre-dev-freeze
Assigned To: Timothy Redaelli
QA Contact: ovs-qe@redhat.com
Duplicates: 1518914
Depends On: 1470370 1548355
Blocks: 1507952 1507957 1514088
Reported: 2017-07-17 14:02 EDT by Andy Gospodarek
Modified: 2018-05-03 11:31 EDT
CC: 20 users

Fixed In Version: openvswitch-2.9.0-19.el7fdp
Clone Of: 1470370
Clones: 1514088
Last Closed: 2018-05-03 11:31:45 EDT
Type: Bug


Attachments: None
Description Andy Gospodarek 2017-07-17 14:02:47 EDT
+++ This bug was initially created as a clone of Bug #1470370 +++

DPDK has supported the bnxt PMD since 16.07.

Filing this to be sure that the DPDK included in RHEL7.5 has support for the bnxt PMD.
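
A quick way to confirm that a given DPDK build carries the bnxt PMD is to probe the NIC with testpmd and look for the net_bnxt driver in the EAL output. A minimal sketch, assuming the device sits at PCI address 0000:01:00.0 (adjust per system) and is already bound to a DPDK-compatible driver:

$ lspci -nn | grep -i 'NetXtreme-E'        # note the [14e4:xxxx] vendor:device IDs
$ testpmd -w 0000:01:00.0 -- -i            # whitelist just this device
  ...
  EAL:   probe driver: 14e4:16d7 net_bnxt  # bnxt PMD present and matched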
Comment 3 Andy Gospodarek 2018-01-09 16:51:11 EST
Why was this moved from being an OpenStack bug to being just a RHEL7 bug?
Comment 4 Andy Gospodarek 2018-01-09 16:52:03 EST
I realized today that there is some confusion about DPDK and OVS+DPDK support for the bnxt PMD in RHEL7.

The latest DPDK Stable Tree for 16.11[1] contains support for bnxt PCI IDs[2] that should satisfy most mutual customer requests.  It has also supported a VFIO backend since it was first included upstream as a supported PMD.  We are requesting that this be pulled into the Open vSwitch package for the RHEL7 Fast Datapath channel.

We are currently working to gather full PVP test results, with the expectation that those results will lead to support for the bnxt PMD in the OVS+DPDK packages included in the Fast Datapath channel.  The team had some issues getting TRex running, since TRex also did not include support for the bnxt PMD, but was able to use another vendor's adapter as the transmitter.  Tests show a throughput increase for the L3 OVS+DPDK datapath over the L3 OVS+kernel datapath, which is a positive sign.

Thanks!

1. http://dpdk.org/browse/dpdk-stable/log/?h=16.11
2. http://dpdk.org/browse/dpdk-stable/commit/?h=16.11&id=6c2d431164f3d8374fc9ce4396746495b46cab95
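
For context, in a PVP (physical-VM-physical) run the guest typically just bounces frames back toward the vswitch while an external generator (TRex here, on another vendor's NIC) measures throughput. A minimal sketch of the guest-side forwarder, assuming testpmd inside the VM; the core mask and options are illustrative:

$ testpmd -c 0x3 -n 4 -- -i
testpmd> set fwd macswap    # swap src/dst MACs and retransmit toward the vswitch
testpmd> start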
Comment 5 Marcelo Ricardo Leitner 2018-01-10 07:35:28 EST
(In reply to Andy Gospodarek from comment #3)
> Why was this moved from being an OpenStack bug to being just a RHEL7 bug?

Hi Gospo. I'll not clear the needinfo so atelang can confirm if it's really this, but it's likely because OpenStack doesn't do kernel patches: instead it just consumes what is on RHEL. So for something to get into OpenStack, it has to be added in RHEL first, and then when OpenStack rebases to a newer set of packages, it will pull in the fix. Hope that makes sense.
Comment 6 Anita Tragler 2018-01-12 14:30:22 EST
(In reply to Andy Gospodarek from comment #4)
> I realized today that there is some confusion about DPDK and OVS+DPDK
> support for the bnxt PMD in RHEL7.
> 
> The latest DPDK Stable Tree for 16.11[1] contains support for bnxt PCI
> IDs[2] that should satisfy most mutual customer requests.  It has also
> supported a VFIO backend since it was first included upstream as a
> supported PMD.  We are requesting that this be pulled into the Open
> vSwitch package for the RHEL7 Fast Datapath channel.
> 
> We are currently working to gather full PVP test results, with the
> expectation that those results will lead to support for the bnxt PMD in
> the OVS+DPDK packages included in the Fast Datapath channel.  The team had
> some issues getting TRex running, since TRex also did not include support
> for the bnxt PMD, but was able to use another vendor's adapter as the
> transmitter.  Tests show a throughput increase for the L3 OVS+DPDK
> datapath over the L3 OVS+kernel datapath, which is a positive sign.
> 
> Thanks!
> 
> 1. http://dpdk.org/browse/dpdk-stable/log/?h=16.11
> 2. http://dpdk.org/browse/dpdk-stable/commit/?h=16.11&id=6c2d431164f3d8374fc9ce4396746495b46cab95

Hi Andy,
It appears there are 2 bugzillas requesting bnxt in OVS-DPDK: Bug 1471943 and Bug 1518914. I'm using this older BZ for OVS 2.9 + DPDK 17.11 (RHEL 7.5, RHOSP13); perhaps you can modify the other one, Bug 1518914, to request OVS 2.6 + DPDK 16.11 (RHOSP10).
I believe your DPDK 16.11 patch for VFIO support ("support for PCI IDs", commit 6c2d431164f3d8374fc9ce4396746495b46cab95) missed the DPDK 16.11.4 build by a few days and hence missed the opportunity for OVS 2.7 inclusion for RHOSP12.
We do Fast Datapath patch builds every 6 weeks if the need arises. We can consider Bug 1518914 for the next patch in the March/April timeframe.
Thanks
-anita
Anita Tragler
Product Manager Networking/NFV Platform
Comment 7 Andy Gospodarek 2018-01-12 15:50:42 EST
(In reply to Anita Tragler from comment #6)
> 
> Hi Andy,
> It appears there are 2 bugzillas requesting bnxt in OVS-DPDK: Bug 1471943
> and Bug 1518914. I'm using this older BZ for OVS 2.9 + DPDK 17.11 (RHEL
> 7.5, RHOSP13); perhaps you can modify the other one, Bug 1518914, to
> request OVS 2.6 + DPDK 16.11 (RHOSP10).

After the call yesterday, I realized I may have put an incorrect comment in at least one BZ.  I'll double check all the ones I have opened to make sure they are correct.

> I believe your DPDK 16.11 patch for VFIO support ("support for PCI IDs",
> commit 6c2d431164f3d8374fc9ce4396746495b46cab95) missed the DPDK 16.11.4
> build by a few days and hence missed the opportunity for OVS 2.7 inclusion
> for RHOSP12.
> We do Fast Datapath patch builds every 6 weeks if the need arises. We can
> consider Bug 1518914 for the next patch in the March/April timeframe.
> Thanks
> -anita
> Anita Tragler
> Product Manager Networking/NFV Platform

Anita, just to offer clarification, VFIO support is already in DPDK 16.11.4.

Our initial request was only to have RH enable it in the Fast Datapath channel.  After realizing that 16.11-stable did not support as many PCI IDs as the latest upstream, we backported the PCI ID additions and also wanted to make sure RH knew we wanted those added as well.
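
As a sketch of how to check which device IDs a given build actually recognizes (the binary path here is an assumption; the script lives under tools/ in 16.11 and usertools/ in 17.11), dpdk-pmdinfo.py dumps the PCI ID table embedded in each PMD:

$ ./tools/dpdk-pmdinfo.py ./build/app/testpmd
# look for the net_bnxt entry and compare its supported-device list
# against the NIC's [vendor:device] pair reported by lspci -nn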
Comment 8 Andy Gospodarek 2018-01-12 16:17:35 EST
Just to confirm VFIO support for bnxt in DPDK 16.11-stable:

$ ./tools/dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:01:00.0 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller' drv=vfio-pci unused=bnxt_en
0000:01:00.1 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller' drv=vfio-pci unused=bnxt_en
[...]

$ sudo ./build/build/app/test-pmd/testpmd -c 0xfe -n 7 -- --total-num-mbufs=40960 -i --rxq=2 --txq=2 --nb-cores=2 --rxd=256 --txd=256   
EAL: Detected 8 lcore(s)
EAL: 1 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 14e4:16d7 net_bnxt
EAL:   using IOMMU type 8 (No-IOMMU)
PMD: Broadcom Cumulus driver bnxt
PMD: 1.8.3:210.1.5
PMD: Driver HWRM version: 1.5.1
PMD: BNXT Driver/HWRM API mismatch.
PMD: Firmware API version is newer than driver.
PMD: The driver may be missing features.
PMD: bnxt found at mem d1200000, node addr 0x7f070b600000M
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   probe driver: 14e4:16d7 net_bnxt
PMD: 1.8.3:210.1.5
PMD: Driver HWRM version: 1.5.1
PMD: BNXT Driver/HWRM API mismatch.
PMD: Firmware API version is newer than driver.
PMD: The driver may be missing features.
PMD: bnxt found at mem d1210000, node addr 0x7f070b712000M
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=40960, size=2176, socket=0
Configuring Port 0 (socket 0)
PMD: Port 0 Link Down
Port 0: 00:0A:F7:B7:03:E0
Configuring Port 1 (socket 0)
PMD: Port 1 Link Down
Port 1: 00:0A:F7:B7:03:E1
Checking link statuses...
Port 0 Link Down
Port 1 Link Down
Done
testpmd> 

Whether or not RH decides to add support for the newer PCI IDs that were added _just_ after 16.11.4 was cut, VFIO (rather than only igb-uio) should work with the bnxt PMD as it exists upstream today.
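
For anyone reproducing this, the usual binding sequence before the testpmd run above is just the following (a minimal sketch; the PCI addresses match the devbind output above, and no-IOMMU mode is assumed as in the log):

# modprobe vfio-pci
# ./tools/dpdk-devbind.py --bind=vfio-pci 0000:01:00.0 0000:01:00.1
# ./tools/dpdk-devbind.py -s    # both ports should now show drv=vfio-pci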
Comment 9 atelang 2018-02-20 09:18:31 EST
(In reply to Marcelo Ricardo Leitner from comment #5)
> (In reply to Andy Gospodarek from comment #3)
> > Why was this moved from being an OpenStack bug to being just a RHEL7 bug?
> 
> Hi Gospo. I'll not clear the needinfo so atelang can confirm if it's really
> this, but it's likely because OpenStack doesn't do kernel patches: instead
> it just consumes what is on RHEL. So for something to get into OpenStack,
> it has to be added in RHEL first, and then when OpenStack rebases to a
> newer set of packages, it will pull in the fix. Hope that makes sense.

Marcelo is right on this. We consume OVS and DPDK from RHEL.
Comment 11 Andy Gospodarek 2018-02-20 10:09:27 EST
Thanks atelang and mleitner for the clarification.  I think I had opened this at the request of Anita and André, but may have gotten the Product wrong at the time.
Comment 12 Trinh Dao 2018-02-28 13:59:45 EST
RH, any update on this bug?
Comment 13 Joseph Kachuck 2018-03-02 10:18:37 EST
Hello,
This is too late for RHEL 7.5. This is now requested for RHEL 7.6.

Thank You
Joe Kachuck
Comment 14 Andy Gospodarek 2018-03-02 10:38:04 EST
Whoa, this is a bit of a surprise.

Can anyone clarify why the bnxt PMD is not going to make the upcoming OVS 2.9 + DPDK 17.11 Fast Datapath channel release?
Comment 15 Anita Tragler 2018-03-02 11:22:46 EST
OVS-DPDK content is available through a separate Fast Datapath (FD) channel/repo, independent of RHEL 7. The Fast Datapath team is planning a new FD beta 18.03 (March) build with bnxt support, carrying the latest OVS 2.9 and DPDK 17.11 patches from upstream, available with RHEL 7.5 and the RHOSP13 beta (April 11th) for partner testing and POCs. This build will be included in FDP 18.04 (late April) for the RHOSP13 GA (May 2018) with RHEL 7.5.
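
For reference, consuming packages from the FD channel typically looks like the following sketch (the repo id here is an assumption; verify with subscription-manager repos --list on the target system):

# subscription-manager repos --enable=rhel-7-fast-datapath-rpms
# yum install openvswitch    # pulls the OVS+DPDK build from the FD repo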
Comment 16 Andy Gospodarek 2018-03-05 16:12:50 EST
Thanks for that clarification, Anita.  Does this mean that this should be moved back to RHEL7.5, then?
Comment 17 Andy Gospodarek 2018-03-22 15:50:22 EDT
Quick question for the team at RH.  When support for the bnxt PMD lands in the Fast Datapath Channel, will this include support in OSP10, OSP11, and OSP12?
Comment 18 Rashid Khan 2018-03-22 15:55:50 EDT
(In reply to Andy Gospodarek from comment #17)
> Quick question for the team at RH.  When support for the bnxt PMD lands in
> the Fast Datapath Channel, will this include support in OSP10, OSP11, and
> OSP12?

12 and 13
Comment 19 Andy Gospodarek 2018-03-22 17:22:04 EDT
(In reply to Rashid Khan from comment #18)
> (In reply to Andy Gospodarek from comment #17)
> > Quick question for the team at RH.  When support for the bnxt PMD lands in
> > the Fast Datapath Channel, will this include support in OSP10, OSP11, and
> > OSP12?
> 
> 12 and 13

Thanks!
Comment 20 Anita Tragler 2018-04-18 15:52:36 EDT
Hi Andy,
The FDP 18.04 OVS 2.9 build is for RHOSP 13. Older versions (OSP 10, 12) may update to OVS 2.9, but that is still under discussion.
This will be supported with RHEL 7.5.
Comment 21 Timothy Redaelli 2018-04-26 11:18:46 EDT
Part of the upcoming FDP 18.04 release.
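
To round this out, attaching a vfio-bound bnxt port in OVS 2.9 follows the standard OVS-DPDK flow; a minimal sketch with illustrative bridge/port names and PCI address:

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# systemctl restart openvswitch
# ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0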
Comment 22 Davide Caratti 2018-04-27 03:12:15 EDT
*** Bug 1518914 has been marked as a duplicate of this bug. ***