Bug 1612503 - [RFE] Support OVS-DPDK ports, bonds, and user bridges
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: NetworkManager
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.1
Assignee: Lubomir Rintel
QA Contact: Desktop QE
URL:
Whiteboard:
Depends On:
Blocks: 1689408 1701002
 
Reported: 2018-08-05 09:00 UTC by Leon Goldberg
Modified: 2020-11-14 11:09 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-05 22:28:59 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2019:3623 (Last Updated: 2019-11-05 22:29:27 UTC)

Internal Links: 1645689

Description Leon Goldberg 2018-08-05 09:00:41 UTC
Support for OVS-DPDK ports is currently missing.

DPDK ports in OVS are added by specifying port type as DPDK and by specifying the PCI address of the device, e.g.:

ovs-vsctl add-port br0 myportnameone -- set Interface myportnameone \
    type=dpdk options:dpdk-devargs=0000:06:00.0

Here's a brief overview of what's required to use OVS-DPDK ports; a device must meet several prerequisites to be OVS-DPDK compatible:

Hardware:
- DPDK compatible NIC
- IOMMU compatible CPU

Kernel:
- hugepage allocation
- IOMMU enabled and set to passthrough

Device driver:
- Designated NIC should use a DPDK compatible userspace driver, e.g. vfio-pci

Performance tuning: in multi-NUMA environments, cores can be partitioned and multiple PMD threads can be spawned per node.
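To make the hardware, kernel, and driver prerequisites above concrete, here is a hedged sketch of the typical setup steps; the PCI address comes from the example earlier in this report, while the hugepage sizes and counts are illustrative assumptions that depend on the NIC and workload:

```shell
# Kernel boot parameters (e.g. appended to GRUB_CMDLINE_LINUX) that enable
# the IOMMU in passthrough mode and reserve 1 GiB hugepages at boot:
#   intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=4

# Alternatively, allocate hugepages at runtime, per NUMA node, via sysfs:
echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

# Bind the designated NIC to a DPDK-compatible userspace driver:
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:06:00.0
```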

I am unsure what is in NM's scope and what is not.

Comment 3 Leon Goldberg 2018-08-08 11:27:30 UTC
I forgot to mention the primary motivation for this.

Besides being very nice to have: currently both RHEV and OpenStack have their own ways of configuring devices for DPDK usage. As such, it would be very beneficial for NM both to be able to control OVS-DPDK ports and to consolidate OVS-DPDK control for RHEV and OpenStack.

Comment 4 Thomas Haller 2018-08-09 10:22:07 UTC
I personally think that this RFE makes a lot of sense, and clearly there is interest and use-cases. Lubomir seemed less convinced so far.

We must first understand OVS-DPDK better, to make the right decision. This action-item is with NetworkManager team.

If somebody wants to provide more information, it would be interesting to collect information on how you currently configure OVS-DPDK. I assume you write an ifcfg file and use /etc/sysconfig/network-scripts/ifup-ovs with TYPE=OVSDPDKPort? How do the ifcfg files look? Or do you call ovs-vsctl from a script? Any pointers to the scripts? What about type=dpdkvhostuser?

Comment 5 Leon Goldberg 2018-08-09 10:52:16 UTC
(In reply to Thomas Haller from comment #4)
> I personally think that this RFE makes a lot of sense, and clearly there is
> interest and use-cases. Lubomir seemed less convinced so far.
> 
> We must first understand OVS-DPDK better, to make the right decision. This
> action-item is with NetworkManager team.
> 
> If somebody wants to provide more information, it would be interesting to
> collect information how you currently configure OVS-DPDK. I assume you write
> ifcfg-file and use /etc/sysconfig/network-scripts/ifup-ovs and
> TYPE=OVSDPDKPort? How do the ifcfg files look? Or do you call ovs-vsctl from
> a script? Any pointers to the scripts? What about type=dpdkvhostuser?

- Configuring an OVS-DPDK port is done as for any ordinary OVS port, with the
addition of specifying a type:

  ovs-vsctl add-port br0 myportnameone -- set Interface myportnameone \
      type=dpdk options:dpdk-devargs=0000:06:00.0

- Specifying OVS configuration is also possible in ifcfg:
https://github.com/osrg/openvswitch/blob/master/rhel/README.RHEL


- OVS vhostuser server/client ports replace the ordinary Linux tap devices used to allow VM/host connectivity in userspace (http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/)

This is a server/client model. One leg creates a socket and listens on it, and the other connects to it (the preferred arrangement is QEMU acting as the server and OVS as the client).
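For reference, the preferred arrangement described above (OVS as client, QEMU as server) corresponds to a dpdkvhostuserclient port; a rough sketch with ovs-vsctl, where the port name and socket path are hypothetical, looks like:

```shell
# OVS connects as the vhost-user client to a socket that QEMU creates
# and listens on (QEMU side: -chardev socket,server,... for the vhost-user
# backend of the guest's virtio-net device).
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 \
    type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhost-user-1.sock
```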

Comment 7 Dan Sneddon 2018-11-02 21:46:55 UTC
Please note that OpenStack Platform also requires some other DPDK configurations:

Device types from ifcfg configuration:

OVSUserBridge
OVSDPDKBond

Comment 8 Dan Sneddon 2019-03-18 18:10:15 UTC
(In reply to Dan Sneddon from comment #7)
> Please note that OpenStack Platform also requires some other DPDK
> configurations:
> 
> Device types from ifcfg configuration:
> 
> OVSUserBridge
> OVSDPDKBond

Also note that OpenStack requires that we be able to set the following parameters on DPDK devices:

OVS-DPDK bonds: Set rx_queue. This is done with the legacy network init scripts using the RX_QUEUE=<num> statement in the ifcfg file.

OVS-DPDK ports: Set driver, with the OpenStack default being "vfio-pci". Also rx_queue.

We also would like to be able to set OVS options, using a similar mechanism to the "OVS_EXTRA" setting in legacy network init-scripts used with the network service.
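To make the requested parameters concrete, here is a sketch of how the legacy initscripts express them in ifcfg files; the device names, address, and OVS_EXTRA contents are hypothetical, and README.RHEL remains the authoritative syntax reference:

```shell
# /etc/sysconfig/network-scripts/ifcfg-br-dpdk -- userspace bridge
DEVICE=br-dpdk
DEVICETYPE=ovs
TYPE=OVSUserBridge
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
# Arbitrary extra ovs-vsctl arguments applied after the device is created:
OVS_EXTRA="set bridge br-dpdk other_config:hwaddr=00:11:22:33:44:55"

# /etc/sysconfig/network-scripts/ifcfg-dpdk0 -- DPDK port on that bridge
DEVICE=dpdk0
DEVICETYPE=ovs
TYPE=OVSDPDKPort
OVS_BRIDGE=br-dpdk
RX_QUEUE=4
```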

Comment 16 Vladimir Benes 2019-07-24 10:56:56 UTC
After following this OVS-DPDK guide I was able to set things up via ovs-vsctl but not via NM. After some debugging, Lubo found out that the netdev param was missing.

Comment 17 Vladimir Benes 2019-07-24 10:57:14 UTC
guide here:
https://dpdk-guide.gitlab.io/dpdk-guide/ovs/ports.html

Comment 18 Vladimir Benes 2019-08-01 13:07:12 UTC
Moving to VERIFIED, as it looks like we have some fixes pulled in. We still need a bunch more:
https://bugzilla.redhat.com/show_bug.cgi?id=1732791
https://bugzilla.redhat.com/show_bug.cgi?id=1734032

We can consider this working and covered by:
add_dpdk_port test
https://gitlab.freedesktop.org/NetworkManager/NetworkManager-ci/blob/master/nmcli/features/ovs.feature#L452
and
add_dpdk_bond_sriov test
https://gitlab.freedesktop.org/NetworkManager/NetworkManager-ci/blob/master/nmcli/features/ovs.feature#L467
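The NM-side configuration exercised by those tests can be sketched roughly as follows; this is a hedged example based on the nm-settings documentation for NetworkManager 1.20+ (ovs-bridge.datapath-type, ovs-interface.type, ovs-dpdk.devargs), with hypothetical connection and device names:

```shell
# Bridge with the userspace (netdev) datapath:
nmcli connection add type ovs-bridge conn.interface br0 \
    ovs-bridge.datapath-type netdev

# Port on that bridge:
nmcli connection add type ovs-port conn.interface port0 \
    slave-type ovs-bridge master br0

# DPDK interface on the port, identified by its PCI address:
nmcli connection add type ovs-interface conn.interface dpdk0 \
    slave-type ovs-port master port0 \
    ovs-interface.type dpdk ovs-dpdk.devargs 0000:06:00.0
```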

Comment 20 errata-xmlrpc 2019-11-05 22:28:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3623

