Support for OVS-DPDK ports is currently missing.
DPDK ports in OVS are added by setting the port type to DPDK and specifying the PCI address of the device, e.g.:
ovs-vsctl add-port br0 myportnameone -- set Interface myportnameone \
type=dpdk options:dpdk-devargs=0000:06:00.0
Here's a brief overview of what's required to use OVS-DPDK ports; there are several prerequisites for a device to be OVS-DPDK compatible:
Hardware:
- DPDK compatible NIC
- IOMMU compatible CPU
Kernel:
- hugepage allocation
- iommu enabled and set to passthrough
Device driver:
- Designated NIC should use a DPDK compatible userspace driver, e.g. vfio-pci
Performance tuning: in multi-NUMA environments, cores can be partitioned and multiple PMD threads spawned per NUMA node.
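For illustration only, a minimal sketch of preparing a host along the lines above; the PCI address, hugepage counts, memory and CPU-mask values are placeholders and will differ per system:

# Kernel command line (reboot required): hugepage allocation plus IOMMU in passthrough mode, e.g.
#   default_hugepagesz=1G hugepagesz=1G hugepages=8 intel_iommu=on iommu=pt
# Bind the designated NIC to a DPDK compatible userspace driver, e.g. vfio-pci
driverctl set-override 0000:06:00.0 vfio-pci
# Enable DPDK in OVS and reserve hugepage memory per NUMA node
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
# Optional performance tuning: pin PMD threads to specific cores via a CPU mask
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6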
I am unsure what is in NM's scope and what is not.
I forgot to mention the primary motivation for this.
Besides being very nice to have, both RHEV and OpenStack currently have their own ways of configuring devices for DPDK usage. As such, it would be very beneficial for NM not only to be able to control OVS-DPDK ports, but also to consolidate OVS-DPDK control for RHEV and OpenStack.
I personally think that this RFE makes a lot of sense, and clearly there is interest and use-cases. Lubomir seemed less convinced so far.
We must first understand OVS-DPDK better, to make the right decision. This action item is with the NetworkManager team.
If somebody wants to provide more information, it would be interesting to collect how you currently configure OVS-DPDK. I assume you write an ifcfg file and use /etc/sysconfig/network-scripts/ifup-ovs and TYPE=OVSDPDKPort? What do the ifcfg files look like? Or do you call ovs-vsctl from a script? Any pointers to the scripts? What about type=dpdkvhostuser?
(In reply to Thomas Haller from comment #4)
> I personally think that this RFE makes a lot of sense, and clearly there is
> interest and use-cases. Lubomir seemed less convinced so far.
>
> We must first understand OVS-DPDK better, to make the right decision. This
> action-item is with NetworkManager team.
>
> If somebody wants to provide more information, it would be interesting to
> collect information how you currently configure OVS-DPDK. I assume you write
> ifcfg-file and use /etc/sysconfig/network-scripts/ifup-ovs and
> TYPE=OVSDPDKPort? How do the ifcfg files look? Or do you call ovs-vsctl from
> a script? Any pointers to the scripts? What about type=dpdkvhostuser?
- Configuring an OVS-DPDK port is done the same way as any other OVS port, with the addition of specifying a type:
ovs-vsctl add-port br0 myportnameone -- set Interface myportnameone \
type=dpdk options:dpdk-devargs=0000:06:00.0
- Specifying OVS configuration is also possible in ifcfg (a rough sketch follows after this list):
https://github.com/osrg/openvswitch/blob/master/rhel/README.RHEL
- OVS vhostuser server/client ports replace the ordinary Linux tap devices used to allow VM/host connectivity in userspace (http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/).
This is a server/client model: one leg creates a socket and listens on it, and the other connects to it (the preferred way is QEMU acting as the server and OVS as the client).
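To make the ifcfg and vhost-user points above concrete, here is a rough sketch along the lines of the README.RHEL linked above; the device names, PCI address and socket path are placeholders, and the exact keys should be checked against that README:

ifcfg-br0:
DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
OVS_EXTRA="set bridge br0 datapath_type=netdev"
ONBOOT=yes

ifcfg-dpdk0:
DEVICE=dpdk0
DEVICETYPE=ovs
TYPE=OVSDPDKPort
OVS_BRIDGE=br0
OVS_EXTRA="set Interface dpdk0 options:dpdk-devargs=0000:06:00.0"
ONBOOT=yes

The vhost-user client case (QEMU listening on the socket, OVS connecting to it) via ovs-vsctl would look roughly like:
ovs-vsctl add-port br0 vhost0 -- set Interface vhost0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost0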
(In reply to Dan Sneddon from comment #7)
> Please note that OpenStack Platform also requires some other DPDK
> configurations:
>
> Device types from ifcfg configuration:
>
> OVSUserBridge
> OVSDPDKBond
Also note that OpenStack requires that we be able to set the following parameters on DPDK devices:
OVS-DPDK bonds: Set rx_queue. This is done with the legacy network init scripts using the RX_QUEUE=<num> statement in the ifcfg file.
OVS-DPDK ports: Set driver, with the OpenStack default being "vfio-pci". Also rx_queue.
We would also like to be able to set OVS options, using a mechanism similar to the "OVS_EXTRA" setting in the legacy network init scripts used with the network service.
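As a rough illustration of the RX_QUEUE and OVS_EXTRA mechanisms mentioned above, an ifcfg file for the legacy init scripts might look like the following; the values are placeholders and the exact keys should be verified against the init scripts themselves:

DEVICE=dpdk0
DEVICETYPE=ovs
TYPE=OVSDPDKPort
OVS_BRIDGE=br-link
RX_QUEUE=4
OVS_EXTRA="set Interface dpdk0 options:dpdk-devargs=0000:06:00.0"
ONBOOT=yes

The driver requirement mentioned above (vfio-pci by default) is not covered by this sketch.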
After following this OvS DPDK guide I was able to set things up via ovs-vsctl but not via NM; after some debugging, Lubo found out that the netdev parameter was missing.
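For reference, the missing "netdev" parameter presumably refers to the userspace datapath type that OVS-DPDK bridges need; with plain ovs-vsctl that would be something like:
ovs-vsctl set bridge br0 datapath_type=netdev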
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:3623