Who When What Removed Added
Nir Yechiel 2016-04-10 15:36:06 UTC Keywords FutureFeature, Triaged
Priority unspecified high
Target Release --- 10.0
Target Milestone --- beta
Severity unspecified high
Red Hat Bugzilla 2016-04-10 15:36:06 UTC Doc Type Bug Fix Enhancement
John Skeoch 2016-04-18 07:44:31 UTC CC yeylon srevivo
Perry Myers 2016-04-19 01:09:44 UTC CC pmyers
Nir Yechiel 2016-05-17 13:32:32 UTC Link ID OpenStack gerrit 313871
Nir Yechiel 2016-06-07 08:42:57 UTC See Also https://bugzilla.redhat.com/show_bug.cgi?id=1329379
Mattia Gandolfi 2016-06-13 13:27:57 UTC CC mgandolf
Nir Yechiel 2016-06-22 12:20:21 UTC Blocks 1334442, 1335593
CC igor.duarte.cardoso
Ofer Blaut 2016-07-21 11:52:01 UTC CC ekuris
Ofer Blaut 2016-08-01 14:17:54 UTC CC oblaut
Joe Donohue 2016-08-10 13:06:16 UTC CC jdonohue
Nir Yechiel 2016-08-17 08:28:08 UTC Keywords InstallerIntegration
Status NEW ON_DEV
Summary [RFE] [Neutron] DPDK accelerated Open vSwitch - graduation to full support [RFE] [Neutron] [OSP-director] DPDK accelerated Open vSwitch - graduation to full support
Nir Yechiel 2016-08-31 10:03:13 UTC Depends On 1371868
Nir Yechiel 2016-08-31 10:03:43 UTC Status ON_DEV POST
Toni Freger 2016-09-13 10:28:19 UTC QA Contact tfreger yrachman
Scott Lewis 2016-09-13 13:58:01 UTC CC sclewis
Nir Yechiel 2016-09-13 16:14:08 UTC CC mburns, rhel-osp-director-maint
Component openstack-neutron openstack-tripleo-heat-templates
Nir Yechiel 2016-09-14 12:52:53 UTC Blocks 1329379
CC fbaudin
Jon Schlueter 2016-09-21 16:11:55 UTC Status POST MODIFIED
CC jschluet
Fixed In Version openstack-tripleo-heat-templates-5.0.0-0.20160907212643.90c852e.1.el7ost
errata-xmlrpc 2016-09-22 12:12:35 UTC Status MODIFIED ON_QA
Lucy Bopf 2016-10-06 01:56:15 UTC Blocks 1371868
Depends On 1371868
atelang 2016-10-14 15:25:30 UTC Depends On 1384562
Assaf Muller 2016-10-19 20:50:45 UTC Flags needinfo?(fbaudin)
Maxim Babushkin 2016-10-19 21:27:16 UTC CC mbabushk
Franck Baudin 2016-10-20 12:56:47 UTC CC dnavale
Flags needinfo?(fbaudin)
Franck Baudin 2016-10-20 12:57:27 UTC CC dcadzow
Lucy Bopf 2016-11-08 14:03:39 UTC Depends On 1384774
Yariv 2016-11-15 21:58:42 UTC Status ON_QA VERIFIED
Maxim Babushkin 2016-11-21 12:39:01 UTC Depends On 1383406
Maxim Babushkin 2016-11-21 14:31:58 UTC Depends On 1397074
Maxim Babushkin 2016-11-21 14:50:04 UTC Depends On 1380114
atelang 2016-11-22 18:49:26 UTC Depends On 1397537
Assaf Muller 2016-12-09 20:04:34 UTC Assignee amuller vchundur
Vijay Chundury 2016-12-09 20:20:52 UTC Doc Text Feature:
Today, the installation and configuration of OVS+DPDK in OpenStack is done manually after overcloud deployment. This can be very challenging for the operator and tedious to do over a large number of Compute nodes, so the installation of OVS+DPDK needs to be automated in tripleo.
Identification of the hardware capabilities for DPDK is likewise done manually today, and shall be automated during introspection. This hardware detection also provides the operator with the data needed for configuring Heat templates.
As of today, it is not possible for Compute nodes with DPDK-enabled hardware to coexist with Compute nodes without DPDK-enabled hardware.


Reason:
The Ironic Python Agent shall discover the following hardware details and store them in a Swift blob:

CPU flags for hugepages support - if pse exists, then 2MB hugepages are supported; if pdpe1gb exists, then 1GB hugepages are supported.
CPU flags for IOMMU - if VT-d/svm exists, then IOMMU is supported, provided IOMMU support is enabled in the BIOS.
Compatible NICs - compared against the list of NICs whitelisted for DPDK. The DPDK-supported NICs are listed at http://dpdk.org/doc/nics
Nodes without any of the above-mentioned capabilities cannot be used for the COMPUTE role with DPDK.

The operator shall have a provision to enable DPDK on Compute nodes.

The overcloud image for the nodes identified as COMPUTE-capable and having DPDK NICs shall have the OVS+DPDK package instead of OVS. It shall also have the dpdk and driverctl packages.

The device names of the DPDK-capable NICs shall be obtained from T-H-T. The PCI address of each DPDK NIC needs to be identified from the device name; this is required for whitelisting the DPDK NICs during the PCI probe.

Hugepages need to be enabled on the Compute nodes with DPDK.
CPU isolation needs to be done so that the CPU cores reserved for the DPDK Poll Mode Drivers (PMDs) are not used by the general kernel balancing, interrupt handling, and scheduling algorithms.

On each COMPUTE node with a DPDK-enabled NIC, puppet shall configure the DPDK_OPTIONS for the whitelisted NICs, the CPU mask, and the number of memory channels for the DPDK PMD. DPDK_OPTIONS needs to be set in /etc/sysconfig/openvswitch.

Os-net-config shall -

Associate the given interfaces with the DPDK drivers (vfio-pci by default) by identifying the PCI address of each given interface. driverctl shall be used to bind the driver persistently.
Understand the ovs_user_bridge and ovs_dpdk_port types and configure the ifcfg scripts accordingly.
The “TYPE” ovs_user_bridge shall translate to the OVS type OVSUserBridge, and based on this, OVS will configure the datapath type to ‘netdev’.
The “TYPE” ovs_dpdk_port shall translate to the OVS type OVSDPDKPort, and based on this, OVS adds the port to the bridge with interface type ‘dpdk’.
Understand the ovs_dpdk_bond type and configure the ifcfg scripts accordingly.
On each COMPUTE node with a DPDK-enabled NIC, puppet shall -

Enable OVS+DPDK in /etc/neutron/plugins/ml2/openvswitch_agent.ini by setting datapath_type=netdev and vhostuser_socket_dir=/var/run/openvswitch in the [OVS] section.
Configure the vhostuser ports in /var/run/openvswitch to be owned by qemu.
On each controller node, puppet shall -

Add NUMATopologyFilter to scheduler_default_filters in nova.conf.
Result:
The automation of the above-mentioned enhanced platform awareness has been completed and verified by QA.
Martin Lopes 2016-12-13 05:12:25 UTC CC mlopes
Doc Text
Typically, the installation and configuration of OVS+DPDK in OpenStack is performed manually after overcloud deployment. This can be very challenging for the operator and tedious to do over a large number of Compute nodes. The installation of OVS+DPDK has now been automated in tripleo. Identification of the hardware capabilities for DPDK was previously done manually, and is now automated during introspection. This hardware detection also provides the operator with the data needed for configuring Heat templates. At present, Compute nodes with DPDK-enabled hardware cannot coexist with Compute nodes without DPDK-enabled hardware.
The `ironic` Python Agent discovers the following hardware details and stores them in a Swift blob:
* CPU flags for hugepages support - if pse exists, then 2MB hugepages are supported; if pdpe1gb exists, then 1GB hugepages are supported.
* CPU flags for IOMMU - if VT-d/svm exists, then IOMMU is supported, provided IOMMU support is enabled in the BIOS.
* Compatible NICs - compared against the list of NICs whitelisted for DPDK, as listed at http://dpdk.org/doc/nics

Nodes without any of the above-mentioned capabilities cannot be used for the Compute role with DPDK. The same checks can be approximated by hand, as sketched below.
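For illustration only, here is a rough manual equivalent of those capability checks using standard Linux tooling; this is not the actual ironic-python-agent code, and the VT-d/svm check is approximated by the vmx/svm CPU flags plus an active IOMMU:

```
# Hugepage-related CPU flags (pse => 2MB pages, pdpe1gb => 1GB pages):
grep -qw pse     /proc/cpuinfo && echo "2MB hugepages supported"
grep -qw pdpe1gb /proc/cpuinfo && echo "1GB hugepages supported"

# Virtualization extensions (Intel vmx / AMD svm):
grep -qwE 'vmx|svm' /proc/cpuinfo && echo "virtualization extensions present"

# An active IOMMU (VT-d/AMD-Vi) is only visible here once enabled in the BIOS:
ls /sys/class/iommu/ 2>/dev/null | grep -q . && echo "IOMMU enabled"
```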

* The operator will have a provision to enable DPDK on Compute nodes.
* The overcloud image for the nodes identified as Compute-capable and having DPDK NICs will have the OVS+DPDK package instead of OVS. It will also have the `dpdk` and `driverctl` packages.
* The device names of the DPDK-capable NICs will be obtained from T-H-T. The PCI address of each DPDK NIC needs to be identified from the device name; this is required for whitelisting the DPDK NICs during the PCI probe.
* Hugepages need to be enabled on the Compute nodes with DPDK.
* CPU isolation needs to be done so that the CPU cores reserved for the DPDK Poll Mode Drivers (PMDs) are not used by the general kernel balancing, interrupt handling, and scheduling algorithms.
* On each Compute node with a DPDK-enabled NIC, puppet will configure the DPDK_OPTIONS for the whitelisted NICs, the CPU mask, and the number of memory channels for the DPDK PMD. DPDK_OPTIONS needs to be set in /etc/sysconfig/openvswitch, as shown below.
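As a concrete sketch of the last three items, a hypothetical configuration might look like this; the core list, memory sizes, and PCI address are placeholders that depend entirely on the node's hardware:

```
# Kernel command-line additions for hugepages and CPU isolation (placeholders):
#   default_hugepagesz=1GB hugepagesz=1G hugepages=16 isolcpus=2,4

# /etc/sysconfig/openvswitch: DPDK EAL options for the OVS PMD threads --
# PMD cores (-l), memory channels (-n), per-NUMA-socket hugepage memory
# (--socket-mem), and the whitelisted NIC's PCI address (-w):
DPDK_OPTIONS="-l 2,4 -n 4 --socket-mem 1024,1024 -w 0000:05:00.0"
```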

`os-net-config` performs the following steps:
* Associates the given interfaces with the DPDK drivers (vfio-pci by default) by identifying the PCI address of each given interface. `driverctl` is used to bind the driver persistently.
* Understands the ovs_user_bridge and ovs_dpdk_port types and configures the ifcfg scripts accordingly.
* The “TYPE” ovs_user_bridge translates to the OVS type OVSUserBridge, and based on this, OVS configures the datapath type to ‘netdev’.
* The “TYPE” ovs_dpdk_port translates to the OVS type OVSDPDKPort, and based on this, OVS adds the port to the bridge with interface type ‘dpdk’.
* Understands the ovs_dpdk_bond type and configures the ifcfg scripts accordingly. A template sketch follows this list.
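A minimal sketch of an os-net-config template fragment using these types; the bridge, port, and nic names are placeholders. The persistent driver binding itself can be done with, for example, `driverctl set-override 0000:05:00.0 vfio-pci` (PCI address hypothetical):

```
network_config:
  - type: ovs_user_bridge     # rendered as OVSUserBridge; datapath becomes 'netdev'
    name: br-link
    members:
      - type: ovs_dpdk_port   # rendered as OVSDPDKPort; added with interface type 'dpdk'
        name: dpdk0
        members:
          - type: interface
            name: nic3        # placeholder device name
```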

On each Compute node with a DPDK-enabled NIC, puppet will perform the following steps:
* Enable OVS+DPDK in /etc/neutron/plugins/ml2/openvswitch_agent.ini by setting datapath_type=netdev and vhostuser_socket_dir=/var/run/openvswitch in the [OVS] section, as shown below.
* Configure the vhostuser ports in /var/run/openvswitch to be owned by qemu.
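The resulting [OVS] section, exactly as described in the first item above:

```
[OVS]
datapath_type=netdev
vhostuser_socket_dir=/var/run/openvswitch
```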

On each controller node, puppet will perform the following steps:
* Add NUMATopologyFilter to scheduler_default_filters in nova.conf.
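For illustration, the resulting nova.conf entry might look like this; the filters preceding NUMATopologyFilter are an assumed default list, not taken from this bug:

```
[DEFAULT]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter
```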

As a result, the automation of the above-mentioned enhanced platform awareness has been completed and verified by QA testing.
errata-xmlrpc 2016-12-14 14:03:02 UTC Status VERIFIED RELEASE_PENDING
errata-xmlrpc 2016-12-14 15:32:16 UTC Status RELEASE_PENDING CLOSED
Resolution --- ERRATA
Last Closed 2016-12-14 10:32:16 UTC
