Bug 1679694 - [Intel OSP16] Support for Intel Omni-Path(TM) fabric
Summary: [Intel OSP16] Support for Intel Omni-Path(TM) fabric
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 16.0 (Train)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
: ---
Assignee: OSP Team
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-21 16:52 UTC by Krish Raghuram
Modified: 2023-08-08 12:34 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-08 12:34:12 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-2362 0 None None None 2022-03-24 13:35:54 UTC

Description Krish Raghuram 2019-02-21 16:52:31 UTC
Intel Omni-Path™ is a high-performance network fabric particularly suited to High-Performance Computing (HPC) workloads. It enforces traffic separation through logical partitioning of the network at the L2 layer, so only packets on the same VLAN are allowed on a given network port. Membership in a virtual fabric/network is managed through a centralized Omni-Path Fabric Manager, which runs on a network node and configures all the switches and node interfaces (HFIs).

Neutron will manage the virtual fabrics through the Fabric Manager, as it does not know the detailed topology of the Omni-Path fabric. Intel is developing and submitting a Neutron mechanism driver for Omni-Path upstream into the Neutron Stadium.
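
As background, an out-of-tree ML2 mechanism driver is normally enabled with a one-line change to the mechanism_drivers list in ml2_conf.ini. A minimal sketch, assuming the driver registers under the entry-point name "omnipath" (to be confirmed against the networking-omnipath documentation):

[ml2]
mechanism_drivers = omnipath,openvswitch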

Intel Omni-Path has been enabled in Red Hat Enterprise Linux (details to be verified)

Version-Release number of selected component (if applicable):
OpenStack Neutron in the Train release (delivered through the Neutron Stadium)

2. Business Justification:
  a) Why is this feature needed?
     High network throughput and low latency are critical for HPC workloads. Traffic separation through logical partitioning is an effective way to achieve this.
  b) What hardware does this enable?
   Intel Omni-Path NICs and switches
  c) Is this hardware on-board in a system (e.g., LOM) or an add-on card?
  Add-on card
  d) Business impact? 
     CSPs, Communication Service Providers (CoSPs), and large enterprises can deploy demanding HPC workloads more cost-effectively
  e) Other business drivers: N/A

3. Primary contact at Partner, email, phone (chat)
   Manjeet.s.bhatia

4. Expected results:
- Nova/Ironic initiates creation of a bare metal node on a tenant network by interfacing with the Neutron mechanism driver to create an unbound port on the tenant network
- The Ironic driver interfaces with the mechanism driver to create a bound port on the provisioning network, then provisions the node. The Ironic driver then requests that the mechanism driver delete the port on the provisioning network and bind the port on the tenant network
- The Neutron mechanism driver uses the GUID of the new node to configure an Omni-Path port and bind it to the virtual fabric. The port is first created with the GUID added to its binding:profile, which Neutron passes to the mechanism driver in the port binding request; the mechanism driver then binds the port to the virtual fabric using that GUID (see the sketch after this list)
- The Omni-Path Fabric Manager calls the Open Management Agent CLI to add the GUID to the virtual fabric, and checks the port status to confirm it is configured
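
For illustration, a minimal sketch of the port workflow above using openstacksdk. The cloud name, the placeholder IDs, and the binding:profile key ('guid') are assumptions for this sketch; the actual profile schema is defined by the mechanism driver:

import openstack

# Connect using a clouds.yaml entry (the entry name is an assumption for this sketch).
conn = openstack.connect(cloud='overcloud')

tenant_network_id = '<tenant-network-uuid>'   # existing tenant network
node_guid = '<node-hfi-guid>'                 # Omni-Path HFI GUID of the bare metal node

# 1. Create an unbound port that carries the node GUID in binding:profile.
#    The 'guid' key is illustrative; the driver defines the expected schema.
port = conn.network.create_port(
    network_id=tenant_network_id,
    binding_vnic_type='baremetal',
    binding_profile={'guid': node_guid},
)

# 2. A later port update that sets binding:host_id triggers ML2 port binding;
#    the mechanism driver reads the GUID from binding:profile and asks the
#    Fabric Manager to add it to the virtual fabric.
conn.network.update_port(port, binding_host_id='<ironic-node-uuid>')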

Additional info:
Neutron stadium submission of mechanism driver changes (with documentation) – TBD
OmniPath Fabric Manager link – TBD

Comment 1 Krish Raghuram 2019-03-11 16:58:10 UTC
Links to Intel Omni-Path Fabric Manager:
 - github link: https://github.com/intel/opa-fm
 - Intel Download center link: https://downloadcenter.intel.com/download/28522/Intel-Omni-Path-Software-Including-Intel-Omni-Path-Host-Fabric-Interface-Driver-?v=t

Comment 2 Krish Raghuram 2019-07-01 21:40:28 UTC
The Neutron driver submission at https://review.openstack.org/#/c/651008/ has merged.

Comment 3 Krish Raghuram 2019-07-19 16:12:35 UTC
The upstream repo for the Neutron ML2 driver is at https://opendev.org/x/networking-omnipath

