Bug 1412017 - [RFE] Capability to simultaneously manage multiple NFVI-PoPs from a VIM [NEEDINFO]
Summary: [RFE] Capability to simultaneously manage multiple NFVI-PoPs from a VIM
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 12.0 (Pike)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: RHOS Maint
QA Contact: Sasha Smolyak
URL:
Whiteboard:
Depends On:
Blocks: 1476902 1521118
 
Reported: 2017-01-11 01:23 UTC by hrushi
Modified: 2019-07-11 20:14 UTC
CC: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-11 20:14:35 UTC
Target Upstream Version:
fbaudin: needinfo? (royoung)
fbaudin: needinfo? (aherr)


Attachments
Graphical representation of scenario and requirement (1.24 MB, application/zip)
2017-01-11 07:37 UTC, hrushi

Description hrushi 2017-01-11 01:23:49 UTC
Telco use cases require a multi-tier NFVi Point of Presence (PoP), Central Office (CO), or Mobile Switching Center/Office (MSC/MSO), a.k.a. a micro DC, which is severely constrained in:
Space
Power 
Cooling

Additional constraints are minimal upfront cost and a small control vs. compute footprint, with centralized operation/maintenance (including upgrades).

From a connectivity standpoint, telcos can provide guaranteed bandwidth and latency between the main DC and the micro DC.

This requirement basically asks for RHOSP to support:
1. Deploying a compute node over WAN: this need not be done through PXE booting with OSPd or image transfer; it can mean bringing up a preinstalled compute node that talks to the main DC controllers.

2. Support for inter-service communication between compute and controller, to ensure OpenStack services do not error out. It is safe to assume that the infrastructure provides L2 connectivity between controller and remote compute nodes, sufficient bandwidth (up to 2 Gbps), and latency < 2 ms; a pre-flight check along these lines is sketched below.
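
As a minimal sketch only (plain Python; the controller address and port are placeholder assumptions, not part of this RFE), a remote compute node could verify the assumed link quality to the central control plane before joining:

    import socket
    import statistics
    import time

    CONTROLLER = "10.0.0.10"  # hypothetical central-DC controller VIP
    PORT = 5672               # AMQP (RabbitMQ) port used for compute <-> conductor RPC

    def tcp_rtt_ms(host, port, samples=20):
        """Measure TCP connect round-trip times to host:port, in milliseconds."""
        rtts = []
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=2):
                pass  # close immediately; we only time the TCP handshake
            rtts.append((time.monotonic() - start) * 1000.0)
        return rtts

    median = statistics.median(tcp_rtt_ms(CONTROLLER, PORT))
    print("median RTT to controller: %.2f ms" % median)
    if median > 2.0:
        print("WARNING: latency exceeds the 2 ms budget assumed in this RFE")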

Comment 1 Barak 2017-01-11 05:57:37 UTC
3. A micro DC can be as simple as a single server. The idea is that the server is an OpenStack host while the OpenStack control entities reside in the central DC. This implies a distributed OpenStack instance where the control plane is in one location and some of the OpenStack hosts are remote.
4. The main difference is in the networking part, for which the Neutron ODL plugin is suggested; HPE OpenSDN, which implements a Neutron ODL plugin, addresses these issues.
5. The solution is fully managed from the central DC. As an example, a user can choose to instantiate a new VNF on one of the remote OpenStack hosts, so a VM image from the central DC will be spawned on the remote OpenStack host (see the sketch after this list).
6. Sometimes the remote location isn't easily accessible to the carrier's technical team, so it must be possible to support the remote OpenStack host (e.g. upgrade it, install patches) without accessing the actual server (remote support).
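
To make point 5 concrete: assuming the remote micro-DC hosts are grouped into their own Nova availability zone (the zone name "micro-dc-1", the cloud entry, and the image/flavor/network names below are all hypothetical), a minimal openstacksdk sketch for spawning a VNF on the remote site from the central DC could look like this:

    import openstack

    # All resource names here are assumptions for illustration, not from the RFE.
    conn = openstack.connect(cloud="central-dc")

    image = conn.compute.find_image("vnf-image")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("provider-net")

    # The image lives in the central DC's Glance; Nova transfers it to the remote
    # host once the instance is scheduled into that host's availability zone.
    server = conn.compute.create_server(
        name="vnf-at-remote-pop",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        availability_zone="micro-dc-1",
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)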

Comment 2 hrushi 2017-01-11 07:37:20 UTC
Created attachment 1239346 [details]
Graphical representation of scenario and requirement

Comment 3 Franck Baudin 2017-01-19 13:17:09 UTC
This is related to multi-site; adding the proper PM to the loop.

Comment 4 Franck Baudin 2017-01-27 14:41:10 UTC
This won't be ready for RHOSP12; tentatively flagging for RHOSP13, to be reassessed when scoping RHOSP13.

Comment 10 Jaromir Coufal 2019-07-11 20:14:35 UTC
Obsolete RFE. DCN (Distributed Compute Node) has been delivered.

