Bug 1412816 - [RFE] Multi-site: deploy overcloud using pre-existing Keystone database
Summary: [RFE] Multi-site: deploy overcloud using pre-existing Keystone database
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Jiri Stransky
QA Contact: Arik Chernetsky
Duplicates: 1368965
Depends On:
Blocks: 1476902 1592486
Reported: 2017-01-12 21:30 UTC by Ian Pilcher
Modified: 2018-12-08 18:14 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-12-08 18:14:21 UTC
Target Upstream Version:


Description Ian Pilcher 2017-01-12 21:30:07 UTC
This functionality is needed to deploy a "shared Keystone" multi-site architecture.  In this architecture, each site is a separate OpenStack region, but the Keystone services all use a shared, replicated database.

This Keystone database will be manually deployed on a minimum of 3 servers, in separate locations.  Galera will be used to replicate the keystone database between these servers.  Some of these servers *may* be co-located with the OpenStack deployment.

Each site will have a local database VIP, which will be managed by pacemaker on the controller cluster and assigned to one of the controller hosts.  HAProxy will listen on that VIP and forward connections to the actual DB servers, with a strong preference for any local server(s).
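The forwarding behavior described above could look roughly like the following HAProxy backend. This is a minimal sketch only; the listen section name, VIP, and server addresses are illustrative assumptions, not taken from any actual deployment.

```
# Hypothetical HAProxy config for the per-site Keystone DB VIP.
# All names and addresses below are illustrative assumptions.
listen keystone_db
    bind 192.0.2.10:3306             # local DB VIP (pacemaker-managed)
    mode tcp
    option mysql-check user haproxy  # lightweight MySQL health check
    # Strong preference for the co-located server; remote Galera
    # nodes are marked "backup" so they only receive connections
    # when no local server is available.
    server db-local   192.0.2.21:3306   check
    server db-remote1 198.51.100.21:3306 check backup
    server db-remote2 203.0.113.21:3306  check backup
```

Marking the remote servers as `backup` is what implements the "strong preference for any local server(s)" while still allowing failover across sites.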

The following information will be provided in an environment file:

 - local Keystone DB servers (0 or more IP/hostname[:port] combinations)
 - remote Keystone DB servers (2 or more IP/hostname[:port] combinations)
 - database name (defaults to keystone)
 - database credentials
 - local Keystone database VIP (optional)
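The inputs above might be expressed in an environment file along these lines. Note that these parameter names are purely hypothetical; no such parameters exist in tripleo-heat-templates today, which is the point of this RFE.

```yaml
# Hypothetical environment file -- parameter names are illustrative
# only and do not exist in tripleo-heat-templates as of this writing.
parameter_defaults:
  KeystoneDatabaseName: keystone
  KeystoneDatabaseUser: keystone
  KeystoneDatabasePassword: ExamplePassword123
  # 0 or more local servers (IP/hostname[:port])
  KeystoneLocalDatabaseServers:
    - 192.0.2.21:3306
  # 2 or more remote servers (IP/hostname[:port])
  KeystoneRemoteDatabaseServers:
    - 198.51.100.21:3306
    - 203.0.113.21:3306
  # Optional local database VIP, managed by pacemaker
  KeystoneDatabaseVIP: 192.0.2.10
```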

Comment 2 Ian Pilcher 2017-01-26 16:47:21 UTC
A few additional thoughts around this ...

We believe that we won't hit the SELECT ... FOR UPDATE issue that forces us to use a single Galera writer in the general case.
Rolling N+1 upgrades should be possible by following the correct procedure.
This may require logic to disable the db_sync command that we use to upgrade the database schema in the normal upgrade case.

Also, we're going to want to put each region's service accounts (neutron, heat, nova, and possibly admin) into a separate, region-specific Keystone domain in order to avoid name collisions between regions. This domain will still use "local" (Keystone database) identity storage, and is distinct from the Active Directory/LDAP/SAML/etc. domain(s) used for normal user authentication. I'm not sure whether this is possible with TripleO today.
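Done by hand, the per-region domain setup would look something like the following CLI sketch. The domain, region, and project names are illustrative assumptions; the question for this RFE is whether TripleO can drive this itself.

```shell
# Hypothetical commands; domain/region names are illustrative.
# Create a region-specific, SQL-backed domain for service accounts:
openstack domain create region-one-services

# Create this region's service users inside that domain, so that
# names like "nova" don't collide with other regions' service users:
openstack user create --domain region-one-services \
    --password-prompt nova
openstack role add --user nova --user-domain region-one-services \
    --project service admin
```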

Comment 5 Chris Jones 2018-01-08 16:25:44 UTC
*** Bug 1368965 has been marked as a duplicate of this bug. ***

Comment 8 Harry Rybacki 2018-12-08 18:14:21 UTC
Closing as WONTFIX. Upstream is currently discussing how Keystone will fit into the Edge model.
