Red Hat Bugzilla – Bug 1010305
[Doc] Install a Highly Available OpenStack deployment using Foreman
Last modified: 2016-02-01 14:22:28 EST
Cloned for documentation impact; refer to Bug #998597 for implementation details.
*** Bug 1010306 has been marked as a duplicate of this bug. ***
Per Ryan O'Hara (and he's the SME on this one):
Further info from the mail thread (meeting on Dec. 6 to finalise):
These are the possible scenarios from the dev bug:
- Use Foreman to deploy RDO with load-balanced services and HAProxy/keepalived (creation of LB Host Group)
- Use Foreman to deploy RDO with highly available MySQL using Pacemaker/Corosync
- Use Foreman to deploy RDO with highly available qpid
Perry: Load balancing of the core API services using HAProxy and active/passive HA of the database are
definitely in. Clustered qpid is a stretch at this point, but with
additional help from the MRG team perhaps we can get it in (tross cc'd).
Fabio: I just completed testing clustered qpid behind a LB and it's working.
The test suite passes and glance can use it happily (that was the only
problematic service in this scenario). All HA setups will be driven only by Pacemaker; keepalived is out of the picture. All clusters will be Pacemaker-based for consistency and service recovery features.
Ryan: The long version is that since haproxy is handling load balancing and
keepalived would only be providing VIP failover, we decided against
using keepalived. I didn't document how to use keepalived to provide HA for
haproxy in my RDO document. Since we're already using Pacemaker for
everything else HA, we don't need keepalived.
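To illustrate the approach Ryan describes (Pacemaker, rather than keepalived, owning the VIP and keeping haproxy itself from being a SPOF), a minimal sketch follows. The resource names, the example VIP 192.0.2.10, and the use of the systemd haproxy unit are assumptions for illustration, not parameters from the Foreman host groups:

```shell
# Sketch only: manage a VIP and an haproxy clone with Pacemaker.
# Names and addresses are examples, not host group defaults.
pcs resource create vip-api ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
pcs resource create lb-haproxy systemd:haproxy --clone

# Keep the VIP on a node that is actually running haproxy,
# and start haproxy before bringing up the VIP.
pcs constraint colocation add vip-api with lb-haproxy-clone
pcs constraint order start lb-haproxy-clone then start vip-api
```

With this, if the node holding the VIP fails, Pacemaker moves the address to another node already running haproxy, which is why keepalived's VIP failover is redundant here.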
You can deploy a load-balancer (haproxy) via foreman today and it will
create a proxy for each OpenStack API service. I encourage people to
try it and direct questions to me. Currently there is no integration
between the HA host group and the LB host group, so the LB is still a
SPOF. We need to address this.
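For context on what "a proxy for each OpenStack API service" looks like, here is a rough haproxy.cfg fragment for one service. The VIP, ports, and controller addresses are placeholders and not values the Foreman LB host group necessarily generates:

```
# Illustrative haproxy.cfg fragment: one listen block per API service.
# 192.0.2.10 is an example VIP; the servers are example controllers.
listen keystone-api
    bind 192.0.2.10:5000
    balance roundrobin
    option tcpka
    server controller1 192.0.2.11:5000 check
    server controller2 192.0.2.12:5000 check
```

Each additional API service (glance, nova, etc.) gets its own listen block on its own port, all fronted by the same VIP.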
Ryan's statements are accurate with regard to the Load Balancer. He is the best source of information for any troubleshooting there, though I am happy to help as well. Since I do not see it listed other than in passing in the comments above, I'll mention HA MySQL. There is an 'HA Mysql' Host Group in the current release that can be tested:
- Set the virtual IP you want in the parameters for this group, along with the list of nodes (by IP) that will participate in the cluster. A full list of parameter options and descriptions is here.
- Apply this group to a minimum of two nodes (hosts, in Foreman terms) so the cluster can achieve quorum. The nodes will report success once quorum is reached, which may take a couple of puppet agent runs.
- After that, the mysql host parameter can be used to set up one or more controllers to use this cluster for the database.
Feel free to ping me directly in IRC with questions for the fastest response. I am on both internal and freenode networks as jayg.
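As a hedged sketch of how one might verify the HA MySQL steps above after the puppet runs complete (the commands and the example VIP 192.0.2.20 are assumptions, not documented host group parameters):

```shell
# Confirm the cluster reached quorum (either command works)
corosync-quorumtool -s
pcs status

# Confirm the database answers on the virtual IP set for the group
mysql -h 192.0.2.20 -u root -p -e 'SELECT 1;'
```

If quorum is not yet reported, another puppet agent run on the member nodes is usually all that is needed.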
It still needs a bit more detail that Ryan and I will try to add next week, after we are both back from holiday, but I have added an initial stab (based on a summary provided by Ryan) at the steps to use a Load Balancer Host Group:
The doc above is from Dan Radez. This one is from Ryan: http://openstack.redhat.com/Load_Balance_OpenStack_API
Refocusing the requirements of this bug.
So for the OSP Installer 6.0 Guide, we've got documentation related to HA:
End-to-end Scenario: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Installer_and_Foreman_Guide/chap-Deployment_Scenario_2_Advanced_Environment.html
NEEDINFOing sgordon -- Where do you want to go from here? Did you have suggestions for improving this current content so that it meets the requirements of this BZ?
This bug targets content for the RHOS 4.0 release.
Based on the current workload and the need to prioritize work on the RHEL-OSP 7.0 release, I am closing this bug.
Content that addresses the requirements set out in this bug for RHEL-OSP 7.0 shall be considered, and a new bug raised as necessary.