Bug 1182956 - [RFE] Support for High Availability on Red Hat OpenStack Platform (RHEL 8)
Summary: [RFE] Support for High Availability on Red Hat OpenStack Platform (RHEL 8)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.3
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.5
Assignee: Chris Feist
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Duplicates: 1908099 (view as bug list)
Depends On: 1264181 1886074 1908146 1908147 1908148 1949114
Blocks: 1891054
 
Reported: 2015-01-16 10:23 UTC by Paul Needle
Modified: 2023-12-15 15:48 UTC (History)
CC List: 35 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
.Support for High Availability on Red Hat OpenStack Platform

You can now configure a high availability cluster on Red Hat OpenStack Platform. In support of this feature, Red Hat provides the following new cluster agents:

* `fence_openstack`: fencing agent for HA clusters on OpenStack
* `openstack-info`: resource agent to configure the `openstack-info` cloned resource, which is required for an HA cluster on OpenStack
* `openstack-virtual-ip`: resource agent to configure a virtual IP address resource
* `openstack-floating-ip`: resource agent to configure a floating IP address resource
* `openstack-cinder-volume`: resource agent to configure a block storage resource
Clone Of:
Clones: 2121838 (view as bug list)
Environment:
Last Closed: 2022-11-09 15:37:59 UTC
Type: Bug
Target Upstream Version:
Embargoed:
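As a rough illustration of how the agents listed in the Doc Text fit together, the sketch below shows one possible `pcs` configuration. The cloud name, UUIDs, IP addresses, and resource names are placeholders, and the parameter names (such as `cloud=`, `ip_id=`, `volume_id=`) are assumptions that should be verified against the RHEL documentation for HA clusters on OpenStack for your release.

```shell
# Hypothetical sketch only: all names, UUIDs, addresses, and parameter
# names below are placeholders, not taken from this bug report.

# Cloned openstack-info resource, required before the other OpenStack
# agents can run ("ha-example" would refer to an entry in clouds.yaml):
pcs resource create openstack-info openstack-info \
    cloud="ha-example" clone

# Fencing for the cluster nodes, mapping node names to instance UUIDs:
pcs stonith create fenceopenstack fence_openstack \
    pcmk_host_map="node01:<instance-uuid-1>;node02:<instance-uuid-2>" \
    cloud="ha-example"

# Virtual and floating IP address resources:
pcs resource create ClusterVIP openstack-virtual-ip \
    cloud="ha-example" ip="172.16.0.119"
pcs resource create ClusterFIP openstack-floating-ip \
    cloud="ha-example" ip_id="<floating-ip-uuid>" parent_ip="172.16.0.119"

# Block storage (Cinder) volume resource:
pcs resource create ClusterVolume openstack-cinder-volume \
    cloud="ha-example" volume_id="<cinder-volume-uuid>"
```

The ordering matters in practice: the `openstack-info` clone must be running on a node before the other OpenStack agents can operate there.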




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CLUSTERQE-5921 0 None None None 2022-08-26 21:01:17 UTC
Red Hat Knowledge Base (Article) 3131311 0 None None None 2019-06-21 22:09:41 UTC
Red Hat Knowledge Base (Solution) 1365313 0 None None None Never

Description Paul Needle 2015-01-16 10:23:18 UTC
This Bugzilla is being used to track details regarding a feature request for full end-to-end high-availability for Red Hat OpenStack Platform guests.

Comment 5 Perry Myers 2015-02-18 14:25:22 UTC
Note, the current approach we're considering for Compute Node HA and guest/VM HA is roughly described here:
http://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal

Comment 6 Fabio Massimo Di Nitto 2015-02-18 16:07:55 UTC

*** This bug has been marked as a duplicate of bug 1185030 ***

Comment 7 Russell Bryant 2015-02-19 19:14:47 UTC
I believe this was closed as a duplicate, though the requests are not quite the same.  The work in bug 1185030 is about failures of compute nodes themselves, while this request is more about responding to failures of applications in guests.

Comment 8 Andrew Beekhof 2015-02-19 20:05:18 UTC
The hurdle for HA inside the guest is always people's willingness to have another daemon running there. *cough* matahari *cough*

pacemaker-remoted can certainly fill this role; however, this option is currently incompatible with using pacemaker-remoted for the compute node (nested pacemaker-remoted is not currently on the roadmap). There is also the issue of where the guest's HA configuration should live.

UNLESS

- guest HA was limited to services inside a single guest (no cross-talk or co-ordination required between guests)
- the guests' HA configuration lived inside the guest (in a yet to be determined form)

If this is the case, we could use pacemaker-remoted inside the guest in stand-alone mode (no communication to the outside world).


Other options:

1. We implement the method for scaling corosync as discussed pre-DevConf this year.
  This would allow the compute nodes to be full members of the cluster and pacemaker-remoted to be used for guests.

  - Drawback, the configuration will get quite large as it will contain all compute nodes and all HA service configuration for guests.

2. We allow crosstalk between guests on a single compute node by leaving each as a single-node cluster and including the guest service configuration in the compute node's cluster.

  - Dubious benefit here, failure of the compute node would take down the entire virtual cluster.

3. For pure monitoring of guest services, we could make use of nagios agents (which normally run from outside the target host). This is already supported.

Comment 11 Stephen Gordon 2016-06-09 18:52:18 UTC
Bulk update to reflect that the scope of Red Hat OpenStack Platform 9 does not include this issue (no pm_ack+).

Comment 13 Andrew Beekhof 2017-08-04 01:28:50 UTC
Bumping to 7.6 for now but this may become a priority for OSP13 depending on how the Ceph folks implement their per-pool NFS servers.

Comment 30 Brandon Perkins 2020-12-16 15:42:26 UTC
*** Bug 1908099 has been marked as a duplicate of this bug. ***

Comment 38 Chris Feist 2022-11-09 15:37:59 UTC
Closing as CURRENTRELEASE, since this is now supported as of RHEL 8.7.

Comment 39 Red Hat Bugzilla 2023-09-18 00:11:04 UTC
The needinfo request(s) on this closed bug have been removed, as they remained unresolved for 120 days.

