Bug 1759890 - [RFE] - OpenStack able to target independent Ceph cluster for each availability zone
Keywords:
Status: CLOSED DUPLICATE of bug 1466008
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: John Fulton
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-09 10:51 UTC by Pradipta Kumar Sahoo
Modified: 2023-09-07 20:49 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-23 13:31:29 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker OSP-28279 (last updated 2023-09-07 20:49:51 UTC)

Description Pradipta Kumar Sahoo 2019-10-09 10:51:04 UTC
Description of problem:
As requested by the customer, we are raising this RFE for OpenStack to target an independent Ceph cluster for each availability zone.
While this can be achieved manually, TripleO does not yet support configuring more than one external Ceph cluster.


Usecase:
-------
A single OpenStack deployment with each controller in a different server room.
In the same way, the customer has one AZ per server room, resulting in one separate compute role per server room in Director.
There is an independent Ceph cluster per server room, and the customer wants to override the Mon hosts, FSID, and keyring for each compute role in Director.
The motivation is that each Ceph cluster has an independent lifecycle, so a failure of one cluster during an upgrade does not break the availability zone constraint.


CTL            | CTL            | CTL            |
-------------------------------------------------|
SINGLE OPENSTACK control plane !
-------------------------------------------------|
COMPUTE_A1     | COMPUTE_B1     | COMPUTE_C1     |
CEPH_CLUSTER_1 | CEPH_CLUSTER_2 | CEPH_CLUSTER_3 |

PS: This also causes problems for Glance, as images can only be located in a single pool
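
For illustration only, a minimal sketch of the kind of per-compute-role override being requested, using TripleO role-specific parameters. Role names (ComputeA1, ComputeB1) match the diagram above; FSIDs, mon IPs, and keys are placeholders. This is not a working configuration: in OSP 13 the external Ceph client parameters are applied cloud-wide, which is exactly the gap this RFE describes.

# Hypothetical environment file, placeholder values only
parameter_defaults:
  # Computes in server room A would consume CEPH_CLUSTER_1
  ComputeA1Parameters:
    CephClusterFSID: 'aaaaaaaa-1111-4444-8888-000000000001'
    CephExternalMonHost: '172.16.1.7,172.16.1.8,172.16.1.9'
    CephClientKey: 'AQ...=='    # client keyring for CEPH_CLUSTER_1 (placeholder)
  # Computes in server room B would consume CEPH_CLUSTER_2
  ComputeB1Parameters:
    CephClusterFSID: 'bbbbbbbb-2222-4444-8888-000000000002'
    CephExternalMonHost: '172.16.2.7,172.16.2.8,172.16.2.9'
    CephClientKey: 'AQ...=='    # client keyring for CEPH_CLUSTER_2 (placeholder)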


Version-Release number of selected component (if applicable):
Red Hat OpenStack 13

How reproducible:
In the customer environment

Comment 1 Giulio Fidente 2019-10-23 13:31:29 UTC
Starting with OSP 16 (tech preview in OSP 15), it will be possible to deploy multiple nova/cinder AZs using multiple Heat stacks, each with its own isolated Ceph cluster.
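
For reference, a rough sketch (assumed parameter names from tripleo-heat-templates, not a verified procedure from this bug) of the per-stack environment each AZ stack could pass in that multi-stack model, so that every stack points at its own external Ceph cluster. Each AZ would then be deployed as its own stack, e.g. "openstack overcloud deploy --stack az1 -e az1-ceph-external.yaml" plus the usual templates and environments.

# Hypothetical az1-ceph-external.yaml, passed only to the "az1" stack;
# values are placeholders for CEPH_CLUSTER_1.
parameter_defaults:
  CephClusterFSID: 'aaaaaaaa-1111-4444-8888-000000000001'
  CephExternalMonHost: '172.16.1.7,172.16.1.8,172.16.1.9'
  CephClientKey: 'AQ...=='
  CephClientUserName: 'openstack'
  NovaComputeAvailabilityZone: 'az1'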

*** This bug has been marked as a duplicate of bug 1466008 ***

