Red Hat Bugzilla – Bug 1309550
[RFE] Update Cinder heat template to allow multiple Ceph backends
Last modified: 2017-11-10 07:55:32 EST
Currently we only support a single Cinder backend for Ceph. Ceph provides the ability to have multiple storage pools that take root at different branches of a CRUSH hierarchy. This means that pools can be composed of different media, which can then provide different block storage service levels in the form of Cinder volume types. An example of this is described in this blog post:
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
This bugzilla has been removed from the release and needs to be reviewed and triaged for another target release.
Currently, the RBD pool and the associated Cinder backend are defined by these two settings:
- THT parameter (CinderRbdPoolName)
  - Default value: "volumes"
- Puppet hiera data (the Cinder backend name)
  - Default value: "tripleo_ceph"
The design I have in mind would add a new THT parameter:
List of extra Ceph pools for use with RBD backends for Cinder. An
extra Cinder RBD backend driver is created for each pool in the
list. This is in addition to the standard RBD backend driver
associated with the CinderRbdPoolName.
Any pools specified in the (optional) list would automatically generate
additional Cinder backends. For example, deploying an environment file that
lists two extra pools would result in a Cinder deployment with three RBD
backends, each mapped to its own RBD pool (see the sketch below).
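As a rough sketch of how this could be used (the parameter name CinderRbdExtraPools, the generated backend names of the form <standard backend>_<pool>, and the pool names "fast" and "slow" are assumptions for illustration):

# Hypothetical environment file; the extra pools must already exist in the
# Ceph cluster (or be created via CephPools, see Note 1 below).
cat > cinder-extra-rbd-backends.yaml <<'EOF'
parameter_defaults:
  # Assumed name for the new THT parameter
  CinderRbdExtraPools:
    - fast
    - slow
EOF

# Expected result (assumption): three RBD backends, for example:
#   RBD pool "volumes" -> Cinder backend "tripleo_ceph"
#   RBD pool "fast"    -> Cinder backend "tripleo_ceph_fast"
#   RBD pool "slow"    -> Cinder backend "tripleo_ceph_slow"
openstack overcloud deploy --templates -e cinder-extra-rbd-backends.yaml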
Note 1: For Ceph clusters managed by TripleO, the "CephPools" THT parameter
can be used to create additional pools.
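A minimal sketch of that, appended to the same hypothetical environment file; the CephPools format shown here (a list of maps) matches the later ceph-ansible based releases and may differ on older ones, and the pg_num/rule values are purely illustrative:

cat >> cinder-extra-rbd-backends.yaml <<'EOF'
  # Assumed CephPools format; adjust to the format of the release in use
  CephPools:
    - name: fast
      pg_num: 32
      rule_name: replicated_rule
    - name: slow
      pg_num: 32
      rule_name: replicated_rule
EOF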
Note 2: The user/operator would be responsible for creating the Ceph CRUSH map
necessary to establish appropriate service levels for each RBD pool.
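As a rough sketch of the Ceph-side step (assuming Ceph Luminous or later device classes; the rule name, pool name, and failure domain are illustrative):

# Hypothetical CRUSH rule restricting the "fast" pool to SSD-backed OSDs
ceph osd crush rule create-replicated fast_ssd_rule default host ssd
ceph osd pool set fast crush_rule fast_ssd_rule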
Note 3: The user/operator would be responsible for creating Cinder volume
types associated with each of the Cinder RBD backends. That is, the Cinder
backends would be automatically created, but the Cinder volume types would
need to be defined post-deployment.
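A post-deployment sketch (the volume type names are illustrative, and the volume_backend_name values assume the extra backends are named <standard backend>_<pool>):

# Hypothetical: one Cinder volume type per RBD backend
openstack volume type create fast
openstack volume type set --property volume_backend_name=tripleo_ceph_fast fast
openstack volume type create slow
openstack volume type set --property volume_backend_name=tripleo_ceph_slow slow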
Assuming it is sufficient to create additional Cinder backends pointing to different pools in the same Ceph cluster, this sounds sane to me.
We might have to revisit the implementation if we move to support multiple Ceph clusters, but to preserve the user experience I suppose we could assume the additional backends point to the first cluster.
The ability to specify additional Cinder backends, each of which points to a specific Ceph pool, is the minimum request.
We have had discussions with sites that are seeking to configure a default Ceph cluster for an OpenStack cluster, but also to share multiple Ceph clusters across OpenStack clusters. For example:
- 1x all-flash, high-performance Ceph cluster (0.1 PB), shared across OpenStack clusters as needed
- 1x ultra-high-capacity Ceph cluster for archival, shared across OpenStack clusters as needed (in one case 400 nodes)
- 1x Ceph cluster per OpenStack cluster, for dedicated use within that OpenStack cluster
https://blueprints.launchpad.net/tripleo/+spec/multiple-cinder-rbd-backend will be proposed at the upcoming Queens PTG.
Patches have merged upstream.