Bug 1309550 - [RFE] Update Cinder heat template to allow multiple Ceph backends
Status: POST
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 12.0 (Pike)
Hardware: All OS: Linux
Priority: high Severity: high
Target Milestone: Upstream M2
Target Release: 13.0 (Queens)
Assigned To: Alan Bishop
QA Contact: Yogev Rabl
Keywords: FutureFeature, Triaged
Depends On: 1466008
Blocks: 1394885 1413723 1419948
Reported: 2016-02-18 00:46 EST by Kyle Bader
Modified: 2017-11-10 07:55 EST (History)
34 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: Feature Request
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---


External Trackers
Tracker ID Priority Status Summary Last Updated
OpenStack gerrit 506714 None None None 2017-09-22 12:45 EDT
OpenStack gerrit 506715 None None None 2017-09-22 12:45 EDT

Description Kyle Bader 2016-02-18 00:46:18 EST
Currently we only support a single Cinder backend for Ceph. Ceph provides the ability to have multiple storage pools rooted at different branches of a CRUSH hierarchy. This means that pools can be composed of different media, which can then provide different block storage service levels in the form of Cinder Volume Types. An example of this is described in this blog post:

Comment 2 Mike Burns 2016-04-07 17:11:06 EDT
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.
Comment 7 Red Hat Bugzilla Rules Engine 2017-04-26 16:13:50 EDT
This bugzilla has been removed from the release and needs to be reviewed and Triaged for another Target Release.
Comment 8 Alan Bishop 2017-07-10 12:38:00 EDT
Currently, the RBD pool and associated Cinder backend are defined by these two
TripleO settings:

o CinderRbdPoolName
  - THT parameter
  - Default value: "volumes"
o cinder::backend::rbd::volume_backend_name
  - Puppet hiera data
  - Default value: "tripleo_ceph"

The design I have in mind would add a new THT parameter:

  CinderRbdExtraPools:
    default: ''
    description: >
      List of extra Ceph pools for use with RBD backends for Cinder. An
      extra Cinder RBD backend driver is created for each pool in the
      list. This is in addition to the standard RBD backend driver
      associated with the CinderRbdPoolName.
    type: comma_delimited_list

Any pools specified in the (optional) list would automatically generate
additional Cinder backends. For example, deploying an environment file that
contained this:

  CinderRbdExtraPools: fast,slow

Would result in a Cinder deployment with three RBD backends:

RBD Pool   Cinder Backend
--------   -----------------
volumes    tripleo_ceph
fast       tripleo_ceph_fast
slow       tripleo_ceph_slow

Note 1: For Ceph clusters managed by TripleO, the "CephPools" THT parameter
can be used to create additional pools, for example:

  CephPools:
    fast:
      pg_num: 1024
      pgp_num: 1024
    slow:
      pg_num: 512
      pgp_num: 512
Note 2: The user/operator would be responsible for creating the Ceph CRUSH map
necessary to establish appropriate service levels for each RBD pool.
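As a sketch of what Note 2 might involve (the rule and pool names here are illustrative, not part of the design; CRUSH device classes require Ceph Luminous or later, and the root/failure domain must match your own hierarchy):

```shell
# Create CRUSH rules that select different device classes, so each pool
# maps to a distinct media tier.
ceph osd crush rule create-replicated fast_rule default host ssd
ceph osd crush rule create-replicated slow_rule default host hdd

# Point each RBD pool at its rule: "fast" lands on SSDs, "slow" on HDDs.
ceph osd pool set fast crush_rule fast_rule
ceph osd pool set slow crush_rule slow_rule
```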

Note 3: The user/operator would be responsible for creating Cinder volume
types associated with each of the Cinder RBD backends. That is, the Cinder
backends would be automatically created, but the Cinder volume types would
need to be defined post-deployment.
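A post-deployment sketch of what Note 3 describes, assuming the generated backend names follow the tripleo_ceph_<pool> pattern shown above:

```shell
# Create one volume type per RBD backend and pin it via
# volume_backend_name, which must match the deployed backend name.
openstack volume type create fast
openstack volume type set --property volume_backend_name=tripleo_ceph_fast fast

openstack volume type create slow
openstack volume type set --property volume_backend_name=tripleo_ceph_slow slow

# Users can then request a service level explicitly:
openstack volume create --type fast --size 10 my-fast-volume
```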
Comment 9 Giulio Fidente 2017-07-13 10:05:41 EDT
Assuming it is sufficient to create additional Cinder backends pointing to different pools in the same Ceph cluster, this sounds sane to me.

We might have to revisit the implementation if we move to support multiple Ceph clusters, but to preserve the user experience I suppose we could assume the additional backends point to the first cluster.
Comment 10 John H Terpstra 2017-08-04 12:00:38 EDT
The ability to specify additional Cinder backends, each of which points to a specific Ceph pool, is the minimum request.

We have had discussions with sites that are seeking to configure a default Ceph cluster for an OpenStack cluster, but also to share multiple Ceph clusters across OpenStack clusters. For example:

o 1x all-flash high performance Ceph cluster (0.1 PB), shared across OpenStack clusters as needed
o 1x ultra high capacity Ceph cluster for archival, shared across OpenStack clusters as needed (in one case 400 nodes)
o 1x Ceph cluster per OpenStack cluster, for dedicated use within that OpenStack cluster
Comment 11 Alan Bishop 2017-09-01 14:52:34 EDT
https://blueprints.launchpad.net/tripleo/+spec/multiple-cinder-rbd-backend will be proposed at the upcoming Queens PTG.
Comment 13 Alan Bishop 2017-11-10 07:34:05 EST
Patches have merged upstream.
