Bug 1309550

Summary: [RFE] Update Cinder heat template to allow multiple Ceph pools
Product: Red Hat OpenStack
Reporter: Kyle Bader <kbader>
Component: openstack-tripleo-heat-templates
Assignee: Alan Bishop <abishop>
Status: CLOSED ERRATA
QA Contact: Yogev Rabl <yrabl>
Severity: high
Docs Contact: Derek <dcadzow>
Priority: high
Version: 12.0 (Pike)
CC: alan_bishop, arkady_kanevsky, cdevine, christopher_dearborn, dbecker, dchia, gael_rehault, gcharot, gfidente, ipilcher, joherr, john_terpstra, John_walsh, jraju, jschluet, j_t_williams, kbader, knylande, kschinck, kurt_hey, martinsson.patrik, mburns, morazi, nlevine, nsatsia, Paul_Dardeau, pgrist, rajini.karthik, randy_perryman, rhel-osp-director-maint, rsussman, scohen, smerrow, sputhenp, wayne_allen, yrabl
Target Milestone: Upstream M2
Keywords: FutureFeature
Target Release: 13.0 (Queens)
Hardware: All
OS: Linux
Fixed In Version: openstack-tripleo-heat-templates-8.0.0-0.20180103192340.el7ost puppet-tripleo-8.1.1-0.20180102165828.el7ost
Doc Type: Enhancement
Doc Text:
A new CinderRbdExtraPools Heat parameter has been added which specifies a list of Ceph pools for use with RBD backends for Cinder. An extra Cinder RBD backend driver is created for each pool in the list. This is in addition to the standard RBD backend driver associated with the CinderRbdPoolName. The new parameter is optional and defaults to an empty list. All of the pools are associated with a single Ceph cluster.
Last Closed: 2018-06-27 13:26:22 UTC
Type: Feature Request
Bug Depends On: 1466008    
Bug Blocks: 1413723, 1419948, 1458798    

Description Kyle Bader 2016-02-18 05:46:18 UTC
Currently we only support a single Cinder backend for Ceph. Ceph provides the ability to have multiple storage pools rooted at different branches of a CRUSH hierarchy. This means pools can be composed of different media, which can then provide different block storage service levels in the form of Cinder volume types. An example of this is described in this blog post:

http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/

Comment 2 Mike Burns 2016-04-07 21:11:06 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 7 Red Hat Bugzilla Rules Engine 2017-04-26 20:13:50 UTC
This bugzilla has been removed from the release and needs to be reviewed and triaged for another target release.

Comment 8 Alan Bishop 2017-07-10 16:38:00 UTC
Currently, the RBD pool and associated Cinder backend are defined by these two
TripleO settings:

o CinderRbdPoolName
  - THT parameter
  - Default value: "volumes"
o cinder::backend::rbd::volume_backend_name
  - Puppet hiera data
  - Default value: "tripleo_ceph"

The design I have in mind would add a new THT parameter:

  CinderRbdExtraPools:
    default: ''
    description: >
      List of extra Ceph pools for use with RBD backends for Cinder. An
      extra Cinder RBD backend driver is created for each pool in the
      list. This is in addition to the standard RBD backend driver
      associated with the CinderRbdPoolName.
    type: comma_delimited_list

Any pools specified in the (optional) list would automatically generate
additional Cinder backends. For example, deploying an environment file that
contained this:

parameter_defaults:
  CinderRbdExtraPools: fast,slow

Would result in a Cinder deployment with three RBD backends:

RBD Pool   Cinder Backend
--------   -----------------
volumes    tripleo_ceph
fast       tripleo_ceph_fast
slow       tripleo_ceph_slow
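
As an illustrative check (not part of the proposal; the command is standard
python-openstackclient, and the "hostgroup@..." host names assume TripleO's
default backend_host), each backend should appear as its own cinder-volume
service after deployment:

  $ openstack volume service list --service cinder-volume
  # Expect three cinder-volume rows, one per backend, with hosts such as
  # hostgroup@tripleo_ceph, hostgroup@tripleo_ceph_fast, and
  # hostgroup@tripleo_ceph_slow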

Note 1: For Ceph clusters managed by TripleO, the "CephPools" THT parameter
can be used to create additional pools.

parameter_defaults:
  CephPools:
    fast:
      pg_num: 1024
      pgp_num: 1024
    slow:
      pg_num: 512
      pgp_num: 512
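
Both parameter_defaults snippets could live in a single environment file
passed to the overcloud deploy command. A minimal sketch (the file name and
path are arbitrary, and the usual additional environment files are elided):

  openstack overcloud deploy --templates \
    -e /home/stack/templates/multi-rbd-pools.yaml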
    
Note 2: The user/operator would be responsible for creating the Ceph CRUSH map
necessary to establish appropriate service levels for each RBD pool.
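
For example, a minimal sketch of that CRUSH setup (not part of this change;
assumes a Luminous-era ceph CLI, OSD device classes named "ssd" and "hdd",
and arbitrary rule names):

  # Replicated CRUSH rules that place each pool on different media
  ceph osd crush rule create-replicated fast_rule default host ssd
  ceph osd crush rule create-replicated slow_rule default host hdd

  # Bind each extra RBD pool to the matching rule
  ceph osd pool set fast crush_rule fast_rule
  ceph osd pool set slow crush_rule slow_rule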

Note 3: The user/operator would be responsible for creating Cinder volume
types associated with each of the Cinder RBD backends. That is, the Cinder
backends would be automatically created, but the Cinder volume types would
need to be defined post-deployment.
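
A post-deployment sketch using the standard openstack CLI (the volume type
names are arbitrary; the backend names come from the table above):

  # Create a volume type per backend and pin it by volume_backend_name
  openstack volume type create fast
  openstack volume type set --property volume_backend_name=tripleo_ceph_fast fast

  openstack volume type create slow
  openstack volume type set --property volume_backend_name=tripleo_ceph_slow slow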

Comment 9 Giulio Fidente 2017-07-13 14:05:41 UTC
Assuming it is sufficient to create additional Cinder backends pointing to different pools in the same Ceph cluster, this sounds sane to me.

We might have to revisit the implementation if we move to support multiple Ceph clusters, but to preserve the user experience I suppose we could assume the first cluster is the one the additional backends point to.

Comment 10 John H Terpstra 2017-08-04 16:00:38 UTC
The ability to specify additional Cinder backends, each of which points to a specific Ceph pool, is the minimum request.

We have had discussions with sites that are seeking to configure a default Ceph cluster for an OpenStack cluster, but also to share multiple Ceph clusters across OpenStack clusters.

Example:

1x all-flash, high-performance Ceph cluster (0.1 PB), shared across OpenStack clusters as needed

1x ultra-high-capacity Ceph cluster for archival, shared across OpenStack clusters as needed (in one case, 400 nodes)

1x Ceph cluster per OpenStack cluster, for dedicated use within that OpenStack cluster.

Comment 11 Alan Bishop 2017-09-01 18:52:34 UTC
https://blueprints.launchpad.net/tripleo/+spec/multiple-cinder-rbd-backend will be proposed at the upcoming Queens PTG.

Comment 13 Alan Bishop 2017-11-10 12:34:05 UTC
Patches have merged upstream.

Comment 19 errata-xmlrpc 2018-06-27 13:26:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086