Bug 1383779 - [Docs] [RFE] Document how to deploy Ceph Storage Nodes with differing ceph::profile::params::osds lists
Summary: [Docs] [RFE] Document how to deploy Ceph Storage Nodes with differing ceph::profile::params::osds lists
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 10.0 (Newton)
Assignee: Dan Macpherson
QA Contact: RHOS Documentation Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-11 17:53 UTC by John Fulton
Modified: 2018-08-07 13:47 UTC
CC: 9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
You can now use node-specific hiera entries within the overcloud deployment's Heat templates to deploy Ceph storage nodes that do not have the same list of block devices, that is, non-uniform OSD servers.
Clone Of:
Environment:
Last Closed: 2018-08-07 04:35:35 UTC
Target Upstream Version:



Description John Fulton 2016-10-11 17:53:10 UTC
OSPd assumes that all Ceph storage nodes are the same with respect to node disk layout. The documentation [1] has the user provide a YAML file like the following:

    ceph::profile::params::osds:
        '/dev/sdc':
          journal: '/dev/sdb'
        '/dev/sdd':
          journal: '/dev/sdb'

but what if the user has X nodes like the above in addition to Y nodes that look like the following:

    ceph::profile::params::osds:
        '/dev/sdc':
          journal: '/dev/sdb'
        '/dev/sdd':
          journal: '/dev/sdb'
        '/dev/sde':
          journal: '/dev/sdb'
        '/dev/sdf':
          journal: '/dev/sdb'

For larger clouds, the scenario of non-uniform hardware becomes more likely, and OSPd should have a way to provision such hardware.

[1] https://access.redhat.com/documentation/en/red-hat-openstack-platform/9/single/red-hat-ceph-storage-for-the-overcloud/#mapping_the_ceph_storage_node_disk_layout

Comment 2 John Fulton 2016-10-11 17:58:26 UTC
One solution is to use custom-roles [1] to create two roles, OSD-X and OSD-Y, and pass each a different list under CephStorageExtraConfig. 

[1] https://blueprints.launchpad.net/tripleo/+spec/custom-roles
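
As a rough sketch (the role names OsdX and OsdY are hypothetical stand-ins for OSD-X and OSD-Y, and this assumes TripleO generates a per-role <RoleName>ExtraConfig parameter for custom roles), an environment file could then carry:

    parameter_defaults:
      # Hypothetical custom role names; each role gets its own
      # generated <RoleName>ExtraConfig parameter.
      OsdXExtraConfig:
        ceph::profile::params::osds:
          '/dev/sdc':
            journal: '/dev/sdb'
          '/dev/sdd':
            journal: '/dev/sdb'
      OsdYExtraConfig:
        ceph::profile::params::osds:
          '/dev/sdc':
            journal: '/dev/sdb'
          '/dev/sdd':
            journal: '/dev/sdb'
          '/dev/sde':
            journal: '/dev/sdb'
          '/dev/sdf':
            journal: '/dev/sdb'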

Comment 3 John Fulton 2016-10-11 18:09:10 UTC
If a future version of OSPd interfaces with RHSC to deploy and manage Ceph, it would be possible to use what was proposed in comment #2 and stay backwards compatible, as requested in [1], by passing the same information and encoding it for ceph-ansible via its representation of the inventory file [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1380864

[2] 
[mons]
ceph-mon-01
ceph-mon-02
ceph-mon-03

[osds]
ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"
ceph-osd-03 devices="[ '/dev/sdc' ]"

Comment 4 John Fulton 2016-10-11 20:00:52 UTC
Another solution (from kschinck) is to add OSD hints, so you don't have to make the role so specific or create a separate role just for a different list of disks.

In Ironic you can use root device hints [1] like `openstack baremetal configure boot --root-device=smallest`, and whatever the smallest disk is becomes the root device where TripleO installs the OS image.

What if TripleO's Heat Ceph configurations supported options like:

- All non-SSDs larger than 200G are OSD data disks
- All SSDs are journal disks

The configuration implementation code (e.g. puppet-ceph) could include logic to evenly distribute the journals. There would need to be logic to make sure the OS disk is identified and never used as an OSD; otherwise the OS being deployed could be overwritten. A rough sketch of what such hints might look like follows the reference below.

[1] https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/root-device-hints.html
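
Purely to illustrate the idea, such hints might look something like the following (invented syntax; neither the parameter nor the hint keys exist today):

    parameter_defaults:
      CephStorageExtraConfig:
        # Invented hint syntax -- not an implemented interface.
        ceph_osd_disk_hints:
          data:    { rotational: true,  min_size: '200G' }  # non-SSDs >= 200G become OSD data disks
          journal: { rotational: false }                    # SSDs become journal disks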

Comment 5 Giulio Fidente 2016-10-18 12:36:05 UTC
Using nodes with different disk lists is already supported in TripleO via node-specific hiera; see:
http://tripleo.org/advanced_deployment/node_specific_hieradata.html

Some examples are in a blog post too:
http://giuliofidente.com/2016/08/ceph-tripleo-and-the-newton-release.html

I think we can close this as WORKSFORME
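
For reference, a minimal sketch of that approach for the two layouts in the description, assuming the NodeDataLookup parameter from the node-specific hieradata doc; the two system UUIDs are placeholders (each node reports its own via `dmidecode -s system-uuid`):

    parameter_defaults:
      # Keys are per-node machine UUIDs (placeholders here); values are
      # node-specific hiera applied only to that node.
      NodeDataLookup: >
        {"32E87B4C-C4A7-418E-865B-191684A6883B":
         {"ceph::profile::params::osds":
          {"/dev/sdc": {"journal": "/dev/sdb"},
           "/dev/sdd": {"journal": "/dev/sdb"}}},
         "4DDAB9C9-5D2C-4E9F-BD41-B9CBE8B3D1B5":
         {"ceph::profile::params::osds":
          {"/dev/sdc": {"journal": "/dev/sdb"},
           "/dev/sdd": {"journal": "/dev/sdb"},
           "/dev/sde": {"journal": "/dev/sdb"},
           "/dev/sdf": {"journal": "/dev/sdb"}}}}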

Comment 7 Giulio Fidente 2016-10-18 21:02:03 UTC
Can we document usage of:

http://tripleo.org/advanced_deployment/node_specific_hieradata.html

to deploy Ceph Storage Nodes with differing ceph::profile::params::osds lists?

There is an example at:

http://giuliofidente.com/2016/08/ceph-tripleo-and-the-newton-release.html

See "Customize the disks map for a specific node" section.

Comment 8 Giulio Fidente 2016-10-18 21:03:48 UTC
The original bug tracking the feature development is BZ #1238807.

Comment 9 Dan Macpherson 2018-08-07 04:35:35 UTC
So while not Ceph-specific, we do have documentation for configuring node-specific hieradata:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/advanced_overcloud_customization/chap-configuration_hooks#sect-Customizing_Hieradata_for_Individual_Nodes

However, closing this since OSP12+ is using ceph-ansible and the hiera-specific config no longer applies.
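
For completeness: on the ceph-ansible releases a similar per-node override is still possible, except the node-specific entry carries a ceph-ansible devices list rather than a puppet-ceph hash. A minimal sketch, assuming NodeDataLookup is still the parameter and using a placeholder system UUID (here `dedicated_devices` lists the journal device once per data device, per ceph-ansible's non-collocated layout):

    parameter_defaults:
      NodeDataLookup: >
        {"32E87B4C-C4A7-418E-865B-191684A6883B":
         {"devices": ["/dev/sdc", "/dev/sdd"],
          "dedicated_devices": ["/dev/sdb", "/dev/sdb"]}}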

