Bug 1383779 - [Docs] [RFE] Document how to deploy Ceph Storage Nodes with differing ceph::profile::params::osds lists
Status: CLOSED CURRENTRELEASE
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 10.0 (Newton)
Assigned To: Dan Macpherson
QA Contact: RHOS Documentation Team
Keywords: Documentation, FutureFeature
Depends On:
Blocks:
Reported: 2016-10-11 13:53 EDT by John Fulton
Modified: 2018-08-07 09:47 EDT
CC: 9 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
You can now use node-specific hiera to deploy Ceph storage nodes that do not all have the same list of block devices. As a result, you can use node-specific hiera entries within the overcloud deployment's Heat templates to deploy dissimilar OSD servers.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-08-07 00:35:35 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description John Fulton 2016-10-11 13:53:10 EDT
OSPd assumes that all Ceph storage nodes are the same with respect to disk layout. The documentation [1] has the user provide a YAML file like the following:

    ceph::profile::params::osds:
        '/dev/sdc':
          journal: '/dev/sdb'
        '/dev/sdd':
          journal: '/dev/sdb'

but what if the user has X nodes like the above in addition to Y nodes that look like the following:

    ceph::profile::params::osds:
        '/dev/sdc':
          journal: '/dev/sdb'
        '/dev/sdd':
          journal: '/dev/sdb'
        '/dev/sde':
          journal: '/dev/sdb'
        '/dev/sdf':
          journal: '/dev/sdb'

For larger clouds, the scenario of non-uniform hardware becomes more likely, and OSPd should have a way to provision such hardware.

[1] https://access.redhat.com/documentation/en/red-hat-openstack-platform/9/single/red-hat-ceph-storage-for-the-overcloud/#mapping_the_ceph_storage_node_disk_layout
Comment 2 John Fulton 2016-10-11 13:58:26 EDT
One solution is to use custom-roles [1] to create two roles, OSD-X and OSD-Y, and pass each a different list under CephStorageExtraConfig. 

[1] https://blueprints.launchpad.net/tripleo/+spec/custom-roles
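
For illustration, a minimal sketch of such an environment file, assuming two hypothetical custom roles named CephStorageX and CephStorageY have been defined in roles_data.yaml, and relying on TripleO generating a {RoleName}ExtraConfig parameter for each role:

    parameter_defaults:
      # Hypothetical role for the X nodes (two OSD data disks)
      CephStorageXExtraConfig:
        ceph::profile::params::osds:
          '/dev/sdc':
            journal: '/dev/sdb'
          '/dev/sdd':
            journal: '/dev/sdb'
      # Hypothetical role for the Y nodes (four OSD data disks)
      CephStorageYExtraConfig:
        ceph::profile::params::osds:
          '/dev/sdc':
            journal: '/dev/sdb'
          '/dev/sdd':
            journal: '/dev/sdb'
          '/dev/sde':
            journal: '/dev/sdb'
          '/dev/sdf':
            journal: '/dev/sdb'

Each node would then be enrolled in whichever role matches its disk layout.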
Comment 3 John Fulton 2016-10-11 14:09:10 EDT
If OSPd interfaces with RHSC to deploy and manage Ceph in a future version of OSPd, it would be possible to use what was proposed in comment #2 and remain backwards compatible, as requested [1], by passing the same information and encoding it for ceph-ansible via its representation of the inventory file [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1380864

[2] 
[mons]
ceph-mon-01
ceph-mon-02
ceph-mon-03

[osds]
ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"
ceph-osd-03 devices="[ '/dev/sdc' ]"
Comment 4 John Fulton 2016-10-11 16:00:52 EDT
Another solution (from kschinck) is to add OSD hints, so that the role doesn't have to be made so specific and a separate role isn't needed just for a different list of disks.

In Ironic you can use root device hints [1] like `openstack baremetal configure boot --root-device=smallest`, and whatever the smallest disk is becomes the root device where TripleO installs the OS image.

What if TripleO's Heat Ceph configurations supported options like:

- All non-SSDs larger than 200G are OSD data disks
- All SSDs are journal disks

The configuration implementation code (e.g. puppet-ceph) could include logic to evenly distribute the journals. There would need to be logic to make sure the OS disk is identified and never used as an OSD; otherwise the OS being deployed could be overwritten.

[1] https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/root-device-hints.html
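
As a sketch of what the hints mechanism already allows today, a root device hint can be stored directly in a node's Ironic properties (the node UUID and disk serial below are placeholders):

    # Placeholder node UUID and disk serial; pins the root device so the
    # deployed OS never lands on a disk meant for an OSD:
    openstack baremetal node set <node-uuid> \
      --property root_device='{"serial": "<disk-serial>"}'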
Comment 5 Giulio Fidente 2016-10-18 08:36:05 EDT
Using nodes with different disk lists is already supported in TripleO via node-specific hiera, see:
http://tripleo.org/advanced_deployment/node_specific_hieradata.html

some examples are in a blog post too:
http://giuliofidente.com/2016/08/ceph-tripleo-and-the-newton-release.html

I think we can close this as WORKSFORME.
Comment 7 Giulio Fidente 2016-10-18 17:02:03 EDT
Can we document usage of:

http://tripleo.org/advanced_deployment/node_specific_hieradata.html

to deploy Ceph Storage Nodes with differing ceph::profile::params::osds lists?

There is an example at:

http://giuliofidente.com/2016/08/ceph-tripleo-and-the-newton-release.html

See "Customize the disks map for a specific node" section.
Comment 8 Giulio Fidente 2016-10-18 17:03:48 EDT
The original bug tracking the feature development is BZ #1238807.
Comment 9 Dan Macpherson 2018-08-07 00:35:35 EDT
So while not Ceph-specific, we do have documentation for configuring node-specific hieradata:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/advanced_overcloud_customization/chap-configuration_hooks#sect-Customizing_Hieradata_for_Individual_Nodes

However, closing this since OSP12+ is using ceph-ansible and the hiera-specific config no longer applies.
