Bug 1418337 - Ceph CRUSH map section missing from official documentation.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 10.0 (Newton)
Assignee: Don Domingo
QA Contact: RHOS Documentation Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-01 15:15 UTC by VIKRANT
Modified: 2017-02-06 04:40 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-06 04:40:19 UTC
Target Upstream Version:
Embargoed:



Description VIKRANT 2017-02-01 15:15:40 UTC
Description of problem:

The documentation [1] covers the new feature of setting custom PG values for Ceph pools, which was introduced with [2], but I believe we are missing a section for one more new feature [3]: setting the Ceph CRUSH map using the director. Can we add that section to the documentation? (A sketch of what such a section might show follows the references below.)

[1] https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/red-hat-ceph-storage-for-the-overcloud/#custom-ceph-pools
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1283721
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1258120
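For context, the custom PG values covered in [1] (introduced via [2]) are applied through director hieradata. A minimal sketch of such a heat environment file, assuming the 'ceph::profile::params' hieradata keys from the puppet-ceph module; the file name and values are illustrative:

# custom-ceph-pg.yaml (illustrative file name)
parameter_defaults:
  ExtraConfig:
    # Default placement group counts applied to new Ceph pools
    # (128 is an illustrative value, not a recommendation)
    ceph::profile::params::osd_pool_default_pg_num: 128
    ceph::profile::params::osd_pool_default_pgp_num: 128

Such a file would be passed at deploy time with 'openstack overcloud deploy ... -e custom-ceph-pg.yaml'. A CRUSH map section could presumably follow the same pattern.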


Version-Release number of selected component (if applicable):
RHEL OSP 10

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Don Domingo 2017-02-02 04:18:00 UTC
Hi Vikrant, regarding Ceph CRUSH maps: I added a note on how to set 'osd_crush_update_on_start' to 'false' in:

https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/red-hat-ceph-storage-for-the-overcloud#Mapping_the_Ceph_Storage_Node_Disk_Layout

As I understand it, the main use case for doing this is to map different types of disks (e.g. SATA and SSD) on the same Ceph node. Since that sounded like a subset of disk mapping, I added it as a note to the "Mapping the Ceph Storage Node Disk Layout" section. Hope that helps!

If there are any other use cases for the 'osd_crush_update_on_start' resource that we need to cover, please let me know as well.
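For reference, a minimal sketch of how this could look in a heat environment file. The 'ceph::profile::params::osd_crush_update_on_start' hieradata key is an assumption based on the puppet-ceph module used by the director; the file name is illustrative:

# ceph-crush.yaml (illustrative file name)
parameter_defaults:
  ExtraConfig:
    # Stop OSDs from updating the CRUSH map on start, preserving a
    # custom map (e.g. one separating SATA and SSD disks on a node)
    ceph::profile::params::osd_crush_update_on_start: false

Passing it with 'openstack overcloud deploy ... -e ceph-crush.yaml' would apply the setting to the overcloud Ceph Storage nodes.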

