
Bug 1418337

Summary: Ceph Crush map section missing from official documentation.
Product: Red Hat OpenStack
Reporter: VIKRANT <vaggarwa>
Component: documentation
Assignee: Don Domingo <ddomingo>
Status: CLOSED NOTABUG
QA Contact: RHOS Documentation Team <rhos-docs>
Severity: medium
Priority: unspecified
Version: 10.0 (Newton)
CC: ddomingo, lbopf, mburns, srevivo, vaggarwa
Target Release: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Last Closed: 2017-02-06 04:40:19 UTC
Type: Bug

Description VIKRANT 2017-02-01 15:15:40 UTC
Description of problem:

The documentation [1] covers the new feature of setting custom PG values for Ceph pools, which was introduced with [2], but it appears to be missing a section for another new feature [3]: setting the Ceph CRUSH map using the director. Can we add that section to the documentation?

[1] https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/red-hat-ceph-storage-for-the-overcloud/#custom-ceph-pools
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1283721
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1258120
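
For reference, the per-pool placement group feature from [2] is configured through a heat environment file passed to the overcloud deployment. A minimal sketch, assuming the CephPools parameter that [1] documents for RHOSP 10; the pool name and values below are only illustrative:

  parameter_defaults:
    CephPools:
      # Override placement group counts and replica size for the 'volumes'
      # pool. The numbers are illustrative and should be sized for the
      # cluster's actual OSD count.
      volumes:
        pg_num: 128
        pgp_num: 128
        size: 3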


Version-Release number of selected component (if applicable):
RHEL OSP 10


Comment 2 Don Domingo 2017-02-02 04:18:00 UTC
Hi Vikrant, regarding Ceph CRUSH maps, I added a note on how to set 'osd_crush_update_on_start' to 'false' in:

https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/red-hat-ceph-storage-for-the-overcloud#Mapping_the_Ceph_Storage_Node_Disk_Layout

As I understand it, the main use case for doing this is to map different types of disks (e.g., SATA and SSD) on the same Ceph node. Since that sounded like a subset of disk mapping, I added it as a note to the "Mapping the Ceph Storage Node Disk Layout" section. Hope that helps!
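
For reference, a minimal sketch of what that note boils down to, assuming the setting is passed to the overcloud as hieradata through the ExtraConfig parameter; the exact hieradata key below is an assumption, and the linked note has the authoritative form:

  parameter_defaults:
    ExtraConfig:
      # Stop OSDs from updating their own CRUSH location on startup, so a
      # manually defined CRUSH map (e.g. separate SATA and SSD hierarchies
      # on the same node) is preserved across restarts.
      # NOTE: this hieradata key is an assumption; check the documentation
      # note above for the exact key used by the director.
      ceph::profile::params::osd_crush_update_on_start: false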

If there are any other use cases for the 'osd_crush_update_on_start' setting that we need to cover, please let me know as well.