Description of problem:
In documentation [1] we have covered the new feature of setting custom PG values for Ceph pools, which was introduced with [2], but I believe we are missing a section for one more new feature [3]: setting the Ceph CRUSH map using director. Can we add that section to the documentation?

[1] https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/red-hat-ceph-storage-for-the-overcloud/#custom-ceph-pools
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1283721
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1258120

Version-Release number of selected component (if applicable):
RHEL OSP 10

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
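For context, the already-documented custom PG feature from [2] is driven by hiera overrides in a Heat environment file. A minimal sketch, assuming the `ceph::profile::params` hiera keys exposed by the puppet-ceph module shipped with OSP 10 (key names are an assumption here, not confirmed by this bug):

```yaml
# Hypothetical environment file, e.g. ~/templates/ceph-pg.yaml.
# Sets default PG/PGP counts for Ceph pools created by director.
parameter_defaults:
  ExtraConfig:
    # Assumed puppet-ceph hiera keys; verify against your puppet-ceph version.
    ceph::profile::params::osd_pool_default_pg_num: 128
    ceph::profile::params::osd_pool_default_pgp_num: 128
```

Such a file would be passed to the deploy command with an additional `-e` argument.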
Hi Vikrant,

RE: Ceph CRUSH maps, I added a note on how to set it to 'false' in:
https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/red-hat-ceph-storage-for-the-overcloud#Mapping_the_Ceph_Storage_Node_Disk_Layout

As I understand it, the main use case for doing this is to map different types of disks (e.g., SATA and SSD) on the same Ceph node. Since it sounded like a subset of disk mapping, I added it as a note to the "Mapping the Ceph Storage Node Disk Layout" section. Hope that helps!

If there are any other use cases for the 'osd_crush_update_on_start' resource that we need to cover, please let me know as well.
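For anyone landing on this bug: the note boils down to disabling the automatic CRUSH location update on OSD start so that a hand-crafted CRUSH map (e.g. separating SATA and SSD OSDs) is not overwritten. A minimal sketch of such an environment file, assuming the hiera key `ceph::osd_crush_update_on_start` from puppet-ceph (the exact key prefix is an assumption; check the puppet-ceph version in your OSP 10 release):

```yaml
# Hypothetical environment file, e.g. ~/templates/ceph-crush.yaml.
# Prevents OSDs from re-registering their CRUSH location at startup,
# so a manually defined CRUSH map is preserved across restarts.
parameter_defaults:
  ExtraConfig:
    # Assumed hiera key for the 'osd_crush_update_on_start' setting.
    ceph::osd_crush_update_on_start: false
```

With this in place, the custom CRUSH hierarchy must be maintained by the operator, since Ceph will no longer place OSDs automatically on start.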