Description of problem:
Support is being added to the installer (ceph-ansible) to allow users to more effectively utilize NVMe devices on their OSD nodes. Documentation additions to the "Ceph Object Gateway for Production" and "Administration" guides will be required.

Additional info:
The installer needs to support placement of performance-sensitive Ceph OSD elements, such as Filestore journals and RGW pools (e.g., the bucket index pool), on NVMe devices, with OSD data on HDDs. Document the rationale for deploying Ceph this way and the actual installation procedure for several hardware configurations.
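To make the target layout concrete for the doc work, here is a minimal sketch assuming one NVMe device (/dev/nvme0n1) and two HDDs (/dev/sdb, /dev/sdc); all device names, sizes, and LV names are illustrative only:

    # Build a volume group on the NVMe device and carve out Filestore
    # journals for each HDD-backed OSD, plus a logical volume to back
    # a bucket-index OSD on the NVMe itself.
    pvcreate /dev/nvme0n1
    vgcreate ceph-nvme /dev/nvme0n1
    lvcreate -L 5G -n journal-sdb ceph-nvme          # journal for the OSD on /dev/sdb
    lvcreate -L 5G -n journal-sdc ceph-nvme          # journal for the OSD on /dev/sdc
    lvcreate -L 5G -n journal-bucket-index ceph-nvme # journal for the NVMe-backed OSD
    lvcreate -l 100%FREE -n bucket-index ceph-nvme   # data LV for the NVMe-backed OSD

    # OSD data stays on the HDDs; each HDD becomes one OSD whose
    # journal lives on one of the NVMe LVs created above.

As I understand it, the resulting LVs can then be fed to ceph-ansible's lvm OSD scenario (osd_scenario: lvm with lvm_volumes in group_vars/osds.yml).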
I noted a few places in the "Ceph Object Gateway for Production" guide where this change will require updates:

Section 3.6, Bucket Indexes - currently recommends using one SSD for bucket indexes and one for journals. That needs to be updated.

Section 5.1, CRUSH - currently has the following recommendations:
* do not create partitions for bucket indexes on SSDs used for OSD journals
* do not use the same SSD drive to store journals, bucket indexes, and data
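For the Section 5.1 rewrite, it may help to show how the new layout is expressed with CRUSH device classes (available in the Luminous-based RHCS 3). A sketch only; the OSD ID, rule name, and pool name below are illustrative:

    # Tag the NVMe-backed OSD with the nvme device class and create a
    # replicated rule restricted to that class.
    ceph osd crush rm-device-class osd.20    # clear any auto-assigned class first
    ceph osd crush set-device-class nvme osd.20
    ceph osd crush rule create-replicated nvme-rule default host nvme

    # Point the RGW bucket index pool at the NVMe-only rule.
    ceph osd pool set default.rgw.buckets.index crush_rule nvme-rule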
The procedure needs to cover both usage of this playbook and the teardown procedure, which will be manual and NOT covered by the playbook.
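A minimal sketch of what the manual teardown might look like, assuming the hypothetical layout from the earlier comment (OSD ID 20 backed by the NVMe bucket-index LV); all names are illustrative and the commands are destructive:

    # Stop the OSD daemon and remove the OSD from the cluster
    # (purge removes it from the CRUSH map, auth, and the OSD map).
    systemctl stop ceph-osd@20
    ceph osd purge 20 --yes-i-really-mean-it

    # Unmount the OSD data directory, then tear down the LVM stack
    # on the NVMe device.
    umount /var/lib/ceph/osd/ceph-20
    lvremove -f ceph-nvme      # removes all LVs in the VG
    vgremove ceph-nvme
    pvremove /dev/nvme0n1

    # Wipe the HDD data disks so they can be redeployed cleanly.
    wipefs -a /dev/sdb /dev/sdc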
Please provide the documentation link for review.
Reviewed https://access.qa.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_object_gateway_for_production/#using-nvme-with-lvm-optimally
Closing as these changes were published along with the Ceph 3.1 bits yesterday for GA: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_object_gateway_for_production/#using-nvme-with-lvm-optimally