As of RHCS 3.1, the recommended configuration for installations with multiple NVMe devices is to place the RGW bucket index OSDs and all journals on NVMe devices, with the remaining OSDs on spinning disks. With BlueStore, the equivalent configuration places the bucket index OSDs' db partitions on the NVMe devices and all data partitions on spinning disks. The only way to deploy this configuration currently is to use ceph-disk with ceph-ansible's 'non-collocated' scenario. The strategy used by ceph-volume lvm batch lumps all NVMe devices into a single VG. Current best practice calls for each bucket index db to be located entirely on a single device, with no sharing of devices.

We need to determine whether we will:

* Support this type of installation with ceph-disk for RHCS 3.2
* Enhance ceph-volume to automatically map particular OSD db partitions to individual NVMe devices
* Enhance ceph-ansible to enable users to create LVs manually and drive ceph-volume to assign LVs for db partitions (like osd_scenario: non-collocated)

Any of these approaches (or some other approach) is acceptable, but one is needed.
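For reference, a minimal sketch of the group_vars that drive the ceph-disk based 'non-collocated' scenario described above. The device paths are placeholders and would need to match the actual hardware; in this scenario dedicated_devices must list one entry per entry in devices, giving the journal/db device for the corresponding data device:

    # group_vars/osds.yml -- illustrative sketch; device names are assumptions
    osd_scenario: non-collocated
    osd_objectstore: bluestore

    # data partitions on spinning disks
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd

    # db (or journal, with filestore) devices, one per data device above:
    # sda and sdb map to nvme0n1, sdc and sdd map to nvme1n1
    dedicated_devices:
      - /dev/nvme0n1
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/nvme1n1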
Alfredo, would you please comment on what our current options are to align with best practices?
The conclusion from discussions with Alfredo and Andrew was that ceph-disk non-collocated will be the supported installation method for RHCS 3.2. Going forward, the performance impact of grouping all bucket index db partitions in a single VG composed of all NVMe devices will be examined, and the results will drive recommendations for RHCS 4.0.
Are there doc implications for this recommendation?
This has already been addressed in ceph-ansible; I don't see anything ceph-volume could do correctly here. I think we can close this.
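For anyone landing here later: one way ceph-ansible covers this today is, presumably, the lvm scenario with pre-created LVs passed through lvm_volumes. A rough sketch follows; the VG/LV names are hypothetical and the volumes must exist before the playbook runs:

    # group_vars/osds.yml -- sketch only; VG/LV names below are made up
    osd_scenario: lvm
    osd_objectstore: bluestore

    lvm_volumes:
      - data: data-lv1          # LV on a spinning disk
        data_vg: vg_hdd_sda
        db: db-lv1              # LV carved from a VG on a single NVMe device
        db_vg: vg_nvme0
      - data: data-lv2
        data_vg: vg_hdd_sdb
        db: db-lv2
        db_vg: vg_nvme0

Because each db LV is carved from a VG built on exactly one NVMe device, this keeps each bucket index db on a single device, which is what the best-practice guidance above asks for.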
Agreed.