It's easy to deploy bluestore with an optimal disk configuration when the following syntax is used and HDDs are mixed with NVMes:

    osd_objectstore: bluestore
    osd_scenario: lvm
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/nvme0n1
      - /dev/sdc
      - /dev/sdd
      - /dev/nvme1n1

ceph-volume does a great job of automating this case because it can differentiate between an HDD (/dev/sda) and an SSD (/dev/nvme0n1) and will configure the bluestore DB devices on the SSDs.

However, in a configuration where /dev/sda is also an SSD (e.g. a SATA SSD), as in bug 1732915, the NVMe SSDs are used as OSDs, not as bluestore DB devices. If the customer wants the SATA SSDs to be OSDs and the NVMe SSDs to be bluestore DB devices, then the customer needs to create the LVs in advance and pass ceph-ansible not a devices list but an lvm_volumes list like this:

    lvm_volumes:
      - data: ceph_lv0_data
        data_vg: ceph_vg0
        db: ceph_lv0_db
        db_vg: ceph_vg_fast0
      ...

The process of creating the lvm_volumes in advance is error prone. We can provide customers with scripts to do this, but it would be a much better customer experience if ceph-ansible had a role to create the LVMs in advance.
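As a rough illustration of what such a role would do, here is a minimal Ansible sketch that pre-creates the VGs and LVs referenced in the lvm_volumes example above using the lvg and lvol modules. The host group, device paths, and LV sizes are assumptions for illustration, not values taken from this bug:

    # Hypothetical sketch: pre-create the VGs/LVs named in the
    # lvm_volumes example. Devices and sizes are assumptions.
    - hosts: osds
      become: true
      tasks:
        - name: create the data volume group on the SATA SSD (assumed /dev/sda)
          lvg:
            vg: ceph_vg0
            pvs: /dev/sda

        - name: create the fast volume group on the NVMe SSD (assumed /dev/nvme0n1)
          lvg:
            vg: ceph_vg_fast0
            pvs: /dev/nvme0n1

        - name: create the OSD data LV using all space in the data VG
          lvol:
            vg: ceph_vg0
            lv: ceph_lv0_data
            size: 100%FREE

        - name: create the bluestore DB LV (size is an assumption)
          lvol:
            vg: ceph_vg_fast0
            lv: ceph_lv0_db
            size: 30g

Once something like this has run, the resulting VG/LV names can be dropped straight into the lvm_volumes list shown above.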
It looks like we can use this playbook: https://github.com/ceph/ceph-ansible/blob/v3.2.21/infrastructure-playbooks/lv-create.yml to solve much, if not all, of the problem raised by this bug. The next step is probably to experiment with the above.
Updating the QA Contact to Hemant. Hemant will reroute it to the appropriate QE Associate. Regards, Giri