Description of problem:
When installing a Ceph cluster that uses NVMe for rocksdb/wal in a bluestore configuration, the default partition sizes are far too small for modern storage configurations. From what I can tell, the current baseline recommendation is 10GB of rocksdb per 1TB of OSD capacity; we should be able to use this as the standard default in ceph-ansible and document the factors, such as the number of objects, that might lead a customer to increase or decrease it manually.
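For illustration, the proportional sizing that guideline implies (the WAL figures are a rough rule of thumb, not part of the guideline itself):

  OSD size   block.db (10GB per 1TB)   block.wal
  1TB        10GB                      ~1-2GB
  1.8TB      18GB                      ~2GB
  4TB        40GB                      ~2GB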
Version-Release number of selected component (if applicable):
How reproducible:
Every install we've tried with bluestore so far.
Steps to Reproduce:
1. Configure ceph-ansible to install using bluestore (see the sketch after these steps).
2. Observe that the rocksdb/wal partitions are far smaller than the recommended sizes.
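A minimal sketch of step 1, assuming the standard osd_objectstore group_vars variable (all other required settings omitted):

  # group_vars/all.yml -- select bluestore as the objectstore
  osd_objectstore: bluestore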
Actual results:
Partitions are fixed at 1GB each for rocksdb and wal.

Expected results:
In our case the rocksdb partition should have been 18GB for a 1.8TB OSD, with 2GB for the WAL.
ceph-ansible is not responsible for that: it configures the devices with whatever defaults Ceph ships. You can use ceph_conf_overrides to set different values prior to deploying.
Also, the appropriate size of your wal/db depends on the size of your devices, and ceph-ansible does not know anything about that. So this is a configuration the operator should change.
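For example, a minimal sketch of such an override, sized per the 10GB-per-1TB guideline for the 1.8TB OSD above (byte values are illustrative; they must be set before the OSDs are prepared, since the partitions are created at deploy time):

  # group_vars/all.yml -- override Ceph's default db/wal partition sizes
  ceph_conf_overrides:
    osd:
      bluestore_block_db_size: 19327352832    # 18GB in bytes
      bluestore_block_wal_size: 2147483648    # 2GB in bytes

Since these are plain ceph.conf options, the same values can also be set directly in ceph.conf on a manual deployment.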
This is not a bug, so I'm closing it; feel free to re-open if you have more concerns.