As a result of ceph-ansible BZ 1782253 it will be possible to deploy RHCSv4 with the autoscaler [1] enabled. This bug tracks documenting how to deploy Ceph with director with this feature enabled, and should result in a new chapter in [2]. The accuracy of the proposed documentation needs to be verified by OpenStack QE before it is published. The requested content change applies to all versions of [2] which use RHCSv4, that is: 15, 16, and newer.

[1] https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
[2] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/index
Chapter: "Deploying with the Ceph Autoscaler Enabled" To enable the Ceph autoscaler ensure the following parameters are in your Heat environment override file before deployment. parameter_defaults: CephPools: - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd} - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd} - {"name": vms, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd} - {"name": images, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd} In the above example the target_size_ratio should be set like a percentage of the total data. The above will result in the Ceph cluster being automatically tuned for the following expected distribution of data: - The Cinder backups pool will use 10% of the total data in the Ceph cluster - The Cinder volumes pool will use 50% of the total data in the Ceph cluster - The Glance images pool will use 20% of the total data in the Ceph cluster - The Nova vms pool will use 20% of the total data in the Ceph cluster With the autoscaler enabled it is not always necessary to directly override the default Ceph Placement Groups.
Cool. Thanks Nate, I'll create a Jira tracker for it and add it to the to-dos for the 16.2 time frame.