Ceph auto scaling was implemented [1] and then ceph-ansible got the ability to deploy with it [2]. This bug tracks that it works as expected when OpenStack uses Ceph as an RBD backend.

How to deploy with it:

parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}

Is it working?
- Were pools created with differing pg_nums based on the target_size_ratios?
- Do they adjust themselves based on data being added to the pool?

As per the original RFE [1] we should see "Automatic adjustment of pg_num up or down based on usage and/or user hints about intended usage."

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1674084
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1782253
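One way to answer those questions after the overcloud deploy is to query the autoscaler from a node with a Ceph admin keyring (a minimal sketch; the pool name "volumes" is just an example):

  # Show pg_num, target ratio and the autoscaler's decision for every pool
  ceph osd pool autoscale-status

  # Inspect a single pool's current pg_num
  ceph osd pool get volumes pg_num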
When testing this feature, make sure the ceph-ansible build you use contains the fix delivered for dependent bug 1782253 (i.e. is at least that bug's fixed-in version).
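A quick way to check which build is installed, assuming an RPM-based undercloud:

  # Report the installed ceph-ansible package version
  rpm -q ceph-ansible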
Additional things to document:
- The sum of all target_size_ratio values should equal 1.0 (100%). [What happens otherwise, and do we add a validation?]
- We don't recommend mixing target_size_ratio for some pools and directly setting PG numbers for other pools.
- This only applies to new pools; it doesn't update a pool after it's created. That must be done via the CLI (see the sketch below).
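A minimal sketch of adjusting an existing pool from the Ceph CLI (pool name and ratio are only examples, the right values depend on the deployment):

  # Enable the autoscaler on an already-created pool
  ceph osd pool set volumes pg_autoscale_mode on

  # Give the autoscaler a usage hint so it can pick a suitable pg_num
  ceph osd pool set volumes target_size_ratio 0.5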
Verified, though I would document the configuration in a nicer way. Instead of:

parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}

I would write:

parameter_defaults:
  CephAnsibleDisksConfig:
    lvm_volumes:
      - crush_device_class: hdd
        data: /dev/vdb
      - crush_device_class: hdd
        data: /dev/vdc
      - crush_device_class: hdd
        data: /dev/vdd
      - crush_device_class: ssd
        data: /dev/vde
      - crush_device_class: ssd
        data: /dev/vdf
    osd_objectstore: bluestore
    osd_scenario: lvm
  CephAnsibleExtraConfig:
    create_crush_tree: true
    crush_rule_config: true
    crush_rules:
      - class: hdd
        default: true
        name: HDD
        root: default
        type: host
      - class: ssd
        default: false
        name: SSD
        root: default
        type: host
  CephPools:
    - application: rbd
      name: volumes
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.3
    - application: rbd
      name: vms
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.2
    - application: rbd
      name: images
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.2
    - application: rbd
      name: backups
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.1
    - application: rbd
      name: fastpool
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: SSD
      target_size_ratio: 0.2
  CinderRbdExtraPools: fastpool
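For completeness, a sketch of how such an environment file could be passed to the deploy command (the file name ceph-pools.yaml and the minimal set of -e files are assumptions; a real deployment would include its other environment files as usual):

  # Save the parameter_defaults above as ceph-pools.yaml and include it in the deploy
  openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
    -e ceph-pools.yaml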