Bug 1821886
| Summary: | [TESTONLY] Test ceph pg auto scale |
|---|---|
| Product: | Red Hat OpenStack |
| Component: | ceph |
| Status: | CLOSED CURRENTRELEASE |
| Severity: | medium |
| Priority: | medium |
| Version: | 16.0 (Train) |
| Target Milestone: | z3 |
| Target Release: | 16.1 (Train on RHEL 8.2) |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Reporter: | John Fulton <johfulto> |
| Assignee: | Giulio Fidente <gfidente> |
| QA Contact: | Yogev Rabl <yrabl> |
| CC: | asalvati, astillma, gcharot, jdurgin, lhh, lmarsh, nwolf, spower, sputhenp, yrabl |
| Keywords: | FutureFeature, TestOnly, Triaged |
| Cloned As: | 1871864 (view as bug list) |
| Last Closed: | 2020-12-18 14:45:48 UTC |
| Type: | Bug |
| Bug Depends On: | 1782253 |
| Bug Blocks: | 1871864 |
Make sure you are testing with a ceph-ansible version that includes the fix from dependent bug 1782253 (its "Fixed In Version") when you test this feature.

Additional things to document:

- The sum of all target_size_ratio values should equal 1.0 (100%). [What happens otherwise, and should we add a validation?]
- We don't recommend mixing target_size_ratio for some pools with directly setting PG numbers for other pools.
- This only applies to new pools; it does not update a pool after it has been created. That must be done via the Ceph CLI (see the CLI sketch after the configuration examples below).

Verified, though I would document the configuration in a nicer way. Instead of:
parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
I would write:
parameter_defaults:
  CephAnsibleDisksConfig:
    lvm_volumes:
      - crush_device_class: hdd
        data: /dev/vdb
      - crush_device_class: hdd
        data: /dev/vdc
      - crush_device_class: hdd
        data: /dev/vdd
      - crush_device_class: ssd
        data: /dev/vde
      - crush_device_class: ssd
        data: /dev/vdf
    osd_objectstore: bluestore
    osd_scenario: lvm
  CephAnsibleExtraConfig:
    create_crush_tree: true
    crush_rule_config: true
    crush_rules:
      - class: hdd
        default: true
        name: HDD
        root: default
        type: host
      - class: ssd
        default: false
        name: SSD
        root: default
        type: host
  CephPools:
    - application: rbd
      name: volumes
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.3
    - application: rbd
      name: vms
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.2
    - application: rbd
      name: images
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.2
    - application: rbd
      name: backups
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: HDD
      target_size_ratio: 0.1
    - application: rbd
      name: fastpool
      pg_autoscale_mode: true
      pg_num: 32
      rule_name: SSD
      target_size_ratio: 0.2
  CinderRbdExtraPools: fastpool
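To illustrate the point above that autoscaling settings for existing pools must be changed via the CLI, here is a minimal sketch using the standard Ceph Nautilus commands; the pool name "volumes" and the 0.5 ratio are only examples, not values taken from this bug:

ceph osd pool set volumes pg_autoscale_mode on     # enable the autoscaler on an existing pool
ceph osd pool set volumes target_size_ratio 0.5    # give the autoscaler a hint about expected relative usage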
Ceph auto scaling was implemented [1] and then ceph-ansible got the ability to deploy with it [2]. This bug tracks that it works as expected when OpenStack uses Ceph as an RBD backend.

How to deploy with it:

parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}

Is it working?

- Were pools created with differing pg_nums based on the target_size_ratios?
- Do they adjust themselves based on data being added to the pool?

As per the original RFE [1] we should see "Automatic adjustment of pg_num up or down based on usage and/or user hints about intended usage."

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1674084
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1782253
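To help answer the "Is it working?" questions above, a minimal verification sketch, assuming the commands are run from a node (or ceph-mon container) with an admin keyring; these are standard Ceph Nautilus commands rather than anything specific to this bug:

ceph osd pool autoscale-status   # per-pool RATIO / TARGET RATIO and the PG_NUM / NEW PG_NUM the autoscaler wants
ceph osd pool ls detail          # confirm pg_num/pgp_num of each pool actually changes over time

Comparing this output before and after writing data into a pool should show pg_num being adjusted up or down, as described in the RFE.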