Bug 1810321
| Summary: | Document how to deploy with Ceph autoscaler enabled with director | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | John Fulton <johfulto> |
| Component: | documentation | Assignee: | RHOS Documentation Team <rhos-docs> |
| Status: | CLOSED NOTABUG | QA Contact: | RHOS Documentation Team <rhos-docs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 16.0 (Train) | CC: | fpantano, lmarsh, ndeevy, nwolf |
| Target Milestone: | z2 | Keywords: | Documentation |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-19 09:49:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1782253, 1812929 | | |
| Bug Blocks: | | | |
Description
John Fulton
2020-03-04 23:44:45 UTC
Chapter: "Deploying with the Ceph Autoscaler Enabled"
To enable the Ceph autoscaler, ensure the following parameters are in your Heat environment override file before deployment:
```yaml
parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images, "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
```
In the above example, target_size_ratio is set as the fraction (percentage) of the cluster's total data that each pool is expected to hold. The above results in the Ceph cluster being automatically tuned for the following expected distribution of data:
- The Cinder backups pool will use 10% of the total data in the Ceph cluster
- The Cinder volumes pool will use 50% of the total data in the Ceph cluster
- The Glance images pool will use 20% of the total data in the Ceph cluster
- The Nova vms pool will use 20% of the total data in the Ceph cluster
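As a minimal sketch of the deployment step (the file name ceph-pools.yaml and the ceph-ansible environment file path are assumptions, not part of this bug report), the override file is passed to the deploy command with `-e`:

```bash
# Include the base Ceph environment and the pool overrides when deploying.
# ceph-pools.yaml is a hypothetical name for the override file shown above.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e ceph-pools.yaml
```

Environment files passed later with `-e` take precedence, so the pool overrides should come after the base Ceph environment file.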
With the autoscaler enabled, it is not always necessary to directly override the default Ceph placement group (PG) counts.
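As an illustrative check after deployment (not from this bug report; in OSP 16 the Ceph CLI typically runs inside the monitor container), you can confirm the autoscaler is active and the ratios were applied:

```bash
# Run on a node with the Ceph admin keyring, e.g. a Controller.
# Output columns vary by Ceph release; TARGET RATIO should reflect the
# target_size_ratio values above, and AUTOSCALE should read "on" per pool.
ceph osd pool autoscale-status
```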
Cool. Thanks Nate, I'll actually create a Jira tracker for it and add it to the to-dos for the 16.2 time frame.