Bug 1810321 - Document how to deploy with Ceph autoscaler enabled with director
Summary: Document how to deploy with Ceph autoscaler enabled with director
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 16.0 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z2
Assignee: RHOS Documentation Team
QA Contact: RHOS Documentation Team
URL:
Whiteboard:
Depends On: 1782253 1812929
Blocks:
TreeView+ depends on / blocked
 
Reported: 2020-03-04 23:44 UTC by John Fulton
Modified: 2022-08-18 17:11 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-19 09:49:41 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-3868 0 None None None 2022-08-18 17:11:25 UTC
Red Hat Issue Tracker RHOSPDOC-721 0 High Open [RFE] Ceph. Deploy with Ceph autoscaler enabled 2021-05-19 09:49:37 UTC

Description John Fulton 2020-03-04 23:44:45 UTC
As a result of ceph-ansible BZ 1782253, it will be possible to deploy RHCSv4 with the autoscaler [1] enabled.

This bug tracks documenting how to deploy Ceph with director with this feature enabled, and should result in a new chapter in [2]. The accuracy of the proposed documentation needs to be verified by OpenStack QE before it is published.

The requested content change applies to all versions of [2] that use RHCSv4, that is, 15, 16, and newer.

[1] https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
[2] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/index

Comment 1 John Fulton 2020-03-05 00:01:03 UTC
Chapter: "Deploying with the Ceph Autoscaler Enabled"

To enable the Ceph autoscaler, ensure that the following parameters are in your Heat environment override file before deployment.

parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms,     "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images,  "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}

In the above example, the target_size_ratio of each pool is set as a fraction of the total data expected in the Ceph cluster (the four values add up to 1.0). The above will result in the Ceph cluster being automatically tuned for the following expected distribution of data:

- The Cinder backups pool will use 10% of the total data in the Ceph cluster
- The Cinder volumes pool will use 50% of the total data in the Ceph cluster
- The Glance images pool will use 20% of the total data in the Ceph cluster
- The Nova vms pool will use 20% of the total data in the Ceph cluster
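
To apply these settings, include the override file in the overcloud deployment command along with your other environment files. As a rough sketch (the file name ceph-autoscaler.yaml below is only an example, and the exact set of -e arguments depends on your deployment):

  openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
    -e ceph-autoscaler.yaml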

With the autoscaler enabled, it is not always necessary to directly override the default number of Ceph placement groups per pool.
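
After the overcloud is deployed, one way to confirm that the autoscaler picked up these settings is to run the following from a node with access to the Ceph admin keyring, for example a Controller or Ceph Monitor node (the exact output columns can vary by Ceph release):

  ceph osd pool autoscale-status

The TARGET RATIO column should show the target_size_ratio values configured above, and the AUTOSCALE column should report "on" for each pool.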

Comment 5 ndeevy 2021-05-19 09:00:18 UTC
Cool, thanks Nate.
I'll create a Jira tracker for it and add it to the to-dos for the 16.2 time frame.

