Bug 1810321

Summary: Document how to deploy with Ceph autoscaler enabled with director
Product: Red Hat OpenStack
Component: documentation
Version: 16.0 (Train)
Target Milestone: z2
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Reporter: John Fulton <johfulto>
Assignee: RHOS Documentation Team <rhos-docs>
QA Contact: RHOS Documentation Team <rhos-docs>
Status: CLOSED NOTABUG
Severity: medium
Priority: medium
Keywords: Documentation
CC: fpantano, lmarsh, ndeevy, nwolf
Type: Bug
Last Closed: 2021-05-19 09:49:41 UTC
Bug Depends On: 1782253, 1812929

Description John Fulton 2020-03-04 23:44:45 UTC
As a result of ceph-ansible BZ 1782253 it will be possible to deploy RHCSv4 with the autoscaler [1] enabled. 

This bug tracks documenting how to deploy Ceph with director with this feature enabled, and should result in a new chapter in [2]. The accuracy of the proposed documentation needs to be verified by OpenStack QE before it is published.

The requested content change applies to all versions of [2] that use RHCSv4; that is, 15, 16, and newer.

[1] https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
[2] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/index

Comment 1 John Fulton 2020-03-05 00:01:03 UTC
Chapter: "Deploying with the Ceph Autoscaler Enabled"

To enable the Ceph autoscaler, ensure the following parameters are in your Heat environment override file before deployment.

parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms,     "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images,  "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}

In the above example, each target_size_ratio expresses the expected share of the cluster's total data as a fraction. The above will result in the Ceph cluster being automatically tuned for the following expected distribution of data:

- The Cinder backups pool will use 10% of the total data in the Ceph cluster
- The Cinder volumes pool will use 50% of the total data in the Ceph cluster
- The Glance images pool will use 20% of the total data in the Ceph cluster
- The Nova vms pool will use 20% of the total data in the Ceph cluster
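The share arithmetic behind the list above can be sketched as follows. This is a minimal illustration, not Ceph source code; it assumes the autoscaler weighs each pool's target_size_ratio against the sum of all ratios (which is what the 10%/50%/20%/20% breakdown implies, since the example ratios sum to 1.0):

```python
# Illustrative sketch (assumption, not Ceph code): each pool's expected
# share of cluster capacity is its target_size_ratio divided by the sum
# of all pools' ratios. Pool names and ratios come from the CephPools
# example above.
pools = {"backups": 0.1, "volumes": 0.5, "vms": 0.2, "images": 0.2}

total = sum(pools.values())  # 1.0 in this example
expected_share = {name: ratio / total for name, ratio in pools.items()}

for name, share in expected_share.items():
    print(f"{name}: {share:.0%} of total data")
```

After deployment, the autoscaler's view of each pool can be inspected on the Ceph cluster with `ceph osd pool autoscale-status`.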

With the autoscaler enabled, it is not always necessary to directly override the default Ceph placement group counts.

Comment 5 ndeevy 2021-05-19 09:00:18 UTC
Cool. Thanks Nate,
I'll actually create a Jira tracker for it and add it to the to-dos for the 16.2 time frame.