Bug 2085455 - [Docs] RHCS 5 add procedure for configuring autoscaler minimum and maximum number of PGs for pools
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 5.0
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z4
Assignee: Rivka Pollack
QA Contact: Pawan
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-13 12:04 UTC by Sam Wachira
Modified: 2023-07-20 04:25 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-07-20 04:25:49 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-4301 0 None None None 2022-05-13 12:08:21 UTC

Description Sam Wachira 2022-05-13 12:04:31 UTC
Describe the issue:
- The Red Hat Ceph Storage 5 documentation does not contain a procedure for configuring the autoscaler's minimum and maximum number of PGs for pools.

Describe the task you were trying to accomplish:
- Configure the autoscaler so that it does not automatically decrease the PG count below a set minimum or increase it beyond a set maximum.

Suggestions for improvement:
- Add a sub-section under section 3.4 that explains how to configure the autoscaler with pg_num_min and pg_num_max.

- Documentation is already available upstream:
 (https://docs.ceph.com/en/latest/rados/operations/placement-groups/#autoscaling-placement-groups)
Section: SPECIFYING BOUNDS ON A POOL’S PGS
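Following the upstream documentation referenced above, the requested procedure would cover commands along these lines (the pool name `testpool` and the PG values are illustrative only):

```shell
# Set a lower bound: the autoscaler will not reduce this pool below 32 PGs.
ceph osd pool set testpool pg_num_min 32

# Set an upper bound: the autoscaler will not grow this pool beyond 128 PGs.
ceph osd pool set testpool pg_num_max 128

# The bounds can also be set at pool-creation time:
ceph osd pool create testpool --pg-num-min 32 --pg-num-max 128
```

Setting either value to 0 removes the corresponding bound, per the upstream "Specifying bounds on a pool's PGs" section.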

Document URL:
(https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/storage_strategies_guide/placement_groups_pgs#setting-placement-group-auto-scaling)

Chapter/Section Number and Title:
3.4. Auto-scaling placement groups

Product Version:
Red Hat Ceph Storage 5

Environment Details:

Any other versions of this document that also need this update:

Additional information:

