Bug 1545383
| Summary: | [Docs][RFE][Ceph] Update minimum hardware requirements for RHCS3, which will be in OSP13, for OSDs | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | John Fulton <johfulto> |
| Component: | documentation | Assignee: | RHOS Documentation Team <rhos-docs> |
| Status: | CLOSED WONTFIX | QA Contact: | RHOS Documentation Team <rhos-docs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 13.0 (Queens) | CC: | afazekas, gfidente, johfulto, jomurphy, mburns, srevivo |
| Target Milestone: | --- | Keywords: | Documentation |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-04-19 10:01:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1539852 | | |
Description
John Fulton
2018-02-14 19:08:42 UTC
I think it might be useful to add a paragraph documenting how to customize the default pg_num for deployments using fewer than five OSDs. For example, to create 64 PGs for every Ceph pool, which would allow a deployment with only three OSDs, use an environment file like the following:

  parameter_defaults:
    CephPoolDefaultPgNum: 64

Note that the docs already have a section documenting how to set a different pg_num value for a particular pool [1].

1. https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/deploying_an_overcloud_with_containerized_red_hat_ceph/#custom-ceph-pools

In https://bugzilla.redhat.com/show_bug.cgi?id=1539852, the deployment was attempted with 128 PGs and size 3 for each pool. We expect to have 4 pools: images, instances, volumes, metrics. The per-OSD limit was 200.

  128 * 3 * 4 / 200 = 7.68 (8) OSDs

Including cinder-backup it will be 5 pools:

  128 * 3 * 5 / 200 = 9.6 (10)

Including a default existing pool:

  128 * 3 * 6 / 200 = 11.52 (12)

Luminous won't create pools unless:

  (pool_size * pg_num * pool_count) < (mon_max_pg_per_osd * osd_count)

Given the defaults, that's:

  (3 * 128 * 7) < (600 * len(devices))

So, given the defaults, unless len(devices) >= 5, not all pools are created.

*** Bug 1482424 has been marked as a duplicate of this bug. ***

Hi John,

Thanks for raising this bug.

I'm reading through and wanted to confirm: are you asking for changes to the version 12 docs as well, or pointing out places we need to change only when we move to 13?

(In reply to Lucy Bopf from comment #6)
> Hi John,
>
> Thanks for raising this bug.
>
> I'm reading through and wanted to confirm - are you asking for changes to
> the version 12 docs as well, or pointing out places we need to change only
> when we move to 13?

Lucy,

I am not asking that this change be made to the OSP12 document, only the OSP13 document. This is because OSP13 uses RHCS3 (OSP12 uses RHCS2). In the statement I suggested, it's probably a good idea to include the versions:

"""
The recommended Red Hat Ceph Storage 3 node configuration with OSP13 requires at least five Object Storage Daemons (OSDs). The Object Storage Daemons should correspond 1 to 1 to physical disks in a layout similar to the following:
"""

Thanks,
  John
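As a quick sanity check of the arithmetic above, here is a minimal Python sketch (not part of the bug report; the function name and default values are assumptions taken from the comments in this bug) that evaluates the Luminous pool-creation constraint for a given OSD count:

  # Illustrative sketch only; defaults mirror the values discussed above.
  def pools_fit(osd_count, pool_size=3, pg_num=128, pool_count=7,
                max_pg_per_osd=600):
      """True if the pool-creation constraint holds for osd_count OSDs."""
      return pool_size * pg_num * pool_count < max_pg_per_osd * osd_count

  # Smallest OSD count that satisfies the constraint with these defaults:
  print(next(n for n in range(1, 100) if pools_fit(n)))  # -> 5

Substituting pg_num=64 in the same check gives a minimum of 3 OSDs, which is consistent with the three-OSD case described at the top of this bug.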