In the OSP13 version of the document "Deploying an Overcloud with Containerized Red Hat Ceph", section "1.3. Setting Requirements", please add an additional requirement. Please do the same for the other v13 documents which use director to deploy Ceph (e.g. the HCI document).

As discovered in BZ 1539852 [1], RHCS3 (to be introduced in OSP13; OSP12 used RHCS2) does not create a pool if the pg_num, pool size, and mon_max_pg_per_osd are outside of Ceph recommended practice for production clusters. The OSP12 version of this document [2] is not explicit enough about this requirement. Please add the following additional criteria.

Under "Disk Layout" it currently reads:

"""
The recommended Red Hat Ceph Storage node configuration requires at least three or more disks in a layout similar to the following:
"""

Please update the above to the following:

"""
The recommended Red Hat Ceph Storage node configuration requires at least five or more Object Storage Daemons (OSDs). The object storage daemons should correspond 1 to 1 to physical disks in a layout similar to the following:
"""

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1539852
[2] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html/deploying_an_overcloud_with_containerized_red_hat_ceph/intro#setting_requirements
I think it might be useful to add a paragraph documenting how to customize the default pg_num for deployments using fewer than five OSDs. For example, to create 64 PGs for every Ceph pool, which would allow a deployment with only three OSDs, use an environment file like the following:

parameter_defaults:
  CephPoolDefaultPgNum: 64

Note that in the docs we already have a section documenting how to set a different pg_num value for a particular pool [1].

1. https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/deploying_an_overcloud_with_containerized_red_hat_ceph/#custom-ceph-pools
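To illustrate why 64 PGs per pool permits a three-OSD deployment, here is a minimal sketch of the pool-creation arithmetic. The function name `pools_fit` and the default values (7 pools, size 3, an effective per-OSD limit of 600) are assumptions for illustration, not part of the product:

```python
# Hypothetical helper: checks whether Ceph would allow creating all pools
# at a given per-pool pg_num, using the constraint
#   pool_size * pg_num * pool_count < per_osd_limit * osd_count
# Defaults below are illustrative assumptions (7 pools, size 3, limit 600).

def pools_fit(pg_num, osd_count, pool_count=7, pool_size=3, limit=600):
    """True if all pools can be created at this pg_num on osd_count OSDs."""
    return pool_size * pg_num * pool_count < limit * osd_count

print(pools_fit(128, 3))  # False: the default pg_num of 128 needs more OSDs
print(pools_fit(64, 3))   # True: 64 PGs per pool fits a three-OSD deployment
```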
In https://bugzilla.redhat.com/show_bug.cgi?id=1539852 the deployment tried to use 128 PGs with size 3 for each pool. We expect to have 4 pools: images, instances, volumes, metrics. 200 was the per-OSD limit.

128 * 3 * 4 / 200 = 7.68 OSDs
Including cinder-backup there will be 5 pools:

128 * 3 * 5 / 200 = 9.6 (10)

Including a default existing pool:

128 * 3 * 6 / 200 = 11.52 (12)
Luminous won't create pools unless:

  (pool_size * pg_num * pool_count) < (mon_max_pg_per_osd * osd_count)

Given the defaults, that's:

  (3 * 128 * 7) < (600 * len(devices))

So given the defaults, unless len(devices) >= 5, not all pools are created.
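The constraint above can be sketched as a short calculation. The helper name `min_osds` is hypothetical, and the defaults (size 3, 128 PGs, 7 pools, per-OSD limit 600) are the values quoted in this comment:

```python
# Sketch of the Luminous pool-creation constraint described above:
# pools are created only while
#   pool_size * pg_num * pool_count < max_pg_per_osd * osd_count
# Defaults mirror this comment: size 3, 128 PGs/pool, 7 pools, limit 600.

def min_osds(pool_size=3, pg_num=128, pool_count=7, max_pg_per_osd=600):
    """Smallest OSD count for which all pools can be created."""
    total_pg_replicas = pool_size * pg_num * pool_count
    osds = 1
    while total_pg_replicas >= max_pg_per_osd * osds:
        osds += 1
    return osds

print(min_osds())  # 5: matches the "at least five OSDs" recommendation
```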
*** Bug 1482424 has been marked as a duplicate of this bug. ***
Hi John, Thanks for raising this bug. I'm reading through and wanted to confirm - are you asking for changes to the version 12 docs as well, or pointing out places we need to change only when we move to 13?
(In reply to Lucy Bopf from comment #6)
> Hi John,
>
> Thanks for raising this bug.
>
> I'm reading through and wanted to confirm - are you asking for changes to
> the version 12 docs as well, or pointing out places we need to change only
> when we move to 13?

Lucy,

I am not asking that this change be made to the OSP12 document, only the OSP13 document. This is because OSP13 uses RHCS3 (OSP12 uses RHCS2).

In the statement I suggested, it's probably a good idea to include the versions:

"""
The recommended Red Hat Ceph Storage 3 node configuration with OSP13 requires at least five or more Object Storage Daemons (OSDs). The object storage daemons should correspond 1 to 1 to physical disks in a layout similar to the following:
"""

Thanks,
John