Description of problem: OSP-d sets the number of PG to 128 for all pools. This is very arbitrary. The number of placement groups should be calculated. The documentation should link to the PG calculation tool (https://access.redhat.com/labs/cephpgc/) and explain how to set these values in OSP-d.
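For context, the guideline behind the pgcalc tool is commonly summarized as roughly (OSD count × 100) / replica count per pool, rounded up to a power of two. A simplified sketch of that arithmetic (this is not the official tool's full logic, which also weights pools by expected %data; the function name and default are illustrative):

```python
def suggested_pg_num(osd_count, replica_count, target_pgs_per_osd=100):
    """Rough PG sizing per the common (OSDs * 100) / replicas guideline,
    rounded up to the next power of two. Simplified; the official pgcalc
    tool additionally weights by each pool's expected share of data."""
    raw = osd_count * target_pgs_per_osd / replica_count
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# e.g. 9 OSDs with 3x replication:
print(suggested_pg_num(9, 3))  # -> 512
```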
(In reply to Alexandre Marangone from comment #0)
> Description of problem: OSP-d sets the number of PG to 128 for all pools. This is very arbitrary. The number of placement groups should be calculated. The documentation should link to the PG calculation tool (https://access.redhat.com/labs/cephpgc/) and explain how to set these values in OSP-d.

The only problem is that you can't seem to set specific placement group sizes for different pools through OSPd. You can only set a single PG value for all pools through the following hieradata:

ceph::profile::params::osd_pool_default_pg_num: 128

Alexandre, what you're asking for requires a specific PG setting for each pool, but that doesn't seem to be possible through OSPd at the moment. We can only specify a global PG value that all pools use, which means all pools have the same PG number. Does that pose a problem?

Also CC'ing gfidente for exposure on this issue. Giulio, is there some way to set specific PGs per pool? I might have missed it.
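For illustration, the single global hieradata value mentioned above would typically be passed from an environment file along these lines (a sketch; the pgp_num parameter name is an assumption based on the usual puppet-ceph naming, and the exact placement may vary by release):

```yaml
# Sketch of an environment file applying one global PG value to all
# pools via hieradata. Parameter names follow puppet-ceph conventions;
# verify against your release before use.
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_pg_num: 128
    ceph::profile::params::osd_pool_default_pgp_num: 128
```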
hi Dan, setting specific PGs per pool is not possible yet; it's tracked by BZ 1283721
Thanks, Giulio. Alexandre, it seems you can't set specific PGs per pool yet. What did you want to do in terms of documentation?
I guess there are two ways to go:
- Update the doc with a mention of access.redhat.com/labs/cephpgc/ and state that a manual PG number change is necessary after deployment.
- Leave as is for now and update the doc when BZ 1283721 is done.
Might have to leave it for now. I'm a little hesitant to recommend manual changes after deployment, only because future deployment updates to the Overcloud can overwrite any manual changes. Having said that, I've made BZ 1283721 a dependency for this BZ. So when that gets resolved, I'll be notified and we can resume work on this BZ.
This looks like it's being targeted for OSP10. Not sure whether it's going to be backported. For reference, the method to set specific pools is to use the ceph_pools Puppet parameter as an extra config value. An example in the storage environment file would look like this:

parameter_defaults:
  ExtraConfig:
    tripleo::profile::base::ceph::mon::ceph_pools:
      mypool:
        size: 5
        pg_num: 128
        pgp_num: 128

Relevant upstream commit: https://review.openstack.org/#/c/346794/2/manifests/profile/base/ceph/mon.pp
hi Dan, please do not document the solution in comment #7; it's not guaranteed to be backward compatible in the future. Instead, this bug includes two changes, the puppet part and the THT part, at https://review.openstack.org/#/c/346796/

Sample usage:

parameter_defaults:
  CephPools:
    mypool:
      size: 5
      pg_num: 128
      pgp_num: 128

where mypool can be an arbitrary name for an additional pool to be created, or the name of one of the existing pools (e.g. volumes, vms, images, ...) for which the settings need to be customized
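An environment file containing that CephPools override would then be included at deploy time with the standard -e flag, e.g. (the file path is illustrative):

```shell
# Include the environment file carrying the CephPools settings when
# deploying or updating the overcloud. Path is illustrative.
openstack overcloud deploy --templates \
  -e ~/templates/storage-environment.yaml
```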
Giulio, ack.
Publish request sent. Updates should be live soon: https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/red-hat-ceph-storage-for-the-overcloud#custom-ceph-pools