Bug 1319336 - [RFE] [Docs] Document Ceph placement group configuration in relation to OSP-d
Summary: [RFE] [Docs] Document Ceph placement group configuration in relation to OSP-d
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: 10.0 (Newton)
Assignee: Don Domingo
QA Contact: Deepti Navale
URL:
Whiteboard:
Depends On: 1283721
Blocks:
 
Reported: 2016-03-18 21:14 UTC by Alexandre Marangone
Modified: 2017-01-26 16:45 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-17 01:24:26 UTC
Target Upstream Version:
Embargoed:



Description Alexandre Marangone 2016-03-18 21:14:41 UTC
Description of problem:
OSP-d sets the number of PGs to 128 for all pools. This is very arbitrary.
The number of placement groups should be calculated. The documentation should link to the PG calculation tool (https://access.redhat.com/labs/cephpgc/) and explain how to set these values in OSP-d.
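
(For context, the rule of thumb behind the calculator is roughly: total PGs ≈ (number of OSDs × 100) / replica count, split across the pools that share those OSDs and rounded up to the next power of two. For example, with 36 OSDs, 3 replicas, and 4 pools of roughly equal size, that works out to (36 × 100) / 3 / 4 = 300 per pool, rounded up to 512. These figures are only illustrative; the calculator linked above should be treated as authoritative.)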

Comment 2 Dan Macpherson 2016-05-25 06:21:21 UTC
(In reply to Alexandre Marangone from comment #0)
> Description of problem:
> OSP-d sets the number of PG to 128 for all pools. This is very arbitrary. 
> The number of placement groups should be calculated. The documentation
> should link to the PG calculation tool
> (https://access.redhat.com/labs/cephpgc/) and explain how to set these
> values in OSP-d.

The only problem is that you can't seem to set specific placement group sizes for different pools through OSPd. You only seem to be able to set a single PG value for all pools by setting the following hieradata:

ceph::profile::params::osd_pool_default_pg_num: 128
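
For reference, a minimal sketch of how that hieradata might be passed in a custom environment file, assuming the usual ExtraConfig mechanism (the value 512 is only an example; use the PG calculator to pick a real value):

  parameter_defaults:
    ExtraConfig:
      ceph::profile::params::osd_pool_default_pg_num: 512
      ceph::profile::params::osd_pool_default_pgp_num: 512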

Alexandre, what you're asking for requires a specific PG setting for each pool, but that doesn't seem to be possible through OSPd at the moment. We can only specify a global PG value that all pools use, which means all pools have the same PG number. Does that pose a problem?

Also CC'ing gfidente for exposure on this issue. Giulio, is there some way to set specific PG for pools? I might have missed it.

Comment 3 Giulio Fidente 2016-05-31 20:19:10 UTC
hi Dan, setting a specific PG count per pool is not possible yet; it's tracked by BZ 1283721

Comment 4 Dan Macpherson 2016-06-01 00:34:28 UTC
Thanks, Giulio.

Alexandre, it seems as though you can't set a specific PG count per pool yet. What did you want to do in terms of documentation?

Comment 5 Alexandre Marangone 2016-06-01 15:59:32 UTC
I guess there are two ways to go:
 - Update the doc with a mention of access.redhat.com/labs/cephpgc/ and state that a manual PG # change is necessary after deployment.
 - Leave as is for now and update the doc when BZ 1283721 is done.

Comment 6 Dan Macpherson 2016-06-01 17:14:37 UTC
Might have to leave it for now. I'm a little hesitant to recommend manual changes after deployment, only because future deployment updates to the Overcloud can overwrite any manual changes.

Having said that, I've made BZ 1283721 a dependency for this BZ. So when that gets resolved, I'll be notified and we can resume work on this BZ.

Comment 7 Dan Macpherson 2016-08-16 05:10:04 UTC
This looks like it's being targeted for OSP10. Not sure whether it's going to be backported.

For reference, the way to configure specific pools is to pass the ceph_pools Puppet param as an extra config value. An example in the storage environment file would look like this:

parameter_defaults:
  ExtraConfig:
    tripleo::profile::base::ceph::mon::ceph_pools:
      mypool:
        size: 5
        pg_num: 128
        pgp_num: 128

Relevant upstream commit: https://review.openstack.org/#/c/346794/2/manifests/profile/base/ceph/mon.pp

Comment 8 Giulio Fidente 2016-08-16 09:42:53 UTC
hi Dan, please do not document the solution in comment #7; it's not guaranteed to remain backward compatible in the future.

Instead, this bug includes two changes, the puppet part and the THT part, at https://review.openstack.org/#/c/346796/

Sample usage:

  parameter_defaults:
    CephPools:
      mypool:
        size: 5
        pg_num: 128
        pgp_num: 128

where mypool can be an arbitrary name for an additional pool to be created, or the name of one of the existing pools (e.g. volumes, vms, images, ...) whose settings need to be customized
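
For example, to customize the PG count of the default volumes pool, the same parameter might look like this (the numbers are only illustrative; use the PG calculator to pick real values):

  parameter_defaults:
    CephPools:
      volumes:
        size: 3
        pg_num: 512
        pgp_num: 512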

Comment 9 Dan Macpherson 2016-08-16 13:31:10 UTC
Giulio, ack.

