Bug 1292981 - Support customization of PGs per OSD in director
Status: CLOSED DUPLICATE of bug 1283721
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: All
OS: All
Priority: medium
Severity: high
Target Milestone: Upstream M2
Target Release: 11.0 (Ocata)
Assigned To: John Fulton
QA Contact: Yogev Rabl
Keywords: Triaged
Duplicates: 1292982
Depends On:
Blocks: 1413723 1387433
Reported: 2015-12-18 18:07 EST by Dan Yocum
Modified: 2017-01-16 14:43 EST
CC: 6 users
Doc Type: Bug Fix
Type: Bug
Last Closed: 2017-01-11 11:10:49 EST

Attachments: None
Description Dan Yocum 2015-12-18 18:07:27 EST
Description of problem:

Director doesn't create enough PGs even when the hieradata/ceph.yaml file is updated to create 4096.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Deploy overcloud (3 control, 4 ceph, 1 compute), each ceph node has 11 SATA devices for OSDs
2. Run 'ceph health'

Actual results:

HEALTH_WARN too few pgs per osd (19 < min 30)

Expected results:

Not sure.  Not that error.
Additional info:
Comment 2 Mike Burns 2015-12-20 05:34:13 EST
*** Bug 1292982 has been marked as a duplicate of this bug. ***
Comment 4 Mike Burns 2016-04-07 17:00:12 EDT
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.
Comment 7 seb 2016-10-13 10:03:39 EDT
PGs are configurable, so we don't enforce any PG values.
If the value is too low or too high, that's a misconfiguration of the variable.
Comment 8 John Fulton 2016-10-13 11:16:36 EDT
Two points here. 

A. The admin needs to know how to set this value correctly as described in the doc: https://access.redhat.com/documentation/en/red-hat-ceph-storage/2/paged/storage-strategies-guide/chapter-3-placement-groups-pgs

B. If the admin changes the value, it should be propagated to the overcloud. 
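For point A, the rule of thumb from that guide (total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two) can be sketched as a small shell helper; `pg_count` is a hypothetical name of mine, not an OSPd or Ceph tool:

```shell
# Rule of thumb from the Ceph placement-group docs:
# total PGs = (number of OSDs * 100) / replica count,
# rounded up to the next power of two.
pg_count() {
  local osds=$1 replicas=$2
  local target=$(( (osds * 100) / replicas ))
  local pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
  done
  echo "$pg"
}

pg_count 48 3   # e.g. a 48-OSD cluster with 3 replicas -> 2048
```

For the reporter's cluster (4 ceph nodes × 11 OSDs = 44 OSDs) the same arithmetic also lands on 2048, which is why the default of 256 or less trips the HEALTH_WARN threshold.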

The bug here is that point B above is not working correctly, even with the OSPd 10 puddle 2016-10-07.4.

Instead, what's happening is that if the value gets updated on OSPd and the deploy is re-run, then the value only gets updated in the OSD servers' ceph.conf. What should happen instead is this:

1. The value should be getting set in the ceph MONITOR's ceph.conf (mons are responsible for creating pools; OSDs don't need to care)

2. A command like `ceph osd pool set $x pg_num $y` needs to be run on one of the ceph monitors for each pool, but only in the case of an update (on initial create this does not need to be run).

Marking this bug as verified. 

Note that this only affects updates to the Ceph cluster. Newly created Ceph clusters do not have this problem provided that the value was set correctly as described in point A above. However, OSPd should correctly support scenario B and do steps 1 and 2 above.
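Step 2 above could be sketched roughly as follows; `update_pool_pgs` and the `CEPH` dry-run wrapper are hypothetical names of mine, not anything OSPd or puppet-ceph ships:

```shell
# Sketch of step 2: on one monitor, after an update, push the new pg_num
# to each existing pool. CEPH defaults to the real ceph CLI; set CEPH=echo
# to dry-run and just print the commands instead of executing them.
CEPH=${CEPH:-ceph}
update_pool_pgs() {
  local new_pg=$1; shift
  local pool
  for pool in "$@"; do
    "$CEPH" osd pool set "$pool" pg_num "$new_pg"
  done
}

# Dry run over the default OSPd pools:
CEPH=echo update_pool_pgs 512 rbd images volumes vms
```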
Comment 9 John Fulton 2016-10-13 11:18:40 EDT
Here's my testing to back up my claim that it's not doing the right thing, and to provide a workaround until the bug is fixed.

I have a ceph cluster deployed by OSPd:

[root@overcloud-controller-0 ~]# ceph -s
    cluster de69d22e-90bb-11e6-b2c6-525400330666
     health HEALTH_WARN
            clock skew detected on mon.overcloud-controller-0
            too few PGs per OSD (14 < min 30)
            Monitor clock skew detected 
     monmap e1: 3 mons at {overcloud-controller-0=,overcloud-controller-1=,overcloud-controller-2=}
            election epoch 6, quorum 0,1,2 overcloud-controller-1,overcloud-controller-0,overcloud-controller-2
     osdmap e163: 48 osds: 48 up, 48 in
            flags sortbitwise
      pgmap v2999: 224 pgs, 6 pools, 10247 MB data, 1646 objects
            32631 MB used, 53596 GB / 53628 GB avail
                 224 active+clean
[root@overcloud-controller-0 ~]# 

I updated PG number setting on undercloud from 256 to 512: 

[stack@hci-director ~]$ diff custom-templates/custom-hci.yaml ~/backup/custom-templates/custom-hci.yaml 
<     ceph::profile::params::osd_pool_default_pg_num: 512
>     ceph::profile::params::osd_pool_default_pg_num: 256
[stack@hci-director ~]$ 

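For reference, the overridden line lives in the hieradata section of the custom environment file. A minimal sketch of what such a file might contain (`ExtraConfig` is the generic TripleO hieradata hook; the actual file may use a different section, and the `pgp_num` key is my addition, not taken from the file):

```yaml
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_pg_num: 512
    ceph::profile::params::osd_pool_default_pgp_num: 512
```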
My deploy command uses the updated template. 

[stack@hci-director ~]$ cat deploy-hci.sh 
source ~/stackrc
time openstack overcloud deploy --templates ~/templates \
-e ~/templates/environments/puppet-pacemaker.yaml \
-e ~/templates/environments/storage-environment.yaml \
-e ~/templates/environments/network-isolation.yaml \
-e ~/templates/environments/hyperconverged-ceph.yaml \
-e ~/custom-templates/custom-hci.yaml \
--control-flavor control \
--control-scale 3 \
--compute-flavor compute \
--compute-scale 4 \
--ntp-server \
--neutron-bridge-mappings datacentre:br-ex,tenant:br-tenant \
--neutron-network-type vlan \
--neutron-network-vlan-ranges tenant:4051:4060 \

[stack@hci-director ~]$

Re-running the deploy to push the configuration update to the overcloud:

[stack@hci-director ~]$ ./deploy-hci.sh 
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: a8b01497-6504-4abb-ac8a-86e5c779b27c
2016-10-13 14:39:53Z [AllNodesDeploySteps]: UPDATE_COMPLETE  state changed
2016-10-13 14:40:03Z [overcloud]: UPDATE_COMPLETE  Stack UPDATE completed successfully

 Stack overcloud UPDATE_COMPLETE 

Overcloud Endpoint:
Overcloud Deployed

real    17m40.239s
user    0m2.207s
sys     0m0.201s
[stack@hci-director ~]$ 

The OSD server has the new value in hiera:

[root@overcloud-novacompute-0 ~]# hiera ceph::profile::params::osd_pool_default_pg_num
[root@overcloud-novacompute-0 ~]#

The PG number is updated in ceph.conf:

[root@overcloud-novacompute-0 ~]# grep pg_num /etc/ceph/ceph.conf
osd_pool_default_pg_num = 512
[root@overcloud-novacompute-0 ~]# 

However, the monitor does not have the new value:

[root@overcloud-controller-0 hieradata]# hiera ceph::profile::params::osd_pool_default_pg_num
[root@overcloud-controller-0 hieradata]# grep pg_num /etc/ceph/ceph.conf
osd_pool_default_pg_num = 32
[root@overcloud-controller-0 hieradata]# ceph osd pool get vms pg_num
pg_num: 32
[root@overcloud-controller-0 hieradata]# 

As a workaround the admin would need to run the following on the ceph cluster: 

 ceph osd pool set $pool pg_num $new_size

When the bug is fixed, ideally we'd want puppet-ceph to know that an update is happening and run the above. 

# for i in rbd images volumes vms; do
    ceph osd pool set $i pg_num 256
    sleep 10
    ceph osd pool set $i pgp_num 256
    sleep 10
  done

set pool 0 pg_num to 256
set pool 0 pgp_num to 256
set pool 1 pg_num to 256
set pool 1 pgp_num to 256
set pool 2 pg_num to 256
set pool 2 pgp_num to 256
set pool 3 pg_num to 256
set pool 3 pgp_num to 256

As per Jacob Liberman: "The sleep statements are intended to ensure the cluster has time to complete the previous action before proceeding. If a large increase is needed, increase pg_num in stages."
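That staged approach could be sketched as a small helper that emits the intermediate pg_num values to apply one at a time; the function name and the power-of-two step size are my assumptions, not from Jacob's note:

```shell
# Hypothetical helper: list the staged pg_num values between the current
# and target counts, doubling at each step and capping at the target.
pg_stages() {
  local current=$1 target=$2
  while [ "$current" -lt "$target" ]; do
    current=$(( current * 2 ))
    if [ "$current" -gt "$target" ]; then
      current=$target
    fi
    echo "$current"
  done
}

pg_stages 64 512   # one stage per line: 128, 256, 512
```

The admin would run the pg_num/pgp_num loop above once per emitted stage, waiting for the cluster to settle (e.g. `ceph -s` shows active+clean) before moving to the next.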
Comment 10 John Fulton 2016-10-13 12:24:33 EDT
The reproduction above was with:

[stack@hci-director ~]$ rpm -q openstack-tripleo-heat-templates puppet-ceph 
[stack@hci-director ~]$
Comment 11 John Fulton 2016-10-13 17:05:16 EDT
Another way to look at this is whether we support updates to these values with OSPd, or whether OSPd's job here is just to set the value correctly the first time, with the admin updating it later by running `ceph osd pool set $pool pg_num $new_size` as part of normal cloud maintenance. Perhaps supporting the update could be considered an RFE.
Comment 12 John Fulton 2017-01-11 11:10:49 EST
Update on this: 

- Reviewed this with more senior puppet-ceph devs. The consensus is that TripleO should set the default correctly in the ceph.conf of the monitor nodes, so that when new pools are created they get the new default.

- If an admin wishes to change the PG count for an existing pool, then they need to update it with `ceph osd pool set $i pgp_num $num`

- We verified that not only does TripleO support customization of PGs, it also supports customization of PGs _per pool_, as per bug 1283721; see comment #33 there, which shows that it has been tested. Thus, I'm closing this as a duplicate of that bug. https://bugzilla.redhat.com/show_bug.cgi?id=1283721#c33

*** This bug has been marked as a duplicate of bug 1283721 ***
