Description of problem:
The Puppet code used to deploy Ceph completely ignores any customisation done in puppet/hieradata/ceph.yaml: all Ceph pools are created with the default of 64 placement groups. I am proposing a code change that not only fixes this, but also allows configuring each pool with a different set of parameters: pg_num, pgp_num and size. This makes sense, as the Ceph pool used by Glance will typically be smaller than the Ceph pools used to store Cinder volumes or Nova ephemeral disks.

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-0.8.6-71.el7ost.noarch

How reproducible:
Always

Steps to Reproduce:
1. Deploy an Overcloud
2. All Ceph pools will have pg_num == pgp_num == 64 no matter what is specified in the file puppet/hieradata/ceph.yaml

Actual results:
All Ceph pools have pg_num == pgp_num == 64 no matter what is specified in the file puppet/hieradata/ceph.yaml.

Expected results:
Each Ceph pool should be configurable independently.

Additional info:
I have created a Gerrit change to implement the fix: https://review.openstack.org/#/c/247669
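For reference, the symptom can be confirmed from any node that has the Ceph admin keyring; this is just a sketch and the 'volumes' pool name is only an example:

  ceph osd dump | grep ^pool          # prints size, pg_num and pgp_num for every pool
  ceph osd pool get volumes pg_num    # reports 64 regardless of the hieradata overrides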
Can we please target the fix for 7.2? This is a critical bug that is affecting Telefónica's ability to deploy production OpenStack clusters.
*** This bug has been marked as a duplicate of bug 1252546 ***
(In reply to Felipe Alfaro Solana from comment #2)
> Can we please target the fix for 7.2? This is a critical bug that is
> affecting Telefónica's ability to deploy production OpenStack clusters.

Hi Felipe, thanks a lot for your proposed patch. Unfortunately we cannot add more patches to 7.2: we are not just past dev freeze, we are already past the blocker freeze. This will have to go to 8.0.
Will there be a 7.3 release before 8.0? We aren't going to upgrade to 8.0 until February next year and can't wait that long for this patch to be officially supported.
Felipe, there is no plan for a 7.3 release before 8.0, since we are targeting a ~2-month release cadence. Can you please send me an e-mail and provide better context? Based on that we will see what we can do to support you. Thanks, Jarda
Guilio, is this bug a duplicate and therefore fixed by the other bugzilla referenced in comment #4?
hi, no this is not a duplicate, it is a valid RFE and it depends on 1252546.

This RFE is about making it possible to use *different* values of pg_num (etc.) for different pools. The patch for 1252546 is now merged upstream; we need to review and merge the patch for this BZ instead.
Sample usage to override the defaults of the 'volumes' pool:

parameter_defaults:
  ExtraConfig:
    CephPools:
      volumes:
        size: 5
        pg_num: 128
        pgp_num: 128
verification failed on openstack-tripleo-heat-templates-5.0.0-0.8.0rc3.el7ost.noarch

I ran the following deployment command:

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -e usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e usr/share/openstack-tripleo-heat-templates/environments/net-two-nic-with-vlans.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-radosgw.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
  --control-scale 3 \
  --compute-scale 2 \
  --ceph-storage-scale 3 \
  --control-flavor control \
  --compute-flavor compute \
  --ceph-storage-flavor ceph-storage \
  --libvirt-type qemu \
  --ntp-server 0.rhel.pool.ntp.org

Storage-environment.yaml content is:

resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

parameter_defaults:
  #### BACKEND SELECTION ####
  ## Whether to enable iscsi backend for Cinder.
  CinderEnableIscsiBackend: false
  ## Whether to enable rbd (Ceph) backend for Cinder.
  CinderEnableRbdBackend: true
  ## Cinder Backup backend can be either 'ceph' or 'swift'.
  CinderBackupBackend: ceph
  ## Whether to enable NFS backend for Cinder.
  # CinderEnableNfsBackend: false
  ## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
  NovaEnableRbdBackend: true
  ## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GlanceBackend: rbd
  ## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GnocchiBackend: rbd

  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda': {}
      '/dev/vdb': {}
      '/dev/vdc': {}
      '/dev/vdd': {}
      '/dev/vde': {}
    ceph::profile::params::osd_crush_update_on_start: false
    CephPools:
      volumes:
        size: 3
        pg_num: 128
        pgp_num: 128
      vms:
        size: 1
        pg_num: 128
        pgp_num: 128
      images:
        size: 5
        pg_num: 128
        pgp_num: 128

The pg_num of the resulting pools is 32.
hi Yogev, I am *very* sorry but my syntax in comment #26 was wrong. CephPools is a Heat parameter, like GlanceBackend, and goes at the same level as it, not folded within ExtraConfig (which instead pushes hieradata onto the nodes). Is there any chance we could try this again and update the BZ?
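As a minimal sketch of the corrected layout (the file name ceph-pools.yaml is only an example), the parameter goes directly under parameter_defaults and is passed to the deployment like any other environment file:

  cat > ceph-pools.yaml <<'EOF'
  parameter_defaults:
    CephPools:
      volumes:
        size: 3
        pg_num: 128
        pgp_num: 128
  EOF

  openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates/ \
    -e ceph-pools.yaml
  # ... plus the other -e environment files from the deployment command above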
Sure, I'll rerun it with the proper configuration.
Verified with the configuration:

resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  CephPools:
    volumes:
      size: 3
      pg_num: 128
      pgp_num: 128
    vms:
      size: 1
      pg_num: 128
      pgp_num: 128
    images:
      size: 5
      pg_num: 128
      pgp_num: 128
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda': {}
      '/dev/vdb': {}
      '/dev/vdc': {}
      '/dev/vdd': {}
      '/dev/vde': {}
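For completeness, the resulting pool settings can be checked with the ceph CLI from a node that has the admin keyring (sketch; pool names as in the configuration above):

  ceph osd pool get volumes pg_num   # expected: 128
  ceph osd pool get volumes size     # expected: 3
  ceph osd dump | grep ^pool         # per-pool summary including pgp_num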
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-2948.html
*** Bug 1292981 has been marked as a duplicate of this bug. ***