Description of problem: OSPd should be able to deploy Ceph nodes with mixed disk types, e.g. SSDs and HDDs in the same box, so that cloud users can be offered different tiers of storage performance. This can be done with CRUSH maps, as described in this post: http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
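For reference, the approach in that post boils down to defining a separate CRUSH root per media type and pinning OSDs under it manually. A rough sketch with the ceph CLI (the bucket, host, and OSD names here are made up for illustration):

$ ceph osd crush add-bucket ssd root
$ ceph osd crush add-bucket hdd root
$ ceph osd crush add-bucket node1-ssd host
$ ceph osd crush add-bucket node1-hdd host
$ ceph osd crush move node1-ssd root=ssd
$ ceph osd crush move node1-hdd root=hdd
# pin each OSD under the host bucket matching its disk type
$ ceph osd crush set osd.0 1.0 root=ssd host=node1-ssd
$ ceph osd crush set osd.1 1.0 root=hdd host=node1-hdd
# one CRUSH rule per root, so pools can choose SSD or HDD placement
$ ceph osd crush rule create-simple ssd-rule ssd host
$ ceph osd crush rule create-simple hdd-rule hdd host

This manual placement only sticks if the OSDs do not re-register themselves under their default host bucket on restart, which is exactly what "osd crush update on start = false" prevents.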
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
This is simple to implement: OSPd just needs to expose the option "osd crush update on start = false".
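For context, the corresponding ceph.conf setting is a single line (shown here under [global]; an [osd] section would also work):

[global]
osd_crush_update_on_start = false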
Hi Erno, can you take on this one as well? Jeff
The definition of the config option has been proposed to puppet-ceph.
Can the "osd crush update on start = false" be passed through ospd via CephStorageExtraConfig?
@Keith I don't know, but the upstream patch is here: https://review.openstack.org/#/c/356376/
@Keith yes, we can now pass it via ExtraConfig, e.g. ceph::profile::params::osd_crush_update_on_start: false
Verification failed on puppet-ceph-2.2.1-2.el7ost.noarch. The storage environment YAML file is set as follows:

resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda': {}
      '/dev/vdb': {}
      '/dev/vdc': {}
    ceph::profile::params::osd_crush_update_on_start: false

ceph.conf from a controller node:

[global]
osd_pool_default_pgp_num = 32
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = d15ab0c2-9b64-11e6-a8df-525400bd4c6f
cluster_network = 192.168.0.9/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.0.15,192.168.0.9,192.168.0.19
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_pg_num = 32
ms_bind_ipv6 = False
public_network = 192.168.0.9/24

[mon.overcloud-controller-1]
public_addr = 192.168.0.9

ceph.conf from a Ceph storage node:

[global]
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = d15ab0c2-9b64-11e6-a8df-525400bd4c6f
cluster_network = 192.168.0.16/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.0.15,192.168.0.9,192.168.0.19
auth_client_required = cephx
public_network = 192.168.0.16/24

On the Ceph storage node, /etc/puppet/hieradata/extraconfig.yaml contains:

ceph::profile::params::osd_crush_update_on_start: false
ceph::profile::params::osds: { "/dev/vda": {}, "/dev/vdb": {}, "/dev/vdc": {} }

So the hiera value is set, but osd_crush_update_on_start never makes it into either generated ceph.conf.
In addition, the deployment command is:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-two-nic-with-vlans.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
  --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage \
  --libvirt-type qemu --ntp-server clock.redhat.com
Yogev, if you use ceph::osd_crush_update_on_start: false under ExtraConfig (instead of the ceph::profile::params:: key), it should work. Any chance you could try this again? See the sketch below.
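That is, keep the same osds hash and only switch the crush key, e.g.:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda': {}
      '/dev/vdb': {}
      '/dev/vdc': {}
    ceph::osd_crush_update_on_start: false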
Moving this back to ON_QA to retest with the proper configuration.
Verified on puppet-ceph-2.2.1-3.el7ost.noarch. Deployed an internal Ceph storage cluster with this YAML file:

resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda': {}
      '/dev/vdb': {}
      '/dev/vdc': {}
    ceph::osd_crush_update_on_start: false

/etc/ceph/ceph.conf on the controller nodes:

[global]
osd_pool_default_pgp_num = 32
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = 2a340e08-a116-11e6-b9b8-525400bd4c6f
cluster_network = 192.168.2.18/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.35.140.36,10.35.140.47,10.35.140.38
auth_client_required = cephx
osd_pool_default_size = 3
osd_crush_update_on_start = False
osd_pool_default_pg_num = 32
ms_bind_ipv6 = False
public_network = 10.35.140.47/27

[mon.overcloud-controller-1]
public_addr = 10.35.140.47

[client.radosgw.gateway]
user = apache
rgw_frontends = civetweb port=10.35.140.47:8080
log_file = /var/log/ceph/radosgw.log
host = overcloud-controller-1
keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
rgw_keystone_token_cache_size = 500
rgw_keystone_url = http://192.168.0.7:35357
rgw_s3_auth_use_keystone = True
rgw_keystone_admin_token = ezcJv8srAGKqrA3vVrvg4xHWN
rgw_keystone_accepted_roles = admin,_member_,Member

/etc/ceph/ceph.conf on the Ceph storage nodes:

[global]
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = 2a340e08-a116-11e6-b9b8-525400bd4c6f
cluster_network = 192.168.2.10/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.35.140.36,10.35.140.47,10.35.140.38
auth_client_required = cephx
osd_crush_update_on_start = False
public_network = 10.35.140.46/27

This time osd_crush_update_on_start = False appears in the [global] section on both the controller and the Ceph storage nodes.
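A quick sanity check on any overcloud node, assuming the stock file locations (the grep output matches the files above; ceph osd tree just confirms OSDs stay where CRUSH placed them across restarts):

$ grep osd_crush_update_on_start /etc/ceph/ceph.conf
osd_crush_update_on_start = False
$ ceph osd tree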
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-2948.html