Bug 1258120 - [RFE] OSP-d support for Ceph crush maps
Summary: [RFE] OSP-d support for Ceph crush maps
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-ceph
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 10.0 (Newton)
Assignee: Erno Kuvaja
QA Contact: Yogev Rabl
Depends On:
Blocks: 1376333
 
Reported: 2015-08-29 10:54 UTC by Marius Cornea
Modified: 2019-05-29 01:25 UTC
CC List: 16 users

Fixed In Version: puppet-ceph-2.2.1-1.el7ost
Doc Type: Enhancement
Clone Of:
: 1417212 (view as bug list)
Last Closed: 2016-12-14 15:14:52 UTC
scohen: needinfo+




Links
OpenStack gerrit 356376 (MERGED): Expose osd crush update on start option; last updated 2020-11-16 18:18:01 UTC
Red Hat Product Errata RHEA-2016:2948 (SHIPPED_LIVE): Red Hat OpenStack Platform 10 enhancement update; last updated 2016-12-14 19:55:27 UTC

Description Marius Cornea 2015-08-29 10:54:00 UTC
Description of problem:

OSP-d should be able to deploy Ceph nodes with different types of disks. For example, you may want to deploy Ceph nodes that mix SSDs and HDDs, so that cloud users can be offered different tiers of storage performance. This can be done with CRUSH maps, as described in this post:

http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
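As a sketch of what this maps to on the OSP-d side (device paths are illustrative, and the exact hiera key for the CRUSH option is only settled in the comments below), the request boils down to something like:

parameter_defaults:
  ExtraConfig:
    # mixed devices in the same node: some spinning disks, some SSDs
    ceph::profile::params::osds:
      '/dev/sdb': {}
      '/dev/sdc': {}
      '/dev/sdd': {}
    # keep ceph-osd from moving OSDs back under the default CRUSH root at
    # startup, so that a hand-built hdd/ssd hierarchy survives daemon restarts
    ceph::osd_crush_update_on_start: false

The separate hdd/ssd CRUSH roots and rules still have to be created on the cluster itself (for example with crushtool or the ceph osd crush commands described in the linked post), and pools are then pointed at one rule or the other to expose the two performance tiers.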

Comment 4 Mike Burns 2016-04-07 20:47:27 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 6 seb 2016-07-07 16:28:32 UTC
Simple to implement; we just need to expose this option in OSP-d:

"osd crush update on start = false"

Comment 7 Jeff Brown 2016-07-13 19:55:49 UTC
Hi Erno,

Can you take on this one as well?

Jeff

Comment 8 Erno Kuvaja 2016-08-17 10:47:11 UTC
Definition of the config option proposed to puppet-ceph

Comment 9 Keith Schincke 2016-08-18 12:32:15 UTC
Can the "osd crush update on start = false" be passed through ospd via CephStorageExtraConfig?

Comment 10 seb 2016-08-19 08:46:25 UTC
@Keith I don't know, but the upstream patch is here: https://review.openstack.org/#/c/356376/

Comment 11 Erno Kuvaja 2016-08-22 07:14:25 UTC
@Keith yes, we can now pass it via ExtraConfig, e.g. ceph::profile::params::osd_crush_update_on_start: false
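For the role-scoped variant from comment 9, the same key can presumably go under CephStorageExtraConfig so it only applies to the Ceph storage nodes, e.g. (untested sketch):

parameter_defaults:
  CephStorageExtraConfig:
    ceph::profile::params::osd_crush_update_on_start: false

(Note: the verification below ends up needing ceph::osd_crush_update_on_start rather than the profile::params key; see comments 18 and 20.)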

Comment 16 Yogev Rabl 2016-10-26 11:04:50 UTC
Verification failed on puppet-ceph-2.2.1-2.el7ost.noarch: the osd_crush_update_on_start option does not appear in the generated ceph.conf on either the controller or the Ceph storage nodes (see below).

The storage environment YAML file is set as follows:
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml


parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda':{}
      '/dev/vdb':{}
      '/dev/vdc':{}
    ceph::profile::params::osd_crush_update_on_start: false


ceph.conf from a controller node:

[global]
osd_pool_default_pgp_num = 32
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = d15ab0c2-9b64-11e6-a8df-525400bd4c6f
cluster_network = 192.168.0.9/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.0.15,192.168.0.9,192.168.0.19
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_pg_num = 32
ms_bind_ipv6 = False
public_network = 192.168.0.9/24

[mon.overcloud-controller-1]
public_addr = 192.168.0.9


ceph.conf from a Ceph storage node:

[global]
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = d15ab0c2-9b64-11e6-a8df-525400bd4c6f
cluster_network = 192.168.0.16/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.0.15,192.168.0.9,192.168.0.19
auth_client_required = cephx
public_network = 192.168.0.16/24

On the Ceph storage node, the file /etc/puppet/hieradata/extraconfig.yaml contains:
ceph::profile::params::osd_crush_update_on_start: false
ceph::profile::params::osds: {
  "/dev/vda": {},
  "/dev/vdb": {},
  "/dev/vdc": {}
}

Comment 17 Yogev Rabl 2016-10-26 15:08:03 UTC
In addition, the deployment command is:
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-two-nic-with-vlans.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
  --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage \
  --libvirt-type qemu --ntp-server clock.redhat.com

Comment 18 Giulio Fidente 2016-10-28 08:57:59 UTC
Yogev, if you use

ExtraConfig:
  ceph::osd_crush_update_on_start: false

it should work. Any chance you could try this again?
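In other words, keep everything else from comment 16 and only swap the last key (sketch):

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda': {}
      '/dev/vdb': {}
      '/dev/vdc': {}
    ceph::osd_crush_update_on_start: false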

Comment 19 Yogev Rabl 2016-11-02 15:29:14 UTC
Moving this back to ON_QA to retest with the proper configuration.

Comment 20 Yogev Rabl 2016-11-03 08:58:56 UTC
Verified on puppet-ceph-2.2.1-3.el7ost.noarch.

Deployed the internal Ceph storage cluster with this YAML file:
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml


parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda':{}
      '/dev/vdb':{}
      '/dev/vdc':{}
    ceph::osd_crush_update_on_start: false

/etc/ceph/ceph.conf on the controller nodes:


[global]
osd_pool_default_pgp_num = 32
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = 2a340e08-a116-11e6-b9b8-525400bd4c6f
cluster_network = 192.168.2.18/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.35.140.36,10.35.140.47,10.35.140.38
auth_client_required = cephx
osd_pool_default_size = 3
osd_crush_update_on_start = False
osd_pool_default_pg_num = 32
ms_bind_ipv6 = False
public_network = 10.35.140.47/27

[mon.overcloud-controller-1]
public_addr = 10.35.140.47

[client.radosgw.gateway]
user = apache
rgw_frontends = civetweb port=10.35.140.47:8080
log_file = /var/log/ceph/radosgw.log
host = overcloud-controller-1
keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
rgw_keystone_token_cache_size = 500
rgw_keystone_url = http://192.168.0.7:35357
rgw_s3_auth_use_keystone = True
rgw_keystone_admin_token = ezcJv8srAGKqrA3vVrvg4xHWN
rgw_keystone_accepted_roles = admin,_member_,Member


/etc/ceph/ceph.conf on the Ceph storage nodes:


[global]
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = 2a340e08-a116-11e6-b9b8-525400bd4c6f
cluster_network = 192.168.2.10/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.35.140.36,10.35.140.47,10.35.140.38
auth_client_required = cephx
osd_crush_update_on_start = False
public_network = 10.35.140.46/27

Comment 23 errata-xmlrpc 2016-12-14 15:14:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html

