Bug 1258120 - [RFE] OSP-d support for Ceph crush maps
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-ceph
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 10.0 (Newton)
Assigned To: Erno Kuvaja
QA Contact: Yogev Rabl
Keywords: FutureFeature, InstallerIntegration, Triaged
Depends On:
Blocks: 1376333
Reported: 2015-08-29 06:54 EDT by Marius Cornea
Modified: 2017-01-27 07:17 EST
CC List: 15 users

See Also:
Fixed In Version: puppet-ceph-2.2.1-1.el7ost
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Clones: 1417212
Environment:
Last Closed: 2016-12-14 10:14:52 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Flags: scohen: needinfo+


External Trackers:
OpenStack gerrit 356376 (last updated 2016-08-17 06:47 EDT)

Description Marius Cornea 2015-08-29 06:54:00 EDT
Description of problem:

OSP-d should be able to deploy Ceph nodes with different types of disks. For example, you may want to deploy Ceph nodes with both SSDs and HDDs, providing different tiers of storage performance to cloud users. This can be done with CRUSH maps, as described in this post:

http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
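
For orientation, the approach in that post places SSD-backed and HDD-backed OSDs under separate CRUSH roots with their own CRUSH rules, which only holds if OSDs are not moved back to the default location when they restart. Below is a minimal, illustrative sketch of the director-side piece only, using the ExtraConfig key the later comments on this bug converge on; the SSD/HDD buckets and rules themselves are still created on the running cluster, as in the post.

parameter_defaults:
  ExtraConfig:
    # keep OSDs in whatever CRUSH bucket the operator placed them in,
    # instead of relocating them under the default root on start
    ceph::osd_crush_update_on_start: false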
Comment 4 Mike Burns 2016-04-07 16:47:27 EDT
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.
Comment 6 seb 2016-07-07 12:28:32 EDT
This is simple to implement; we just need the ability to expose this option in OSP-d:

"osd crush update on start = false"
Comment 7 Jeff Brown 2016-07-13 15:55:49 EDT
Hi Erno,

Can you take on this one as well?

Jeff
Comment 8 Erno Kuvaja 2016-08-17 06:47:11 EDT
The definition of the config option has been proposed to puppet-ceph (the gerrit review 356376 listed in the external trackers).
Comment 9 Keith Schincke 2016-08-18 08:32:15 EDT
Can the "osd crush update on start = false" be passed through ospd via CephStorageExtraConfig?
Comment 10 seb 2016-08-19 04:46:25 EDT
@Keith I don't know, but the upstream patch is here: https://review.openstack.org/#/c/356376/
Comment 11 Erno Kuvaja 2016-08-22 03:14:25 EDT
@Keith yes, we can now pass it via ExtraConfig, e.g. ceph::profile::params::osd_crush_update_on_start: false
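
Spelled out as a hedged environment-file sketch of the form Erno describes (note that the verification in comments 16-18 below found this profile-level key did not land in ceph.conf, and the class-level key ceph::osd_crush_update_on_start from comment 18 was needed instead):

parameter_defaults:
  ExtraConfig:
    # intended to map to "osd crush update on start = false" in ceph.conf
    ceph::profile::params::osd_crush_update_on_start: false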
Comment 16 Yogev Rabl 2016-10-26 07:04:50 EDT
Verification failed on puppet-ceph-2.2.1-2.el7ost.noarch. 

The storage environment YAML file is set as follows:
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml


parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda':{}
      '/dev/vdb':{}
      '/dev/vdc':{}
    ceph::profile::params::osd_crush_update_on_start: false


ceph.conf from controller node:

[global]
osd_pool_default_pgp_num = 32
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = d15ab0c2-9b64-11e6-a8df-525400bd4c6f
cluster_network = 192.168.0.9/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.0.15,192.168.0.9,192.168.0.19
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_pg_num = 32
ms_bind_ipv6 = False
public_network = 192.168.0.9/24

[mon.overcloud-controller-1]
public_addr = 192.168.0.9


ceph.conf from ceph storage node:

[global]
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = d15ab0c2-9b64-11e6-a8df-525400bd4c6f
cluster_network = 192.168.0.16/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 192.168.0.15,192.168.0.9,192.168.0.19
auth_client_required = cephx
public_network = 192.168.0.16/24

On the Ceph storage node, the file /etc/puppet/hieradata/extraconfig.yaml contains:
ceph::profile::params::osd_crush_update_on_start: false
ceph::profile::params::osds: {
  "/dev/vda": {},
  "/dev/vdb": {},
  "/dev/vdc": {}
}
Comment 17 Yogev Rabl 2016-10-26 11:08:03 EDT
In addition, the deployment command is:
openstack overcloud deploy --templates -e usr/share/openstack-tripleo-heat-templates/environments/net-two-nic-with-vlans.yaml -e usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage --libvirt-type qemu --ntp-server clock.redhat.com
Comment 18 Giulio Fidente 2016-10-28 04:57:59 EDT
Yogev, if you use

ExtraConfig:
  ceph::osd_crush_update_on_start: false

it should work. Any chance you could try this again?
Comment 19 Yogev Rabl 2016-11-02 11:29:14 EDT
Moving it back to ON_QA to retest with the proper configuration.
Comment 20 Yogev Rabl 2016-11-03 04:58:56 EDT
Verified on puppet-ceph-2.2.1-3.el7ost.noarch. 

Deployed the internal Ceph storage cluster with this YAML file:
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml


parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vda':{}
      '/dev/vdb':{}
      '/dev/vdc':{}
    ceph::osd_crush_update_on_start: false

From /etc/ceph/ceph.conf on the controller nodes:


[global]
osd_pool_default_pgp_num = 32
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = 2a340e08-a116-11e6-b9b8-525400bd4c6f
cluster_network = 192.168.2.18/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.35.140.36,10.35.140.47,10.35.140.38
auth_client_required = cephx
osd_pool_default_size = 3
osd_crush_update_on_start = False
osd_pool_default_pg_num = 32
ms_bind_ipv6 = False
public_network = 10.35.140.47/27

[mon.overcloud-controller-1]
public_addr = 10.35.140.47

[client.radosgw.gateway]
user = apache
rgw_frontends = civetweb port=10.35.140.47:8080
log_file = /var/log/ceph/radosgw.log
host = overcloud-controller-1
keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
rgw_keystone_token_cache_size = 500
rgw_keystone_url = http://192.168.0.7:35357
rgw_s3_auth_use_keystone = True
rgw_keystone_admin_token = ezcJv8srAGKqrA3vVrvg4xHWN
rgw_keystone_accepted_roles = admin,_member_,Member


From /etc/ceph/ceph.conf on the Ceph storage nodes:


[global]
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = 2a340e08-a116-11e6-b9b8-525400bd4c6f
cluster_network = 192.168.2.10/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 10.35.140.36,10.35.140.47,10.35.140.38
auth_client_required = cephx
osd_crush_update_on_start = False
public_network = 10.35.140.46/27
Comment 23 errata-xmlrpc 2016-12-14 10:14:52 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html
