Bug 1343009 - Director-deployed *new* Ceph cluster does not have CRUSH tunables set to optimal
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-ceph
Version: 8.0 (Liberty)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 10.0 (Newton)
Assignee: Erno Kuvaja
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-06 10:32 UTC by Vikhyat Umrao
Modified: 2016-12-14 15:36 UTC
CC: 16 users

Fixed In Version: puppet-ceph-2.2.1-3.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-14 15:36:04 UTC
Target Upstream Version:
Embargoed:




Links
System ID                                   Status        Summary                                            Last Updated
OpenStack gerrit 356376                     MERGED        Expose osd crush update on start option            2020-07-22 11:45:18 UTC
Red Hat Knowledge Base (Solution) 2381441   None          None                                               2016-06-17 06:00:03 UTC
Red Hat Product Errata RHEA-2016:2948       SHIPPED_LIVE  Red Hat OpenStack Platform 10 enhancement update   2016-12-14 19:55:27 UTC

Description Vikhyat Umrao 2016-06-06 10:32:50 UTC
Description of problem:
Director-deployed *new* Ceph cluster does not have CRUSH tunables set to optimal.

[root@overcloud-controller-pbandark-0 heat-admin]# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 22,
    "profile": "unknown", <------------------------------------- "it is unknown"
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "require_feature_tunables3": 0,
    "has_v2_rules": 0,
    "has_v3_rules": 0,
    "has_v4_buckets": 0
}


Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform 8.0 (Liberty)

How reproducible:
Always

Steps to Reproduce:
1. Deploy a new OpenStack 8.0 setup with director.
2. Run: # ceph osd crush show-tunables
3. The CRUSH tunables profile is reported as "unknown".

Actual results:
The profile is reported as "unknown".

Expected results:
The profile should be set to *optimal*. Since a *new* cluster is installed at the *hammer* version, setting the tunables to *optimal* results in the profile being reported as *hammer*.

As shown here:
------------------------------

# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1, <---------------------------------
    "straw_calc_version": 1,
    "allowed_bucket_algs": 54,
    "profile": "hammer", <-----------------------------------
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "require_feature_tunables3": 1,
    "has_v2_rules": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 1
}
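
For reference, a minimal manual sketch of bringing an affected cluster into the expected state, using only the commands already shown in this report. Note that on a cluster that already holds data, changing the tunables can trigger data rebalancing.

# ceph osd crush tunables optimal
# ceph osd crush show-tunables | grep '"profile"'
    "profile": "hammer",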



Additional info:

- This applies only to a new Ceph cluster deployed with the help of director.
- A cluster deployed with director will mostly use the same client version as the one installed in the cluster, so setting the optimal tunables should not cause client compatibility problems.

Comment 1 Vikhyat Umrao 2016-06-06 10:38:57 UTC
- The same recommendation is given in the Red Hat Ceph Storage documentation [1]:

# ceph osd crush tunables optimal

- I am not familiar with the director and Ceph integration, but from an initial analysis it looks like we can make use of two files:

- /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml
  Define the tunables profile here as *optimal*.

- /etc/puppet/modules/ceph/manifests/mon.pp
  Apply the tunables setting here.

- The tunables can be set after the MONs form quorum and before the OSDs are created (see the sketch after the reference below).

[1] https://access.redhat.com/documentation/en/red-hat-ceph-storage/version-1.3/installation-guide-for-red-hat-enterprise-linux/#adjust_crush_tunables
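
For illustration, a minimal sketch of what such a post-install step could look like as a shell script. The quorum wait loop and the exact point where director would hook the script in are assumptions; the ceph commands are the ones referenced above.

#!/bin/bash
# Sketch: wait for the MONs to form quorum (assumes this runs on a node
# holding the admin keyring), then raise the CRUSH tunables before any
# OSDs are created.
timeout 300 bash -c 'until ceph quorum_status >/dev/null 2>&1; do sleep 5; done'
ceph osd crush tunables optimal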

Comment 3 Jeff Brown 2016-07-13 19:21:15 UTC
Hi Erno,

Can you take a look at this and see if you can fix it?

Thanks,

Jeff

Comment 4 Erno Kuvaja 2016-07-15 14:03:53 UTC
The configuration phase does not distinguish between an upgrade and a new install. If this must apply to new installations only, we need to make it a post-install task.

Comment 5 seb 2016-07-19 13:54:20 UTC
We need to verify whether this bug can be reproduced on OSP 9; perhaps Ceph itself fixed the issue (in its packages). If not, a post-install task can be run.
In any case, this shouldn't be happening on a fresh new cluster.

Comment 6 Jeff Brown 2016-08-10 14:25:44 UTC
Please check that the problem is solved when deploying a new Ceph cluster. An OSP 9 or OSP 10 cluster install would be fine.

Comment 7 seb 2016-10-20 12:00:19 UTC
When deploying a new cluster with OSP10, we get the following tunables:

[root@overcloud-cephstorage-0 ~]# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 22,
    "profile": "firefly",
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "minimum_required_version": "firefly",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 0,
    "require_feature_tunables5": 0,
    "has_v5_rules": 0
}

These are the default values for any newly deployed cluster.
If there is a desire to change Ceph's default behaviour, the tunables can be changed in a post-config task.
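
For illustration only: such a post-config task could be wired in at deploy time with an extra environment file, for example one mapping OS::TripleO::NodeExtraConfigPost to a script that runs "ceph osd crush tunables optimal". The file name below is hypothetical; the deploy command itself is standard:

$ openstack overcloud deploy --templates \
    -e /home/stack/templates/ceph-tunables-post.yaml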

Results are fine.

Moving this to verified.

Comment 9 errata-xmlrpc 2016-12-14 15:36:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html

