Description of problem:

A Ceph cluster *newly* deployed by director does not have the CRUSH tunables set to optimal:

[root@overcloud-controller-pbandark-0 heat-admin]# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 22,
    "profile": "unknown",              <-- the profile is "unknown"
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "require_feature_tunables3": 0,
    "has_v2_rules": 0,
    "has_v3_rules": 0,
    "has_v4_buckets": 0
}

Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform 8.0 (Liberty)

How reproducible:
Always

Steps to Reproduce:
1. Deploy a new OpenStack 8.0 setup with director.
2. Run: ceph osd crush show-tunables
3. The CRUSH tunables profile is reported as "unknown".

Actual results:
The profile is set to "unknown".

Expected results:
The profile should be set to *optimal*. Since a *new* cluster is installed with the *hammer* release, setting the tunables to *optimal* makes the profile show up as "hammer", as given here:

# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,            <--
    "straw_calc_version": 1,
    "allowed_bucket_algs": 54,
    "profile": "hammer",               <--
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "require_feature_tunables3": 1,
    "has_v2_rules": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 1
}

Additional info:
- This applies only to Ceph clusters newly deployed with director.
- A cluster deployed with director will mostly use the same client version as is installed in the cluster.
- The same is documented in the Red Hat Ceph Storage documentation [1]:

  # ceph osd crush tunables optimal

- I am not familiar with the director/Ceph integration, but from an initial analysis it looks like we can make use of two files:

  /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml
    to define the tunables profile as *optimal*, and

  /etc/puppet/modules/ceph/manifests/mon.pp
    to apply that setting.

- The tunables can be set after the MONs have formed quorum and before the OSDs are created.

[1] https://access.redhat.com/documentation/en/red-hat-ceph-storage/version-1.3/installation-guide-for-red-hat-enterprise-linux/#adjust_crush_tunables
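For illustration only, the hieradata side of that proposal could look like the snippet below. The key name is hypothetical (it is not an existing puppet-ceph parameter) and would need matching support in mon.pp:

  # /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml
  # Hypothetical key, shown only to illustrate the proposal above; mon.pp
  # would have to consume it and run "ceph osd crush tunables optimal"
  # once the MONs have formed quorum and before the OSDs are created.
  ceph::profile::params::crush_tunables: 'optimal'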
Hi Erno,

Take a look at this and see if you can fix this one?

Thanks,
Jeff
So the configuration phase does not separate upgrades from new installs. If this must be done for new installations only, we need to make it a post-install task.
We need to verify whether this bug can still be reproduced in OSP 9; perhaps Ceph itself fixed the issue in its packages. If not, you can run a post-install task, as sketched below. However, this shouldn't be happening on a freshly deployed cluster.
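A minimal sketch of such a post-install task, using the standard NodeExtraConfigPost hook. The path and resource names are illustrative, and the exact parameters the hook passes (and whether OS::Heat::SoftwareDeploymentGroup or the older OS::Heat::SoftwareDeployments is available) vary by release:

  # /home/stack/templates/ceph-crush-tunables.yaml (illustrative path)
  heat_template_version: 2014-10-16

  description: >
    Post-deployment hook that sets the CRUSH tunables to the
    optimal profile on the overcloud nodes.

  parameters:
    servers:
      type: json

  resources:
    CrushTunablesConfig:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        config: |
          #!/bin/bash
          # Set the tunables to the optimal profile for the installed
          # Ceph release once the cluster is up (hammer on OSP 8).
          ceph osd crush tunables optimal

    CrushTunablesDeployment:
      type: OS::Heat::SoftwareDeploymentGroup
      properties:
        servers: {get_param: servers}
        config: {get_resource: CrushTunablesConfig}
        actions: ['CREATE']

In practice the script only needs to run on a node that holds the Ceph admin keyring (e.g. a controller), so the hook could be scoped to that role instead of all nodes.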
Please check that the problem is solved when deploying a new Ceph cluster. OSP9 or OSP10 cluster install would be fine.
When deploying a new cluster with OSP 10, we get the following tunables:

[root@overcloud-cephstorage-0 ~]# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 22,
    "profile": "firefly",
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "minimum_required_version": "firefly",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 0,
    "require_feature_tunables5": 0,
    "has_v5_rules": 0
}

These represent the default values for any newly deployed cluster. If there is a desire to change Ceph's default behaviour, the tunables can be changed in a post-config task (see the example below).

The results are fine. Moving this to VERIFIED.
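For reference, a post-config task like the one sketched earlier in this bug would be wired into the deployment with an environment file along these lines (paths are illustrative):

  # /home/stack/templates/ceph-tunables-env.yaml
  resource_registry:
    OS::TripleO::NodeExtraConfigPost: /home/stack/templates/ceph-crush-tunables.yaml

and passed to the deploy command with:

  openstack overcloud deploy --templates \
    -e /home/stack/templates/ceph-tunables-env.yaml [existing deploy arguments]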
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-2948.html