Bug 1343009
Summary: | Director deployed *new* Ceph cluster does not have CRUSH tunables set to optimal | ||
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Vikhyat Umrao <vumrao> |
Component: | puppet-ceph | Assignee: | Erno Kuvaja <ekuvaja> |
Status: | CLOSED ERRATA | QA Contact: | Yogev Rabl <yrabl> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 8.0 (Liberty) | CC: | bschmaus, dbecker, flucifre, gdrapeau, jjoyce, jschluet, mburns, morazi, pgrist, rhel-osp-director-maint, seb, slinaber, tvignaud, vcojot, vikumar, yrabl |
Target Milestone: | rc | Keywords: | Triaged |
Target Release: | 10.0 (Newton) | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | puppet-ceph-2.2.1-3.el7ost | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-12-14 15:36:04 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Vikhyat Umrao
2016-06-06 10:32:50 UTC
- The same is given in the Red Hat Ceph Storage documentation [1]:

      # ceph osd crush tunables optimal

- I am not familiar with the director and Ceph integration, but from an initial analysis it looks like we can make use of two files:
  - /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml, to define the tunable as *optimal*
  - /etc/puppet/modules/ceph/manifests/mon.pp, to set this tunable
- The tunables can be set before creating the OSDs and after creating the MONs, once the MONs have formed a quorum.

[1] https://access.redhat.com/documentation/en/red-hat-ceph-storage/version-1.3/installation-guide-for-red-hat-enterprise-linux/#adjust_crush_tunables

Hi Erno,

Take a look at this and see if you can fix this one?

Thanks,
Jeff

So the configuration phase does not separate upgrades from new installs. If this must apply to new installations only, we need to make it a post-install task.

We need to verify whether this bug can still be reproduced in OSP 9; perhaps Ceph itself has fixed the issue (with newer packages). If not, a post-install task can be run. However, this shouldn't be happening on a fresh new cluster.

Please check that the problem is solved when deploying a new Ceph cluster. An OSP 9 or OSP 10 cluster install would be fine.

When deploying a new cluster with OSP 10, we get the following tunables:

    [root@overcloud-cephstorage-0 ~]# ceph osd crush show-tunables
    {
        "choose_local_tries": 0,
        "choose_local_fallback_tries": 0,
        "choose_total_tries": 50,
        "chooseleaf_descend_once": 1,
        "chooseleaf_vary_r": 1,
        "chooseleaf_stable": 0,
        "straw_calc_version": 1,
        "allowed_bucket_algs": 22,
        "profile": "firefly",
        "optimal_tunables": 0,
        "legacy_tunables": 0,
        "minimum_required_version": "firefly",
        "require_feature_tunables": 1,
        "require_feature_tunables2": 1,
        "has_v2_rules": 0,
        "require_feature_tunables3": 1,
        "has_v3_rules": 0,
        "has_v4_buckets": 0,
        "require_feature_tunables5": 0,
        "has_v5_rules": 0
    }

These represent the default values for any new cluster being deployed. If there is a desire to change Ceph's default behaviour, the tunables can be changed in a post-config task (a sketch of such a step follows at the end of this report).

Results are fine. Moving this to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html
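As a sketch of the post-config approach mentioned above, and not part of the director deployment workflow itself: assuming an already deployed cluster with a healthy monitor quorum and access to the Ceph admin keyring, the optimal profile could be applied manually from any node with client admin access. The commands are the ones quoted in this report, plus `ceph -s` as a basic health check.

```
# Sketch of a manual post-deployment step (assumes the admin keyring is
# available on this node).

# Check overall cluster health and that the monitors have formed a quorum.
ceph -s

# Apply the optimal CRUSH tunables profile for the installed Ceph release.
# Note: on a cluster that already stores data this can trigger rebalancing.
ceph osd crush tunables optimal

# Confirm the resulting profile.
ceph osd crush show-tunables
```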