Bug 1594746
Summary: [HDD/SSD][restful-api] curl https://<hostname>:8003/osd fails with python traceback when crushmap uses root with combined device-class

Product: Red Hat Ceph Storage (Red Hat Storage)
Component: Ceph-Mgr Plugins
Version: 3.0
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Tomas Petr <tpetr>
Assignee: Boris Ranto <branto>
QA Contact: Parikshith <pbyregow>
Docs Contact: Erin Donnelly <edonnell>
CC: assingh, branto, ceph-eng-bugs, ceph-qe-bugs, edonnell, kdreyer, mkasturi, tchandra, tserlin, vimishra, ykaul
Target Milestone: z1
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-12.2.8-77.el7cp; Ubuntu: ceph_12.2.8-62redhat1
Type: Bug
Last Closed: 2019-03-07 15:50:55 UTC
Bug Blocks: 1629656
Doc Type: Bug Fix
Doc Text:
.HDD and SSD devices can now be mixed when accessing the `/osd` endpoint
Previously, the {product} RESTful API did not handle the case where HDD and SSD devices were mixed when accessing the `/osd` endpoint, and returned an error. With this update, the OSD traversal algorithm has been improved to handle this scenario as expected.
Description (Tomas Petr, 2018-06-25 10:21:24 UTC)
There is nothing special in the mgr.log even with debug_mgr=20 debug_mgrc=20 debug_ms=1, but I can upload it if needed.

```
# ceph osd crush rule create-replicated cold default host hdd
# ceph osd crush rule create-replicated hot default host ssd
```

Crush map:

```
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class ssd
device 7 osd.7 class ssd
device 8 osd.8 class ssd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd
device 12 osd.12 class ssd
device 13 osd.13 class ssd
device 14 osd.14 class ssd
device 15 osd.15 class hdd
device 16 osd.16 class hdd
device 17 osd.17 class hdd
device 18 osd.18 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host osds-0 {
    id -3              # do not change unnecessarily
    id -4 class hdd    # do not change unnecessarily
    id -15 class ssd   # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0             # rjenkins1
    item osd.0 weight 0.019
    item osd.3 weight 0.019
    item osd.13 weight 0.019
}
host osds-1 {
    id -5              # do not change unnecessarily
    id -6 class hdd    # do not change unnecessarily
    id -16 class ssd   # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0             # rjenkins1
    item osd.1 weight 0.019
    item osd.4 weight 0.019
    item osd.12 weight 0.019
}
host osds-5 {
    id -7              # do not change unnecessarily
    id -8 class hdd    # do not change unnecessarily
    id -17 class ssd   # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0             # rjenkins1
    item osd.2 weight 0.019
    item osd.5 weight 0.019
    item osd.14 weight 0.019
}
host osds-4 {
    id -9              # do not change unnecessarily
    id -10 class hdd   # do not change unnecessarily
    id -18 class ssd   # do not change unnecessarily
    # weight 0.057
    alg straw2
    hash 0             # rjenkins1
    item osd.7 weight 0.018
    item osd.16 weight 0.019
    item osd.17 weight 0.019
}
host osds-2 {
    id -11             # do not change unnecessarily
    id -12 class hdd   # do not change unnecessarily
    id -19 class ssd   # do not change unnecessarily
    # weight 0.072
    alg straw2
    hash 0             # rjenkins1
    item osd.6 weight 0.018
    item osd.8 weight 0.018
    item osd.15 weight 0.019
    item osd.18 weight 0.018
}
host osds-3 {
    id -13             # do not change unnecessarily
    id -14 class hdd   # do not change unnecessarily
    id -20 class ssd   # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0             # rjenkins1
    item osd.9 weight 0.019
    item osd.10 weight 0.019
    item osd.11 weight 0.019
}
root default {
    id -1              # do not change unnecessarily
    id -2 class hdd    # do not change unnecessarily
    id -21 class ssd   # do not change unnecessarily
    # weight 0.363
    alg straw2
    hash 0             # rjenkins1
    item osds-0 weight 0.058
    item osds-1 weight 0.058
    item osds-5 weight 0.058
    item osds-4 weight 0.057
    item osds-2 weight 0.072
    item osds-3 weight 0.058
}

# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule cold {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule hot {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map
```

```
[root@mons-1 ~]# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 1,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 54,
    "profile": "jewel",
    "optimal_tunables": 1,
    "legacy_tunables": 0,
    "minimum_required_version": "jewel",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 1,
    "require_feature_tunables5": 1,
    "has_v5_rules": 0
}
```

This should be fixed by https://github.com/ceph/ceph/pull/21138

We can back-port it downstream for the next z-stream release.

FYI: Upstream luminous/mimic back-ports:
https://github.com/ceph/ceph/pull/26199
https://github.com/ceph/ceph/pull/26200

Created attachment 1529157 [details]
Mgr log
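The failure mode can be pictured with a toy model. With device classes, CRUSH keeps a per-class "shadow" copy of each bucket (the extra `id -4 class hdd` / `id -15 class ssd` lines in the map above), so a naive walk from the root can reach the same subtree, and the same OSD, through more than one bucket id. The sketch below is not the actual ceph-mgr code; the bucket layout and field names are simplified assumptions. It shows a traversal that stays well-defined by tracking visited bucket ids and collecting OSD ids into a set:

```python
# Hypothetical sketch of a CRUSH-tree walk tolerant of per-class
# shadow buckets (NOT the real ceph-mgr restful module code).
def collect_osds(buckets, root_id, visited=None):
    """Depth-first walk over CRUSH buckets, returning the set of OSD ids.

    Convention mirrored from CRUSH: non-negative ids are OSD leaves,
    negative ids are buckets (including per-class shadow copies).
    """
    if visited is None:
        visited = set()
    if root_id >= 0:            # a leaf: an actual OSD
        return {root_id}
    if root_id in visited:      # already-walked bucket: skip repeats
        return set()
    visited.add(root_id)
    osds = set()
    for item in buckets[root_id]["items"]:
        osds |= collect_osds(buckets, item["id"], visited)
    return osds

# Toy tree: root -1 contains host -3 plus its hdd shadow -4, which
# references osd.0 a second time (as shadow buckets do).
buckets = {
    -1: {"items": [{"id": -3}, {"id": -4}]},
    -3: {"items": [{"id": 0}, {"id": 13}]},
    -4: {"items": [{"id": 0}]},   # same osd.0 reached again via the shadow
}
print(sorted(collect_osds(buckets, -1)))  # → [0, 13]
```

A walk without the set semantics would report osd.0 twice (or worse, fail on the duplicate), which is the kind of mixed-class traversal the fix in the PR above hardens the `/osd` endpoint against.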
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0475