Bug 1538207 - [RFE] Improve CRUSHTOOL to be able to modify CRUSH map as the ceph osd crush commands can
Summary: [RFE] Improve CRUSHTOOL to be able to modify CRUSH map as the ceph osd crush ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: 3.1
Assignee: Kefu Chai
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-24 15:54 UTC by Eric Goirand
Modified: 2018-09-26 18:19 UTC
CC List: 9 users

Fixed In Version: RHEL: ceph-12.2.5-13.el7cp Ubuntu: 12.2.5-4redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-26 18:18:23 UTC
Embargoed:




Links:
Ceph Project Bug Tracker 23471 (last updated 2018-03-28 01:05:57 UTC)
Github ceph ceph pull 20183 (last updated 2018-03-27 18:07:50 UTC)
Red Hat Product Errata RHBA-2018:2819 (last updated 2018-09-26 18:19:39 UTC)

Description Eric Goirand 2018-01-24 15:54:25 UTC
Description of problem:
When using crushtool to create a CRUSH map, it is not possible to build a complex CRUSH map directly; we have to edit the generated CRUSH map and modify it manually to get there.

Could it be possible to modify the CRUSH map using commands similar to ceph osd crush { add-bucket, move, set }, which would automatically create the associated IDs and update the CRUSH map generated by crushtool --build?
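
For reference, the manual workaround this RFE would avoid is the usual decompile / edit / recompile cycle, roughly as follows (a sketch only; the actual edits depend on the map):

crushtool -d testmap.bin -o testmap.txt     # decompile the binary map to an editable text form
<edit testmap.txt by hand: add buckets, move hosts, reassign OSDs>
crushtool -c testmap.txt -o testmap.bin     # recompile it back into a binary map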


Expected results:
Here is an example that would help: I'm trying to create a CRUSH map with 2 root buckets, each containing different hosts with a different number of OSDs.


crushtool --outfn testmap.bin --build --num_osds 60 host straw2 20 root straw2 0
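
The hierarchy produced by --build (and by the modifications below) could be sanity-checked at any point with the existing --tree option, for example:

crushtool -i testmap.bin --tree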

Then, I would use something similar to these commands on testmap.bin to create the new buckets:

# ceph osd crush add-bucket root1 root => crushtool -i testmap.bin add-bucket root1 root
# ceph osd crush add-bucket host0-zone2 host => crushtool -i testmap.bin add-bucket host0-zone2 host
# ceph osd crush add-bucket host1-zone2 host => crushtool -i testmap.bin add-bucket host1-zone2 host
# ceph osd crush add-bucket host2-zone2 host => crushtool -i testmap.bin add-bucket host2-zone2 host

Then, something similar to this to move the buckets into their correct locations:
# ceph osd crush move host0-zone2 root=root1 => crushtool -i testmap.bin move host0-zone2 root=root1
# ceph osd crush move host1-zone2 root=root1 => crushtool -i testmap.bin move host1-zone2 root=root1
# ceph osd crush move host2-zone2 root=root1 => crushtool -i testmap.bin move host2-zone2 root=root1

And finally, moving the OSDs into their correct new hosts, similar to this:
# ceph osd crush set osd.18 <weight> root=root1 host=host0-zone2 => crushtool -i testmap.bin set osd.18 <weight> root=root1 host=host0-zone2
# ceph osd crush set osd.19 <weight> root=root1 host=host0-zone2 => crushtool -i testmap.bin set osd.19 <weight> root=root1 host=host0-zone2

# ceph osd crush set osd.38 <weight> root=root1 host=host1-zone2 => crushtool -i testmap.bin set osd.38 <weight> root=root1 host=host1-zone2
# ceph osd crush set osd.39 <weight> root=root1 host=host1-zone2 => crushtool -i testmap.bin set osd.39 <weight> root=root1 host=host1-zone2

# ceph osd crush set osd.58 <CRUSH weight (take it from its former value)> root=root1 host=host2-zone2 => crushtool -i testmap.bin set osd.58 <weight> root=root1 host=host2-zone2
# ceph osd crush set osd.59 <CRUSH weight (take it from its former value)> root=root1 host=host2-zone2 => crushtool -i testmap.bin set osd.59 <weight> root=root1 host=host2-zone2


Additional info:
Many Thanks,
Eric.

Comment 4 Kefu Chai 2018-01-30 11:13:39 UTC
> # ceph osd crush set osd.18 <weight> root=root1 host=host0-zone2 => crushtool -i testmap.bin set osd.18 <weight> root=root1 host=host0-zone2

please note, 

crushtool -i testmap.bin --update-item osd.18 <weight> --loc root root1

will do the trick.

and i am adding the following options to address this requirement (see the sketch after the list):

* --move: moves a bucket or item to the specified location
* --add-bucket: adds a bucket, and moves it to the specified location if one is given.
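
With those options, the workflow from the description should reduce to something like the sketch below. The spelling follows the pull request and the existing --add-item/--update-item/--loc conventions (--update-item takes id, weight and name), so treat it as an illustration rather than the final syntax; weights and intermediate output files are left to the user:

crushtool -i testmap.bin --add-bucket root1 root -o testmap.bin
crushtool -i testmap.bin --add-bucket host0-zone2 host --loc root root1 -o testmap.bin
crushtool -i testmap.bin --move host1-zone2 --loc root root1 -o testmap.bin
crushtool -i testmap.bin --update-item 18 <weight> osd.18 --loc root root1 --loc host host0-zone2 -o testmap.bin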

Comment 5 Kefu Chai 2018-01-30 11:14:48 UTC
fix is pending review, see https://github.com/ceph/ceph/pull/20183

Comment 6 Kefu Chai 2018-02-01 04:41:11 UTC
the change was merged upstream, but 12.2.2 has already been tagged, and RHCS 3.1 is based on 12.2.2. @Eric, is it acceptable to target this feature to RHCS 4.0? i think it will be based on mimic.

Comment 7 Eric Goirand 2018-02-02 11:19:34 UTC
Hi Kefu,
If it can be introduced before RHCS 4.0, like in 3.x, that would be great.
If it can't, then 4.0 will be ok.
Many Thanks,
Eric.

Comment 8 Kefu Chai 2018-03-27 13:22:44 UTC
i will cherry-pick this change into RHCS 3.1 once the downstream branch is created.

Comment 9 Ken Dreyer (Red Hat) 2018-03-27 22:58:50 UTC
Could we just backport this to v12.2.5 or v12.2.6 upstream, and then take that once it's available?

Comment 10 Kefu Chai 2018-03-28 01:05:36 UTC
Ken, it's viable and simpler from the downstream's perspective, but because this is a feature instead of a fix, the backport could be controversial. i am filing a tracker ticket anyway. if the backport PR fails to get merged before 3.1 is released, we will still need to backport it to the 3.1 branch.

Comment 12 Josh Durgin 2018-05-04 18:34:12 UTC
This is in 12.2.5.

Comment 15 errata-xmlrpc 2018-09-26 18:18:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819

