Description of problem:

When using crushtool to create a CRUSH map, it is not possible to build a complex map directly; the generated map has to be edited and modified by hand. It would be helpful if crushtool could modify a CRUSH map with commands similar to the "ceph osd crush { add-bucket, move, set }" commands, automatically creating the associated bucket IDs and updating the map produced by crushtool --build.

Expected results:

Here is an example that illustrates the request. I am trying to create a CRUSH map with two root buckets, using different hosts with a different number of OSDs in them:

crushtool --outfn testmap.bin --build --num_osds 60 host straw2 20 root straw2 0

Then I would use something similar to these commands on testmap.bin to create the new buckets:

# ceph osd crush add-bucket root1 root => crushtool -i testmap.bin add-bucket root1 root
# ceph osd crush add-bucket host0-zone2 host => crushtool -i testmap.bin add-bucket host0-zone2 host
# ceph osd crush add-bucket host1-zone2 host => crushtool -i testmap.bin add-bucket host1-zone2 host
# ceph osd crush add-bucket host2-zone2 host => crushtool -i testmap.bin add-bucket host2-zone2 host

Then something similar to this to move the buckets into their correct locations:

# ceph osd crush move host0-zone2 root=root1 => crushtool -i testmap.bin move host0-zone2 root=root1
# ceph osd crush move host1-zone2 root=root1 => crushtool -i testmap.bin move host1-zone2 root=root1
# ceph osd crush move host2-zone2 root=root1 => crushtool -i testmap.bin move host2-zone2 root=root1

And finally, moving the OSDs into their correct new hosts, similar to this (<weight> is each OSD's CRUSH weight, taken from its former value):

# ceph osd crush set osd.18 <weight> root=root1 host=host0-zone2 => crushtool -i testmap.bin set osd.18 <weight> root=root1 host=host0-zone2
# ceph osd crush set osd.19 <weight> root=root1 host=host0-zone2 => crushtool -i testmap.bin set osd.19 <weight> root=root1 host=host0-zone2
# ceph osd crush set osd.38 <weight> root=root1 host=host1-zone2 => crushtool -i testmap.bin set osd.38 <weight> root=root1 host=host1-zone2
# ceph osd crush set osd.39 <weight> root=root1 host=host1-zone2 => crushtool -i testmap.bin set osd.39 <weight> root=root1 host=host1-zone2
# ceph osd crush set osd.58 <weight> root=root1 host=host2-zone2 => crushtool -i testmap.bin set osd.58 <weight> root=root1 host=host2-zone2
# ceph osd crush set osd.59 <weight> root=root1 host=host2-zone2 => crushtool -i testmap.bin set osd.59 <weight> root=root1 host=host2-zone2

Additional info:

Many Thanks,
Eric.
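For reference, the manual workaround today is to decompile the map, edit the text form by hand, and recompile it. A rough sketch, where testmap.txt is just an illustrative filename:

# crushtool -d testmap.bin -o testmap.txt
# (edit testmap.txt to add root1, the host*-zone2 buckets, and reassign the OSDs)
# crushtool -c testmap.txt -o testmap.bin

The feature request above is about avoiding the hand-editing step in the middle.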
> # ceph osd crush set osd.18 <weight> root=root1 host=host0-zone2 => crushtool -i testmap.bin set osd.18 <weight> root=root1 host=host0-zone2

Please note that

# crushtool -i testmap.bin --update-item osd.18 <weight> --loc root root1

will already do the trick. And I am adding the following options to address this requirement:

* --move: moves a bucket or item to the specified location
* --add-bucket: adds a bucket, and moves it to the specified location if one is given
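For illustration only, assuming the new options land roughly as described above (the exact flag syntax is defined in the pull request linked below, and the --loc / -o usage here just mirrors existing crushtool conventions), the reporter's example could then be scripted without hand-editing the map, along these lines:

# crushtool -i testmap.bin --add-bucket root1 root -o testmap.bin
# crushtool -i testmap.bin --add-bucket host0-zone2 host --loc root root1 -o testmap.bin
# crushtool -i testmap.bin --move host1-zone2 --loc root root1 -o testmap.bin
# crushtool -i testmap.bin --update-item osd.18 <weight> --loc host host0-zone2 -o testmap.bin

This is only a sketch of the intended workflow, not the final syntax.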
Fix pending review, see https://github.com/ceph/ceph/pull/20183
The change was merged upstream, but 12.2.2 has already been tagged, and RHCS 3.1 is based on 12.2.2. @Eric, is it acceptable to target this feature to RHCS 4.0? I think it will be based on Mimic.
Hi Kefu,

If it can be introduced before RHCS 4.0, like in a 3.x release, that would be great. If it can't, then 4.0 will be OK.

Many Thanks,
Eric.
I will cherry-pick this change into RHCS 3.1 once the downstream branch is created.
Could we just backport this to v12.2.5 or v12.2.6 upstream, and then take that as it's available?
Ken, that is viable and simpler from the downstream perspective, but because this is a feature rather than a fix, the backport could be controversial. I am filing a tracker ticket anyway. If the backport PR fails to get merged before 3.1 is released, we will still need to backport it to the 3.1 branch.
This is in 12.2.5.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819