Description of problem:
Implement a way to set the device class field so that it appears as "LVMcache" when you do ceph osd tree.
In Luminous, anyone can create a crush rule that selects this device class. They can then create a pool
that references the crush rule to restrict that pool to the OSDs with the "LVMcache" class, which would
give us the behavior we want. Since we use ceph-volume and not ceph-disk, it would be logical for
this to be a parameter to ceph-volume, because the class has to be set before the OSD is made available
to the Ceph cluster.
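For illustration only, here is a minimal sketch of what the requested ceph-volume behavior could look like, assuming the new option mirrors ceph-disk's --crush-device-class and simply persists the value into the OSD data directory before the OSD is activated. The option name, paths, and helper below are assumptions made for the sketch, not actual ceph-volume code:

    import argparse
    import os
    import tempfile

    def parse_args():
        # Hypothetical option, mirroring ceph-disk's --crush-device-class flag;
        # not the actual ceph-volume argument parser.
        parser = argparse.ArgumentParser(prog='ceph-volume-sketch')
        parser.add_argument('--crush-device-class', dest='crush_device_class',
                            help='crush device class to assign this OSD to')
        return parser.parse_args()

    def write_crush_device_class(osd_data_dir, device_class):
        # Mirror ceph-disk: persist the class as a one-line file in the OSD data
        # directory (normally /var/lib/ceph/osd/${cluster}-${id}) so it can be
        # applied when the OSD is added to the crush map.
        path = os.path.join(osd_data_dir, 'crush_device_class')
        with open(path, 'w') as f:
            f.write(device_class + '\n')

    if __name__ == '__main__':
        args = parse_args()
        if args.crush_device_class:
            # Demo only: write into a temporary directory instead of a real OSD dir.
            demo_dir = tempfile.mkdtemp()
            write_crush_device_class(demo_dir, args.crush_device_class)
            print('wrote %s' % os.path.join(demo_dir, 'crush_device_class'))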
Additional info:
Here's what ceph-disk does today:
# ceph-disk prepare --help
...
--crush-device-class CRUSH_DEVICE_CLASS
crush device class to assign this disk to
...
Here's what ceph-disk does with the crush parameter above:
        if self.args.crush_device_class:
            write_one_line(path, 'crush_device_class',
                           self.args.crush_device_class)
So it just writes a file, /var/lib/ceph/osd/${cluster}-${id}/crush_device_class, that contains the class name. I verified this behavior with ceph-disk using this command...
ceph-disk -v prepare --crush-device-class LVMcache /dev/xvdd /dev/xvdb
and got this result:
[root@ip-172-31-44-71 mnt.Z6jpv8]# ceph osd tree
ID CLASS    WEIGHT  TYPE NAME                STATUS REWEIGHT PRI-AFF
-1          5.85609 root default
...
-7          1.95216     host ip-172-31-44-71
 2 LVMcache 0.48819         osd.2                up  1.00000 1.00000
 5 ssd      0.48799         osd.5                up  1.00000 1.00000
 6 ssd      0.48799         osd.6                up  1.00000 1.00000
11 ssd      0.48799         osd.11               up  1.00000 1.00000
...
Then I create a crush rule that restricts placement to the LVMcache device class (the arguments are rule name, crush root, failure domain, and device class):
ceph osd crush rule create-replicated dm-cache default host LVMcache
Then I create a pool using this crush rule:
ceph osd pool create lvm-pool 8 8 replicated dm-cache
I then verified that the pool behaved exactly the way I wanted, with data placed only on the OSDs in the LVMcache device class (one way to check this is sketched below).
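Not part of the original report, but for completeness, here is a minimal sketch of one way to check the placement restriction, assuming standard Luminous CLI behavior (ceph osd crush class ls-osd, and ceph osd map with JSON output); the pool, class, and object names are taken from the example above:

    import json
    import subprocess

    def ceph(*args):
        # Run a ceph CLI command and return its stdout as text.
        return subprocess.check_output(('ceph',) + args).decode()

    def osds_in_class(device_class):
        # 'ceph osd crush class ls-osd <class>' prints the OSD ids in that class.
        out = ceph('osd', 'crush', 'class', 'ls-osd', device_class)
        return {int(osd_id) for osd_id in out.split()}

    def mapped_osds(pool, obj):
        # 'ceph osd map <pool> <object>' shows where an object would be placed;
        # the JSON output includes the up and acting OSD sets.
        info = json.loads(ceph('osd', 'map', pool, obj, '--format', 'json'))
        return set(info['up']) | set(info['acting'])

    if __name__ == '__main__':
        lvmcache_osds = osds_in_class('LVMcache')
        placement = mapped_osds('lvm-pool', 'placement-test-object')
        print('LVMcache OSDs :', sorted(lvmcache_osds))
        print('object maps to:', sorted(placement))
        assert placement <= lvmcache_osds, 'pool is not restricted to LVMcache OSDs'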