dm-cache (aka lvm-cache) is a disk caching technology shipped with RHEL. It allows a partition on a local SSD to be used as a cache for the slower devices backing OSDs. To support the feature, we need to:

* expose options for the user via ceph-ansible
* enable ceph-disk to provision the caching device with the relevant flags

See the provisioning sketch below for the kind of setup this implies.
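For illustration only, here is a minimal sketch of the kind of lvm-cache provisioning step ceph-ansible would drive. The slow disk (/dev/sdb), the SSD (/dev/nvme0n1) and the VG/LV names are all hypothetical and not taken from ceph-ansible:

  # one VG spanning the slow disk and the SSD
  pvcreate /dev/sdb /dev/nvme0n1
  vgcreate cephvg /dev/sdb /dev/nvme0n1

  # origin LV on the slow disk, cache pool on the SSD
  lvcreate -l 100%PVS -n osd0 cephvg /dev/sdb
  lvcreate --type cache-pool -L 50G -n osd0cache cephvg /dev/nvme0n1

  # attach the cache pool to the origin LV
  lvconvert --type cache --cachepool cephvg/osd0cache cephvg/osd0

The resulting /dev/cephvg/osd0 is the block device that would then be handed to ceph-disk.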
This is quite similar to https://bugzilla.redhat.com/show_bug.cgi?id=1415779. Would you please break this RFE into smaller bits with Seb?
FWIW, ceph-disk does not provision devices; it formats pre-existing block devices for Ceph. The creation of the block device itself is outside the scope of ceph-disk. In other words, as long as ceph-ansible knows how to provision the block device, it can be used as an argument to ceph-disk with no modification.
Loic,

So you're saying that if a device is already properly provisioned with dm-cache support, then ceph-disk already has what's needed to leverage that when creating the OSD?

Thanks,
Andrew
Yes, as long as ceph-disk is called with a block device, be it from lvm, dm-cache etc., it will do what it is supposed to do: create partitions, tag them for ceph, format them, mount them and launch an osd when done.
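To make the "no modification" point concrete, a typical invocation looks roughly like this (device paths are placeholders; exact options depend on the deployment):

  # prepare a data device (a separate journal device can be given as a second argument)
  ceph-disk prepare /dev/sdb

  # activate the OSD on the newly created data partition
  ceph-disk activate /dev/sdb1

Whether /dev/sdb is a raw disk or some other block device, ceph-disk is invoked the same way; the open question is whether the device accepts the partitioning ceph-disk performs.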
I guess I'm somewhat confused. Loic's comments above, which have not been refuted, seem to claim there's no ceph-disk work required here. Is that right?
(In reply to Loic Dachary from comment #5)
> Yes, as long as ceph-disk is called with a block device, be it from lvm,
> dm-cache etc., it will do what it is supposed to do: create partitions, tag
> them for ceph, format them, mount them and launch an osd when done.

This is somewhat incorrect. There is no support in ceph-disk for devices that cannot be partitioned. That is, you cannot give ceph-disk a block device to use as-is; it will *insist* on making partitions. Not only will it insist on making partitions, it will also want to write GPT labels (as part of its interactions with Ceph, systemd, and udev).

As it is today, it is not possible to use LVM, dmcache, or anything else that combines multiple drives/partitions into a logical volume.

I believe that what Loic means here is that ceph-disk should accept any block device *that allows partitioning*. Support for LVM or dmcache will involve a full rework of how ceph-disk deploys OSDs.

At the same time, I think that ceph-ansible should allow setting up an LVM or dmcache logical volume that in turn could be passed to ceph-disk to create the OSD. Here is an Ansible playbook that sets up dmcache as OSDs [0]; it would be greatly simplified if ceph-disk could handle lvm/dmcache volumes.

[0] https://github.com/bengland2/dmcache-stat/blob/ceph-on-dm-cache/dmcache.yml
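To illustrate the GPT-label point: ceph-disk drives sgdisk to create partitions tagged with Ceph-specific partition type GUIDs, so the target device has to accept a GPT label and new partitions. A simplified, illustrative equivalent (not ceph-disk's exact invocation, and the typecode is quoted from memory as the upstream "ceph data" GUID):

  # carve out a "ceph data" partition; /dev/sdb is a placeholder target
  sgdisk --largest-new=1 --change-name=1:'ceph data' \
         --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
  partprobe /dev/sdb

Any device handed to ceph-disk therefore has to tolerate this kind of GPT manipulation.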
> I believe that what Loic means here is that ceph-disk should accept any
> block device *that allows partitioning*.

Yes, and I have verified that an LV can be partitioned. I also believe (but that's a distant memory) this is how it is used by OpenStack when running virtual machines with an LVM backend. And by Ganeti as well?
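For reference, a quick way to check this on a throwaway LV (the VG/LV names are made up, and the kpartx mapping name may vary):

  lvcreate -L 10G -n testlv vg0
  sgdisk --new=1:0:0 --change-name=1:'test' /dev/vg0/testlv
  kpartx -av /dev/vg0/testlv   # maps the new partition under /dev/mapper/
  kpartx -dv /dev/vg0/testlv   # remove the mapping when done

If the GPT is written and the partition mapping shows up, the LV can at least be partitioned in the way ceph-disk expects.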
https://github.com/ceph/ceph/pull/16632 got merged into master; it is now part of Luminous as of 12.1.3.
Moving this RFE to the verified state. Tested dm-cache only on RHEL. Per the decision from PM, Ubuntu and container testing have been moved to the 3.1 release, since the support is yet to be provided in Ansible.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3387