Bug 1415778 - [RFE] Support for dm-cache
Summary: [RFE] Support for dm-cache
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 2.2
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 3.0
Assignee: Alfredo Deza
QA Contact: Ramakrishnan Periyasamy
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1494421
 
Reported: 2017-01-23 17:12 UTC by Neil Levine
Modified: 2017-12-05 23:32 UTC
CC: 10 users

Fixed In Version: RHEL: ceph-12.1.4-1.el7cp Ubuntu: ceph_12.1.4-2redhat1xenial
Doc Type: Enhancement
Doc Text:
.Support for deploying logical volumes as OSDs
A new utility, `ceph-volume`, is now supported. The utility enables deployment of logical volumes as OSDs on Red Hat Enterprise Linux. For details, see the link:https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/block_device_guide/#using-the-ceph-volume-utility-to-deploy-osds[Using the ceph-volume Utility to Deploy OSDs] chapter in the Block Device Guide for Red{nbsp}Hat Ceph Storage. Note that `ceph-volume` does not support deploying logical volumes as OSDs in containers. In addition, `ceph-volume` is not tested on Ubuntu 16.04.03.
Clone Of:
Environment:
Last Closed: 2017-12-05 23:32:37 UTC
Embargoed:




Links:
* GitHub ceph/ceph pull 16632 (closed): ceph-volume: initial take on ceph-volume CLI tool (last updated 2021-01-15 18:16:39 UTC)
* Red Hat Product Errata RHBA-2017:3387 (SHIPPED_LIVE): Red Hat Ceph Storage 3.0 bug fix and enhancement update (last updated 2017-12-06 03:03:45 UTC)

Description Neil Levine 2017-01-23 17:12:22 UTC
dm-cache (also known as lvm-cache) is a block-device caching technology provided by RHEL. It allows a partition or logical volume on a local SSD to be used as a cache for OSD data on slower disks; a sketch of such a setup follows the list below.

To support the feature, we need to:

* expose options for the user via ceph-ansible
* enable ceph-disk to provision the caching device with relevant flags
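
For reference, a minimal sketch of how such a cached device can be assembled with stock LVM on RHEL (device paths, VG/LV names, and sizes are illustrative, not something ceph-ansible generates today):

  # /dev/sdb is the slow data disk, /dev/sdc the local SSD (illustrative).
  pvcreate /dev/sdb /dev/sdc
  vgcreate vg_osd /dev/sdb /dev/sdc

  # Origin LV on the slow disk; cache data and metadata LVs on the SSD.
  lvcreate -n osd_data -l 100%PVS vg_osd /dev/sdb
  lvcreate -n osd_cache -L 50G vg_osd /dev/sdc
  lvcreate -n osd_cache_meta -L 1G vg_osd /dev/sdc

  # Combine the cache LVs into a cache pool and attach it to the origin.
  lvconvert --type cache-pool --poolmetadata vg_osd/osd_cache_meta vg_osd/osd_cache
  lvconvert --type cache --cachepool vg_osd/osd_cache vg_osd/osd_data

The resulting /dev/vg_osd/osd_data is the block device an OSD would consume.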

Comment 2 Christina Meno 2017-01-23 18:24:40 UTC
This is quite similar to https://bugzilla.redhat.com/show_bug.cgi?id=1415779
Would you please break this RFE into smaller pieces with Seb?

Comment 3 Loic Dachary 2017-02-15 15:29:33 UTC
FWIW ceph-disk does not provision devices, it formats pre-existing block devices for Ceph. The creation of the block device itself is outside of the scope of ceph-disk. In other words, as long as ceph-ansible knows how to provision the block device, it can be used as an argument to ceph-disk with no modification.

Comment 4 Andrew Schoen 2017-02-15 15:37:50 UTC
Loic,

So you're saying that if a device is already properly provisioned with dm-cache support then ceph-disk already has what's needed to leverage that when creating the OSD?

Thanks,
Andrew

Comment 5 Loic Dachary 2017-02-15 15:52:34 UTC
Yes, as long as ceph-disk is called with a block device, be it from lvm, dm-cache etc., it will do what it is supposed to do: create partitions, tag them for ceph, format them, mount them and launch an osd when done.
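
For concreteness, the flow described above looks like this on a raw disk (device path illustrative; activation is normally triggered by udev, but can also be run by hand):

  ceph-disk prepare /dev/sdb     # create partitions, tag them for Ceph, format them
  ceph-disk activate /dev/sdb1   # mount the data partition and start the OSD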

Comment 7 Dan Mick 2017-05-19 21:55:45 UTC
I guess I'm somewhat confused. Loic's comments above, which have not been refuted, seem to claim there's no ceph-disk work required here?

Comment 8 Alfredo Deza 2017-05-26 12:50:04 UTC
(In reply to Loic Dachary from comment #5)
> Yes, as long as ceph-disk is called with a block device, be it from lvm,
> dm-cache etc., it will do what it is supposed to do: create partitions, tag
> them for ceph, format them, mount them and launch an osd when done.

This is somewhat incorrect. There is no support in ceph-disk for devices that cannot be partitioned. That is, you cannot provide ceph-disk with a block device to use as-is; it will *insist* on making partitions.

Not only will it insist on making partitions, it will also want to write GPT labels (as part of its interactions with Ceph, systemd, and udev).

As it stands today, it is not possible to use LVM, dm-cache, or anything else that combines multiple drives/partitions into a logical volume.

I believe that what Loic means here is that ceph-disk should accept any block device *that allows partitioning*.

Support for LVM or dm-cache will involve a full rework of how ceph-disk deploys OSDs.

At the same time, I think that ceph-ansible should allow setting up an LVM or dm-cache logical volume that could in turn be passed on to ceph-disk to create the OSD.

Here is an Ansible playbook that sets up dm-cache devices as OSDs [0], which would be greatly simplified if ceph-disk could handle LVM/dm-cache volumes.

[0] https://github.com/bengland2/dmcache-stat/blob/ceph-on-dm-cache/dmcache.yml
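
In other words, the hoped-for end state would be for ceph-disk to take the cached LV directly; a hypothetical invocation (this does not work today, for the reasons above):

  # Hypothetical: hand ceph-disk the cached LV as-is, with no partitioning.
  ceph-disk prepare /dev/vg_osd/osd_data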

Comment 9 Loic Dachary 2017-06-06 20:10:38 UTC
> I believe that what Loic means here is that ceph-disk should accept any block device *that allows partitioning*.

Yes, and I have verified that an LV can be partitioned. I also believe (though that's a distant memory) that this is how OpenStack uses it when running virtual machines with an LVM backend, and possibly Ganeti as well.
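
For what it's worth, partitioning an LV looks roughly like this (names illustrative); the kernel does not create partition nodes for device-mapper devices on its own, so kpartx is used to map them:

  parted /dev/vg_osd/osd_data mklabel gpt
  parted /dev/vg_osd/osd_data mkpart primary 0% 100%
  kpartx -a /dev/vg_osd/osd_data   # partitions appear as /dev/mapper/vg_osd-osd_data1 etc.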

Comment 10 Alfredo Deza 2017-08-11 19:36:57 UTC
https://github.com/ceph/ceph/pull/16632 has been merged into master and is now part of Luminous as of 12.1.3.
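
With ceph-volume, a logical volume (including a dm-cache-backed one) can be handed to Ceph directly. A sketch of the lvm subcommand as documented for Luminous (VG/LV names illustrative; exact flags may differ in the 12.1.x preview builds):

  ceph-volume lvm prepare --data vg_osd/osd_data   # bootstrap an OSD on the LV
  ceph-volume lvm activate <osd-id> <osd-fsid>     # enable and start the OSD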

Comment 14 Ramakrishnan Periyasamy 2017-11-08 06:12:44 UTC
Moving this RFE to verified state.

Tested dm-cache only on RHEL. Per the PM's decision, Ubuntu and container testing has been moved to the 3.1 release, since the support has yet to be provided in Ansible.

Comment 17 errata-xmlrpc 2017-12-05 23:32:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387

