Bug 1498521 - support device classes
Summary: support device classes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: 3.1
Assignee: Alfredo Deza
QA Contact: Parikshith
URL:
Whiteboard:
Depends On:
Blocks: 1553640
 
Reported: 2017-10-04 13:50 UTC by Alfredo Deza
Modified: 2018-07-10 19:03 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-10 19:03:51 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 22674 0 None None None 2018-01-12 14:11:46 UTC
Github ceph ceph pull 19949 0 None closed ceph-volume: adds --crush-device-class flag for lvm prepare and create 2020-05-18 09:22:48 UTC

Description Alfredo Deza 2017-10-04 13:50:27 UTC
Description of problem:
Implement a way to set the device class field so that a custom class such as "LVMcache" appears in the
output of "ceph osd tree". In Luminous, anyone can create a crush rule that selects this device class.
They can then create a pool that references the crush rule to restrict that pool to the OSDs with the
"LVMcache" class, which gives us the behavior we want. Since we use ceph-volume rather than ceph-disk,
it would be logical for this to be a parameter to ceph-volume, because the class has to be set before
the OSD is made available to the Ceph cluster.
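
For illustration only, the ceph-volume side could mirror the existing ceph-disk flag. This is a sketch,
not the final interface; the device path and the exact flag placement are assumptions:

ceph-volume lvm prepare --crush-device-class LVMcache --data /dev/xvdd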



Additional info:

Here's what ceph-disk does today:


# ceph-disk prepare --help 
...
  --crush-device-class CRUSH_DEVICE_CLASS
                       crush device class to assign this disk to
...
Here's what ceph-disk does with the crush parameter above:

        if self.args.crush_device_class:
            write_one_line(path, 'crush_device_class',
                           self.args.crush_device_class)


So it just writes a file, /var/lib/ceph/osd/${cluster}-${id}/crush_device_class, that contains the class name. I verified this with ceph-disk using the following command:

ceph-disk -v prepare --crush-device-class LVMcache /dev/xvdd /dev/xvdb

and got this result:

[root@ip-172-31-44-71 mnt.Z6jpv8]# ceph osd tree
ID CLASS    WEIGHT  TYPE NAME                 STATUS REWEIGHT PRI-AFF
-1          5.85609 root default                                      
...
-7          1.95216     host ip-172-31-44-71                           
2 LVMcache 0.48819         osd.2                 up  1.00000 1.00000  
5      ssd 0.48799         osd.5                 up  1.00000 1.00000  
6      ssd 0.48799         osd.6                 up  1.00000 1.00000  
11      ssd 0.48799         osd.11                up  1.00000 1.00000
...
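
As a sanity check, the marker file can also be read back directly. The path below is hypothetical, assuming the default cluster name "ceph" and OSD id 2 from the tree output above:

# cat /var/lib/ceph/osd/ceph-2/crush_device_class
LVMcache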

Then I created a crush rule:

ceph osd crush rule create-replicated dm-cache default host LVMcache

Then I created a pool using this crush rule:

ceph osd pool create lvm-pool 8 8 replicated dm-cache 

I then verified that the pool placed data only on the OSDs with the LVMcache device class, exactly as intended.
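
One way to confirm this, sketched with standard commands (the object name is arbitrary and not taken from the original report):

ceph osd pool get lvm-pool crush_rule
ceph osd map lvm-pool test-object

The first command should report the dm-cache rule, and the OSDs returned by the second should all carry the LVMcache class.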

