Bug 1498521

Summary: support device classes
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Volume
Version: 3.0
Status: CLOSED CURRENTRELEASE
Severity: low
Priority: low
Reporter: Alfredo Deza <adeza>
Assignee: Alfredo Deza <adeza>
QA Contact: Parikshith <pbyregow>
Docs Contact:
CC: anharris, aschoen, bengland, ceph-eng-bugs, ceph-qe-bugs, gfidente, hnallurv, jharriga, kdreyer, mhackett
Target Milestone: rc
Target Release: 3.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-07-10 19:03:51 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1553640

Description Alfredo Deza 2017-10-04 13:50:27 UTC
Description of problem:
Implement a way to set the device class field so that it appears as "LVMcache" in the
output of ceph osd tree. In Luminous, anyone can create a crush rule that selects this
device class, and can then create a pool that references the crush rule to restrict that
pool to the OSDs with the "LVMcache" class, which gives us the behavior we want. Since we
use ceph-volume and not ceph-disk, it would be logical for this to be a parameter to
ceph-volume, because it has to be set before the OSD is made available to the Ceph cluster.
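
For illustration only, here is a minimal sketch of how such a parameter could look on the
ceph-volume side, mirroring what ceph-disk does today. The argument handling, the example
OSD path and the hard-coded values below are assumptions for the sketch, not the actual
ceph-volume implementation:

import argparse
import os

# Sketch only: expose the same flag ceph-disk has, and persist the value in the
# OSD data directory before the OSD is brought into the cluster.
parser = argparse.ArgumentParser(prog='ceph-volume-prepare-sketch')
parser.add_argument(
    '--crush-device-class',
    dest='crush_device_class',
    default=None,
    help='crush device class to assign this OSD to (e.g. LVMcache)',
)
args = parser.parse_args(['--crush-device-class', 'LVMcache'])

osd_path = '/var/lib/ceph/osd/ceph-2'  # example OSD data dir, for illustration
if args.crush_device_class:
    # Same idea as ceph-disk: a one-line crush_device_class file in the data dir.
    with open(os.path.join(osd_path, 'crush_device_class'), 'w') as fh:
        fh.write(args.crush_device_class + '\n')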



Additional info:

Here's what ceph-disk does today:


# ceph-disk prepare --help 
...
  --crush-device-class CRUSH_DEVICE_CLASS
                       crush device class to assign this disk to
...
Here's what ceph-disk does with the crush parameter above:

        if self.args.crush_device_class:
            write_one_line(path, 'crush_device_class',
                           self.args.crush_device_class)
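
For reference, write_one_line is a small ceph-disk helper; simplified, it does roughly
this (the real helper also takes care of fsyncing the file):

import os

# Simplified sketch of ceph-disk's write_one_line: store a single value as a
# one-line file under the OSD data directory.
def write_one_line(parent, name, text):
    with open(os.path.join(parent, name), 'w') as f:
        f.write(text + '\n')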


so it just writes a file /var/lib/ceph/osd/${cluster}-${id}/crush_device_class containing that value. I verified this with ceph-disk using this command...

ceph-disk -v prepare --crush-device-class LVMcache /dev/xvdd /dev/xvdb

and got this result:

[root@ip-172-31-44-71 mnt.Z6jpv8]# ceph osd tree
ID CLASS    WEIGHT  TYPE NAME                 STATUS REWEIGHT PRI-AFF
-1          5.85609 root default                                      
...
-7          1.95216     host ip-172-31-44-71                           
 2 LVMcache 0.48819         osd.2                 up  1.00000 1.00000
 5      ssd 0.48799         osd.5                 up  1.00000 1.00000
 6      ssd 0.48799         osd.6                 up  1.00000 1.00000
11      ssd 0.48799         osd.11                up  1.00000 1.00000
...

Then I created a crush rule using this:

ceph osd crush rule create-replicated dm-cache default host LVMcache
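
The rule then takes from the device-class-specific shadow root (it shows up as something
like default~LVMcache), which is what restricts placement to those OSDs. A quick way to
inspect that (a sketch, assuming the same cluster and rule name as above):

import json
import subprocess

# Dump the new rule to see that its "take" step points at the LVMcache
# device-class root rather than the plain default root.
rule = json.loads(subprocess.check_output(
    ['ceph', 'osd', 'crush', 'rule', 'dump', 'dm-cache', '--format', 'json']))
print(json.dumps(rule, indent=2))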

Then I created a pool using this crush rule:

ceph osd pool create lvm-pool 8 8 replicated dm-cache 

I then verified that the pool behaved exactly as intended, with data going only to the LVMcache OSDs.
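
For completeness, that verification can also be scripted; a rough sketch (the pool and
rule names match the commands above, "test-object" is just an arbitrary object name, and
ceph_json is a helper defined here for the sketch):

import json
import subprocess

def ceph_json(*args):
    # Run a ceph CLI command and parse its JSON output.
    out = subprocess.check_output(['ceph'] + list(args) + ['--format', 'json'])
    return json.loads(out)

# The pool should report the dm-cache rule created above.
print(ceph_json('osd', 'pool', 'get', 'lvm-pool', 'crush_rule'))

# Mapping an arbitrary object shows the acting set; with the rule in place,
# those OSDs should all carry the LVMcache device class.
print(ceph_json('osd', 'map', 'lvm-pool', 'test-object'))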