Bug 1375223 - Need for a documented procedure on setting up separate root hierarchies for SSD and SATA pools on the same host
Summary: Need for a documented procedure on setting up separate root hierarchies for SSD and SATA pools on the same host
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 2.0
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 2.2
Assignee: Erin Donnelly
QA Contact: shylesh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-12 13:53 UTC by jquinn
Modified: 2019-12-16 06:43 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-21 23:50:33 UTC
Target Upstream Version:


Attachments


Links
Red Hat Knowledge Base (Solution) 2735331, last updated 2016-10-28 16:24:12 UTC

Description jquinn 2016-09-12 13:53:06 UTC
Description of problem:  SBR-Ceph has received multiple cases and questions about setting up performance domains that allow separate pools for SSD and SATA disks on the same host. This requires setting up separate root hierarchies and CRUSH rules. We have pointed customers to upstream documentation, but it would be good to have official documentation to reference.


The link below points to the Performance Domains section, which describes in detail how pools can be divided among the different drive types, but it does not provide steps or examples showing how to configure this.

https://access.redhat.com/webassets/avalon/d/Red_Hat_Ceph_Storage-1.2.3-Storage_Strategies-en-US/Red_Hat_Ceph_Storage-1.2.3-Storage_Strategies-en-US.pdf - Section 3.3 Performance domains. 

To segregate the data going to these pools, you need to create a new root hierarchy for the SSD drives. In the CRUSH map there must be a separate host bucket for the SSD drives and for the SATA drives on each physical host. You can then create a rule that takes the SSD root and point your pool at that ruleset. Below I have included a command sketch, a sample of what my hierarchy looks like in the lab, and what my CRUSH map looks like.
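For reference, here is a rough sketch of the CLI version of those steps, assuming a pre-Luminous release (1.3.x/2.x); the bucket and host names (ssd, ceph4-SSD) match the lab example below and are not required values:

# Create the SSD root and a per-host SSD bucket, then place the SSD OSD in it
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket ceph4-SSD host
ceph osd crush move ceph4-SSD root=ssd
ceph osd crush set osd.0 1.0 host=ceph4-SSD

# Create a replicated rule that selects hosts from the ssd root
ceph osd crush rule create-simple ssd ssd host firstn

Because the SSD host bucket (ceph4-SSD) does not match the node's actual hostname, you also want "osd crush update on start = false" in ceph.conf (or a custom crush location hook) so the OSD is not moved back under the default host bucket when it restarts.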

The upstream post below is the most descriptive documentation I found on setting up separate performance domains for SSD and SATA drives.

https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/


[root@ceph2 ~]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6 1.00000 root ssd
-5 1.00000     host ceph4-SSD
 0 1.00000         osd.0            up  1.00000          1.00000
-1 3.79997 root default
-2 2.70000     host ceph4-SATA
 3 0.89999         osd.3            up  1.00000          1.00000
 6 0.89999         osd.6            up  1.00000          1.00000
 7 0.89999         osd.7            up  1.00000          1.00000
-3 0.54999     host ceph5
 1 0.45000         osd.1            up  1.00000          1.00000
 4 0.09999         osd.4            up  1.00000          1.00000
-4 0.54999     host ceph6
 2 0.45000         osd.2            up  1.00000          1.00000
 5 0.09999         osd.5            up  1.00000          1.00000
[root@ceph2 ~]#


# buckets ## SATA ##
host ceph4-SATA {
        id -2           # do not change unnecessarily
        # weight 2.700
        alg straw
        hash 0  # rjenkins1
        item osd.3 weight 0.900
        item osd.6 weight 0.900
        item osd.7 weight 0.900
}
host ceph5 {
        id -3           # do not change unnecessarily
        # weight 0.550
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 0.450
        item osd.4 weight 0.100
}
host ceph6 {
        id -4           # do not change unnecessarily
        # weight 0.550
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 0.450
        item osd.5 weight 0.100
}
# buckets ## SSD ##
host ceph4-SSD {
        id -5           # do not change unnecessarily
        # weight 2.700
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.000
}
# SATA Root ###
root default {
        id -1           # do not change unnecessarily
        # weight 3.800
        alg straw
        hash 0  # rjenkins1
        item ceph4-SATA weight 2.700
        item ceph5 weight 0.550
        item ceph6 weight 0.550
}
root ssd {
        id -6           # do not change unnecessarily
        # weight 1.000
        alg straw2
        hash 0  # rjenkins1
        item ceph4-SSD weight 1.000
}
# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule ssd {
        ruleset 2       # ruleset ids must be unique; 0 and 1 are already used below
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
rule sata {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
# end crush map
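
For completeness, a sketch of how the edited map above gets applied and how a pool is pointed at the SSD rule; the pool name (ssd-pool) and file names are examples, and the ruleset number must match the one defined in your map:

# Extract and decompile the current CRUSH map, edit it, then recompile and inject it
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt as shown above ...
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin

# Point a pool at the ssd rule (ruleset 2 in the map above)
ceph osd pool set ssd-pool crush_ruleset 2

Any data already in the pool starts rebalancing onto the SSD OSDs as soon as the ruleset changes.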

Version-Release number of selected component (if applicable): 1.3.2 and 2.0


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

