.`crush_device_class` can now be specified per path in an OSD specification
With this release, `crush_device_class` can be specified per path inside an OSD specification, giving users more flexibility with `crush_device_class` settings when deploying OSDs through Cephadm. Per-path `crush_device_class` values can also be combined with a service-wide `crush_device_class` for the OSD service; in that case, the service-wide setting acts as the default and the path-specific settings take priority.
.Example
----
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - Node01
    - Node02
crush_device_class: hdd
spec:
  data_devices:
    paths:
      - path: /dev/sdb
        crush_device_class: ssd
      - path: /dev/sdc
        crush_device_class: nvme
      - /dev/sdd
  db_devices:
    paths:
      - /dev/sde
  wal_devices:
    paths:
      - /dev/sdf
----
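Once written, the specification can be applied with the orchestrator and the resulting device classes verified from the CRUSH map. A minimal sketch, assuming the specification above is saved as `osd_spec.yaml` (the file name is illustrative):

----
# Preview what cephadm would deploy without making any changes
ceph orch apply -i osd_spec.yaml --dry-run

# Apply the OSD specification
ceph orch apply -i osd_spec.yaml

# Verify the device class assigned to each OSD (CLASS column)
ceph osd tree
----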
Description (Francesco Pantano, 2022-12-06 10:02:39 UTC)
With ceph-ansible it is possible to have the following OSD definition:
----
CephAnsibleDisksConfig:
  lvm_volumes:
    - data: '/dev/vdx'
      crush_device_class: 'ssd'
    - data: '/dev/vdz'
      crush_device_class: 'hdd'
----
which makes it possible to test, in the OpenStack CI, both the CRUSH hierarchy and the rules associated with the defined pools.
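For context, device classes are consumed by CRUSH rules that pools can then reference. A minimal sketch of that workflow, where the rule name `fast_ssd` and the pool name `testpool` are illustrative:

----
# Create a replicated CRUSH rule restricted to the ssd device class
ceph osd crush rule create-replicated fast_ssd default host ssd

# Point a pool at that rule so its data lands only on ssd-classed OSDs
ceph osd pool create testpool 32 32
ceph osd pool set testpool crush_rule fast_ssd
----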
With cephadm, --crush_device_class is a global option for ceph-volume, which prepares and activates disks in batch mode.
We should extend the DriveGroup "paths" definition within the OSDspec to allow something like:
----
data_devices:
  paths:
    - data: /dev/ceph_vg/ceph_lv_data
      crush_device_class: ssd
    - data: /dev/ceph_vg/ceph_lv_data2
      crush_device_class: hdd
    - data: /dev/ceph_vg/ceph_lv_data3
      crush_device_class: hdd
----
and make ceph-volume able to prepare single OSDs with an associated `crush_device_class`.
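For reference, ceph-volume already accepts a device class when preparing a single OSD. A minimal sketch using one of the logical volumes from the example above (note the CLI flag is spelled with dashes):

----
# Prepare a single OSD and tag it with a device class at creation time
ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data --crush-device-class ssd

# Activate the prepared OSD(s); IDs can be inspected with `ceph-volume lvm list`
ceph-volume lvm activate --all
----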
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2023:3623