With ceph-ansible it is possible to have the following OSD definition:

  CephAnsibleDisksConfig:
    lvm_volumes:
      - data: '/dev/vdx'
        crush_device_class: 'ssd'
      - data: '/dev/vdz'
        crush_device_class: 'hdd'

which allows testing, in the OpenStack CI, both the crush hierarchy and the rules associated with the defined pools.

With cephadm, --crush-device-class is a global option for ceph-volume, which prepares and activates disks in batch mode. We should extend the DriveGroup "paths" definition within the OSD spec to allow something like:

  data_devices:
    paths:
      - data: /dev/ceph_vg/ceph_lv_data
        crush_device_class: ssd
      - data: /dev/ceph_vg/ceph_lv_data2
        crush_device_class: hdd
      - data: /dev/ceph_vg/ceph_lv_data3
        crush_device_class: hdd

and make ceph-volume able to prepare single OSDs with an associated `crush_device_class`.
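For reference, ceph-volume can already assign a device class when preparing a single OSD via `ceph-volume lvm prepare --crush-device-class`, so a per-path spec entry would roughly map to one prepare call per device. A sketch of what that looks like, reusing the example LV paths from the spec above (the paths themselves are illustrative):

  # one prepare call per logical volume, each with its own device class
  ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data --crush-device-class ssd
  ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data2 --crush-device-class hdd
  ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data3 --crush-device-class hdd
  # then activate the prepared OSDs
  ceph-volume lvm activate --all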
To verify this bug with "internal ceph" in OSP17, create an osd_spec.yaml file containing:

  data_devices:
    paths:
      - data: /dev/ceph_vg/ceph_lv_data
        crush_device_class: ssd
      - data: /dev/ceph_vg/ceph_lv_data2
        crush_device_class: hdd
      - data: /dev/ceph_vg/ceph_lv_data3
        crush_device_class: hdd

and when you run `openstack overcloud ceph deploy`, pass it with `--osd-spec osd_spec.yaml` as described here: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/deployed_ceph.html#overriding-which-disks-should-be-osds
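After the deploy completes, one way to confirm that the classes were applied (assuming you run this from a node with ceph admin access, e.g. a controller):

  # the CLASS column should show ssd/hdd per OSD as defined in osd_spec.yaml
  sudo cephadm shell -- ceph osd tree
  # the shadow tree should include the per-device-class buckets
  sudo cephadm shell -- ceph osd crush tree --show-shadow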
Based on comment #12, moving this BZ to the VERIFIED state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:3623