Bug 2151189 - [cephadm] DriveGroup can't handle multiple crush_device_classes
Summary: [cephadm] DriveGroup can't handle multiple crush_device_classes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1
Assignee: Adam King
QA Contact: Manisha Saini
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On: 2180567
Blocks: 2071977 2192813
 
Reported: 2022-12-06 10:02 UTC by Francesco Pantano
Modified: 2023-06-15 09:17 UTC (History)
9 users

Fixed In Version: ceph-17.2.6-5.el9cp
Doc Type: Enhancement
Doc Text:
.`crush_device_class` can now be specified per path in an OSD specification

With this release, to give users more flexibility with `crush_device_class` settings when deploying OSDs through Cephadm, `crush_device_class` can be specified per path inside an OSD specification. These per-path `crush_device_classes` can also be provided together with a service-wide `crush_device_class` for the OSD service. When a service-wide `crush_device_class` is set, it is treated as the default, and the path-specific settings take priority.

.Example
----
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - Node01
    - Node02
crush_device_class: hdd
spec:
  data_devices:
    paths:
      - path: /dev/sdb
        crush_device_class: ssd
      - path: /dev/sdc
        crush_device_class: nvme
      - /dev/sdd
  db_devices:
    paths:
      - /dev/sde
  wal_devices:
    paths:
      - /dev/sdf
----
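
A minimal usage sketch for the example above, assuming the spec is saved as `osd_spec.yaml` (a placeholder name); `ceph orch apply -i` applies it and the resulting classes appear in the CLASS column of `ceph osd tree`:

----
# apply the OSD specification through the orchestrator
ceph orch apply -i osd_spec.yaml
# the per-path classes appear in the CLASS column
ceph osd tree
----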
Clone Of:
Environment:
Last Closed: 2023-06-15 09:16:25 UTC
Embargoed:




Links
Ceph Project Bug Tracker 58184 - 2022-12-06 10:07:23 UTC
Github ceph ceph pull 49555 (Merged) - Add per OSD crush_device_class definition - 2023-03-22 23:02:27 UTC
Red Hat Issue Tracker RHCEPH-5742 - 2022-12-06 10:05:35 UTC
Red Hat Product Errata RHSA-2023:3623 - 2023-06-15 09:17:10 UTC

Description Francesco Pantano 2022-12-06 10:02:39 UTC
With ceph-ansible it is possible to have the following OSD definition:

CephAnsibleDisksConfig:
  lvm_volumes:
    - data: '/dev/vdx'
      crush_device_class: 'ssd'
    - data: '/dev/vdz'
      crush_device_class: 'hdd'

which makes it possible to test in the OpenStack CI both the crush hierarchy and the rules associated with the defined pools.
With cephadm, --crush_device_class is a global option passed to ceph-volume, which prepares and activates disks in batch mode.

We should extend the DriveGroup "paths" definition within the OSDspec to allow something like:

data_devices:
  paths:
    - data: /dev/ceph_vg/ceph_lv_data
      crush_device_class: ssd
    - data: /dev/ceph_vg/ceph_lv_data2
      crush_device_class: hdd
    - data: /dev/ceph_vg/ceph_lv_data3
      crush_device_class: hdd

and make ceph-volume able to prepare single OSDs with an associated `crush_device_class`.
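
A rough sketch of the per-device calls cephadm would then need to issue so that each OSD is prepared with its own class (the paths reuse the example above; `ceph-volume lvm prepare` already accepts a `--crush-device-class` option):

  ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data  --crush-device-class ssd
  ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data2 --crush-device-class hdd
  ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data3 --crush-device-class hdd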

Comment 4 John Fulton 2023-01-09 13:56:37 UTC
To verify this bug with "internal ceph" in OSP17, create an osd_spec.yaml file containing:

data_devices:
  paths:
    - data: /dev/ceph_vg/ceph_lv_data
      crush_device_class: ssd
    - data: /dev/ceph_vg/ceph_lv_data2
      crush_device_class: hdd
    - data: /dev/ceph_vg/ceph_lv_data3
      crush_device_class: hdd

and when you run `openstack overcloud ceph deploy`, pass it with `--osd-spec osd_spec.yaml` as described here:

  https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/deployed_ceph.html#overriding-which-disks-should-be-osds
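
For example, something along these lines (the file names are placeholders and the exact arguments depend on the deployment, see the guide above), followed by a check of the assigned classes from inside a cephadm shell:

  openstack overcloud ceph deploy deployed_metal.yaml \
      --output deployed_ceph.yaml \
      --osd-spec osd_spec.yaml

  # after deployment, the per-path classes should appear in the CLASS column
  sudo cephadm shell -- ceph osd tree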

Comment 13 Manisha Saini 2023-04-26 20:15:54 UTC
Based on comment#12, moving this BZ to the verified state.

Comment 17 errata-xmlrpc 2023-06-15 09:16:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623

