Description of problem:

With Ceph 5, the new default value of BlueStore's min_alloc_size for SSDs and HDDs is 4 KB [1]. To apply this setting to existing OSDs, they must be recreated. There is currently no easy way to get the actual value of min_alloc_size for a running OSD, because the config database only shows the value used for new OSDs. After a Ceph 4 to Ceph 5 migration, min_alloc_size shows the new size even when the OSD was created with a different value:

# ceph daemon osd.4 config show | grep min_alloc_size
    "bluestore_min_alloc_size": "0",
    "bluestore_min_alloc_size_hdd": "4096",
    "bluestore_min_alloc_size_ssd": "4096"

If the user decides to migrate their OSDs to the new value, there is no easy way to track which OSDs are using the new value and which are still using the old one.

Version-Release number of selected component (if applicable):

Ceph 16.2.10-138

Steps to Reproduce:

1. On Ceph 4, check the min_alloc_size config:

   # ceph daemon osd.4 config show | grep min_alloc_size
       "bluestore_min_alloc_size": "0",
       "bluestore_min_alloc_size_hdd": "65536",
       "bluestore_min_alloc_size_ssd": "4096"

2. Migrate to Ceph 5.

3. Check the configuration on the same OSD:

   # ceph daemon osd.4 config show | grep min_alloc_size
       "bluestore_min_alloc_size": "0",
       "bluestore_min_alloc_size_hdd": "4096",
       "bluestore_min_alloc_size_ssd": "4096"

Actual results:

The OSD config shows the new value, even if the OSD was created with the old one.

Expected results:

The config (or OSD metadata) shows the actual value used by the OSD.

This is basically a tracker for https://github.com/ceph/ceph/pull/50506 to be included in Ceph 5 (if that is possible).

Additional info:

[1]: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.0/html-single/release_notes/index#enhancements
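For context, the referenced PR exposes the on-disk value through OSD metadata rather than the config database. A minimal sketch of how an operator could read that value from `ceph osd metadata <id>` output once it is available; the key name "bluestore_min_alloc_size" and the sample payload are assumptions for illustration, not a confirmed interface:

```python
import json

# Hypothetical excerpt of `ceph osd metadata 4` output after the fix lands;
# the "bluestore_min_alloc_size" key is an assumed name, not a confirmed API.
sample = json.loads('{"id": 4, "bluestore_min_alloc_size": "65536"}')

def actual_min_alloc_size(metadata):
    """Return the OSD's on-disk min_alloc_size in bytes, or None if the
    running release does not expose it in OSD metadata."""
    value = metadata.get("bluestore_min_alloc_size")
    return int(value) if value is not None else None

print(actual_min_alloc_size(sample))
```

With such a field, iterating over the output of `ceph osd metadata` (all OSDs) would let operators flag any OSD still created with the old 64 KB HDD default.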
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2024:10216