Bug 2183485 - [RFE] Report the OSD creation value of min_alloc_size in metadata
Summary: [RFE] Report the OSD creation value of min_alloc_size in metadata
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 8.0
Assignee: Adam Kupczyk
QA Contact: Harsh Kumar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-31 10:44 UTC by Francois Andrieu
Modified: 2024-11-25 08:58 UTC
CC List: 12 users

Fixed In Version: ceph-19.1.1-8.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-11-25 08:58:54 UTC
Embargoed:



Links
Red Hat Issue Tracker RHCEPH-6361 (last updated 2023-03-31 10:46:19 UTC)
Red Hat Product Errata RHBA-2024:10216 (last updated 2024-11-25 08:58:57 UTC)

Description Francois Andrieu 2023-03-31 10:44:11 UTC
Description of problem:
With Ceph 5, the new default value of BlueStore’s min_alloc_size for SSDs and HDDs is 4 KB[1].

To apply this setting to existing OSDs, they must be recreated.
There is currently no easy way to get the actual value of min_alloc_size for a running OSD, as the config database only shows the value that will be used for newly created OSDs.

After a Ceph 4 to Ceph 5 migration, the min_alloc_size settings show the new defaults, even when the OSD was created with a different value:
# ceph daemon osd.4 config show | grep min_alloc_size
"bluestore_min_alloc_size": "0",
"bluestore_min_alloc_size_hdd": "4096",
"bluestore_min_alloc_size_ssd": "4096"

If the user decides to migrate their OSDs to the new value, there is no easy way to track which OSDs are already using the new value and which are still on the old one.
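One workaround we are aware of is to dig the creation-time value out of the OSD's startup log. This is only a rough sketch: the log path depends on the deployment (containerized deployments log under /var/log/ceph/<fsid>/), and the exact message and required debug level may vary between releases, so treat the output below as illustrative only:

# grep -i min_alloc_size /var/log/ceph/ceph-osd.4.log
... bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x10000

A value of 0x10000 (64 KB) would indicate an OSD still carrying the pre-Ceph-5 HDD default, while 0x1000 (4 KB) would indicate the new default. This is clearly not practical at scale, hence this RFE.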

Version-Release number of selected component (if applicable):
Ceph 16.2.10-138


Steps to Reproduce:
1. On Ceph 4, check the min_alloc_size config:
# ceph daemon osd.4 config show | grep min_alloc_size
"bluestore_min_alloc_size": "0",
"bluestore_min_alloc_size_hdd": "65536",
"bluestore_min_alloc_size_ssd": "4096"

2. Migrate to Ceph 5
3. Check the configuration on the same OSD:
# ceph daemon osd.4 config show | grep min_alloc_size
"bluestore_min_alloc_size": "0",
"bluestore_min_alloc_size_hdd": "4096",
"bluestore_min_alloc_size_ssd": "4096"


Actual results:
The OSD config shows the new default value, even though the OSD was created with the old one.

Expected results:
Config (or metadata) shows the actual value used by the OSD.

This is basically a tracker for https://github.com/ceph/ceph/pull/50506 to be included in Ceph 5 (if that is possible).
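For illustration, once the linked change is in place, the creation-time value should be queryable per OSD through the OSD metadata, e.g. something like the following (field name taken from the upstream PR; output sketched rather than captured from a downstream build):

# ceph osd metadata 4 | grep min_alloc_size
    "bluestore_min_alloc_size": "65536",

That would also make it easy to loop over the output of "ceph osd ls" and spot OSDs that still need to be recreated with the 4 KB value.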

Additional info:
[1]: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.0/html-single/release_notes/index#enhancements

Comment 1 RHEL Program Management 2023-03-31 10:44:21 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 13 errata-xmlrpc 2024-11-25 08:58:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

