Bug 2017890 - [Rados][cee/sd] Bluestore tuning parameter bluestore_min_alloc_size_(hdd|ssd) is updating dynamically through ceph.conf
Summary: [Rados][cee/sd] Bluestore tuning parameter bluestore_min_alloc_size_(hdd|ssd)...
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 7.0
Assignee: Adam Kupczyk
QA Contact: Pawan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-10-27 16:27 UTC by Kritik Sachdeva
Modified: 2023-07-06 20:15 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-07-06 20:15:40 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-2437 0 None None None 2021-11-29 12:41:05 UTC

Description Kritik Sachdeva 2021-10-27 16:27:47 UTC
Describe the issue:
 
In Chapter 9, Section 9.6 of the Administration Guide, it is mentioned that the bluestore_min_alloc_size_(hdd|ssd) parameter can be configured in the ceph.conf file for new OSDs, and that existing OSDs require a rebuild for the change to take effect.
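For reference, a rough redeploy sequence for an existing OSD is sketched below (a sketch only; <ID> and /dev/sdX are placeholders, and the supported procedure in RHCS 4 goes through ceph-ansible or the documented OSD replacement steps):

$ ceph osd out <ID>
$ systemctl stop ceph-osd@<ID>
$ ceph osd purge <ID> --yes-i-really-mean-it
$ ceph-volume lvm zap /dev/sdX --destroy
$ ceph-volume lvm create --data /dev/sdX    # the recreated OSD picks up the current bluestore_min_alloc_size_hdd at mkfs time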
 
If different sets of OSDs were created with different values of bluestore_min_alloc_size_(hdd|ssd), the only value visible is the one currently specified under the [global] or [osd] section of the ceph.conf file, which makes it confusing to determine the actual value in use by any OSD, whether new or old. For example, after creating a new OSD with an updated value, if the parameter is later changed or removed in ceph.conf, every OSD reports only the newly updated value.
 
Is this the expected behavior when updating this configuration parameter under the [global] or [osd] section of ceph.conf? Is there any way to get the exact value of this parameter that is in effect for a given OSD?
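As a possible way to check the value actually in use (an assumption based on newer, Pacific-based releases and not verified on RHCS 4): the on-disk allocation size appears to be exposed in the OSD metadata, and the value chosen at mkfs time may also be visible in the OSD startup log, for example:

$ ceph osd metadata 0 | grep min_alloc_size
$ grep min_alloc_size /var/log/ceph/ceph-osd.0.log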
 
 
Describe the task you were trying to accomplish:
The cluster should not simply report the value specified in the ceph.conf file for every OSD; each OSD, whether existing or newly created with the updated value, should report the allocation size it is actually using.
 
Steps to reproduce:
 
#Test-1
- Edit the ceph.conf file on all of the nodes and add the "bluestore_min_alloc_size_hdd" parameter with an updated value in the [osd] or [global] section, for example:
...
[global]
cluster network = 10.74.248.0/21
fsid = e81f9e57-afff-464d-8cea-586e8ef6d1a5
...
mon_allow_pool_delete = true
 
[osd]
bluestore_min_alloc_size_hdd = 8192
...
 
- Restart all osd services on all of the nodes
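On a non-containerized RHCS 4 deployment this is typically done with systemd on each OSD node (a sketch; adjust for containerized setups):

$ systemctl restart ceph-osd.target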
- Check the value of the parameter using the following command (for all of the existing OSD daemon IDs):
 
$ ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
{
    "bluestore_min_alloc_size_hdd": "8192"
}
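To query every local OSD at once, a small loop over the admin sockets works (a sketch; the socket path and the default cluster name "ceph" are assumptions):

$ for sock in /var/run/ceph/ceph-osd.*.asok; do echo "== $sock =="; ceph daemon $sock config get bluestore_min_alloc_size_hdd; done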
 
- Create a 10 KiB file using the dd command and upload it into a newly created pool:
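(The exact dd invocation was not captured in the report; something along these lines, with a hypothetical path, produces the 10 KiB test file:)

$ dd if=/dev/zero of=/tmp/10k.txt bs=1K count=10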
$ rados put -p newp 10kb /tmp/10k.txt
 
- Check the output of the rados df command as:
$ rados df
POOL_NAME                USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED   RD_OPS      RD  WR_OPS      WR USED COMPR UNDER COMPR
newp                  192 KiB       1      0      3                  0       0        0        0     0 B       5  30 KiB        0 B         0 B  --> USED is 192 KiB for 3 copies, i.e. 64 KiB allocated per copy
 
Actual Results:
The allocated size for each object copy is 64 KiB, irrespective of the configured bluestore_min_alloc_size_hdd value of 8 KiB.
 
Expected results:
The object allocated size should be 64 KiB, since the existing OSDs were not rebuilt and therefore keep the allocation size they were created with.

 
Chapter/Section Number and Title:
Chapter 9, Section 9.6, Tuning Ceph Bluestore for small writes
 
 
Product Version: RHCS 4.2

