Bug 2017890

Summary: [Rados][cee/sd] Bluestore tuning parameter bluestore_min_alloc_size_(hdd|ssd) is updating dynamically through ceph.conf
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Kritik Sachdeva <ksachdev>
Component: RADOS
Assignee: Adam Kupczyk <akupczyk>
Status: CLOSED DEFERRED
QA Contact: Pawan <pdhiran>
Severity: low
Docs Contact:
Priority: unspecified
Version: 4.2
CC: akupczyk, bhubbard, ceph-eng-bugs, nojha, rzarzyns, sseshasa, vumrao
Target Milestone: ---
Target Release: 7.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-07-06 20:15:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Kritik Sachdeva 2021-10-27 16:27:47 UTC
Describe the issue:
 
In chapter 9.6 of the Administration Guide, it is mentioned that the parameter bluestore_min_alloc_size_(hdd|ssd) can be configured in the ceph.conf file for new OSDs, and that existing OSDs must be rebuilt for the change to take effect.
 
If different sets of OSDs were created with different values of bluestore_min_alloc_size_(hdd|ssd), we can only ever see the value currently specified under the [global] or [osd] section of the ceph.conf file, which makes it unclear what value is actually in effect for any given OSD, old or new. For example, after creating a new OSD with an updated value, if we later remove or change the parameter in ceph.conf, every OSD simply reports the latest value.
 
Is this the expected behavior when updating this configuration parameter under the [global] or [osd] section of ceph.conf? Is there any way to get the exact value of this parameter for a given OSD?
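For reference, the config subsystem can be queried per OSD through the admin socket, and on newer Ceph releases the allocation unit actually recorded on disk at OSD creation is also exposed in the OSD metadata. A sketch (the metadata field name and its availability vary by release and are an assumption here):

```shell
# Value the config subsystem reports (tracks ceph.conf / mon config):
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

# On newer releases, the allocation unit baked in at OSD creation time
# is recorded in the OSD metadata (assumed field name):
ceph osd metadata 0 | grep -i min_alloc
```

These commands require a running cluster; the second one may return nothing on RHCS 4 era releases.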
 
 
Describe the task you were trying to accomplish:
The `config get` output should not simply echo the value currently specified in the ceph.conf file; it should report the value actually in effect for each OSD, whether an existing OSD or a new one created with the updated value.
 
Steps to reproduce:
 
#Test-1
- Edit the ceph.conf file on all of the nodes and add the "bluestore_min_alloc_size_hdd" parameter with the updated value in the [osd] or [global] section:
...
[global]
cluster network = 10.74.248.0/21
fsid = e81f9e57-afff-464d-8cea-586e8ef6d1a5
...
mon_allow_pool_delete = true
 
[osd]
bluestore_min_alloc_size_hdd = 8192
...
 
- Restart all osd services on all of the nodes
- Check the value using the following command (for each existing OSD daemon id):
 
$ ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
{
    "bluestore_min_alloc_size_hdd": "8192"
}
 
- Create a 10 KiB file using the dd command and upload it into a newly created pool:
$ rados -p newp put 10kb /tmp/10k.txt
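The exact dd invocation is not shown in the report; a minimal sketch that produces a 10 KiB test file might look like:

```shell
# Create a 10 KiB (10240-byte) test file; /dev/zero content is arbitrary here
dd if=/dev/zero of=/tmp/10k.txt bs=1K count=10
stat -c %s /tmp/10k.txt   # 10240
```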
 
- Check the output of the rados df command as:
$ rados df
POOL_NAME                USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED   RD_OPS      RD  WR_OPS      WR USED COMPR UNDER COMPR
newp                  192 KiB       1      0      3                  0       0        0        0     0 B       5  30 KiB        0 B         0 B  --> USED is 192 KiB with 3 copies
 
Actual Results:
Each object's allocated size is 64 KiB, irrespective of the configured bluestore_min_alloc_size of 8 KiB.
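The 192 KiB figure is consistent with each replica of the 10 KiB object being rounded up to one 64 KiB allocation unit; had the configured 8 KiB unit been in effect, one would expect 16 KiB per replica (48 KiB total). A quick arithmetic sketch:

```shell
obj=$((10 * 1024))        # object size: 10 KiB
copies=3                  # pool replica count

# total usage: object rounded up to whole allocation units, times replicas
alloc() { echo $(( ( (obj + $1 - 1) / $1 ) * $1 * copies )); }

echo "64 KiB unit: $(alloc $((64 * 1024))) bytes"   # 196608 = 192 KiB
echo "8 KiB unit:  $(alloc $(( 8 * 1024))) bytes"   # 49152  = 48 KiB
```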
 
Expected results:
For existing OSDs the allocated size is expected to remain 64 KiB, since the allocation unit is fixed when the OSD is created; the value reported for bluestore_min_alloc_size_hdd should therefore reflect 64 KiB for those OSDs rather than the 8 KiB set in ceph.conf.

 
Chapter/Section Number and Title:
Chapter 9, Section 9.6, Tuning Ceph Bluestore for small writes
 
 
Product Version: RHCS 4.2