Bug 2092089

Summary: [Workload-DFG] Setting osd_memory_target with osd/host does not succeed
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Tim Wilkinson <twilkins>
Component: Cephadm    Assignee: Adam King <adking>
Status: CLOSED ERRATA QA Contact: Pranav Prakash <prprakas>
Severity: high Docs Contact: Akash Raj <akraj>
Priority: high    
Version: 5.1    CC: adking, akraj, racpatel, tserlin, vereddy, vivk, vumrao
Target Milestone: ---    Keywords: Regression
Target Release: 5.2   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-16.2.8-40.el8cp Doc Type: Bug Fix
Doc Text:
.`cephadm` no longer removes `osd_memory_target` config settings at the host level
Previously, if `osd_memory_target_autotune` was turned off globally, `cephadm` would remove the values that the user had set for `osd_memory_target` at the host level. Additionally, for hosts added with an FQDN, `cephadm` would set the config option using the FQDN even though the CRUSH map uses the short host name. As a result, users could not manually set `osd_memory_target` at the host level, and `osd_memory_target` autotuning did not work with FQDN hosts. With this fix, `cephadm` no longer removes the host-level `osd_memory_target` setting when `osd_memory_target_autotune` is set to `false`, and it always uses the short host name when setting the host-level `osd_memory_target`. If `osd_memory_target_autotune` is set to `false` at the host level, users can manually set `osd_memory_target` without `cephadm` removing the option, and autotuning now works with hosts added to `cephadm` with FQDN names.
Story Points: ---
Clone Of:    Environment:
Last Closed: 2022-08-09 17:38:27 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2102272    
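
For reference, a minimal sketch of the workflow this fix restores, using the standard `ceph config` host mask syntax; the host name host01 is a placeholder, not taken from this report:

    # Disable memory autotuning for the OSDs on one host (host01 is a hypothetical host name)
    ceph config set osd/host:host01 osd_memory_target_autotune false

    # Manually set osd_memory_target for that host's OSDs; before this fix,
    # cephadm could remove this host-level value when autotune was off
    ceph config set osd/host:host01 osd_memory_target 4294967296

    # Confirm the host-level setting is retained
    ceph config dump | grep osd_memory_target

With the fix, the host-level entry should persist across cephadm's periodic reconciliation, and hosts registered with FQDNs should have the option applied under their short names to match the CRUSH map.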

Comment 23 errata-xmlrpc 2022-08-09 17:38:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997