Bug 2111282

Summary: Misleading information displayed when querying the osd_mclock_max_capacity_iops_[hdd, ssd] options.
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: skanta
Component: RADOS Assignee: Sridhar Seshasayee <sseshasa>
Status: CLOSED ERRATA QA Contact: skanta
Severity: medium Docs Contact: Akash Raj <akraj>
Priority: unspecified    
Version: 6.0CC: akraj, akupczyk, amathuri, bhubbard, ceph-eng-bugs, cephqe-warriors, choffman, kdreyer, ksirivad, lflores, nojha, pdhange, rfriedma, rzarzyns, sseshasa, vumrao
Target Milestone: ---   
Target Release: 6.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
.Upon querying the IOPS capacity for an OSD, only the configuration option that matches the underlying device type shows the measured/default value

Previously, the `osd_mclock_max_capacity_iops_[ssd|hdd]` value was set depending on the OSD's underlying device type, but both configuration options also had default values that were displayed when queried. For example, if the underlying device type for an OSD was SSD, the default value for the HDD option, `osd_mclock_max_capacity_iops_hdd`, was also displayed with a non-zero value. Displaying non-zero values for both the HDD and SSD options of an OSD caused confusion about which option to interpret.

With this fix, only the IOPS capacity-related configuration option that matches the OSD's underlying device type is set, and the alternate (inactive) option is set to `0`. When a user queries the IOPS capacity for an OSD, only the option matching the underlying device type shows the measured/default value; the inactive option is set to `0` to clearly indicate that it is disabled.
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-06-15 09:15:33 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
Embargoed:
Bug Depends On: 2180567    
Bug Blocks: 2192813    

Comment 4 skanta 2022-10-08 00:43:18 UTC
*** Bug 2132972 has been marked as a duplicate of this bug. ***

Comment 24 Ken Dreyer (Red Hat) 2023-04-03 19:21:06 UTC
https://github.com/ceph/ceph/pull/49281 will be in v17.2.6

Comment 32 errata-xmlrpc 2023-06-15 09:15:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623