Bug 2299482

Summary: [Workload-DFG][mClock] Inconsistent client throughput during recovery with mClock balanced profile
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RADOS
Assignee: Sridhar Seshasayee <sseshasa>
Status: CLOSED ERRATA
QA Contact: skanta
Severity: high
Docs Contact:
Priority: unspecified
Version: 7.1
CC: bhubbard, ceph-eng-bugs, cephqe-warriors, hakumar, kbader, mcaldeir, ngangadh, nojha, pdhange, pdhiran, racpatel, rpollack, rzarzyns, sseshasa, tpetr, tserlin, vumrao
Target Milestone: ---
Flags: sseshasa: needinfo-
pdhange: needinfo-
Target Release: 6.1z9
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-17.2.6-264.el9cp
Doc Type: Bug Fix
Doc Text:
.New shard and multiple worker threads configuration yields consistent client and recovery throughput
Previously, mClock scheduling was not optimal with multiple OSD shards on an HDD-based Ceph cluster. As a result, client throughput was inconsistent across test runs, and multiple slow requests were reported during recovery and backfill operations. With this fix, the HDD OSD shard configuration defaults are updated as follows:
- osd_op_num_shards_hdd = 1 (was 5)
- osd_op_num_threads_per_shard_hdd = 5 (was 1)
With the new single-shard, multiple-worker-thread configuration, client and recovery throughput is now consistent across multiple test runs (a sketch of the updated settings follows the metadata fields below).
Story Points: ---
Clone Of: 2294594
Environment:
Last Closed: 2025-04-28 05:29:53 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2294594    
Bug Blocks: 2299480    
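
For illustration, a minimal sketch of how the updated HDD shard settings could be applied or verified with the standard ceph config commands. The option names and values come from the Doc Text above; the specific invocations and the osd.0 placeholder are assumptions for this example, not commands taken from this bug:

  # Apply the updated defaults to all OSDs (values from the fix)
  ceph config set osd osd_op_num_shards_hdd 1
  ceph config set osd osd_op_num_threads_per_shard_hdd 5

  # Verify what a running OSD reports (osd.0 is a placeholder for any OSD id)
  ceph config show osd.0 osd_op_num_shards_hdd
  ceph config show osd.0 osd_op_num_threads_per_shard_hdd

These shard options are read when the OSD worker threads are created, so affected OSDs generally need a restart for a change to take effect. On builds containing the fix (ceph-17.2.6-264.el9cp and later) these values are already the defaults and no manual change should be needed.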

Description Vikhyat Umrao 2024-07-23 14:57:38 UTC
+++ This bug was initially created as a clone of Bug #2294594 +++

Comment 19 errata-xmlrpc 2025-04-28 05:29:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 6.1 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:4238