Bug 2299482 - [Workload-DFG][mClock] Inconsistent client throughput during recovery with mClock balanced profile
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.1z9
Assignee: Sridhar Seshasayee
QA Contact: skanta
URL:
Whiteboard:
: 2300310 (view as bug list)
Depends On: 2294594
Blocks: 2299480
 
Reported: 2024-07-23 14:57 UTC by Vikhyat Umrao
Modified: 2025-04-28 05:30 UTC
CC List: 17 users

Fixed In Version: ceph-17.2.6-264.el9cp
Doc Type: Bug Fix
Doc Text:
.New shard and multiple worker threads configuration now yields significant results in terms of consistency of client and recovery throughput

Previously, scheduling with mClock was not optimal with multiple OSD shards on an HDD-based Ceph cluster. As a result, client throughput was inconsistent across test runs, and multiple slow requests were reported during recovery and backfill operations.

With this fix, the HDD OSD shard configuration is updated as follows:
- osd_op_num_shards_hdd = 1 (was 5)
- osd_op_num_threads_per_shard_hdd = 5 (was 1)

With the new single-shard, multiple-worker-thread configuration, client and recovery throughput is consistent across multiple test runs (see the verification sketch after the metadata fields below).
Clone Of: 2294594
Environment:
Last Closed: 2025-04-28 05:29:53 UTC
Embargoed:
sseshasa: needinfo-
pdhange: needinfo-
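
The Doc Text above comes down to two HDD OSD shard options whose defaults changed. The following is a minimal verification sketch, not an authoritative procedure: it assumes a cluster with the ceph CLI available, uses osd.0 purely as an example daemon id, and assumes (as is typical for shard options) that an OSD restart is required before a changed shard layout takes effect. On the fixed build the values below are already the defaults, so the set commands are only relevant as an interim measure on older builds.

# Inspect the values that apply to OSDs (fixed build: 1 and 5; previously 5 and 1)
ceph config get osd osd_op_num_shards_hdd
ceph config get osd osd_op_num_threads_per_shard_hdd

# Inspect what a specific running OSD is actually using (osd.0 is an example id)
ceph config show osd.0 osd_op_num_shards_hdd
ceph config show osd.0 osd_op_num_threads_per_shard_hdd

# On builds without the updated defaults, set the same values explicitly;
# restart the OSDs afterwards (systemd or cephadm) so the new shard layout is used.
ceph config set osd osd_op_num_shards_hdd 1
ceph config set osd osd_op_num_threads_per_shard_hdd 5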




Links
Github ceph/ceph pull 53092 (Merged): quincy: osd/scheduler/mClockScheduler: Use same profile and client ids for all clients to ensure allocated QoS limit con... (last updated 2025-03-19 07:43:28 UTC)
Github ceph/ceph pull 62385 (open): quincy: common/options: Change HDD OSD shard configuration defaults for mClock (last updated 2025-03-19 07:43:28 UTC)
Red Hat Issue Tracker RHCEPH-9395 (last updated 2024-07-23 14:58:40 UTC)
Red Hat Knowledge Base (Solution) 7092973 (last updated 2024-11-16 14:27:53 UTC)
Red Hat Product Errata RHSA-2025:4238 (last updated 2025-04-28 05:30:03 UTC)

Description Vikhyat Umrao 2024-07-23 14:57:38 UTC
+++ This bug was initially created as a clone of Bug #2294594 +++

Comment 19 errata-xmlrpc 2025-04-28 05:29:53 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 6.1 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:4238

