Bug 1337018 - [RFE] filestore: randomize split threshold
Summary: [RFE] filestore: randomize split threshold
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 1.3.2
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 3.0
Assignee: Josh Durgin
QA Contact: shylesh
URL:
Whiteboard:
Duplicates: 1219974
Depends On:
Blocks: 1258382 1494421
 
Reported: 2016-05-18 04:48 UTC by Vikhyat Umrao
Modified: 2021-03-11 14:34 UTC
CC List: 9 users

Fixed In Version: RHEL: ceph-12.1.2-1.el7cp Ubuntu: ceph_12.1.2-2redhat1xenial
Doc Type: Bug Fix
Doc Text:
.Split threshold is now randomized
Previously, the split threshold was not randomized, so many OSDs reached it at the same time. As a consequence, those OSDs incurred high latency because they all split directories at once. With this update, the split threshold is randomized, which ensures that OSDs split directories over a longer period of time.
Clone Of:
Clones: 1533266
Environment:
Last Closed: 2017-12-05 23:29:38 UTC
Embargoed:


Attachments: none


Links:
Ceph Project Bug Tracker 15835 (last updated 2016-05-18 04:51:51 UTC)
Red Hat Bugzilla 1219974 (high, CLOSED): [RFE] filestore merge threshold and split multiple defaults may not be ideal (last updated 2021-02-22 00:41:40 UTC)
Red Hat Bugzilla 1332874 (urgent, CLOSED): Slow/blocked requests for a specific pool "rbd" which has approx 66 million objects (last updated 2021-02-22 00:41:40 UTC)
Red Hat Knowledge Base (Solution) 3364191 (last updated 2018-02-26 14:44:23 UTC)
Red Hat Product Errata RHBA-2017:3387 (normal, SHIPPED_LIVE): Red Hat Ceph Storage 3.0 bug fix and enhancement update (last updated 2017-12-06 03:03:45 UTC)

Internal Links: 1219974 1332874

Description Vikhyat Umrao 2016-05-18 04:48:41 UTC
Description of problem:
[RFE] filestore: randomize split threshold

- If the distribution of files is roughly even, many OSDs will reach the split threshold at the same time, causing them all to incur high latency as they split directories at once.

- A simple change that may mitigate this is to randomize the split threshold, similar to the randomized scrub threshold, so that different OSDs split directories over a longer period of time (see the sketch below).
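
A minimal sketch of the randomization idea (illustrative only, not the actual Ceph patch; kFilesPerDir, randomized_split_threshold, and the rand_factor parameter are hypothetical names):

#include <iostream>
#include <random>

// Illustrative sketch only; not the Ceph implementation.
// Base number of objects per subdirectory before a split (hypothetical value).
constexpr int kFilesPerDir = 320;

// Return the base threshold plus a uniformly random offset, so that OSDs
// holding evenly distributed objects do not all split at the same count.
int randomized_split_threshold(int base, int rand_factor, std::mt19937& rng) {
    // Offset drawn once (e.g., at OSD startup) from [0, rand_factor * base).
    std::uniform_int_distribution<int> dist(0, rand_factor * base - 1);
    return base + dist(rng);
}

int main() {
    std::random_device rd;
    std::mt19937 rng(rd());
    // With rand_factor = 1, each OSD's threshold lands in [base, 2 * base),
    // spreading directory splits over time instead of all happening at once.
    for (int osd = 0; osd < 5; ++osd) {
        std::cout << "osd." << osd << " split threshold: "
                  << randomized_split_threshold(kFilesPerDir, 1, rng) << "\n";
    }
    return 0;
}

Because the offset would be drawn once per OSD (or per directory) rather than on every check, each OSD still splits deterministically; only the trigger point varies between OSDs, which is what spreads the latency spikes out over time.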

For more information, see Bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1291632, and in particular comment 55 of bug 1332874: https://bugzilla.redhat.com/show_bug.cgi?id=1332874#c55



Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3.2
Upstream Hammer

Comment 9 Josh Durgin 2017-05-18 21:01:37 UTC
*** Bug 1219974 has been marked as a duplicate of this bug. ***

Comment 10 Ian Colle 2017-08-01 03:49:38 UTC
https://github.com/ceph/ceph/pull/15689 Merged
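
For reference, the merged change exposes the randomization as a FileStore tunable. Assuming the option is named filestore_split_rand_factor (as documented upstream for luminous; confirm against the shipped ceph-12.1.2 documentation) and with purely illustrative values, a ceph.conf sketch might look like:

[osd]
# Existing split/merge tunables (illustrative values)
filestore merge threshold = 10
filestore split multiple = 2
# Random factor added to the split threshold so OSDs do not all split at once
filestore split rand factor = 20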

Comment 17 errata-xmlrpc 2017-12-05 23:29:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387

