Bug 1636267 - [RFE] Introduce an option or flag to throttle the pg deletion process
Summary: [RFE] Introduce an option or flag to throttle the pg deletion process
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RADOS
Version: 3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 3.2
Assignee: David Zafman
QA Contact: Parikshith
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1629656
 
Reported: 2018-10-04 21:38 UTC by Vikhyat Umrao
Modified: 2019-02-05 21:12 UTC (History)
CC: 11 users

Fixed In Version: RHEL: ceph-12.2.8-29.el7cp Ubuntu: ceph_12.2.8-27redhat1
Doc Type: Enhancement
Doc Text:
.New options: `osd_delete_sleep`, `osd_remove_threads`, and `osd_recovery_threads`

This update adds a new configuration option, `osd_delete_sleep`, to throttle object delete operations. In addition, the `osd_disk_threads` option has been replaced with the `osd_remove_threads` and `osd_recovery_threads` options, so that users can configure the threads for these two tasks separately. These changes help throttle the rate of object delete operations and reduce their impact on client operations, which is especially important when migrating placement groups (PGs). With these options set, every removal thread sleeps for the specified number of seconds between small batches of removal operations.
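As a sketch of how these options might be set, here is an illustrative `ceph.conf` fragment. The option names come from this fix; the values are assumptions for illustration, not tuned recommendations:

```ini
# ceph.conf fragment (illustrative values, not recommendations)
[osd]
# Each removal thread sleeps this many seconds between small
# batches of removal operations, throttling PG deletion.
osd_delete_sleep = 1

# Replaces osd_disk_threads: thread count for removal work.
osd_remove_threads = 1

# Replaces osd_disk_threads: thread count for recovery work.
osd_recovery_threads = 1
```

The sleep can also be changed on a live cluster at runtime, for example with `ceph tell osd.* injectargs '--osd_delete_sleep 1'`, without restarting the OSDs.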
Clone Of:
Environment:
Last Closed: 2019-01-03 19:02:01 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0020 None None None 2019-01-03 19:02:15 UTC
Ceph Project Bug Tracker 36321 None None None 2018-10-04 21:42:55 UTC
Ceph Project Bug Tracker 36474 None None None 2018-10-23 23:31:19 UTC
Red Hat Knowledge Base (Solution) 3888822 None None None 2019-02-05 21:12:34 UTC

Description Vikhyat Umrao 2018-10-04 21:38:58 UTC
Description of problem:
[RFE] Introduce an option or flag to throttle the pg deletion process 

We have an interesting request; maybe engineering has some insight too. Once a PG migrates, it gets removed from the HDD. That makes sense, but in our case it hurts badly: the deletion is pegging the HDD and is essentially the only thing blocking client requests. We do not actually care if the data stays on these drives, since we are trashing them anyway. Is there a way to reduce how aggressive this cleanup process is?

A way to limit the impact of cleaning up stray PGs, or even to skip the cleanup completely on old disks, would help.

This would help during pool migration from old OSDs (disks) to new hardware.

Version-Release number of selected component (if applicable):
RHCS 3

Comment 5 Vikhyat Umrao 2018-10-10 22:26:13 UTC
https://github.com/ceph/ceph/pull/24501

Comment 29 errata-xmlrpc 2019-01-03 19:02:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020

