Bug 1636267

Summary: [RFE] Introduce an option or flag to throttle the pg deletion process
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Vikhyat Umrao <vumrao>
Component: RADOS    Assignee: David Zafman <dzafman>
Status: CLOSED ERRATA QA Contact: Parikshith <pbyregow>
Severity: medium Docs Contact: Bara Ancincova <bancinco>
Priority: medium    
Version: 3.1    CC: ceph-eng-bugs, ceph-qe-bugs, dzafman, hnallurv, jbrier, jdurgin, kchai, kdreyer, nojha, rperiyas, tserlin
Target Milestone: rc    Keywords: FutureFeature
Target Release: 3.2   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: RHEL: ceph-12.2.8-29.el7cp; Ubuntu: ceph_12.2.8-27redhat1    Doc Type: Enhancement
Doc Text:
.New options: `osd_delete_sleep`, `osd_remove_threads`, and `osd_recovery_threads`
This update adds a new configuration option, `osd_delete_sleep`, to throttle object delete operations. In addition, the `osd_disk_threads` option has been replaced with the `osd_remove_threads` and `osd_recovery_threads` options so that users can configure the threads for these tasks separately. These changes throttle the rate of object delete operations to reduce the impact on client operations, which is especially important when migrating placement groups (PGs). When these options are used, every removal thread sleeps for the specified number of seconds between small batches of removal operations.
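
For illustration only, a minimal sketch of how the options named above might be set in ceph.conf on a build that includes this enhancement; the values shown (a 1-second sleep, single threads) are example values, not tuned recommendations:

    [osd]
    # Seconds each removal thread sleeps between small batches of delete
    # operations; a non-zero value slows PG deletion to protect client I/O.
    osd_delete_sleep = 1
    # Removal and recovery thread counts, now configurable separately
    # (replacing osd_disk_threads); example values only.
    osd_remove_threads = 1
    osd_recovery_threads = 1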
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-01-03 19:02:01 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1629656    

Description Vikhyat Umrao 2018-10-04 21:38:58 UTC
Description of problem:
[RFE] Introduce an option or flag to throttle the pg deletion process 

We have an interesting request; maybe engineering has some insight too. Once a PG migrates, it gets removed from the HDD. That makes sense, but in our case it is hurting pretty badly: the deletion pegs the HDD and appears to be the main thing blocking client requests. We actually don't care whether the data stays on these drives, since we are trashing them anyway. Is there a way to reduce how aggressive this cleanup process is?

A way to limit the impact of cleaning up stray PGs, or even to skip the cleanup entirely on disks that are being retired, would help.

This is useful during pool migrations from old hardware to new OSDs (disks).

Version-Release number of selected component (if applicable):
RHCS 3

Comment 5 Vikhyat Umrao 2018-10-10 22:26:13 UTC
https://github.com/ceph/ceph/pull/24501
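
For reference, a hedged sketch of applying the new throttle at runtime once the change above is installed; the 1-second value is only an example:

    # Inject the sleep into all running OSDs without a restart
    # (example value; persist it in ceph.conf to survive reboots).
    ceph tell osd.* injectargs '--osd_delete_sleep 1'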

Comment 29 errata-xmlrpc 2019-01-03 19:02:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020