Bug 1732137

Summary: [RFE] Change the osd_recovery_max_omap_entries_per_chunk default value (8096) based on the disk type
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RADOS
Assignee: Neha Ojha <nojha>
Status: CLOSED DEFERRED
QA Contact: Manohar Murthy <mmurthy>
Severity: medium
Priority: medium
Version: 3.2
CC: assingh, ceph-eng-bugs, dzafman, jdurgin, kchai, nojha
Target Milestone: ---
Keywords: FutureFeature, Performance
Target Release: 5.1
Hardware: Unspecified
OS: Unspecified
Last Closed: 2020-12-11 01:12:43 UTC
Type: Bug
Bug Blocks: 1733598    

Description Vikhyat Umrao 2019-07-22 19:06:59 UTC
Description of problem:
[RFE] Change the osd_recovery_max_omap_entries_per_chunk default value (8096) based on the disk type.

Version-Release number of selected component (if applicable):
RHCS 3.2

The request is to add per-device-type variants of this option, following the existing _hdd/_ssd pattern used by options such as osd_recovery_sleep_hdd and osd_recovery_sleep_ssd (see the example after this list):

osd_recovery_max_omap_entries_per_chunk
osd_recovery_max_omap_entries_per_chunk_ssd
osd_recovery_max_omap_entries_per_chunk_hdd
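
Note that a per-device-class override is already possible with a config mask on releases that have the centralized config database (Mimic and later; on the Luminous-based RHCS 3.2, the same effect would need ceph.conf sections or injectargs). A minimal sketch, with illustrative values only, not recommendations:

  # Inspect the current value on one OSD
  ceph config get osd.0 osd_recovery_max_omap_entries_per_chunk

  # Override per device class via a config mask (example values)
  ceph config set osd/class:hdd osd_recovery_max_omap_entries_per_chunk 4096
  ceph config set osd/class:ssd osd_recovery_max_omap_entries_per_chunk 16384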

Comment 2 Giridhar Ramaraju 2019-08-05 13:08:46 UTC
Updating the QA Contact to Hemant. Hemant will reroute it to the appropriate QE Associate.

Regards,
Giri

Comment 4 Yaniv Kaul 2020-12-09 13:25:56 UTC
Moving to 5.1, although we have not changed this in over a year, and since I'm unsure it's still relevant in Pacific, we may opt to close it.

Comment 5 Vikhyat Umrao 2020-12-11 01:12:43 UTC
This works fine now with BlueStore; we have not seen new reports where we need this option for specific device types. Closing this one for now.
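
As a quick way to confirm the shipped default (8096) on a running cluster, Mimic or later:

  ceph config help osd_recovery_max_omap_entries_per_chunk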