Bug 1505559

Summary: [RHCS 3] OSD heartbeat timeout due to too many omap entries read in each 'chunk' being backfilled
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RADOS
Assignee: Josh Durgin <jdurgin>
Status: CLOSED ERRATA
QA Contact: Ramakrishnan Periyasamy <rperiyas>
Severity: medium
Priority: high
Version: 2.4
CC: ceph-eng-bugs, dzafman, flucifre, hnallurv, kchai, kdreyer
Target Milestone: z1
Keywords: CodeChange
Target Release: 3.0
Hardware: x86_64
OS: Linux
Fixed In Version: RHEL: ceph-12.2.1-43.el7cp; Ubuntu: ceph_12.2.1-45redhat1xenial
Doc Type: No Doc Update
Last Closed: 2018-03-08 15:51:04 UTC
Type: Bug
Bug Blocks: 1505561

Description Vikhyat Umrao 2017-10-23 21:09:04 UTC
Description of problem:
OSD heartbeat timeout due to too many omap entries read in each 'chunk' being backfilled


Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 2.4 

How reproducible:
Always in the scale lab


Additional info:

Testing in the scale lab uncovered an area where OSDs were
hitting heartbeat timeouts during backfill.

This turned out to be caused by too many omap entries being read in each
'chunk' being backfilled, with the heartbeat only being reset once an entire
chunk was assembled. We should ask Athena to tune this down for their
bucket index OSDs by setting:

Max number of omap entries per chunk:

    osd_recovery_max_omap_entries_per_chunk = 1024
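As a workaround on affected clusters, the lowered limit can be set in ceph.conf and picked up on OSD restart (a minimal sketch; applying it only to the bucket-index OSDs' config sections is an option not shown here):

```ini
# ceph.conf -- lower the per-chunk omap entry limit during recovery/backfill
# (workaround until the mid-chunk heartbeat-reset patch lands)
[osd]
osd_recovery_max_omap_entries_per_chunk = 1024
```

The same value can also be injected at runtime with `ceph tell osd.* injectargs '--osd_recovery_max_omap_entries_per_chunk 1024'`, and the current value checked with `ceph daemon osd.<id> config get osd_recovery_max_omap_entries_per_chunk`.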

Josh is planning to add a patch for 3.0 and 2.5 to reset the heartbeat in the middle of gathering a chunk, which should make the current default of 64000
entries per backfill chunk work even with large omap stores.
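The shape of the planned fix can be illustrated with a small sketch (this is not Ceph's actual code; the function and parameter names are illustrative): while assembling a backfill chunk, the loop invokes the heartbeat-reset callback every fixed number of omap entries, instead of only once per completed chunk.

```python
def gather_omap_chunk(entries, chunk_size, reset_heartbeat, heartbeat_interval=1000):
    """Assemble one backfill chunk, pinging the heartbeat as we go.

    Illustrative sketch only -- the real fix lives in the OSD's
    backfill path, not in a standalone function like this.
    """
    chunk = []
    for i, entry in enumerate(entries):
        if len(chunk) >= chunk_size:
            break
        chunk.append(entry)
        # The key point of the fix: reset the heartbeat *during* chunk
        # assembly, not just after the full chunk is built, so a large
        # omap store cannot starve the heartbeat past its timeout.
        if (i + 1) % heartbeat_interval == 0:
            reset_heartbeat()
    return chunk

resets = []
chunk = gather_omap_chunk(range(64000), 64000, lambda: resets.append(None))
print(len(chunk), len(resets))  # -> 64000 64
```

With the default 64000-entry chunk this yields a heartbeat reset every 1000 entries, so the timeout is bounded by the cost of reading 1000 omap entries rather than an entire chunk.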

Comment 7 Ramakrishnan Periyasamy 2018-02-07 10:48:58 UTC
Moving this bug to verified state.

Created objects with heavy omap entries (around 50000 per object) and triggered random OSD failures; no issues were observed.

Default value of "osd_recovery_max_omap_entries_per_chunk": "8096"

Comment 10 errata-xmlrpc 2018-03-08 15:51:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0474