Bug 1505559 - [RHCS 3] OSD heartbeat timeout due to too many omap entries read in each 'chunk' being backfilled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 2.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: z1
Target Release: 3.0
Assignee: Josh Durgin
QA Contact: Ramakrishnan Periyasamy
URL:
Whiteboard:
Depends On:
Blocks: 1505561
 
Reported: 2017-10-23 21:09 UTC by Vikhyat Umrao
Modified: 2021-06-10 13:20 UTC
6 users

Fixed In Version: RHEL: ceph-12.2.1-43.el7cp Ubuntu: ceph_12.2.1-45redhat1xenial
Doc Type: No Doc Update
Doc Text:
undefined
Clone Of:
Environment:
Last Closed: 2018-03-08 15:51:04 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 21897 0 None None None 2017-10-23 21:22:55 UTC
Red Hat Bugzilla 1505561 0 high CLOSED [RHCS 2.y] OSD heartbeat timeout due to too many omap entries read in each 'chunk' being backfilled 2021-06-10 13:20:05 UTC
Red Hat Knowledge Base (Solution) 3222691 0 None None None 2017-10-23 21:37:07 UTC
Red Hat Product Errata RHBA-2018:0474 0 normal SHIPPED_LIVE Red Hat Ceph Storage 3.0 bug fix update 2018-03-08 20:51:53 UTC

Internal Links: 1505561

Description Vikhyat Umrao 2017-10-23 21:09:04 UTC
Description of problem:
OSD heartbeat timeout due to too many omap entries read in each 'chunk' being backfilled


Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 2.4 

How reproducible:
Always in the scale lab


Additional info:

Testing in the scale lab uncovered an area where osds were
hitting heartbeat timeouts during backfill.

This turned out to be due to too many omap entries being read in each 'chunk'
being backfilled, with the heartbeat only reset once an entire chunk was assembled. We should ask Athena to tune this down for their
bucket index OSDs by setting:

Max number of omap entries per chunk:

    osd_recovery_max_omap_entries_per_chunk = 1024

Josh is planning to add a patch for 3.0 and 2.5 to reset the heartbeat in the middle of gathering a chunk, which should make the current default of 64000
entries per backfill chunk work even with large stores.
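For reference, the tuning suggested above would typically be applied in the [osd] section of ceph.conf; the 1024 value is the one proposed in this report for bucket index OSDs, not a general recommendation:

```
[osd]
osd_recovery_max_omap_entries_per_chunk = 1024
```

It can also be injected into running OSDs without a restart, e.g. `ceph tell osd.* injectargs '--osd_recovery_max_omap_entries_per_chunk 1024'` (a sketch; the change applied this way does not persist across OSD restarts unless also written to ceph.conf).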

Comment 7 Ramakrishnan Periyasamy 2018-02-07 10:48:58 UTC
Moving this bug to verified state.

Created objects with heavy omap entries (around 50000) and triggered random OSD failures; no issues were observed.

Default value of "osd_recovery_max_omap_entries_per_chunk": "8096"
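The value a running OSD is actually using can be checked through its admin socket (a usage sketch; osd.0 is an assumed OSD id, and this must be run on the node hosting that OSD):

```
# Query the running value from one OSD's admin socket
ceph daemon osd.0 config get osd_recovery_max_omap_entries_per_chunk
```

On a build containing the fix, this should report the 8096 default noted above.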

Comment 10 errata-xmlrpc 2018-03-08 15:51:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0474

