Bug 1682967 - [RHCS3] Ceph doesn't change OSD status when osd daemon faces read error
Summary: [RHCS3] Ceph doesn't change OSD status when osd daemon faces read error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: z2
Target Release: 4.1
Assignee: David Zafman
QA Contact: Pawan
Docs Contact: Aron Gunn
URL:
Whiteboard:
Depends On:
Blocks: 1658754 1816167 1855470
 
Reported: 2019-02-26 00:11 UTC by Keisuke Matsuzaki
Modified: 2024-03-25 15:14 UTC
CC: 13 users

Fixed In Version: ceph-14.2.8-100.el8cp, ceph-14.2.8-100.el7cp
Doc Type: Enhancement
Doc Text:
.The storage cluster status changes when a Ceph OSD encounters an I/O error

With this release, the Ceph Monitor now has a `mon_osd_warn_num_repaired` option, which is set to `10` by default. If any Ceph OSD has repaired more than this many I/O errors in stored data, an `OSD_TOO_MANY_REPAIRS` health warning is generated. To clear this warning, the new `clear_shards_repaired` option has been added to the `ceph tell` command. For example:

[source,subs="verbatim,quotes"]
----
ceph tell osd._NUMBER_ clear_shards_repaired [_COUNT_]
----

By default, the `clear_shards_repaired` option sets the repair count to `0`. To be warned again if additional Ceph OSD repairs are performed, you can specify the value of the `mon_osd_warn_num_repaired` option.
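For illustration, a minimal, unverified command sequence an administrator might run once the warning appears; the OSD ID `osd.3` and the threshold value `20` are hypothetical, and the threshold change assumes the option is adjusted through the standard `ceph config set` interface:

[source]
----
# Inspect cluster health; an OSD that has repaired more reads than
# mon_osd_warn_num_repaired allows raises OSD_TOO_MANY_REPAIRS
ceph health detail

# Optionally raise the warning threshold (hypothetical value of 20)
ceph config set mon mon_osd_warn_num_repaired 20

# Reset the repair count on the affected OSD so the warning clears
ceph tell osd.3 clear_shards_repaired
----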
Clone Of:
Clones: 1855470
Environment:
Last Closed: 2020-09-30 17:24:49 UTC
Embargoed:




Links
- Ceph Project Bug Tracker 41564 (last updated 2019-08-28 20:37:49 UTC)
- GitHub ceph/ceph pull 36379, closed: "nautilus: mon: Warn when too many reads are repaired on an OSD" (last updated 2021-02-04 10:09:19 UTC)
- Red Hat Product Errata RHBA-2020:4144 (last updated 2020-09-30 17:25:27 UTC)

Comment 39 errata-xmlrpc 2020-09-30 17:24:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

