Bug 1414613 - jewel: osd/ECBackend.cc: 201: FAILED assert(res.errors.empty())
Summary: jewel: osd/ECBackend.cc: 201: FAILED assert(res.errors.empty())
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 2.2
Assignee: David Zafman
QA Contact: shylesh
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1412948
 
Reported: 2017-01-19 02:21 UTC by Benjamin Schmaus
Modified: 2021-03-11 14:54 UTC
CC: 9 users

Fixed In Version: RHEL: ceph-10.2.5-11.el7cp Ubuntu: ceph_10.2.5-5redhat1xenial
Doc Type: Bug Fix
Doc Text:
.OSD nodes no longer crash when an I/O error occurs
Previously, if an I/O error occurred on one of the objects in an erasure-coded pool during recovery, the primary OSD of the placement group containing the object hit a runtime check (`assert(res.errors.empty())` in `osd/ECBackend.cc`). Consequently, that OSD terminated unexpectedly. With this update, Ceph leaves the object unrecovered instead of hitting the runtime check. As a result, OSDs no longer crash in such a case.
Clone Of:
Environment:
Last Closed: 2017-03-14 15:48:05 UTC
Embargoed:




Links:
- Ceph Project Bug Tracker 17970 (2017-01-19 02:21:22 UTC)
- Red Hat Product Errata RHBA-2017:0514: SHIPPED_LIVE, Red Hat Ceph Storage 2.2 bug fix and enhancement update (2017-03-21 07:24:26 UTC)

Description Benjamin Schmaus 2017-01-19 02:21:22 UTC
Description of problem:

A backport to Jewel is needed for the issue below.

Upstream bug: http://tracker.ceph.com/issues/17970
Upstream merge: https://github.com/ceph/ceph/pull/12088


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 shylesh 2017-01-19 07:46:55 UTC
Hi,

Could you please let me know the steps to verify this bug? I went through the upstream bug and the respective PR but couldn't come up with concrete steps.

Thanks,
Shylesh

Comment 7 Ian Colle 2017-01-21 17:30:33 UTC
David, please merge the fix upstream and backport it downstream.

Comment 21 errata-xmlrpc 2017-03-14 15:48:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0514.html

