Bug 1414613
Summary: jewel: osd/ECBackend.cc: 201: FAILED assert(res.errors.empty())

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Benjamin Schmaus <bschmaus> |
| Component: | RADOS | Assignee: | David Zafman <dzafman> |
| Status: | CLOSED ERRATA | QA Contact: | shylesh <shmohan> |
| Severity: | medium | Docs Contact: | Bara Ancincova <bancinco> |
| Priority: | unspecified | | |
| Version: | 2.1 | CC: | ceph-eng-bugs, ceph-qe-bugs, dzafman, hnallurv, icolle, kchai, kdreyer, shmohan, vumrao |
| Target Milestone: | rc | Keywords: | CodeChange |
| Target Release: | 2.2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-10.2.5-11.el7cp; Ubuntu: ceph_10.2.5-5redhat1xenial | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-03-14 15:48:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1412948 | | |

Doc Text:

> .OSD nodes no longer crash when an I/O error occurs
> Previously, if an I/O error occurred on one of the objects in an erasure-coded pool during recovery, the primary OSD of the placement group containing the object hit the runtime check. Consequently, that OSD terminated unexpectedly. With this update, Ceph leaves the object unrecovered without hitting the runtime check. As a result, OSDs no longer crash in this case.
Description

Benjamin Schmaus 2017-01-19 02:21:22 UTC

Hi,

Could you please let me know the steps to verify this bug? I went through the upstream bug and the respective PR but couldn't come up with concrete steps.

Thanks,
Shylesh

David - please merge the fix upstream and backport it downstream.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0514.html