Bug 2109886
Summary: | [RADOS] Two OSDs are not coming up after rebooting entire cluster | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Uday kurundwade <ukurundw> |
Component: | RADOS | Assignee: | Adam Kupczyk <akupczyk> |
Status: | CLOSED ERRATA | QA Contact: | skanta |
Severity: | high | Docs Contact: | Akash Raj <akraj> |
Priority: | unspecified | ||
Version: | 5.2 | CC: | adking, akraj, akupczyk, amathuri, anarnold, bhubbard, ceph-eng-bugs, cephqe-warriors, choffman, ksirivad, lflores, mkasturi, nojha, pdhange, rfriedma, rmandyam, rzarzyns, skanta, sseshasa, vumrao |
Target Milestone: | --- | ||
Target Release: | 5.3 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-16.2.10-4.el8cp | Doc Type: | Bug Fix |
Doc Text: |
.RocksDB errors no longer occur for small writes
BlueStore employs a strategy of deferring small writes for HDDs and stores data in RocksDB.
Cleaning deferred data from RocksDB is a background process which is not synchronized with BlueFS.
With this release, deferred replay no longer overwrites BlueFS data, and RocksDB errors such as the following no longer occur:
* `osd_superblock` corruption.
* `CURRENT` file does not end with a newline.
* `.sst` files checksum error.
[NOTE]
====
The deferred data is not written, as the write location might either contain a proper object or be empty.
It is not possible to corrupt object data this way. BlueFS is the only entity that can allocate this space.
====
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2023-01-11 17:40:00 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 2126049 |
Comment 4
Prashant Dhange
2022-07-26 14:22:56 UTC
Hi Akash, I just deleted some parts that no longer made sense. The new text is:

"BlueStore employs a strategy of deferring small writes for HDDs and stores data in RocksDB. Cleaning deferred data from RocksDB is a background process which is not synchronized with BlueFS. With this fix, some RocksDB errors do not occur, such as:
* `osd_superblock` corruption.
* `CURRENT` file does not end with a newline.
* `.sst` files checksum error."

But it misses an explanation of what the fix actually is:

"The fix is that deferred replay no longer overwrites BlueFS data."

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076
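The core of the fix, as described above, is that deferred replay must not write into disk ranges that BlueFS now owns. The following is a minimal, self-contained C++ sketch of that idea only; it is not Ceph's BlueStore code, and all names (`Extent`, `BlueFsOwnership`, `replay_deferred_write`) and the interval-map layout are assumptions made purely for illustration.

```cpp
// Illustrative sketch (NOT Ceph code): a deferred-write replay that skips
// disk ranges currently owned by BlueFS, so RocksDB files are never clobbered.
#include <cstdint>
#include <iostream>
#include <iterator>
#include <map>

struct Extent {
  uint64_t offset; // byte offset on the shared block device
  uint64_t length; // length of the range in bytes
};

// Tracks ranges that BlueFS has allocated (e.g. for .sst, CURRENT, MANIFEST).
// Hypothetical helper; keyed by offset to allow simple interval lookups.
class BlueFsOwnership {
  std::map<uint64_t, uint64_t> extents_; // offset -> length
public:
  void add(const Extent& e) { extents_[e.offset] = e.length; }

  // True if [e.offset, e.offset + e.length) overlaps any BlueFS extent.
  bool overlaps(const Extent& e) const {
    auto it = extents_.upper_bound(e.offset);
    if (it != extents_.begin()) {
      auto prev = std::prev(it);
      if (prev->first + prev->second > e.offset) return true;
    }
    return it != extents_.end() && it->first < e.offset + e.length;
  }
};

// Replays one deferred write unless its target range is now owned by BlueFS.
// Skipping is safe: the range either holds a proper object or is empty, so
// object data cannot be corrupted, and BlueFS data is left untouched.
bool replay_deferred_write(const BlueFsOwnership& bluefs, const Extent& target) {
  if (bluefs.overlaps(target)) {
    std::cout << "skip deferred write at 0x" << std::hex << target.offset
              << " (range now owned by BlueFS)\n";
    return false;
  }
  std::cout << "replay deferred write at 0x" << std::hex << target.offset << "\n";
  return true;
}

int main() {
  BlueFsOwnership bluefs;
  bluefs.add({0x100000, 0x10000}); // pretend BlueFS placed an .sst file here

  replay_deferred_write(bluefs, {0x104000, 0x1000}); // overlaps -> skipped
  replay_deferred_write(bluefs, {0x200000, 0x1000}); // clear    -> replayed
  return 0;
}
```

The point of the sketch is only the ordering of checks: ownership of the target range is consulted at replay time, which mirrors the doc text's statement that the skipped location either contains a proper object or is empty, so skipping cannot corrupt object data.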