Bug 2109886 - [RADOS] Two OSDs are not coming up after rebooting entire cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 5.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3
Assignee: Adam Kupczyk
QA Contact: skanta
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2126049
 
Reported: 2022-07-22 12:24 UTC by Uday kurundwade
Modified: 2023-01-11 17:41 UTC
CC List: 20 users

Fixed In Version: ceph-16.2.10-4.el8cp
Doc Type: Bug Fix
Doc Text:
.RocksDB error does not occur for small writes
BlueStore employs a strategy of deferring small writes for HDDs and stores data in RocksDB. Cleaning deferred data from RocksDB is a background process which is not synchronized with BlueFS. With this release, deferred replay no longer overwrites BlueFS data, and some RocksDB errors no longer occur, such as:

* `osd_superblock` corruption.
* CURRENT does not end with newline.
* `.sst` files checksum error.

[NOTE]
====
It is harmless to write deferred data, as the write location might either contain a proper object or be empty; it is not possible to corrupt object data this way. BlueFS is the only entity that can allocate this space.
====
Clone Of:
Environment:
Last Closed: 2023-01-11 17:40:00 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 47296 0 None Merged pacific: os/bluestore: Fix collision between BlueFS and BlueStore deferred writes 2022-08-19 14:39:43 UTC
Red Hat Issue Tracker RHCEPH-4885 0 None None None 2022-07-22 12:29:19 UTC
Red Hat Product Errata RHSA-2023:0076 0 None None None 2023-01-11 17:41:01 UTC

Comment 4 Prashant Dhange 2022-07-26 14:22:56 UTC
Hi Uday,

I would like to inspect the disks associated with the down OSDs. Can you provide access to this cluster if it is still in the same state as when you reported this issue?

Regards,
Prashant

Comment 21 Adam Kupczyk 2022-10-04 17:09:51 UTC
Hi Akash,

I just deleted some parts that no longer made sense.
The new text is:
"
BlueStore employs a strategy of deferring small writes for HDDs and stores data in RocksDB.
Cleaning deferred data from RocksDB is a background process which is not synchronized with BlueFS.

With this fix, some RocksDB errors do not occur, such as:

* `osd_superblock` corruption.
* CURRENT does not end with newline.
* `.sst` files checksum error.
"

But it misses an explanation of what the fix actually is:
"
The fix is that deferred replay no longer overwrites BlueFS data.
"

Comment 42 errata-xmlrpc 2023-01-11 17:40:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076

