Hi Uday, I would like to inspect the disks associated with the down OSDs. Can you provide access to this cluster if it is still in the same state as when you reported this issue? Regards, Prashant
Hi Akash, I just deleted some parts that no longer made sense. The new text is:
"BlueStore employs a strategy of deferring small writes for HDDs and stores the deferred data in RocksDB. Cleaning deferred data from RocksDB is a background process that is not synchronized with BlueFS. With this fix, some RocksDB errors no longer occur, such as:
* `osd_superblock` corruption.
* CURRENT does not end with newline.
* `.sst` files checksum error."
But it is missing an explanation of what the fix actually is:
"The fix is that deferred replay no longer overwrites BlueFS data."
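For context on that last sentence, here is a minimal sketch of the idea in C++ (illustrative only, not the actual BlueStore code; the extent type, function names, and overlap check are assumptions): deferred replay checks each pending write against the extents currently owned by BlueFS and skips any write that would land on them, so replay can no longer clobber BlueFS-managed data such as `.sst` files.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical extent type; BlueStore uses its own extent/interval structures.
struct Extent {
  uint64_t offset;
  uint64_t length;
  bool overlaps(const Extent& other) const {
    return offset < other.offset + other.length &&
           other.offset < offset + length;
  }
};

// Sketch of a guarded deferred replay: any deferred write whose target
// extent is now owned by BlueFS is skipped instead of being replayed.
void replay_deferred_writes(const std::vector<Extent>& deferred,
                            const std::vector<Extent>& bluefs_owned) {
  for (const auto& w : deferred) {
    bool owned_by_bluefs = false;
    for (const auto& b : bluefs_owned) {
      if (w.overlaps(b)) {
        owned_by_bluefs = true;
        break;
      }
    }
    if (owned_by_bluefs) {
      std::cout << "skip deferred write at 0x" << std::hex << w.offset
                << std::dec << " (extent now owned by BlueFS)\n";
      continue;
    }
    // ... the real code would issue the write to the block device here ...
    std::cout << "replay deferred write at 0x" << std::hex << w.offset
              << std::dec << "\n";
  }
}

int main() {
  std::vector<Extent> deferred     = {{0x1000, 0x1000}, {0x8000, 0x1000}};
  std::vector<Extent> bluefs_owned = {{0x8000, 0x4000}};
  replay_deferred_writes(deferred, bluefs_owned);
}
```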
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:0076