Bug 2003207 - [Bluestore] Remove the possibility of replay log and file inconsistency
Summary: [Bluestore] Remove the possibility of replay log and file inconsistency
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.1
Assignee: Adam Kupczyk
QA Contact: skanta
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-09-10 15:49 UTC by Vikhyat Umrao
Modified: 2022-04-04 10:21 UTC
CC: 10 users

Fixed In Version: ceph-16.2.6-1.el8cp
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-04 10:21:38 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 51130 0 None None None 2021-09-10 15:50:12 UTC
Github ceph ceph pull 42424 0 None None None 2021-09-10 15:56:18 UTC
Red Hat Issue Tracker RHCEPH-1667 0 None None None 2021-09-10 15:51:09 UTC
Red Hat Product Errata RHSA-2022:1174 0 None None None 2022-04-04 10:21:50 UTC

Description Vikhyat Umrao 2021-09-10 15:49:01 UTC
Description of problem:
[Bluestore] Remove the possibility of replay log and file inconsistency
https://tracker.ceph.com/issues/50965

Under power-off conditions, BlueFS can create corrupted files: it is possible to end up with a file that is declared in BlueFS metadata but whose content was never persisted. This can happen when the BlueFS replay log lives on device A while the file data was just written to device B.

Scenario:
1) write to file h1 on the SLOW device
2) flush h1 (this queues an entry for h1 in the BlueFS replay log, but no fdatasync of the SLOW device has happened yet)
3) write to file h2 on the DB device
4) fsync h2 (this forces the replay log to be written, after an fdatasync of the DB device only)
5) power off

As a result, file h1 is properly declared in the replay log, but its content on the SLOW device is uninitialized.
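The sequence above can be sketched as a small simulation. This is a hypothetical model, not Ceph code: all names (Device, BlueFSSim, etc.) are illustrative, and it assumes only that buffered writes are lost on power-off while fdatasync'd data survives.

```python
# Minimal model of the BlueFS ordering bug: the replay log is persisted
# (carrying h1's metadata) before h1's data on the SLOW device is synced.

class Device:
    def __init__(self):
        self.buffered = {}   # writes not yet persisted; lost on power-off
        self.durable = {}    # writes that survive power loss

    def write(self, name, data):
        self.buffered[name] = data

    def fdatasync(self):
        self.durable.update(self.buffered)
        self.buffered.clear()

class BlueFSSim:
    def __init__(self):
        self.slow = Device()   # device B: holds h1's data
        self.db = Device()     # device A: holds the replay log and h2
        self.pending_log = []  # log entries queued but not yet written
        self.durable_log = []  # log entries persisted on the db device

    def flush(self, name):
        # Queues a replay-log entry for the file; its data is NOT synced.
        self.pending_log.append(name)

    def fsync(self, dev, name):
        self.pending_log.append(name)
        dev.fdatasync()  # syncs this device only
        # Writing the replay log persists ALL pending entries, including
        # h1's, even though h1's data on the SLOW device was never synced.
        self.db.write("replay_log", list(self.durable_log + self.pending_log))
        self.db.fdatasync()
        self.durable_log += self.pending_log
        self.pending_log = []

    def power_off(self):
        self.slow.buffered.clear()
        self.db.buffered.clear()

fs = BlueFSSim()
fs.slow.write("h1", b"h1-data")   # 1) write h1 on the SLOW device
fs.flush("h1")                    # 2) flush h1 (log entry queued)
fs.db.write("h2", b"h2-data")     # 3) write h2 on the DB device
fs.fsync(fs.db, "h2")             # 4) fsync h2 (replay log written)
fs.power_off()                    # 5) power off

print("h1" in fs.durable_log)     # True: h1 is declared in the replay log
print("h1" in fs.slow.durable)    # False: h1's data never reached disk
```

On replay, such a filesystem would reconstruct h1 from the log and serve uninitialized content, which is exactly the inconsistency the fix removes.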


Version-Release number of selected component (if applicable):
RHCS 4.x, 5.x

Comment 7 errata-xmlrpc 2022-04-04 10:21:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

