Bug 2248999 - [cee/sd][cephfs] mds pods are crashing with ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.0z2
Assignee: Xiubo Li
QA Contact: Hemanth Kumar
Docs Contact: Disha Walvekar
URL:
Whiteboard:
Depends On:
Blocks: 2270485
 
Reported: 2023-11-10 05:32 UTC by Xiubo Li
Modified: 2024-05-09 17:09 UTC
CC List: 7 users

Fixed In Version: ceph-18.2.0-166.el9cp
Doc Type: Bug Fix
Doc Text:
Previously, after the journal logs were successfully flushed, the lockers' state could be set to LOCK_SYNC or LOCK_PREXLOCK while the xlock count was non-zero. The MDS did not allow that transition and crashed on the assertion. With this fix, the MDS allows the lockers' state to be set to LOCK_SYNC or LOCK_PREXLOCK when the xlock count is non-zero, and the MDS no longer crashes. (A simplified sketch of this check follows the Links table below.)
Clone Of:
Environment:
Last Closed: 2024-05-07 12:10:16 UTC
Embargoed:




Links
System                             ID               Last Updated
Ceph Project Bug Tracker           62524            2023-11-10 05:36:34 UTC
Red Hat Issue Tracker              RHCEPH-7889      2023-11-10 05:32:36 UTC
Red Hat Knowledge Base (Solution)  7045353          2023-11-18 20:19:32 UTC
Red Hat Product Errata             RHBA-2024:2743   2024-05-07 12:10:21 UTC
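
For illustration, a minimal C++ sketch of the state-transition check described
in the Doc Text above. This is an assumption-laden simplification: the class
and member names (SimpleLock, num_xlock_, xlock_get, xlock_put, set_state)
only loosely mirror the CephFS lock code, and the relaxed condition stands in
for the actual upstream patch (tracked upstream as Ceph tracker issue 62524).
It is not the real Ceph source.

// Illustrative sketch only; not the actual CephFS code or patch.
#include <cassert>

enum LockState {
  LOCK_SYNC, LOCK_LOCK, LOCK_PREXLOCK,
  LOCK_XLOCK, LOCK_XLOCKDONE, LOCK_XLOCKSNAP, LOCK_LOCK_XLOCK,
};

class SimpleLock {
  LockState state_ = LOCK_SYNC;
  int num_xlock_ = 0;  // outstanding exclusive-lock holders

public:
  LockState state() const { return state_; }
  bool is_xlocked() const { return num_xlock_ > 0; }

  void xlock_get() { state_ = LOCK_XLOCK; ++num_xlock_; }
  void xlock_put() { --num_xlock_; }

  // Before the fix, the check behind the ceph_assert in the bug title
  // effectively required an xlocked lock to stay in LOCK_XLOCK,
  // LOCK_XLOCKDONE, LOCK_XLOCKSNAP, LOCK_LOCK_XLOCK, or LOCK_LOCK, so
  // moving it to LOCK_SYNC or LOCK_PREXLOCK after a journal flush
  // crashed the MDS. The fix tolerates those two states as well while
  // the xlock count is non-zero:
  void set_state(LockState s) {
    assert(!is_xlocked() ||
           s == LOCK_XLOCK || s == LOCK_XLOCKDONE || s == LOCK_XLOCKSNAP ||
           s == LOCK_LOCK_XLOCK || s == LOCK_LOCK ||
           s == LOCK_SYNC || s == LOCK_PREXLOCK);  // last two: relaxed
    state_ = s;
  }
};

int main() {
  SimpleLock lock;
  lock.xlock_get();           // a client request holds the xlock
  lock.set_state(LOCK_SYNC);  // journal flushed: passes with the fix,
                              // would have tripped the assert before it
  lock.xlock_put();
  return 0;
}

With the pre-fix condition (without LOCK_SYNC and LOCK_PREXLOCK in the allowed
set), the set_state(LOCK_SYNC) call above aborts on the assertion, which
matches the mds pod crashes reported in this bug.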

Comment 1 RHEL Program Management 2023-11-10 05:32:10 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 8 errata-xmlrpc 2024-05-07 12:10:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743

