Bug 2248998
| Summary: | [cee/sd][cephfs] mds pods are crashing with ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock()) | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Xiubo Li <xiubli> |
| Component: | CephFS | Assignee: | Xiubo Li <xiubli> |
| Status: | CLOSED ERRATA | QA Contact: | Hemanth Kumar <hyelloji> |
| Severity: | high | Docs Contact: | Disha Walvekar <dwalveka> |
| Priority: | unspecified | ||
| Version: | 6.1 | CC: | ceph-eng-bugs, cephqe-warriors, dwalveka, mcaldeir, tserlin, vereddy, vshankar |
| Target Milestone: | --- | ||
| Target Release: | 6.1z4 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-17.2.6-178.el9cp | Doc Type: | Bug Fix |
| Doc Text: |
.The MDS no longer crashes when the journal logs are flushed
Previously, when the journal logs were successfully flushed, the lock state could be set to `LOCK_SYNC` or `LOCK_PREXLOCK` while the `xlock` count was non-zero. However, the MDS did not allow that and would crash.
With this fix, the MDS allows the lock state to be set to `LOCK_SYNC` or `LOCK_PREXLOCK` while the `xlock` count is non-zero, and no longer crashes.
|
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2024-02-08 18:12:59 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 2216930, 2238319 | ||
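The assertion in the summary and the Doc Text above describe the same failure mode: after the journal logs are flushed, the MDS switches a lock to `LOCK_SYNC` or `LOCK_PREXLOCK` while its `xlock` count is still non-zero, and an assertion on the permitted states aborts the daemon. The sketch below is a minimal, self-contained illustration of relaxing such a check; it is not the actual Ceph `Locker`/`SimpleLock` code, and the names `MiniLock`, `set_state_strict`, and `set_state_relaxed` are hypothetical.

```cpp
// Minimal sketch of the lock-state transition described in the Doc Text.
// NOT the real Ceph MDS code; types and functions here are illustrative only.
#include <cassert>
#include <iostream>

enum LockState {
  LOCK_SYNC,
  LOCK_LOCK,
  LOCK_PREXLOCK,
  LOCK_XLOCK,
  LOCK_XLOCKDONE,
};

struct MiniLock {
  LockState state = LOCK_SYNC;
  int num_xlock = 0;                 // outstanding exclusive locks

  void get_xlock() { state = LOCK_XLOCK; ++num_xlock; }
  void put_xlock() { --num_xlock; }

  // Before the fix (as described in the release note), switching to LOCK_SYNC
  // or LOCK_PREXLOCK while num_xlock > 0 tripped an assertion similar to the
  // one in the bug summary and aborted the MDS.
  void set_state_strict(LockState s) {
    if (s == LOCK_SYNC || s == LOCK_PREXLOCK)
      assert(num_xlock == 0);        // crash path
    state = s;
  }

  // After the fix, the transition is tolerated even when num_xlock > 0,
  // so flushing the journal logs no longer aborts the daemon.
  void set_state_relaxed(LockState s) {
    state = s;
  }
};

int main() {
  MiniLock l;
  l.get_xlock();                     // xlock still held when the flush completes
  l.set_state_relaxed(LOCK_SYNC);    // tolerated with the fix
  std::cout << "state switched with num_xlock=" << l.num_xlock << "\n";
  // l.set_state_strict(LOCK_SYNC);  // would abort: num_xlock is still 1
  return 0;
}
```

Compiled with any C++17 compiler (for example `g++ -std=c++17 lock_sketch.cpp`), the relaxed path completes while the strict path would abort, mirroring the before/after behavior the release note describes.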
Comment 1
RHEL Program Management
2023-11-10 05:22:55 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2024:0747