Bug 2189132
Summary: Ceph MDS pods stuck in CrashLoopBackOff / assert in MDCache::add_inode ceph::__ceph_assert_fail(ceph::assert_data const&)
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Bipin Kunal <bkunal>
Component: CephFS
Assignee: Xiubo Li <xiubli>
Status: CLOSED ERRATA
QA Contact: Amarnath <amk>
Severity: urgent
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 4.2
CC: akraj, amk, bniver, ceph-eng-bugs, cephqe-warriors, ebenahar, gfarnum, hyelloji, khiremat, khover, mcaldeir, muagarwa, rsachere, sostapov, tserlin, vereddy, vshankar, vumrao, xiubli
Target Milestone: ---
Target Release: 6.1z1
Hardware: x86_64
OS: Linux
Fixed In Version: ceph-17.2.6-87.el9cp
Doc Type: Bug Fix
Doc Text:
.MDS no longer crashes when allocating `CInode`
Previously, when replaying the journals, if the `inodetable` or `sessionmap` versions did not match, the `CInode` was still added to the _inode_map_ even though its ino could remain in the `inodetable` or in the sessions' `prealloc inos` list. Because of this, when a new ino was allocated later, the MDS would crash if a `CInode` for that ino was already present in the _inode_map_.
With this fix, such inos are skipped when allocating a new `CInode`, and the MDS no longer crashes. (An illustrative sketch of this skip-on-conflict allocation follows the metadata fields below.)
Story Points: ---
Clone Of: 2188602
Clones: 2189134 (view as bug list)
Last Closed: 2023-08-03 16:45:09 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Blocks: 2188602, 2189134, 2221020
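The following short C++ program is purely illustrative and is not Ceph source: `MDCacheSketch`, `CInodeSketch`, `alloc_ino`, and `add_inode_or_assert` are invented stand-ins. Under that assumption, it contrasts the old behavior (asserting when an ino already has an entry in the in-memory inode map, the crash seen in `MDCache::add_inode`) with the skip-on-conflict allocation described in the Doc Text above.

```cpp
// Minimal sketch (not Ceph code) of the crash path and the defensive fix:
// an in-memory inode map plus a simple ino allocator that skips inos which
// already have an entry instead of letting a later insert assert.
#include <cassert>
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

using inodeno_t = std::uint64_t;

struct CInodeSketch {                // stand-in for the MDS CInode
    inodeno_t ino;
    std::string name;
};

class MDCacheSketch {
public:
    // Old behavior: trust the allocator blindly and assert on a duplicate ino.
    // This models the MDCache::add_inode style crash.
    void add_inode_or_assert(const CInodeSketch& in) {
        assert(inode_map.count(in.ino) == 0 && "ino already in inode_map");
        inode_map.emplace(in.ino, in);
    }

    // Fixed behavior: callers can check first, so a stale entry left behind
    // by journal replay is simply skipped rather than tripping the assert.
    bool have_inode(inodeno_t ino) const { return inode_map.count(ino) != 0; }
    void add_inode(const CInodeSketch& in) { inode_map.emplace(in.ino, in); }

private:
    std::unordered_map<inodeno_t, CInodeSketch> inode_map;
};

// Hypothetical allocator: hands out the next free ino, skipping any ino that
// already has a CInode in the cache.
inodeno_t alloc_ino(inodeno_t& next_ino, const MDCacheSketch& cache) {
    while (cache.have_inode(next_ino))
        ++next_ino;                  // skip instead of crashing later
    return next_ino++;
}

int main() {
    MDCacheSketch cache;
    inodeno_t next_ino = 0x1000;

    // Simulate journal replay leaving a CInode in the map even though the
    // inotable / session prealloc list still consider ino 0x1000 free.
    cache.add_inode({0x1000, "left-over-from-replay"});

    // With the skip logic the allocator returns 0x1001 and the insert
    // succeeds; without it, add_inode_or_assert({0x1000, ...}) would abort.
    inodeno_t ino = alloc_ino(next_ino, cache);
    cache.add_inode({ino, "new-file"});
    std::cout << "allocated ino 0x" << std::hex << ino << "\n";
    return 0;
}
```

In the real MDS the duplicate check, the inotable, and the sessions' `prealloc inos` list live in different places, so this sketch only captures the shape of the fix, not its actual call sites.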
Comment 11
Xiubo Li
2023-06-15 10:19:26 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2023:4473