Bug 2265504
Summary: [7.0z backport] [GSS] MDS crashes after upgrade to RHCS 6.1z4 - "assert_condition": "lock->get_state() == LOCK_LOCK || lock->get_state() == LOCK_MIX || lock->get_state() == LOCK_MIX_SYNC2"
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 6.1
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Bipin Kunal <bkunal>
Assignee: Venky Shankar <vshankar>
QA Contact: Amarnath <amk>
Docs Contact: Disha Walvekar <dwalveka>
CC: bkunal, ceph-eng-bugs, cephqe-warriors, dwalveka, hyelloji, mcaldeir, ngangadh, nravinas, rsachere, tserlin, vshankar, xiubli
Target Milestone: ---
Target Release: 7.0z2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-18.2.0-169.el9cp
Doc Type: Bug Fix
Doc Text:
Doc Text: |
Previously, due to an incorrect lock assertion in ceph-mds, ceph-mds would crash when some inodes were replicated in a multi-MDS cluster.
With this fix, the lock state in the assertion is validated and no crash is observed.
Story Points: ---
Clone Of: 2265415
Clones: 2265505 (view as bug list)
Last Closed: 2024-05-07 12:10:56 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Bug Depends On: 2265415
Bug Blocks: 2265505, 2270485
Comment 1
Manny
2024-02-22 18:38:25 UTC

Hi Venky,

Could you please confirm if this BZ needs to be added to the 7.0z2 release notes? If so, please provide the doc type and the doc text.

Regards,
Amarnath

(In reply to Amarnath from comment #6)
> Hi Venky,
>
> Could you please confirm if this BZ needs to be added to the 7.0z2 release
> notes? If so, please provide the doc type and the doc text.

Done.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743