Bug 2265505
Summary: | [6.1z backport] [GSS] MDS crashes after upgrade to RHCS 6.1z4 - "assert_condition": "lock->get_state() == LOCK_LOCK || lock->get_state() == LOCK_MIX || lock->get_state() == LOCK_MIX_SYNC2" | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Bipin Kunal <bkunal> |
Component: | CephFS | Assignee: | Venky Shankar <vshankar> |
Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
Severity: | high | Docs Contact: | Akash Raj <akraj> |
Priority: | unspecified | |
Version: | 6.1 | CC: | akraj, bkunal, ceph-eng-bugs, cephqe-warriors, hyelloji, mcaldeir, nravinas, rsachere, tserlin, vereddy, vshankar, xiubli |
Target Milestone: | --- | |
Target Release: | 6.1z5 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | ceph-17.2.6-203.el9cp | Doc Type: | Bug Fix |
Doc Text: | Previously, due to an incorrect lock assertion in ceph-mds, ceph-mds would crash when some inodes were replicated in a multi-MDS cluster. With this fix, the lock state in the assertion is validated and no crash is observed. | |
Story Points: | --- | |
Clone Of: | 2265504 | Environment: | |
Last Closed: | 2024-04-01 10:20:31 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Bug Depends On: | 2265415, 2265504 | |
Bug Blocks: | 2267617 | |
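The Doc Text above describes the failure mode: ceph-mds aborted on an overly strict lock-state assertion while some inodes were replicated in a multi-MDS cluster, and the fix validates the lock state instead of crashing. The sketch below is an illustration of that pattern only; the class, states, and function names are simplified stand-ins, not the actual Ceph source or the upstream patch.

```cpp
// Illustrative sketch only -- simplified stand-in for the ceph-mds locking code,
// not the actual upstream patch. Lock states and function names are hypothetical.
#include <cassert>
#include <iostream>

enum LockState { LOCK_SYNC, LOCK_LOCK, LOCK_MIX, LOCK_MIX_SYNC2, LOCK_SYNC_MIX };

class SimpleLock {
public:
    explicit SimpleLock(LockState s) : state(s) {}
    LockState get_state() const { return state; }
private:
    LockState state;
};

// Before the fix: any state outside the three listed ones aborts the daemon.
void handle_lock_old(const SimpleLock* lock) {
    assert(lock->get_state() == LOCK_LOCK ||
           lock->get_state() == LOCK_MIX ||
           lock->get_state() == LOCK_MIX_SYNC2);  // trips on other legal states
    // ... proceed with lock handling ...
}

// After the fix (conceptually): the lock state is validated, and additional
// legal states reached via replication are tolerated instead of asserting.
void handle_lock_new(const SimpleLock* lock) {
    switch (lock->get_state()) {
    case LOCK_LOCK:
    case LOCK_MIX:
    case LOCK_MIX_SYNC2:
    case LOCK_SYNC_MIX:  // hypothetical extra state seen with replicated inodes
        // ... proceed with lock handling ...
        break;
    default:
        std::cerr << "unexpected lock state " << lock->get_state() << "\n";
        break;
    }
}

int main() {
    SimpleLock lock(LOCK_SYNC_MIX);
    handle_lock_new(&lock);  // handled gracefully, no abort
    return 0;
}
```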
Comment 1
Manny
2024-02-22 18:38:12 UTC
As per the comment https://bugzilla.redhat.com/show_bug.cgi?id=2248825#c27, we ran a sanity run: http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-B1Z31R

```
[root@ceph-amk-bz-up-b1z31r-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 20
    }
}
[root@ceph-amk-bz-up-b1z31r-node7 ~]#
```

Regards,
Amarnath

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:1580

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.
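For the verification step above, one quick way to confirm that every daemon reports the fixed build is to scan the `ceph versions` output for the expected version string. The following is a minimal sketch, assuming a host where the `ceph` CLI is installed and configured for the cluster; it does a plain substring scan rather than a full JSON parse, and it is not part of any Ceph tooling.

```cpp
// Minimal sketch: run `ceph versions` and confirm every reported daemon version
// line contains the expected build (e.g. "17.2.6-205.el9cp"). Assumes a POSIX
// host with the `ceph` CLI configured; purely illustrative.
#include <cstdio>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    const std::string expected = (argc > 1) ? argv[1] : "17.2.6-205.el9cp";

    FILE* pipe = popen("ceph versions", "r");
    if (!pipe) {
        std::cerr << "failed to run 'ceph versions'\n";
        return 1;
    }
    std::ostringstream out;
    char buf[4096];
    size_t n = 0;
    while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0)
        out.write(buf, static_cast<std::streamsize>(n));
    pclose(pipe);

    // Every line that names a daemon version should mention the expected build.
    std::istringstream lines(out.str());
    std::string line;
    bool ok = true;
    while (std::getline(lines, line)) {
        if (line.find("ceph version") != std::string::npos &&
            line.find(expected) == std::string::npos) {
            std::cerr << "unexpected version line: " << line << "\n";
            ok = false;
        }
    }
    std::cout << (ok ? "all daemons on expected build\n" : "version mismatch found\n");
    return ok ? 0 : 2;
}
```

The same check can of course be done with `ceph versions` piped through grep; the sketch only spells out the "every daemon line must carry the fixed build" criterion used when verifying against ceph-17.2.6-203.el9cp or later.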