Bug 2265505 - [6.1z backport] [GSS] MDS crashes after upgrade to RHCS 6.1z4 - "assert_condition": "lock->get_state() == LOCK_LOCK || lock->get_state() == LOCK_MIX || lock->get_state() == LOCK_MIX_SYNC2"
Summary: [6.1z backport] [GSS] MDS crashes after upgrade to RHCS 6.1z4 - "assert_condi...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.1z5
Assignee: Venky Shankar
QA Contact: Amarnath
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On: 2265415 2265504
Blocks: 2267617
Reported: 2024-02-22 12:43 UTC by Bipin Kunal
Modified: 2024-07-31 04:25 UTC
CC List: 12 users

Fixed In Version: ceph-17.2.6-203.el9cp
Doc Type: Bug Fix
Doc Text:
Previously, due to an incorrect lock assertion in ceph-mds, the MDS would crash when some inodes were replicated in a multi-MDS cluster. With this fix, the lock state in the assertion is validated and no crash is observed (see the illustrative sketch after the metadata fields below).
Clone Of: 2265504
Environment:
Last Closed: 2024-04-01 10:20:31 UTC
Embargoed:
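
For context on the assert_condition named in the summary, the following is a minimal C++ sketch of the pattern described in the Doc Text: an overly strict lock-state assertion that aborts the MDS when a replicated inode's lock is in a legitimate transitional state, and a variant that validates the state instead of crashing. This is not the actual ceph-mds patch; the extra state LOCK_SYNC_MIX and the handler names are used here only for illustration and are not taken from the fix.

// Illustrative sketch only -- not the actual ceph-mds change.
#include <cassert>
#include <cstdio>

enum LockState {
    LOCK_LOCK,
    LOCK_MIX,
    LOCK_MIX_SYNC2,
    LOCK_SYNC_MIX,   // hypothetical extra transitional state for this example
};

struct SimpleLock {
    LockState state;
    LockState get_state() const { return state; }
};

// Old behaviour (sketch): any state outside the three listed ones aborts the
// process, which is how the reported MDS crash manifests.
void handle_lock_old(const SimpleLock* lock) {
    assert(lock->get_state() == LOCK_LOCK ||
           lock->get_state() == LOCK_MIX ||
           lock->get_state() == LOCK_MIX_SYNC2);
    std::puts("old handler: state accepted");
}

// Fixed behaviour (sketch): the lock state is validated first, and the
// additional legitimate transitional state is tolerated instead of aborting.
void handle_lock_fixed(const SimpleLock* lock) {
    switch (lock->get_state()) {
    case LOCK_LOCK:
    case LOCK_MIX:
    case LOCK_MIX_SYNC2:
    case LOCK_SYNC_MIX:
        std::puts("fixed handler: state accepted");
        break;
    default:
        // Still assert on states that are truly impossible here.
        assert(false && "unexpected lock state");
    }
}

int main() {
    SimpleLock replicated_inode_lock{LOCK_SYNC_MIX};
    handle_lock_fixed(&replicated_inode_lock);   // no crash with the fix
    // handle_lock_old(&replicated_inode_lock);  // would abort, like the reported bug
    return 0;
}
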




Links
System                             ID              Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker              RHCEPH-8365     0        None      None    None     2024-02-22 12:44:10 UTC
Red Hat Knowledge Base (Solution)  7057313         0        None      None    None     2024-02-22 18:38:11 UTC
Red Hat Product Errata             RHBA-2024:1580  0        None      None    None     2024-04-01 10:20:36 UTC

Comment 1 Manny 2024-02-22 18:38:12 UTC
Corrected 'Release found' from 5.3 to 6.1. Also see KCS Article #7057313 (https://access.redhat.com/solutions/7057313).

/MC

Comment 6 Amarnath 2024-03-20 07:06:35 UTC
As per the comment https://bugzilla.redhat.com/show_bug.cgi?id=2248825#c27, we ran a sanity run:
http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-B1Z31R
[root@ceph-amk-bz-up-b1z31r-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-205.el9cp (d2906f0987908581de69deb71dabc40289bce7e9) quincy (stable)": 20
    }
}
[root@ceph-amk-bz-up-b1z31r-node7 ~]# 


Regards,
Amarnath

Comment 9 errata-xmlrpc 2024-04-01 10:20:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:1580

Comment 10 Red Hat Bugzilla 2024-07-31 04:25:13 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.

