Bug 2258950 - [Tracker Ceph BZ #2259180] [CEE/SD][cephfs] mds crash: void MDLog::trim(int): assert(segments.size() >= pre_segments_size)
Summary: [Tracker Ceph BZ #2259180] [CEE/SD][cephfs] mds crash: void MDLog::trim(int):...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.14
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.16.0
Assignee: Venky Shankar
QA Contact: Prasad Desala
URL:
Whiteboard:
Duplicates: 2265110
Depends On: 2259180
Blocks:
 
Reported: 2024-01-18 07:48 UTC by Geo Jose
Modified: 2024-07-17 13:12 UTC
CC List: 10 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 2259179 2259180
Environment:
Last Closed: 2024-07-17 13:12:09 UTC
Embargoed:




Links:
Red Hat Knowledge Base (Solution) 7052513 (last updated 2024-01-18 11:21:57 UTC)
Red Hat Knowledge Base (Solution) 7068654 (last updated 2024-05-05 18:50:51 UTC)
Red Hat Product Errata RHSA-2024:4591 (last updated 2024-07-17 13:12:11 UTC)

Description Geo Jose 2024-01-18 07:48:17 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
---------------------------------------------------------------------
- The MDS daemon crashed with the following assertion failure:
~~~
{
    "assert_condition": "segments.size() >= pre_segments_size",
    "assert_file": "/builddir/build/BUILD/ceph-17.2.6/src/mds/MDLog.cc",
    "assert_func": "void MDLog::trim(int)",
    "assert_line": 651,
    "assert_msg": "/builddir/build/BUILD/ceph-17.2.6/src/mds/MDLog.cc: In function 'void MDLog::trim(int)' thread 7f814f2b7640 time 2024-01-16T05:59:33.686299+0000\n/builddir/build/BUILD/ceph-17.2.6/src/mds/MDLog.cc: 651: FAILED ceph_assert(segments.size() >= pre_segments_size)\n",
    "assert_thread_name": "safe_timer",
    "backtrace": [
        "/lib64/libc.so.6(+0x54db0) [0x7f8155956db0]",
        "/lib64/libc.so.6(+0xa154c) [0x7f81559a354c]",
        "raise()",
        "abort()",
        "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x188) [0x7f8155fb2ae1]",
        "/usr/lib64/ceph/libceph-common.so.2(+0x142c45) [0x7f8155fb2c45]",
        "(MDLog::trim(int)+0xb06) [0x558086dbb2a6]",
        "(MDSRankDispatcher::tick()+0x365) [0x558086b3dc65]",
        "ceph-mds(+0x11c71d) [0x558086b1071d]",
        "(CommonSafeTimer<ceph::fair_mutex>::timer_thread()+0x15e) [0x7f815609c4ae]",
        "/usr/lib64/ceph/libceph-common.so.2(+0x22cda1) [0x7f815609cda1]",
        "/lib64/libc.so.6(+0x9f802) [0x7f81559a1802]",
        "/lib64/libc.so.6(+0x3f450) [0x7f8155941450]"
    ],
    "ceph_version": "17.2.6-170.el9cp",
    "crash_id": "2024-01-16T05:59:33.687563Z_6f26298d-0162-4124-b2a7-06bbbc676df6",
    "entity_name": "mds.ocs-storagecluster-cephfilesystem-a",
    "os_id": "rhel",
    "os_name": "Red Hat Enterprise Linux",
    "os_version": "9.3 (Plow)",
    "os_version_id": "9.3",
    "process_name": "ceph-mds",
    "stack_sig": "21cf82abf00a9a80ef194472005415a53e94d6965c4e910d756a9f711243f498",
    "timestamp": "2024-01-16T05:59:33.687563Z",
    "utsname_hostname": "rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-69756fd5mdvcz",
    "utsname_machine": "x86_64",
    "utsname_release": "5.14.0-284.43.1.el9_2.x86_64",
    "utsname_sysname": "Linux",
    "utsname_version": "#1 SMP PREEMPT_DYNAMIC Thu Nov 23 09:44:01 EST 2023"
}
~~~
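
For context on what the failed assert checks: in MDLog.cc, trim() requires that the number of in-memory journal segments never drops below pre_segments_size, a count of segments that the upstream code appears to snapshot when journal replay finishes. Below is a minimal, self-contained C++ sketch of that invariant. It is not the actual Ceph source; the ToyMDLog class, the finish_replay() helper, and the max_segments parameter are hypothetical scaffolding for illustration only.

~~~
// Minimal sketch (NOT the actual Ceph source) of the invariant behind
// "FAILED ceph_assert(segments.size() >= pre_segments_size)".
// ToyMDLog, finish_replay() and max_segments are hypothetical.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>

struct LogSegment { uint64_t seq; };        // stand-in for Ceph's LogSegment

class ToyMDLog {
  std::map<uint64_t, LogSegment> segments;  // journal segments, keyed by seq
  std::size_t pre_segments_size = 0;        // segment count at end of replay

public:
  void append(uint64_t seq) { segments.emplace(seq, LogSegment{seq}); }

  // Snapshot the segment count once replay completes, so later trims can
  // tell how much the journal has grown since then.
  void finish_replay() { pre_segments_size = segments.size(); }

  // Periodic trim; in the real backtrace this is driven from
  // MDSRankDispatcher::tick() on the "safe_timer" thread.
  void trim(std::size_t max_segments) {
    // The check that fired in this bug: the segment list must never be
    // shorter than the replay-time snapshot.
    assert(segments.size() >= pre_segments_size);
    while (segments.size() > max_segments)
      segments.erase(segments.begin());     // expire the oldest segment
  }
};

int main() {
  ToyMDLog log;
  for (uint64_t s = 1; s <= 8; ++s) log.append(s);
  log.finish_replay();   // pre_segments_size == 8
  log.append(9);
  log.trim(10);          // OK: 9 segments >= 8
  // If some other path shrank `segments` below 8 before the next tick
  // (e.g. trimming racing with standby-replay), the assert above would
  // abort the daemon, matching the crash signature in this report.
  return 0;
}
~~~

The backtrace above is consistent with this shape: the abort happens on the safe_timer thread, inside MDLog::trim() called from MDSRankDispatcher::tick().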

Version of all relevant components (if applicable):
--------------------------------------------------
- RHODF 4.14.3
- ceph version 17.2.6-170.el9cp / RHCS 6.1.z3 Async - 6.1.3 Async 

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
------------------------------------------------------------------------
N/A. As of now, the MDS has crashed only once.

Is there any workaround available to the best of your knowledge?
----------------------------------------------------------------
N/A


Is this issue reproducible?
---------------------------
Customer specific.

Can this issue be reproduced from the UI?
-------------------------------------
N/A

Additional info:
---------------
- Upstream tracker: https://tracker.ceph.com/issues/59833

Comment 6 Venky Shankar 2024-02-20 13:24:14 UTC
*** Bug 2265110 has been marked as a duplicate of this bug. ***

Comment 22 errata-xmlrpc 2024-07-17 13:12:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591

