Bug 2139354 - [RDR] mds daemon crash with MDSTableClient::got_journaled_ack(unsigned long)+0x179) [0x55bf50df06e9] [NEEDINFO]
Summary: [RDR] mds daemon crash with MDSTableClient::got_journaled_ack(unsigned long)+0x179) [0x55bf50df06e9]
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Venky Shankar
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-02 09:37 UTC by Pratik Surve
Modified: 2024-09-12 16:46 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-22 12:00:06 UTC
Embargoed:
vshankar: needinfo? (amagrawa)




Links:
Ceph Project Bug Tracker 54741 (Private: 0, Priority/Status/Summary: None, Last Updated: 2023-02-02 09:42:31 UTC)

Description Pratik Surve 2022-11-02 09:37:13 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[RDR] mds daemon crash with MDSTableClient::got_journaled_ack(unsigned long)+0x179) [0x55bf50df06e9]

Version of all relevant components (if applicable):

OCP version:- 4.12.0-0.nightly-2022-10-18-192348
ODF version:- 4.12.0-79
CEPH version:- ceph version 16.2.10-50.el8cp (f311fa3856a155d4cd9b658e25a78def0ae7a7c3) pacific (stable)
ACM version:- 2.6.1
SUBMARINER version:- v0.13.0
VOLSYNC version:- volsync-product.v0.5.0
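
For context, versions like the above are typically gathered along these lines (a sketch; the namespace and toolbox access are assumptions about the deployment, not taken from this report):

$ oc get clusterversion              # OCP version
$ oc get csv -n openshift-storage    # ODF/operator versions (assumes the default openshift-storage namespace)
$ ceph versions                      # Ceph daemon versions, run from the rook-ceph toolbox pod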

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate the complexity of the scenario that caused this bug from 1 - 5
(1 - very simple, 5 - very complex):


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster
2. Run the CephFS workload
3. Check for MDS daemon crash events (see the commands below)

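If the crash reproduces, it should show up in the cluster's crash module; a quick way to check (a sketch, assuming the ceph CLI is reachable, e.g. via the rook-ceph toolbox pod):

$ ceph crash ls-new              # crashes not yet archived
$ ceph crash info <crash_id>     # full metadata and backtrace, as in Actual results below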

Actual results:

$ ceph crash info 2022-11-01T03:57:00.273106Z_a153d19d-9692-4379-9dcd-12b445b51c44

{
    "backtrace": [
        "/lib64/libpthread.so.0(+0x12ce0) [0x7f398b444ce0]",
        "(std::_Hashtable<unsigned long, unsigned long, std::allocator<unsigned long>, std::__detail::_Identity, std::equal_to<unsigned long>, std::hash<unsigned long>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, true, true> >::_M_erase(std::integral_constant<bool, true>, unsigned long const&)+0x58) [0x55bf50df3148]",
        "(MDSTableClient::got_journaled_ack(unsigned long)+0x179) [0x55bf50df06e9]",
        "(ETableClient::replay(MDSRank*)+0x1b3) [0x55bf50ebff13]",
        "(MDLog::_replay_thread()+0xcd1) [0x55bf50e4f821]",
        "(MDLog::ReplayThread::entry()+0x11) [0x55bf50b4be41]",
        "/lib64/libpthread.so.0(+0x81cf) [0x7f398b43a1cf]",
        "clone()"
    ],
    "ceph_version": "16.2.10-50.el8cp",
    "crash_id": "2022-11-01T03:57:00.273106Z_a153d19d-9692-4379-9dcd-12b445b51c44",
    "entity_name": "mds.ocs-storagecluster-cephfilesystem-a",
    "os_id": "rhel",
    "os_name": "Red Hat Enterprise Linux",
    "os_version": "8.6 (Ootpa)",
    "os_version_id": "8.6",
    "process_name": "ceph-mds",
    "stack_sig": "c009c98d5387ea202da11cd36585a1c06b8d48ee740496c707f1171a5332598c",
    "timestamp": "2022-11-01T03:57:00.273106Z",
    "utsname_hostname": "rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-5d464499kdvs7",
    "utsname_machine": "x86_64",
    "utsname_release": "4.18.0-372.26.1.el8_6.x86_64",
    "utsname_sysname": "Linux",
    "utsname_version": "#1 SMP Sat Aug 27 02:44:20 EDT 2022"
}
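
Reading the frames: the fault is an erase-by-key on a std::unordered_set<unsigned long> (the std::_Hashtable::_M_erase frame) inside MDSTableClient::got_journaled_ack(), reached from ETableClient::replay() on the MDLog replay thread, so the MDS crashed while replaying its journal. Once logs are collected, the crash entry can be acknowledged so it stops raising the RECENT_CRASH health warning (a sketch using the standard crash module commands):

$ ceph crash archive 2022-11-01T03:57:00.273106Z_a153d19d-9692-4379-9dcd-12b445b51c44
$ ceph crash archive-all   # alternatively, archive every outstanding crash entry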


Expected results:


Additional info:

Comment 3 Mudit Agarwal 2022-11-03 02:50:30 UTC
Not a blocker for 4.12

