Bug 2139354

Summary: [RDR] mds daemon crash with (MDSTableClient::got_journaled_ack(unsigned long)+0x179) [0x55bf50df06e9]

Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Pratik Surve <prsurve>
Component: ceph
Sub Component: CephFS
Assignee: Venky Shankar <vshankar>
QA Contact: Pratik Surve <prsurve>
Docs Contact:
Status: ASSIGNED
Severity: high
Priority: unspecified
CC: amagrawa, bniver, hyelloji, kramdoss, muagarwa, odf-bz-bot, sheggodu, vshankar
Version: 4.12
Keywords: Reopened
Flags: vshankar: needinfo? (amagrawa)
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-08-22 12:00:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---

Description Pratik Surve 2022-11-02 09:37:13 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[RDR] mds daemon crash with (MDSTableClient::got_journaled_ack(unsigned long)+0x179) [0x55bf50df06e9]

Version of all relevant components (if applicable):

OCP version:- 4.12.0-0.nightly-2022-10-18-192348
ODF version:- 4.12.0-79
CEPH version:- ceph version 16.2.10-50.el8cp (f311fa3856a155d4cd9b658e25a78def0ae7a7c3) pacific (stable)
ACM version:- 2.6.1
SUBMARINER version:- v0.13.0
VOLSYNC version:- volsync-product.v0.5.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 to 5 the complexity of the scenario that caused this bug
(1 - very simple, 5 - very complex):


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster
2. Run the CephFS workload
3. Observe the mds daemon crash during journal replay


Actual results:

$ ceph crash info 2022-11-01T03:57:00.273106Z_a153d19d-9692-4379-9dcd-12b445b51c44

{
    "backtrace": [
        "/lib64/libpthread.so.0(+0x12ce0) [0x7f398b444ce0]",
        "(std::_Hashtable<unsigned long, unsigned long, std::allocator<unsigned long>, std::__detail::_Identity, std::equal_to<unsigned long>, std::hash<unsigned long>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, true, true> >::_M_erase(std::integral_constant<bool, true>, unsigned long const&)+0x58) [0x55bf50df3148]",
        "(MDSTableClient::got_journaled_ack(unsigned long)+0x179) [0x55bf50df06e9]",
        "(ETableClient::replay(MDSRank*)+0x1b3) [0x55bf50ebff13]",
        "(MDLog::_replay_thread()+0xcd1) [0x55bf50e4f821]",
        "(MDLog::ReplayThread::entry()+0x11) [0x55bf50b4be41]",
        "/lib64/libpthread.so.0(+0x81cf) [0x7f398b43a1cf]",
        "clone()"
    ],
    "ceph_version": "16.2.10-50.el8cp",
    "crash_id": "2022-11-01T03:57:00.273106Z_a153d19d-9692-4379-9dcd-12b445b51c44",
    "entity_name": "mds.ocs-storagecluster-cephfilesystem-a",
    "os_id": "rhel",
    "os_name": "Red Hat Enterprise Linux",
    "os_version": "8.6 (Ootpa)",
    "os_version_id": "8.6",
    "process_name": "ceph-mds",
    "stack_sig": "c009c98d5387ea202da11cd36585a1c06b8d48ee740496c707f1171a5332598c",
    "timestamp": "2022-11-01T03:57:00.273106Z",
    "utsname_hostname": "rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-5d464499kdvs7",
    "utsname_machine": "x86_64",
    "utsname_release": "4.18.0-372.26.1.el8_6.x86_64",
    "utsname_sysname": "Linux",
    "utsname_version": "#1 SMP Sat Aug 27 02:44:20 EDT 2022"
}
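
Reading the backtrace bottom-up: the MDS replay thread (MDLog::_replay_thread) replays an ETableClient journal event, which delivers a journaled table-server ack to MDSTableClient::got_journaled_ack, which then faults inside std::_Hashtable::_M_erase, i.e. while erasing a transaction id from an unordered set. Below is a minimal, hypothetical C++ sketch of that call shape; the names mirror the backtrace symbols but this is not the actual Ceph source, and it only illustrates why a fault at this point implicates container state rather than a merely missing key.

#include <cstdint>
#include <iostream>
#include <unordered_set>

// Stand-in for the table client's bookkeeping of commits that were
// journaled but not yet acknowledged by the table server.
struct TableClientSketch {
  std::unordered_set<uint64_t> pending_commit_tids;

  // Analogue of MDSTableClient::got_journaled_ack(unsigned long), the
  // frame directly above the faulting _M_erase in the backtrace.
  void got_journaled_ack(uint64_t tid) {
    // Erasing an absent key from a healthy unordered_set is well-defined
    // (erase simply returns 0), so a segfault inside _M_erase suggests
    // corrupted container state or unsynchronized access during replay,
    // not just a tid that was never pending.
    size_t erased = pending_commit_tids.erase(tid);
    std::cout << "ack tid " << tid << ", erased=" << erased << "\n";
  }
};

// Analogue of ETableClient::replay(MDSRank*) running on the replay
// thread: each replayed ack feeds a tid back into the table client.
int main() {
  TableClientSketch client;
  client.pending_commit_tids.insert(42);
  client.got_journaled_ack(42);  // normal case: tid present, erased
  client.got_journaled_ack(7);   // absent tid: still safe on a valid set
  return 0;
}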


Expected results:


Additional info:

Comment 3 Mudit Agarwal 2022-11-03 02:50:30 UTC
Not a blocker for 4.12