Bug 2185533

Summary: [RDR] mds daemon crash with pthread_getname_np()
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Mudit Agarwal <muagarwa>
Component: CephFS
Assignee: Venky Shankar <vshankar>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Amarnath <amk>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.2
CC: bniver, ceph-eng-bugs, cephqe-warriors, ebenahar, gfarnum, hyelloji, muagarwa, prsurve, vshankar
Target Milestone: ---
Target Release: 6.1z2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 2139339
Environment:
Last Closed: 2023-08-30 15:56:44 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2139339    
Bug Blocks:    

Description Mudit Agarwal 2023-04-10 07:01:56 UTC
+++ This bug was initially created as a clone of Bug #2139339 +++

Description of problem (please be as detailed as possible and provide log
snippets):

The mds daemon crashes in pthread_getname_np(), called from ceph::logging::Log::dump_recent() while the daemon is respawning after a write error (see the backtrace under Actual results).
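For context: pthread_getname_np() is only safe to call on a thread that is still alive. A plausible failure mode (an assumption, not confirmed anywhere in this report) is that Log::dump_recent() resolves thread names from pthread_t handles recorded with each log entry, and one of those threads has already exited by the time the dump runs. A minimal sketch of that hazard, compiled with g++ -pthread (all names here are hypothetical, not Ceph code):

#include <pthread.h>
#include <cstdio>

static void* worker(void*) { return nullptr; }

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    pthread_join(t, nullptr);  // after the join, 't' no longer names a live thread

    char name[16] = {0};
    // Undefined behavior: glibc may read the freed or recycled thread
    // descriptor behind 't'; in the worst case this faults, which is the
    // shape of the crash in the backtrace under Actual results.
    int rc = pthread_getname_np(t, name, sizeof(name));
    std::printf("pthread_getname_np rc=%d name=%s\n", rc, name);
    return 0;
}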


Version of all relevant components (if applicable):

OCP version:- 4.12.0-0.nightly-2022-10-18-192348
ODF version:- 4.12.0-79
CEPH version:- ceph version 16.2.10-50.el8cp (f311fa3856a155d4cd9b658e25a78def0ae7a7c3) pacific (stable)
ACM version:- 2.6.1
SUBMARINER version:- v0.13.0
VOLSYNC version:- volsync-product.v0.5.0

Does this issue impact your ability to continue working with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate the complexity of the scenario you performed that caused this bug
from 1 to 5 (1 - very simple, 5 - very complex):


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster.
2. Run a CephFS workload (see the sketch below for a hypothetical example).
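The report does not specify the workload; as an assumption, any sustained metadata-heavy I/O on a mounted CephFS volume should qualify. A minimal stand-in generator (the mount path /mnt/cephfs/workload is hypothetical; point it at the actual PVC mount):

#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

int main() {
    // Hypothetical CephFS mount point; adjust to the real mount path.
    const fs::path root = "/mnt/cephfs/workload";
    fs::create_directories(root);
    // Create/overwrite and periodically delete files so the MDS stays busy
    // with metadata operations (create, lookup, unlink).
    for (long i = 0; i < 1'000'000; ++i) {
        fs::path p = root / ("file-" + std::to_string(i % 1000));
        std::ofstream(p) << i;
        if (i % 1000 == 999)
            for (const auto& e : fs::directory_iterator(root))
                fs::remove(e.path());
    }
    return 0;
}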


Actual results:

$ ceph crash info 2022-10-27T15:48:47.335369Z_8b5ddab2-2651-4407-b281-4a029842129b
{
    "backtrace": [
        "/lib64/libpthread.so.0(+0x12ce0) [0x7f982f686ce0]",
        "pthread_getname_np()",
        "(ceph::logging::Log::dump_recent()+0x4c2) [0x7f9830a1e0f2]",
        "(MDSDaemon::respawn()+0x15b) [0x55f05f8e881b]",
        "(Context::complete(int)+0xd) [0x55f05f8f09fd]",
        "(MDSRank::respawn()+0x1c) [0x55f05f8f690c]",
        "(MDSRank::handle_write_error(int)+0x1a6) [0x55f05f8faa16]",
        "ceph-mds(+0x1ccefa) [0x55f05f8faefa]",
        "(Context::complete(int)+0xd) [0x55f05f8f09fd]",
        "(Finisher::finisher_thread_entry()+0x1a5) [0x7f983073bc95]",
        "/lib64/libpthread.so.0(+0x81cf) [0x7f982f67c1cf]",
        "clone()"
    ],
    "ceph_version": "16.2.10-50.el8cp",
    "crash_id": "2022-10-27T15:48:47.335369Z_8b5ddab2-2651-4407-b281-4a029842129b",
    "entity_name": "mds.ocs-storagecluster-cephfilesystem-a",
    "os_id": "rhel",
    "os_name": "Red Hat Enterprise Linux",
    "os_version": "8.6 (Ootpa)",
    "os_version_id": "8.6",
    "process_name": "ceph-mds",
    "stack_sig": "25c187a52a0bd6185eea9df828445b7bd639a28947da47ae5869697eb9e1ec89",
    "timestamp": "2022-10-27T15:48:47.335369Z",
    "utsname_hostname": "rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-75b55cdctdpw7",
    "utsname_machine": "x86_64",
    "utsname_release": "4.18.0-372.26.1.el8_6.x86_64",
    "utsname_sysname": "Linux",
    "utsname_version": "#1 SMP Sat Aug 27 02:44:20 EDT 2022"
}
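Reading the backtrace bottom-up: a Finisher thread delivers a journal write error to MDSRank::handle_write_error(), which triggers MDSRank::respawn(); MDSDaemon::respawn() then calls ceph::logging::Log::dump_recent() to flush recent log entries, and the process dies inside pthread_getname_np(). Assuming the dump path resolves thread names lazily from pthread_t handles stored with each entry (an assumption; the logger source is not shown in this report), the usual remedy is to capture each thread's name once, while the thread is alive, and store it alongside the entry. A minimal sketch of that caching pattern, with hypothetical types that are not the actual Ceph logger:

#include <pthread.h>
#include <iostream>
#include <mutex>
#include <string>
#include <vector>

// Toy "recent events" buffer that snapshots the caller's thread name at
// log time, so dumping never queries a pthread_t that may already be dead.
class RecentLog {
  struct Entry { std::string thread_name; std::string msg; };
  std::mutex mtx;
  std::vector<Entry> recent;
public:
  void log(std::string msg) {
    char name[16] = {0};
    pthread_getname_np(pthread_self(), name, sizeof(name));  // self is always alive
    std::lock_guard<std::mutex> lock(mtx);
    recent.push_back({name, std::move(msg)});
  }
  void dump_recent(std::ostream& out) {
    std::lock_guard<std::mutex> lock(mtx);
    for (const auto& e : recent)
      out << e.thread_name << ": " << e.msg << "\n";  // no pthread_* calls here
  }
};

int main() {
  pthread_setname_np(pthread_self(), "main");  // glibc form: (pthread_t, const char*)
  RecentLog log;
  log.log("journal write error, respawning");
  log.dump_recent(std::cout);
  return 0;
}

If that is the real cause, it would also explain why the crash is intermittent: it only fires when an already-exited thread still has entries in the recent buffer at respawn time.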


Expected results:

No mds daemon crash; the CephFS workload continues to run during and after the respawn.

--- Additional comment from Pratik Surve on 2022-11-02 09:32:27 UTC ---

Must-gather logs:-

Logs location: http://rhsqe-repo.lab.eng.blr.redhat.com/ocs4qe/pratik/bz/2139339/nov2/02-11-2022_14-26-06

Comment 1 RHEL Program Management 2023-04-10 07:02:03 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.