Bug 2110273

Summary: OCS 4.9: both Ceph MDS pods (a and b) stuck in CrashLoopBackOff
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: ceph
Sub component: CephFS
Reporter: Soumi Mitra <smitra>
Assignee: Venky Shankar <vshankar>
QA Contact: Elad <ebenahar>
Status: CLOSED NOTABUG
Severity: urgent
Priority: unspecified
CC: amakarau, bhubbard, bniver, csharpe, hnallurv, hyelloji, kjosy, madam, mmuench, muagarwa, ocs-bugs, odf-bz-bot, pdhange, pdonnell, vshankar, vumrao
Version: 4.9
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2022-09-29 02:46:33 UTC

Comment 41 Venky Shankar 2022-08-11 04:07:50 UTC
Hi Soumi,

I'll have a look today.

Comment 42 Venky Shankar 2022-08-11 04:49:11 UTC
Hi Soumi,

You mention that the must-gather (MG) logs were collected while the MDS daemons were online. I'm afraid these logs _might_ not be of much help (I'm unable to log into supportshell right now, so I haven't gone through them yet). As Patrick mentioned in his comment, we'd need MG logs captured after the MDS hits the OOM failure to accurately diagnose the issue.
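For reference, a minimal sketch of re-collecting the logs at the point of failure, assuming a standard ODF 4.9 deployment (the `openshift-storage` namespace, the `app=rook-ceph-mds` pod label, and the must-gather image tag below are assumptions for a typical install, not details taken from this bug):

```shell
# Hypothetical commands; verify namespace, labels, and image tag
# against your cluster before running.

# Check the MDS pods until one shows OOMKilled / CrashLoopBackOff:
oc -n openshift-storage get pods -l app=rook-ceph-mds

# Immediately afterwards, capture a fresh must-gather so the MDS
# state around the OOM event is included in the collected logs:
oc adm must-gather \
  --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.9 \
  --dest-dir=./mds-oom-must-gather
```

The key point is timing: the must-gather has to run after the OOM kill, while the crash state and restart history are still present in the pod and daemon logs.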

Comment 44 Venky Shankar 2022-08-11 05:23:47 UTC
Thank you Soumi. Looking into it now...

Comment 53 Mudit Agarwal 2022-09-29 02:47:21 UTC
*** Bug 2114777 has been marked as a duplicate of this bug. ***