Bug 2033548

Summary: [GSS] mon pods are in CLBO state
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Sonal <sarora>
Component: rook
Assignee: Travis Nielsen <tnielsen>
Status: CLOSED NOTABUG
QA Contact: Elad <ebenahar>
Severity: urgent
Priority: urgent
Version: 4.8
CC: apizarro, bniver, falim, hnallurv, madam, mmuench, muagarwa, ocs-bugs, odf-bz-bot, prpandey, tnielsen
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2022-01-10 16:27:43 UTC

Description Sonal 2021-12-17 08:03:53 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

All mon pods and one OSD pod are in CrashLoopBackOff (CLBO) state.

Each mon pod is failing with the error: "rocksdb: Invalid argument: /var/lib/ceph/mon/ceph-a/store.db: does not exist (create_if_missing is false)"
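The rocksdb message means the mon's backing store directory is missing or empty, and Ceph opens it with create_if_missing=false, so the daemon refuses to create a fresh (empty) store and crashes. A minimal sketch of how one might confirm which mons hit this error, assuming their logs have already been collected locally (e.g. via `oc -n openshift-storage logs deploy/rook-ceph-mon-a > mon-a.log`; the filenames and namespace here are assumptions, not from this bug):

```shell
# Hypothetical local log copies standing in for collected mon logs.
printf '%s\n' \
  'rocksdb: Invalid argument: /var/lib/ceph/mon/ceph-a/store.db: does not exist (create_if_missing is false)' \
  > mon-a.log
printf '%s\n' 'mon.b@1(peon) healthy' > mon-b.log

# List only the logs showing the store-open failure.
grep -l 'create_if_missing is false' mon-*.log
# -> mon-a.log
```

Any mon listed this way has lost (or never had) its store.db contents, which points at the underlying PV/hostpath rather than at the mon process itself.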


Version of all relevant components (if applicable):
OCS 4.8.6

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes, in the customer's environment


Actual results:
Three mon pods and one OSD pod are in CLBO state

Expected results:
All OCS pods should be up and running

Additional info:
In the next private comment

Comment 13 Travis Nielsen 2022-01-10 16:27:43 UTC
Closing per previous comments