Bug 2226647 - Pod rook-ceph-operator CrashLoopBackOff
Summary: Pod rook-ceph-operator CrashLoopBackOff
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Subham Rai
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-26 05:17 UTC by Aviad Polak
Modified: 2023-08-09 17:03 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:


Links:
System ID: Github rook/rook pull 12594
Private: No
Priority: None
Status: Merged
Summary: build: cosi driver is missing from olm list (backport #12592)
Last Updated: 2023-07-27 05:17:24 UTC

Description Aviad Polak 2023-07-26 05:17:57 UTC
Description of problem (please be as detailed as possible and provide log snippets):
rook-ceph-operator pod is in CrashLoopBackOff status

Version of all relevant components (if applicable):

4.14.0-74 

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
yes

Is there any workaround available to the best of your knowledge?
Add CRDs manually
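
For reference, a minimal sketch of this workaround, assuming the missing CRDs are the ones shipped in the upstream Rook example manifests (the release branch, manifest path, and namespace below are assumptions, not confirmed by this report):

  # Apply the upstream Rook CRD manifest so the missing CRDs are registered
  oc apply -f https://raw.githubusercontent.com/rook/rook/release-1.12/deploy/examples/crds.yaml

  # Restart the operator so it comes up with the CRDs present
  oc -n openshift-storage delete pod -l app=rook-ceph-operator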


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes


Additional info:
See Discussion here: https://chat.google.com/room/AAAAREGEba8/HC2wrPMgYeQ

