Bug 2022631 - odf-operator-controller-manager pod reported "CrashLoopBackOff" status
Summary: odf-operator-controller-manager pod reported "CrashLoopBackOff" status
Keywords:
Status: CLOSED DUPLICATE of bug 2022339
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-operator
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Nitin Goyal
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-11-12 08:37 UTC by Mugdha Soni
Modified: 2023-08-09 17:00 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-12 09:37:43 UTC
Embargoed:


Attachments
Installed operator wizard. (51.67 KB, image/png)
2021-11-12 08:37 UTC, Mugdha Soni

Description Mugdha Soni 2021-11-12 08:37:52 UTC
Created attachment 1841361 [details]
Installed operator wizard.

Description of problem:
------------------------------
After a successful installation of OCP 4.10, the installation of the ODF operator was triggered and its status remained "Installing" for a long time in both the UI and the CLI. On further investigation, the "odf-operator-controller-manager" pod was found to be in "CrashLoopBackOff".

[root@localhost ocp_10_test]# oc get csv -n openshift-storage
NAME                     DISPLAY                       VERSION   REPLACES   PHASE
noobaa-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0      OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0      OpenShift Data Foundation     4.9.0                Installing

[root@localhost ocp_10_test]# oc get pods -n openshift-storage
NAME                                              READY   STATUS             RESTARTS        AGE
noobaa-operator-58b95d7fb4-vfvs2                  1/1     Running            0               23m
ocs-metrics-exporter-6cd79df5b9-67tkw             1/1     Running            0               22m
ocs-operator-5cb8b697bf-n7ppw                     1/1     Running            0               22m
odf-console-6858ff4fc5-zlvsq                      1/1     Running            0               23m
odf-operator-controller-manager-d6c9d8596-f2pph   1/2     CrashLoopBackOff   8 (4m44s ago)   23m
rook-ceph-operator-66d6f65f5b-65zh7               1/1     Running            0               23m
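
For anyone triaging a similar failure, the crashing container's events and logs can be gathered with commands along these lines (a minimal sketch; "manager" is assumed to be the crashing container given the 1/2 READY count, and the pod name will differ per cluster):

# Recent events for the crashing pod
oc describe pod -n openshift-storage odf-operator-controller-manager-d6c9d8596-f2pph

# Logs from the current and the previously crashed instance of the assumed "manager" container
oc logs -n openshift-storage odf-operator-controller-manager-d6c9d8596-f2pph -c manager
oc logs -n openshift-storage odf-operator-controller-manager-d6c9d8596-f2pph -c manager --previous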

Cluster details: AWS UPI cluster

Version of all relevant components:
--------------------------------------
OCP: 4.10.0-0.nightly-2021-11-09-181140
ODF: 4.9.0-230.ci


Does this issue impact your ability to continue to work with the product?
----------------------------------------------------------------------------
Yes, cluster creation is failing.


Is this issue reproducible?
------------------------------
Not sure; it was hit during cluster creation.


Can this issue be reproduced from the UI?
---------------------------------------
Yes

Steps to Reproduce:
-----------------------
1. Install OCP 4.10 and create a catalogsource.yaml for the ODF operator installation (a sketch is shown after this list).
2. Log into the console and navigate to OperatorHub.
3. Install the ODF operator and observe the status.
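
For step 1, the catalog source is roughly of the following shape (a sketch only; the name and index image below are placeholders, not the exact values used here, and the actual yaml is linked under Additional info):

oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: odf-catalogsource            # placeholder name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <odf-4.9-index-image>       # placeholder; the real index image is in the linked yaml
  displayName: OpenShift Data Foundation
  publisher: Red Hat
EOF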


Actual results:
---------------------
The ODF operator status shows "Installing" for hours.

Expected results:
--------------------
The operator should install successfully.


Additional info:
------------------
odf-operator-controller-manager logs: https://docs.google.com/document/d/1yNVLpXVHbRZ-rISCbZEsQbveTWpG9BVEKhcm7OPBlnc/edit?usp=sharing

Catalog source yaml: https://docs.google.com/document/d/1qbScIM9usWPul2PQ_SuSjCWIZSrYTqG-ZOveuEFE3qs/edit?usp=sharing

Comment 2 Vineet 2021-11-12 09:37:43 UTC

*** This bug has been marked as a duplicate of bug 2022339 ***

