Bug 2022631

Summary: odf-operator-controller-manager pod reported "CrashLoopBackOff" status.
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Mugdha Soni <musoni>
Component: odf-operator
Assignee: Nitin Goyal <nigoyal>
Status: CLOSED DUPLICATE
QA Contact: Raz Tamir <ratamir>
Severity: high
Priority: unspecified
Version: 4.9
CC: assingh, bkunal, jarrpa, muagarwa, ocs-bugs, odf-bz-bot, vbadrina
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Last Closed: 2021-11-12 09:37:43 UTC
Type: Bug
Regression: ---

Attachments:
Installed operator wizard.

Description Mugdha Soni 2021-11-12 08:37:52 UTC
Created attachment 1841361 [details]
Installed operator wizard.

Description of problem :-
------------------------------
After a successful installation of OCP 4.10, the ODF operator installation was triggered, but its status remained "Installing" for a long time in both the UI and the CLI. On further investigation, the "odf-operator-controller-manager" pod was found to be in "CrashLoopBackOff".

[root@localhost ocp_10_test]# oc get csv -n openshift-storage
NAME                     DISPLAY                       VERSION   REPLACES   PHASE
noobaa-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0      OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0      OpenShift Data Foundation     4.9.0                Installing

[root@localhost ocp_10_test]# oc get pods -n openshift-storage
NAME                                              READY   STATUS             RESTARTS        AGE
noobaa-operator-58b95d7fb4-vfvs2                  1/1     Running            0               23m
ocs-metrics-exporter-6cd79df5b9-67tkw             1/1     Running            0               22m
ocs-operator-5cb8b697bf-n7ppw                     1/1     Running            0               22m
odf-console-6858ff4fc5-zlvsq                      1/1     Running            0               23m
odf-operator-controller-manager-d6c9d8596-f2pph   1/2     CrashLoopBackOff   8 (4m44s ago)   23m
rook-ceph-operator-66d6f65f5b-65zh7               1/1     Running            0               23m

Cluster details: AWS UPI cluster
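
As a general note (these commands were not captured from the affected cluster; the pod name is simply copied from the listing above), the crash reason and previous-container logs for a pod in this state can usually be pulled with standard oc commands:

# Container exit code / last state and restart reason
oc describe pod odf-operator-controller-manager-d6c9d8596-f2pph -n openshift-storage

# Logs from the previous (crashed) container instances
oc logs odf-operator-controller-manager-d6c9d8596-f2pph -n openshift-storage --all-containers --previous

# Recent namespace events, newest last
oc get events -n openshift-storage --sort-by=.lastTimestamp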

Version of all relevant components :-
--------------------------------------
OCP :- 4.10.0-0.nightly-2021-11-09-181140
ODF:- 4.9.0-230.ci


Does this issue impact your ability to continue to work with the product?
----------------------------------------------------------------------------
Yes, cluster creation is failing.


Is this issue reproducible?
------------------------------
Not sure; hit it during cluster creation.


Can this issue be reproduced from the UI?
---------------------------------------
Yes

Steps to Reproduce:
-----------------------
1. Install OCP 4.10 and create a catalogsource.yaml for the ODF operator installation (a placeholder CatalogSource sketch is shown after these steps).
2. Log in to the console and navigate to OperatorHub.
3. Install the ODF operator and observe the status.
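
The exact CatalogSource used is linked under "Additional info" below; purely as an illustration of step 1, a CatalogSource for an internal ODF build generally looks like the following sketch (the metadata name and catalog image here are placeholders, not the values actually used):

# Placeholder manifest only; see the linked catalogsource.yaml for the real one
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: odf-catalogsource                # placeholder name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: REGISTRY_PLACEHOLDER/odf-operator-catalog:4.9.0-230.ci   # placeholder image reference
  displayName: OpenShift Data Foundation
  publisher: Red Hat
EOF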


Actual results:
---------------------
The ODF operator status shows "Installing" for hours.

Expected results:
--------------------
Operator should be installed successfully.


Additional info:
------------------
odf-operator-controller-manager logs: https://docs.google.com/document/d/1yNVLpXVHbRZ-rISCbZEsQbveTWpG9BVEKhcm7OPBlnc/edit?usp=sharing

Catalog source yaml: https://docs.google.com/document/d/1qbScIM9usWPul2PQ_SuSjCWIZSrYTqG-ZOveuEFE3qs/edit?usp=sharing
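
For completeness, this kind of data is normally gathered with oc logs --previous (as noted above) plus a must-gather; only the generic invocation is shown here, since the ODF-specific must-gather image (passed via --image) varies per release and is not confirmed for this build:

# Generic cluster diagnostic bundle; add --image=<storage must-gather image> for ODF-specific data
oc adm must-gather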

Comment 2 Vineet 2021-11-12 09:37:43 UTC

*** This bug has been marked as a duplicate of bug 2022339 ***