Bug 2181986

Summary: [GSS] ocs-operator 4.8.18 stuck in failed state after upgrade
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Rafrojas <rafrojas>
Component: ocs-operator
Assignee: Nitin Goyal <nigoyal>
Status: CLOSED NOTABUG
QA Contact: Elad <ebenahar>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.8
CC: hnallurv, muagarwa, nigoyal, ocs-bugs, odf-bz-bot
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-04-21 05:55:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Rafrojas 2023-03-27 07:29:46 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
During the upgrade from 4.8 to 4.9, the customer selected the 4.9 subscription for OCS under Installed Operators.
After that, the NooBaa operator comes and goes, failing with "noobaa operator failed: conflicting CRD owner in namespace".
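
For triage, the OLM objects involved in a conflicting CRD owner can be inspected with standard oc commands. A minimal sketch, assuming the default openshift-storage namespace (adjust if the deployment uses a different one):

# List the ClusterServiceVersions and their phases; a CSV cycling
# through Pending/Failed would match the "comes and goes" behaviour
oc get csv -n openshift-storage

# Confirm which channel each subscription tracks after the 4.9 switch
oc get subscription -n openshift-storage

# The CRD owner conflict is normally surfaced in the failing CSV's
# status conditions
oc describe csv -n openshift-storage | grep -i -A3 "conflicting"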

Version of all relevant components (if applicable):
[OPERATORS]
NAME                                      VERSION  AVAILABLE  PROGRESSING  DEGRADED
authentication                            4.9.57   True       False        False
baremetal                                 4.9.57   True       False        False
cloud-controller-manager                  4.9.57   True       False        False
cloud-credential                          4.9.57   True       False        False
cluster-autoscaler                        4.9.57   True       False        False
config-operator                           4.9.57   True       False        False
console                                   4.9.57   True       False        False
csi-snapshot-controller                   4.9.57   True       False        False
dns                                       4.9.57   True       False        False
etcd                                      4.9.57   True       False        False
image-registry                            4.9.57   True       False        False
ingress                                   4.9.57   True       False        False
insights                                  4.9.57   True       False        False
kube-apiserver                            4.9.57   True       False        False
kube-controller-manager                   4.9.57   True       False        False
kube-scheduler                            4.9.57   True       False        False
kube-storage-version-migrator             4.9.57   True       False        False
machine-api                               4.9.57   True       False        False
machine-approver                          4.9.57   True       False        False
machine-config                            4.9.57   True       False        False
marketplace                               4.9.57   True       False        False
monitoring                                4.9.57   False      True         True
network                                   4.9.57   True       False        False
node-tuning                               4.9.57   True       False        False
openshift-apiserver                       4.9.57   True       False        False
openshift-controller-manager              4.9.57   True       False        False
openshift-samples                         4.9.57   True       False        False
operator-lifecycle-manager                4.9.57   True       False        False
operator-lifecycle-manager-catalog        4.9.57   True       False        False
operator-lifecycle-manager-packageserver  4.9.57   True       False        False
service-ca                                4.9.57   True       False        False
storage                                   4.9.57   True       False        False

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
It is also affecting NooBaa and some pods.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
It is constant.

Can this issue be reproduced from the UI?
Not sure

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 4 Mudit Agarwal 2023-04-06 02:44:04 UTC
Nitin, please take a look