Bug 2181986 - [GSS] ocs-operator 4.8.18 stuck in failed state after upgrade
Summary: [GSS] ocs-operator 4.8.18 stuck in failed state after upgrade
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nitin Goyal
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-27 07:29 UTC by Rafrojas
Modified: 2023-08-09 17:00 UTC
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-04-21 05:55:13 UTC
Embargoed:



Description Rafrojas 2023-03-27 07:29:46 UTC
Description of problem (please be detailed as possible and provide log
snippets):
During the upgrade from 4.8 to 4.9, the customer selected the 4.9 subscription channel for OCS under Installed Operators.
After that, the NooBaa operator keeps coming and going with the error "noobaa operator failed: conflicting crd owner in namespace".
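
For reference, a diagnostic sketch (assuming the default openshift-storage namespace; adjust names as needed) to see which CSVs are installed and which one reports the conflict:

  $ oc get subscription,csv -n openshift-storage
  $ oc describe csv -n openshift-storage | grep -i -B2 -A2 conflicting
  $ oc get pods -n openshift-storage | grep noobaa

If an old 4.8 CSV and the new 4.9 CSV both claim the NooBaa CRDs outside the same upgrade chain, OLM can report this kind of "conflicting crd owner" condition.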

Version of all relevant components (if applicable):
[OPERATORS]
NAME                                      VERSION  AVAILABLE  PROGRESSING  DEGRADED
authentication                            4.9.57   True       False        False
baremetal                                 4.9.57   True       False        False
cloud-controller-manager                  4.9.57   True       False        False
cloud-credential                          4.9.57   True       False        False
cluster-autoscaler                        4.9.57   True       False        False
config-operator                           4.9.57   True       False        False
console                                   4.9.57   True       False        False
csi-snapshot-controller                   4.9.57   True       False        False
dns                                       4.9.57   True       False        False
etcd                                      4.9.57   True       False        False
image-registry                            4.9.57   True       False        False
ingress                                   4.9.57   True       False        False
insights                                  4.9.57   True       False        False
kube-apiserver                            4.9.57   True       False        False
kube-controller-manager                   4.9.57   True       False        False
kube-scheduler                            4.9.57   True       False        False
kube-storage-version-migrator             4.9.57   True       False        False
machine-api                               4.9.57   True       False        False
machine-approver                          4.9.57   True       False        False
machine-config                            4.9.57   True       False        False
marketplace                               4.9.57   True       False        False
monitoring                                4.9.57   False      True         True
network                                   4.9.57   True       False        False
node-tuning                               4.9.57   True       False        False
openshift-apiserver                       4.9.57   True       False        False
openshift-controller-manager              4.9.57   True       False        False
openshift-samples                         4.9.57   True       False        False
operator-lifecycle-manager                4.9.57   True       False        False
operator-lifecycle-manager-catalog        4.9.57   True       False        False
operator-lifecycle-manager-packageserver  4.9.57   True       False        False
service-ca                                4.9.57   True       False        False
storage                                   4.9.57   True       False        False
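
The list above covers only the cluster operators; a hedged way to also capture the OCS/NooBaa operator versions themselves (again assuming the default openshift-storage namespace) is:

  $ oc get csv -n openshift-storage -o custom-columns=NAME:.metadata.name,VERSION:.spec.version,PHASE:.status.phase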

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
It is also affecting NooBaa and some pods.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Can this issue be reproduced?
Yes, it is constant.

Can this issue be reproduced from the UI?
Not sure

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 4 Mudit Agarwal 2023-04-06 02:44:04 UTC
Nitin, please take a look

