Bug 2255343 - External Storage cluster(ocs-external-storagecluster) showing in error state
Summary: External Storage cluster(ocs-external-storagecluster) showing in error state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.15.0
Assignee: Santosh Pillai
QA Contact: avdhoot
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2023-12-20 08:53 UTC by avdhoot
Modified: 2024-03-19 15:26 UTC
CC List: 7 users

Fixed In Version: 4.15.0-98
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-03-19 15:26:34 UTC
Embargoed:


Links
Github red-hat-storage/ocs-operator pull 2341 (Merged): skip checking osd store type for external clusters (2023-12-26 06:03:32 UTC)
Github red-hat-storage/ocs-operator pull 2346 (open): Bug 2255343: [release-4.15] skip checking osd store type for external clusters (2023-12-26 06:22:54 UTC)
Red Hat Product Errata RHSA-2024:1383 (2024-03-19 15:26:37 UTC)

Description avdhoot 2023-12-20 08:53:03 UTC
Created attachment 2005103 [details]
screenshot_1

Description of problem (please be as detailed as possible and provide log
snippets):

The attached external StorageCluster shows an error state after ODF installation, even though all pods in the openshift-storage namespace are running.
Error seen in the UI: "Error while reconciling: cephcluster status does not have OSD store information"


➜  clust1 oc get pods -n openshift-storage
NAME                                               READY   STATUS    RESTARTS        AGE
csi-addons-controller-manager-68845fc459-nwjpc     2/2     Running   13 (112m ago)   21h
csi-cephfsplugin-7spg6                             2/2     Running   0               20h
csi-cephfsplugin-82dzg                             2/2     Running   1 (20h ago)     20h
csi-cephfsplugin-provisioner-5f67df5ffb-bmbx8      6/6     Running   1 (20h ago)     20h
csi-cephfsplugin-provisioner-5f67df5ffb-m7zbs      6/6     Running   0               20h
csi-cephfsplugin-zf6nd                             2/2     Running   0               20h
csi-rbdplugin-4dvfh                                3/3     Running   0               20h
csi-rbdplugin-provisioner-5b77cf784f-9jbjm         6/6     Running   2 (20h ago)     20h
csi-rbdplugin-provisioner-5b77cf784f-cjhl5         6/6     Running   0               20h
csi-rbdplugin-vjs74                                3/3     Running   1 (20h ago)     20h
csi-rbdplugin-zhf9b                                3/3     Running   0               20h
noobaa-core-0                                      1/1     Running   0               20h
noobaa-db-pg-0                                     1/1     Running   0               20h
noobaa-endpoint-5d5467d955-7p5nw                   1/1     Running   0               20h
noobaa-operator-f765b5f84-9llgt                    2/2     Running   5 (7h8m ago)    22h
ocs-operator-7d7d5c7f84-h5j65                      1/1     Running   18 (88m ago)    22h
odf-console-856d547ff7-mszsw                       1/1     Running   0               21h
odf-operator-controller-manager-5d7b667545-52jnl   2/2     Running   12 (124m ago)   21h
rook-ceph-operator-d7fcb5d5c-9sbv6                 1/1     Running   0               21h


Please see the attached screenshot for the error message.
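
For reference, the same Error phase can be read from the API instead of the UI. Below is a minimal Go sketch using the Kubernetes dynamic client; the resource group and names come from this report, while the kubeconfig path and the exact condition layout are assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a REST config from the default kubeconfig (assumption: the
	// same credentials the oc session above uses).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// StorageCluster lives in the ocs.openshift.io/v1 API group.
	gvr := schema.GroupVersionResource{Group: "ocs.openshift.io", Version: "v1", Resource: "storageclusters"}
	sc, err := client.Resource(gvr).Namespace("openshift-storage").Get(context.TODO(), "ocs-external-storagecluster", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// status.phase shows the error state while this bug is hit; the
	// conditions carry the "Error while reconciling: ..." message.
	phase, _, _ := unstructured.NestedString(sc.Object, "status", "phase")
	fmt.Println("phase:", phase)
	conditions, _, _ := unstructured.NestedSlice(sc.Object, "status", "conditions")
	for _, c := range conditions {
		fmt.Println(c)
	}
}

On an affected cluster this should print the Error phase together with the reconcile condition quoted above.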

Version of all relevant components (if applicable):
OCP- 4.15
ODF- 4.15.0-89
ACM- 2.9
RHCS- 6.1

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
- yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
- yes

Can this issue be reproduced from the UI?
- yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create an OCP 4.15 cluster.
2. Install the ODF 4.15 operator manually.
3. Create an external storage system by attaching the JSON.


Actual results:
ocs-external-storagecluster shows an error state.

Expected results:
ocs-external-storagecluster should be in the Available/Ready state.

Additional info:

Comment 8 Santosh Pillai 2023-12-20 11:47:23 UTC
Hi Umanga,

The new getIsDROptimized method is failing for external clusters. The CephCluster status does not capture the OSD store type for an external cluster, since no OSDs run on the local cluster.

What could be the alternatives here?
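
PR 2341 (linked above) addresses this by skipping the OSD store-type check for external clusters. Below is a minimal Go sketch of that shape; the types and field names are simplified stand-ins rather than the real ocs-operator and Rook APIs, and the "bluestore-rdr" rule for the internal-mode check is an assumption:

package main

import "fmt"

// Simplified stand-ins for the real ocsv1.StorageCluster and Rook
// CephCluster status types (field names differ upstream).
type StorageCluster struct {
	ExternalEnabled bool // mirrors spec.externalStorage.enable
}

type CephClusterStatus struct {
	OSDStoreTypes map[string]int // OSD count per store type; empty when no local OSDs exist
}

// isDROptimized sketches the shape of the fix: external clusters are
// skipped up front, because their CephCluster status never reports an
// OSD store type (the OSDs run in the external RHCS cluster).
func isDROptimized(sc StorageCluster, status CephClusterStatus) (bool, error) {
	if sc.ExternalEnabled {
		// Nothing to check in external mode; return without an error
		// instead of failing the reconcile.
		return false, nil
	}
	if len(status.OSDStoreTypes) == 0 {
		// This is the condition that previously fired for external
		// clusters and drove the StorageCluster into the error state.
		return false, fmt.Errorf("cephcluster status does not have OSD store information")
	}
	// Internal clusters: treated as DR-optimized only if every OSD uses
	// the "bluestore-rdr" store (assumed optimization rule).
	return len(status.OSDStoreTypes) == 1 && status.OSDStoreTypes["bluestore-rdr"] > 0, nil
}

func main() {
	external := StorageCluster{ExternalEnabled: true}
	ok, err := isDROptimized(external, CephClusterStatus{})
	fmt.Println(ok, err) // prints "false <nil>": external clusters no longer error out
}

The early return is the essential part: external-mode StorageClusters report no local OSD store, so the reconcile no longer fails with "cephcluster status does not have OSD store information".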

Comment 14 avdhoot 2023-12-28 15:40:46 UTC
No issues found after attaching the external cluster to the managed cluster; ocs-external-storagecluster is in the Ready state.
Verified the bug with the following product versions:

OCP- 4.15.0
ODF- 4.15.0-98
Ceph- 6.1z3
ACM- 2.9.1

Attached a screenshot showing ocs-external-storagecluster in the Ready state.
Hence, marking this bug as verified.

Comment 16 errata-xmlrpc 2024-03-19 15:26:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383
