Bug 2100751 - [IBM Z - s390x] - cephblockpool ocs-storagecluster-cephblockpool stuck at "daemon_health":"WARNING","health":"WARNING" state
Summary: [IBM Z - s390x] - cephblockpool ocs-storagecluster-cephblockpool stuck at "daemon_health":"WARNING","health":"WARNING" state
Keywords:
Status: CLOSED DUPLICATE of bug 2102397
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.11
Hardware: s390x
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: umanga
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-06-24 08:20 UTC by Abdul Kandathil (IBM)
Modified: 2023-08-09 17:00 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-07-12 14:28:23 UTC
Embargoed:



Description Abdul Kandathil (IBM) 2022-06-24 08:20:30 UTC
Description of problem (please be as detailed as possible and provide log snippets):

The cephblockpool ocs-storagecluster-cephblockpool is stuck in the "daemon_health":"WARNING","health":"WARNING" state on the managed (DR) clusters.



[root@m4204001 ~]# oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'
{"daemon_health":"WARNING","health":"WARNING","image_health":"OK","states":{}}
[root@m4204001 ~]#
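
A possible way to dig further into the WARNING (a minimal sketch, assuming the rook-ceph-tools toolbox pod is deployed in openshift-storage, which is not shown in this report):

# Assumes the rook-ceph-tools (toolbox) pod exists in openshift-storage.
TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
oc -n openshift-storage exec -it $TOOLS_POD -- rbd mirror pool status --verbose ocs-storagecluster-cephblockpool
oc -n openshift-storage exec -it $TOOLS_POD -- ceph -s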



Version of all relevant components (if applicable):

Hub cluster: OCP 4.10, ACM 2.5, ODF DR HUB Operator 4.11.0.101, ODF Multicluster orchestrator 4.11.0.101

Primary and secondary managed clusters: OCP 4.11, ODF 4.11.0


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy the DR cluster.
2. Deploy the OpenShift DR Hub Operator.
3. Deploy the ODF Multicluster Orchestrator and create a mirror peer.
4. Once the mirrorpeer reaches the "ExchangedSecret" status, check the status of the cephblockpool on both managed clusters (see the mirrorpeer check sketched after the output below).


[root@m4204001 ~]# oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'
{"daemon_health":"WARNING","health":"WARNING","image_health":"OK","states":{}}
[root@m4204001 ~]#
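
For step 4, a minimal sketch of reading the mirrorpeer phase from the hub cluster; the mirrorpeer name "mirrorpeer-1" and the .status.phase path are assumptions, not taken from this report:

# Run on the hub cluster; "mirrorpeer-1" is a placeholder name and
# .status.phase is assumed to carry the "ExchangedSecret" phase.
oc get mirrorpeer
oc get mirrorpeer mirrorpeer-1 -o jsonpath='{.status.phase}{"\n"}'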

Actual results:

daemon_health and health are in the WARNING state.
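
One more check that may help narrow down the daemon_health warning (a hedged sketch; the label and deployment name assume default Rook rbd-mirror naming and are not taken from this report):

# Confirm the rbd-mirror daemon pod is running on each managed cluster
# (label and deployment name are assumptions based on default Rook naming).
oc -n openshift-storage get pods -l app=rook-ceph-rbd-mirror
oc -n openshift-storage logs deployment/rook-ceph-rbd-mirror-a --tail=50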


Expected results:

All components in healthy state

Additional info:

Must-gather logs of all three clusters:
https://drive.google.com/file/d/1Z7jn7ppfCfvfGZOB8-jYGTlzG6h7fevF/view?usp=sharing
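
For reference, a hedged sketch of how the ODF-specific must-gather is typically collected on each cluster; the image name and tag below are assumptions for this release, not confirmed in this report:

# ODF must-gather; the image name/tag is an assumption for ODF 4.11.
oc adm must-gather --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.11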

Comment 2 umanga 2022-06-30 06:43:50 UTC
The must-gather logs are missing all the information about ODF. Can you add those, please?

Comment 3 umanga 2022-07-12 14:28:23 UTC
Not enough info about the environment, but going by the cluster state, the root cause seems to be the same as Bug 2102397.

*** This bug has been marked as a duplicate of bug 2102397 ***

