Description of problem (please be as detailed as possible and provide log snippets):

The cephblockpool ocs-storagecluster-cephblockpool is stuck in the "daemon_health":"WARNING","health":"WARNING" state on the managed (DR) clusters.

[root@m4204001 ~]# oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'
{"daemon_health":"WARNING","health":"WARNING","image_health":"OK","states":{}}
[root@m4204001 ~]#

Version of all relevant components (if applicable):
Hub cluster: OCP 4.10, ACM 2.5, ODF DR Hub Operator 4.11.0.101, ODF Multicluster Orchestrator 4.11.0.101
Primary and secondary managed clusters: OCP 4.11, ODF 4.11.0

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy a DR cluster.
2. Deploy the OpenShift DR Hub Operator.
3. Deploy the ODF Multicluster Orchestrator and create a MirrorPeer.
4. Once the MirrorPeer has the status "ExchangedSecret", check the status of the cephblockpool on both managed clusters.

[root@m4204001 ~]# oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'
{"daemon_health":"WARNING","health":"WARNING","image_health":"OK","states":{}}
[root@m4204001 ~]#

Actual results:
daemon_health and health are in the WARNING state.

Expected results:
All components are in a healthy state.

Additional info:
Must-gather logs of all three clusters:
https://drive.google.com/file/d/1Z7jn7ppfCfvfGZOB8-jYGTlzG6h7fevF/view?usp=sharing
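
For additional detail on the mirroring daemon health, the status can also be queried from the rook-ceph toolbox. This is a sketch only, assuming the toolbox deployment is enabled and named rook-ceph-tools and that the pool is ocs-storagecluster-cephblockpool:

# Query the RBD mirroring status for the pool from inside the toolbox pod
oc -n openshift-storage exec -it deploy/rook-ceph-tools -- rbd mirror pool status ocs-storagecluster-cephblockpool --verbose
# Overall Ceph cluster health, for comparison with the CephBlockPool status
oc -n openshift-storage exec -it deploy/rook-ceph-tools -- ceph status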
The must-gather logs are missing all the information about ODF. Can you add those, please?
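
If it helps, the ODF-specific must-gather is typically collected with a command along these lines (the image tag is an assumption and should match the installed ODF version, 4.11 in this case):

oc adm must-gather --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.11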
Not enough info about the environment, but going by the cluster state, the root cause seems to be the same as Bug 2102397.

*** This bug has been marked as a duplicate of bug 2102397 ***