Versions:
  openshift-local-storage  local-storage-operator.4.11.0-202212070335
  openshift-storage        mcg-operator.v4.11.4
  openshift-storage        ocs-operator.v4.11.4
  openshift-storage        odf-csi-addons-operator.v4.11.4
  openshift-storage        odf-operator.v4.11.4
  OCP: 4.12.0-rc.5

Alert:
  Alertname:   ClusterObjectStoreState
  Starts At:   2022-12-26 01:30:40 UTC
  State:       active
  Severity:    Critical
  Description: Cluster Object Store is in unhealthy state for more than 15s. Please check Ceph cluster health.
  Message:     Cluster Object Store is in unhealthy state. Please check Ceph cluster health.

But the Ceph state is healthy:

  ceph status
    cluster:
      id:     0947c6ed-6881-4c2f-8c8c-3ec84b3446a4
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum a,b,c (age 2w)
      mgr: a(active, since 2w)
      mds: 1/1 daemons up, 1 hot standby
      osd: 12 osds: 12 up (since 13d), 12 in (since 13d)
      rgw: 1 daemon active (1 hosts, 1 zones)

    data:
      volumes: 1/1 healthy
      pools:   11 pools, 641 pgs
      objects: 7.96k objects, 29 GiB
      usage:   94 GiB used, 17 TiB / 17 TiB avail
      pgs:     641 active+clean

    io:
      client: 1.4 KiB/s rd, 220 KiB/s wr, 1 op/s rd, 0 op/s wr
The alert reflects the Phase of the CephObjectStore CRD, not the overall Ceph cluster health. The current alert message is misleading while debugging; we need to change it to reflect the actual source of the condition so users don't go in the wrong direction. Can't provide devel_acks as I don't have permissions.
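As a quick sketch, the Phase the alert keys on can be inspected directly on the CephObjectStore resource. The namespace (openshift-storage) is the ODF default; the resource name on your cluster may differ:

  # List CephObjectStore resources and the status.phase the alert is based on.
  # Namespace is the ODF default and is an assumption; adjust for your cluster.
  oc get cephobjectstore -n openshift-storage \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'

A phase other than Connected/Ready here can fire the alert even when `ceph status` reports HEALTH_OK, which is why the current message points debuggers at the wrong component.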
Not a 4.13 blocker, moving this out
@vkathole can you take a look at it again? It looks like the changes to the description didn't show up on your side. I just tested it with a 4.13 build.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:6832