Bug 2158773 - ClusterObjectStoreState reports critical alert
Summary: ClusterObjectStoreState reports critical alert
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph-monitoring
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Divyansh Kamboj
QA Contact: Vishakha Kathole
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-01-06 14:17 UTC by Alexander Chuzhoy
Modified: 2023-11-08 18:50 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 18:49:53 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage ocs-operator pull 2015 0 None open Bug 2158773: [release-4.13] make the description for Cluster Object Store more specific 2023-04-18 11:29:20 UTC
Github red-hat-storage ocs-operator pull 2054 0 None Merged generate latest yamls ClusterObjectStoreState Alert 2023-05-29 11:54:21 UTC
Red Hat Product Errata RHSA-2023:6832 0 None None None 2023-11-08 18:50:51 UTC

Description Alexander Chuzhoy 2023-01-06 14:17:09 UTC
Versions:
openshift-local-storage local-storage-operator.4.11.0-202212070335
openshift-storage mcg-operator.v4.11.4
openshift-storage ocs-operator.v4.11.4
openshift-storage odf-csi-addons-operator.v4.11.4
openshift-storage odf-operator.v4.11.4
OCP: 4.12.0-rc.5

Alertname Starts At Summary State
ClusterObjectStoreState 2022-12-26 01:30:40 UTC active

Severity: Critical

Description: Cluster Object Store is in unhealthy state for more than 15s. Please check Ceph cluster health.

Message: Cluster Object Store is in unhealthy state. Please check Ceph cluster health.

But the ceph state is healthy:

ceph status
  cluster:
    id:     0947c6ed-6881-4c2f-8c8c-3ec84b3446a4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 2w)
    mgr: a(active, since 2w)
    mds: 1/1 daemons up, 1 hot standby
    osd: 12 osds: 12 up (since 13d), 12 in (since 13d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   11 pools, 641 pgs
    objects: 7.96k objects, 29 GiB
    usage:   94 GiB used, 17 TiB / 17 TiB avail
    pgs:     641 active+clean
 
  io:
    client:   1.4 KiB/s rd, 220 KiB/s wr, 1 op/s rd, 0 op/s wr

Comment 5 Divyansh Kamboj 2023-04-11 14:06:44 UTC
The alert reflects the Phase of the CephObjectStore CRD. The current alert message is misleading to users while debugging, so we need to change it to say that, to keep users from going in the wrong direction.
I can't provide devel_acks as I don't have permissions.
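Since the alert tracks the CephObjectStore Phase rather than overall Ceph health, the state it reports can be checked from the CRD directly. A minimal sketch, assuming the usual ODF object name and namespace (not confirmed in this report) and using a fabricated sample dump in place of live `oc get cephobjectstore -n openshift-storage -o json` output:

```shell
# Fabricated sample shaped like 'oc get cephobjectstore -o json' output;
# only the fields used below are included.
cat > /tmp/cephobjectstore.json <<'EOF'
{"items":[{"metadata":{"name":"ocs-storagecluster-cephobjectstore"},
           "status":{"phase":"Ready"}}]}
EOF
# Print each object store's name and the Phase the alert is driven by.
# This is why 'ceph status' can show HEALTH_OK while the alert still fires:
# the alert watches this Phase, not Ceph's health flag.
python3 -c "
import json
for item in json.load(open('/tmp/cephobjectstore.json'))['items']:
    print(item['metadata']['name'], item['status']['phase'])
"
# prints: ocs-storagecluster-cephobjectstore Ready
```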

Comment 14 Mudit Agarwal 2023-06-05 11:32:38 UTC
Not a 4.13 blocker, moving this out

Comment 19 Divyansh Kamboj 2023-07-12 11:30:47 UTC
@vkathole Can you take another look? It looks like the changes to the description didn't show up on your side. I just tested it with a 4.13 build.
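One way to confirm the reworded alert text is to pull the description annotation out of the deployed PrometheusRule. A sketch using a fabricated rule dump (the group name and description wording here are illustrative, not the actual shipped text; on a live cluster the input would come from `oc get prometheusrule -n openshift-storage -o json`):

```shell
# Fabricated sample following the standard monitoring.coreos.com/v1 schema
cat > /tmp/prometheusrule.json <<'EOF'
{"spec":{"groups":[{"name":"cluster-state-alert.rules","rules":[
  {"alert":"ClusterObjectStoreState",
   "annotations":{"description":"CephObjectStore is in unhealthy state or not reachable for more than 15s."}}]}]}}
EOF
# Find the ClusterObjectStoreState rule and print its description annotation
python3 - <<'PY'
import json
doc = json.load(open('/tmp/prometheusrule.json'))
for group in doc['spec']['groups']:
    for rule in group.get('rules', []):
        if rule.get('alert') == 'ClusterObjectStoreState':
            print(rule['annotations']['description'])
PY
```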

Comment 25 errata-xmlrpc 2023-11-08 18:49:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

