
Bug 2319055

Summary: [RFE] Do not show Silenced health warnings in dashboard landing page
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Pawan <pdhiran>
Component: Ceph-Dashboard
Assignee: Abhishek Desai <abdesai>
Status: CLOSED ERRATA
QA Contact: Vinayak Papnoi <vpapnoi>
Severity: high
Docs Contact: Anjana Suparna Sriram <asriram>
Priority: high
Version: 8.0
CC: afrahman, ceph-eng-bugs, cephqe-warriors, mobisht, pegonzal
Target Milestone: ---
Keywords: FutureFeature
Target Release: 9.0   
Hardware: Unspecified   
OS: Unspecified   
Fixed In Version: ceph-20.1.0-33
Doc Type: If docs needed, set a value
Last Closed: 2026-01-29 06:53:10 UTC
Type: Bug
Attachments: Dashboard screenshots (flags: none)

Description Pawan 2024-10-16 03:53:43 UTC
Created attachment 2052230: Dashboard screenshots

Description of problem:

Even though some alerts have been silenced via the dashboard, those alerts are still displayed on the dashboard landing page. This behaviour defeats the purpose of silencing them.

Dashboard -> Observability -> Alerts correctly lists the alerts and marks them as silenced, but none of that is visible on the landing page: the warnings appear to be active, while the overall cluster status is shown with a green tick mark indicating that the cluster is healthy. This ambiguous behaviour needs to be fixed.


Version-Release number of selected component (if applicable):
# ceph version
ceph version 19.2.0-22.el9cp (5e576686ad8987d973e08b278ecaeabe731ee77c) squid (stable)

How reproducible:
Always

Steps to Reproduce:
1. Generate a few health warnings on the cluster (for example, by setting the noout flag, as shown in the sketch after this list).
2. Log in to the dashboard. On the landing page, the cluster health is in the warning state, with the health warnings listed below.
3. Go to Dashboard -> Observability -> Alerts and create a silence for those warnings. Make sure the silence is created successfully for each alert.
4. Go back to the dashboard landing page. Now the cluster health is shown with a green tick mark, but the health warnings are still listed below, even though they are silenced.
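
A minimal way to generate a health warning for step 1, assuming CLI access to the cluster (this raises the same OSDMAP_FLAGS warning used in the example further below; any other health warning would work just as well):

# ceph osd set noout        # raises the OSDMAP_FLAGS "noout flag(s) set" health warning
# ceph health detail        # confirm the cluster now reports HEALTH_WARN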

Actual results:
Health warnings are displayed on the dashboard landing page even when they are silenced, with no additional information indicating the silence.

Expected results:
1. Health warnings are not displayed on the dashboard landing page when they are silenced

or

2. Health warnings are still displayed on the dashboard landing page when silenced, but with an indicator that they have been silenced.

The second approach is similar to how muted health checks are handled in the CLI: the health warning is still shown in the ceph status output, but it is highlighted as muted.

[root@ceph-pdhiran-mszw0g-node1-installer ~]# ceph -s
  cluster:
    id:     36ae8c8e-86b5-11ef-9fd9-fa163e43b87b
    health: HEALTH_WARN
            noout flag(s) set

  services:
    mon: 3 daemons, quorum ceph-pdhiran-mszw0g-node1-installer,ceph-pdhiran-mszw0g-node2,ceph-pdhiran-mszw0g-node3 (age 21h)
    mgr: ceph-pdhiran-mszw0g-node2.tbrmep(active, since 21h), standbys: ceph-pdhiran-mszw0g-node3.etlihl, ceph-pdhiran-mszw0g-node1-installer.vsvktq
    mds: 1/1 daemons up, 2 standby
    osd: 24 osds: 24 up (since 21h), 24 in (since 6d)
         flags noout
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   9 pools, 241 pgs
    objects: 10.31k objects, 5.9 GiB
    usage:   14 GiB used, 586 GiB / 600 GiB avail
    pgs:     241 active+clean


[root@ceph-pdhiran-mszw0g-node1-installer ~]# ceph health detail
HEALTH_WARN noout flag(s) set
[WRN] OSDMAP_FLAGS: noout flag(s) set

[root@ceph-pdhiran-mszw0g-node1-installer ~]# ceph health mute OSDMAP_FLAGS

[root@ceph-pdhiran-mszw0g-node1-installer ~]# ceph -s
  cluster:
    id:     36ae8c8e-86b5-11ef-9fd9-fa163e43b87b
    health: HEALTH_OK
            (muted: OSDMAP_FLAGS)

  services:
    mon: 3 daemons, quorum ceph-pdhiran-mszw0g-node1-installer,ceph-pdhiran-mszw0g-node2,ceph-pdhiran-mszw0g-node3 (age 21h)
    mgr: ceph-pdhiran-mszw0g-node2.tbrmep(active, since 21h), standbys: ceph-pdhiran-mszw0g-node3.etlihl, ceph-pdhiran-mszw0g-node1-installer.vsvktq
    mds: 1/1 daemons up, 2 standby
    osd: 24 osds: 24 up (since 21h), 24 in (since 6d)
         flags noout
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   9 pools, 241 pgs
    objects: 10.31k objects, 5.9 GiB
    usage:   14 GiB used, 586 GiB / 600 GiB avail
    pgs:     241 active+clean

# ceph health detail
HEALTH_OK (muted: OSDMAP_FLAGS)
(MUTED) [WRN] OSDMAP_FLAGS: noout flag(s) set
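
For completeness, the CLI mute shown above can also be reverted, and a mute can be given a duration when it is created; a minimal sketch, assuming the same OSDMAP_FLAGS health check:

# ceph health unmute OSDMAP_FLAGS     # clears the mute; the warning shows as active again in ceph -s
# ceph health mute OSDMAP_FLAGS 1h    # re-mutes the check, with the mute expiring after one hour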

Additional info:

Comment 8 errata-xmlrpc 2026-01-29 06:53:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536