Bug 1652233 - [ceph-metrics]'OSDs down' tab is not working properly in 'CEPH Backend storage' Dashboard
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Metrics
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.3z7
Assignee: Zack Cerza
QA Contact: Sunil Angadi
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Duplicates: 1658896 (view as bug list)
Depends On:
Blocks: 1629656
 
Reported: 2018-11-21 18:01 UTC by Yogesh Mane
Modified: 2021-05-06 18:32 UTC
CC: 13 users

Fixed In Version: cephmetrics-ansible-2.0.10-1.el7cp
Doc Type: Bug Fix
Doc Text:
.The `_OSDs down_` tab showed an incorrect value Previously, when rebooting OSD nodes, the `_OSDs down_` tab in the `_CEPH Backend storage_` dashboard showed the correct number of OSDs that were `down`. However, when all OSDs were `up` again after the reboot, the tab continued to show the previous number of `down` OSDs. With this update, the CLI and Grafana values match during OSD up and down operations, and the tab works as expected.
Clone Of:
Environment:
Last Closed: 2021-05-06 18:32:04 UTC
Embargoed:



Links
System ID Private Priority Status Summary Last Updated
Github ceph cephmetrics pull 252 0 None closed dashboards: Fix slow/inaccurate OSD down counter 2021-02-19 18:57:46 UTC
Red Hat Product Errata RHSA-2021:1518 0 None None None 2021-05-06 18:32:30 UTC

Description Yogesh Mane 2018-11-21 18:01:14 UTC
Description of problem:
On the 'CEPH Backend storage' dashboard, the "OSDs down" tab shows the wrong value.

Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Reboot one OSD node.
2. Check Grafana; it shows the number of OSDs that are down.
3. Check Grafana again after the OSDs are back up.

Actual results:
Grafana still shows the same value; the count does not change after the OSDs come back up.

Expected results:
Grafana should show 0 OSDs down.
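The fix verification compares the CLI's view of down OSDs against the Grafana counter. A minimal sketch of the CLI side is below; `count_down_osds` is a hypothetical helper (not part of cephmetrics), and it assumes the `ceph osd stat -f json` output exposes `num_osds` and `num_up_osds` counters, which some releases nest under an `osdmap` key.

```python
import json

def count_down_osds(osd_stat_json: str) -> int:
    """Return the number of down OSDs from `ceph osd stat -f json` output.

    Assumption: the output has top-level `num_osds` / `num_up_osds`
    fields, possibly nested under an "osdmap" key on older releases.
    """
    stat = json.loads(osd_stat_json)
    # Older releases nest the counters under an "osdmap" key.
    stat = stat.get("osdmap", stat)
    # OSDs that exist but are not up are counted as down.
    return stat["num_osds"] - stat["num_up_osds"]

# Sample output shaped like a 12-OSD cluster while one 3-OSD node reboots:
sample = '{"num_osds": 12, "num_up_osds": 9, "num_in_osds": 12}'
print(count_down_osds(sample))  # 3
```

After the reboot completes and `num_up_osds` equals `num_osds` again, the helper returns 0, which is the value the dashboard tab should also report.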


Additional info:

Comment 7 Giridhar Ramaraju 2019-08-05 13:06:59 UTC
Updating the QA Contact to Hemant. Hemant will reroute them to the appropriate QE Associate.

Regards,
Giri

Comment 8 Giridhar Ramaraju 2019-08-05 13:09:34 UTC
Updating the QA Contact to Hemant. Hemant will reroute them to the appropriate QE Associate.

Regards,
Giri

Comment 9 Zack Cerza 2020-03-31 20:01:07 UTC
*** Bug 1658896 has been marked as a duplicate of this bug. ***

Comment 23 errata-xmlrpc 2021-05-06 18:32:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 3.3 Security and Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1518

