Created attachment 1513907 [details]
Screenshot of "ceph backend storage" page

Description of problem:
The "OSDs Down" count is not reflected correctly on the Ceph Backend Storage page.

Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64
ceph-ansible-3.2.0-1.el7cp.noarch
ceph-base-12.2.8-51.el7cp.x86_64
grafana:3-8

How reproducible:
Always

Steps to Reproduce:
1. Install a 3.2 Ceph cluster
2. Install the ceph metrics dashboard
3. Log in to the metrics dashboard and navigate to the "ceph backend storage" page
4. Reboot an OSD node

Actual results:
The "OSD down" panel does not return to 0 after the OSDs rejoin the cluster following the OSD node reboot.

Expected results:
The "OSD down" panel should display 0 when all OSDs are up and running.

Additional info:
CLI output of the cluster:

  cluster:
    id:     fb79098a-a170-4e78-a1dd-bfce71dda022
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum magna012,magna036,magna043
    mgr: magna012(active), standbys: magna043, magna036
    osd: 9 osds: 9 up, 9 in
    rgw: 1 daemon active

  data:
    pools:   5 pools, 132 pgs
    objects: 4.41k objects, 16.4GiB
    usage:   50.1GiB used, 8.09TiB / 8.14TiB avail
    pgs:     132 active+clean
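A minimal way to cross-check the panel against the cluster itself while reproducing the issue (a sketch only; the 30-second interval is an arbitrary choice, and the OSD counts assumed are the 9 OSDs shown in the status output above):

  # Poll the cluster's own view of OSD state while the rebooted node rejoins,
  # and compare it with the "OSD down" panel on the Grafana dashboard.
  watch -n 30 'ceph osd stat; ceph osd tree | grep -i down'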
Additional info: the "OSD down" panel reflects the correct value after 40 to 60 minutes.
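A rough way to confirm that the CLI itself recovers much sooner than the panel, so the 40-60 minute lag can be attributed to the dashboard side (a sketch; assumes the 9-OSD cluster shown above, polling once per minute):

  # Print a timestamp every minute until 'ceph osd stat' reports all 9 OSDs up again.
  while ! ceph osd stat | grep -q '9 up'; do date; sleep 60; done; date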
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate. Regards, Giri
*** This bug has been marked as a duplicate of bug 1652233 ***
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days