Bug 1658896 - OSDs Down count is not reflecting correctly on Ceph Backend Storage page
Summary: OSDs Down count is not reflecting correctly on Ceph Backend Storage page
Keywords:
Status: CLOSED DUPLICATE of bug 1652233
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Metrics
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z5
Target Release: 3.3
Assignee: Christina Meno
QA Contact: Madhavi Kasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-13 06:20 UTC by Uday kurundwade
Modified: 2023-09-14 04:43 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-31 20:01:07 UTC
Embargoed:


Attachments
Screenshot of "ceph backend storage" page (140.89 KB, image/png)
2018-12-13 06:20 UTC, Uday kurundwade

Description Uday kurundwade 2018-12-13 06:20:31 UTC
Created attachment 1513907 [details]
Screenshot of "ceph backend storage" page

Description of problem:
OSDs Down count is not reflecting correctly on Ceph Backend Storage page

Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64
ceph-ansible-3.2.0-1.el7cp.noarch
ceph-base-12.2.8-51.el7cp.x86_64
grafana:3-8

How reproducible:
Always

Steps to Reproduce:
1. Install a Ceph 3.2 cluster.
2. Install the Ceph Metrics dashboard.
3. Log in to the metrics dashboard and navigate to the "Ceph Backend Storage" page.
4. Reboot the OSD node and wait for its OSDs to rejoin the cluster (a verification sketch follows these steps).
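
For reference, the check below is a minimal sketch (not part of cephmetrics itself) that asks the cluster directly for the OSD counts the "OSD down" panel is supposed to reflect. It assumes the ceph CLI is available on an admin/monitor node and that "ceph osd stat --format json" exposes num_osds and num_up_osds; the exact JSON layout varies slightly between Ceph releases.

#!/usr/bin/env python3
# Minimal sketch: compare the cluster's own view of OSD state with the panel.
# Assumes the "ceph" CLI is on PATH; the JSON field layout (osdmap wrapper,
# num_osds / num_up_osds) is based on Luminous and may differ on other releases.
import json
import subprocess

def osd_counts():
    out = subprocess.check_output(["ceph", "osd", "stat", "--format", "json"])
    data = json.loads(out)
    osdmap = data.get("osdmap", data)  # Luminous nests the counters under "osdmap"
    return osdmap["num_osds"], osdmap["num_up_osds"]

if __name__ == "__main__":
    total, up = osd_counts()
    print("OSDs total: %d, up: %d, down: %d" % (total, up, total - up))

Running this right after the rebooted node comes back makes it easy to see when the cluster already reports 0 OSDs down while the panel still shows a non-zero value.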

Actual results:
The "OSD down" panel does not return to 0 after the OSDs rejoin the cluster following the OSD node reboot.

Expected results:
The "OSD down" panel should display 0 when all OSDs are up and running.

Additional info:
CLI output of the cluster, showing all 9 OSDs up and in:

cluster:
    id:     fb79098a-a170-4e78-a1dd-bfce71dda022
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
 
  services:
    mon: 3 daemons, quorum magna012,magna036,magna043
    mgr: magna012(active), standbys: magna043, magna036
    osd: 9 osds: 9 up, 9 in
    rgw: 1 daemon active
 
  data:
    pools:   5 pools, 132 pgs
    objects: 4.41k objects, 16.4GiB
    usage:   50.1GiB used, 8.09TiB / 8.14TiB avail
    pgs:     132 active+clean

Comment 3 Uday kurundwade 2018-12-13 08:16:37 UTC
Additional info:

"OSD down panel" reflecting correct output after 40 to 60 min.

Comment 4 Giridhar Ramaraju 2019-08-05 13:08:39 UTC
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE Associate.

Regards,
Giri

Comment 5 Giridhar Ramaraju 2019-08-05 13:10:05 UTC
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE Associate.

Regards,
Giri

Comment 9 Zack Cerza 2020-03-31 20:01:07 UTC

*** This bug has been marked as a duplicate of bug 1652233 ***

Comment 10 Red Hat Bugzilla 2023-09-14 04:43:43 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

