Bug 1658896

Summary: OSDs Down count is not reflecting correctly on Ceph Backend Storage page
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Uday kurundwade <ukurundw>
Component: Ceph-Metrics
Assignee: Christina Meno <gmeno>
Status: CLOSED DUPLICATE
QA Contact: Madhavi Kasturi <mkasturi>
Severity: medium
Priority: medium
Version: 3.2
CC: ceph-eng-bugs, gmeno, hnallurv, kdreyer, zcerza
Target Milestone: z5
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Doc Type: Known Issue
Last Closed: 2020-03-31 20:01:07 UTC
Type: Bug
Attachments:
Screenshot of "ceph backend storage" page (flags: none)

Description Uday kurundwade 2018-12-13 06:20:31 UTC
Created attachment 1513907 [details]
Screenshot of "ceph backend storage" page

Description of problem:
The OSDs Down count is not reflected correctly on the Ceph Backend Storage page.

Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64
ceph-ansible-3.2.0-1.el7cp.noarch
ceph-base-12.2.8-51.el7cp.x86_64
grafana:3-8

How reproducible:
Always

Steps to Reproduce:
1. Install a Ceph 3.2 cluster.
2. Install the Ceph Metrics dashboard.
3. Log in to the metrics dashboard and navigate to the "Ceph Backend Storage" page.
4. Reboot one OSD node, wait for its OSDs to rejoin the cluster, then check the "OSDs Down" panel (see the CLI cross-check sketch below).
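
To rule out a cluster-side problem, the panel can be cross-checked against what the cluster itself reports. The snippet below is a minimal sketch and not part of the original report; it assumes the ceph CLI works on an admin/mon node and counts OSDs whose "up" flag is 0 in "ceph osd dump --format json", which is the quantity the "OSDs Down" panel is expected to track.

#!/usr/bin/env python
# Hypothetical cross-check helper (illustration only, not from this report):
# report how many OSDs the cluster itself currently considers down.
import json
import subprocess

def osd_down_count():
    # "ceph osd dump --format json" lists every OSD with its "up"/"in" flags.
    out = subprocess.check_output(["ceph", "osd", "dump", "--format", "json"])
    osds = json.loads(out)["osds"]
    return len(osds), sum(1 for o in osds if o["up"] == 0)

if __name__ == "__main__":
    total, down = osd_down_count()
    # Once the rebooted node's OSDs rejoin, "down" should read 0 here,
    # and the "OSDs Down" panel should match it.
    print("OSDs: %d total, %d down" % (total, down))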

Actual results:
The "OSDs Down" panel does not return to 0 after the rebooted OSD node's OSDs rejoin the cluster.

Expected results:
The "OSDs Down" panel should display 0 once all OSDs are up and running.

Additional info:
CLI status output of the cluster (all 9 OSDs up and in):

cluster:
    id:     fb79098a-a170-4e78-a1dd-bfce71dda022
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
 
  services:
    mon: 3 daemons, quorum magna012,magna036,magna043
    mgr: magna012(active), standbys: magna043, magna036
    osd: 9 osds: 9 up, 9 in
    rgw: 1 daemon active
 
  data:
    pools:   5 pools, 132 pgs
    objects: 4.41k objects, 16.4GiB
    usage:   50.1GiB used, 8.09TiB / 8.14TiB avail
    pgs:     132 active+clean

Comment 3 Uday kurundwade 2018-12-13 08:16:37 UTC
Additional info:

"OSD down panel" reflecting correct output after 40 to 60 min.

Comment 4 Giridhar Ramaraju 2019-08-05 13:08:39 UTC
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE Associate.

Regards,
Giri

Comment 5 Giridhar Ramaraju 2019-08-05 13:10:05 UTC
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE Associate.

Regards,
Giri

Comment 9 Zack Cerza 2020-03-31 20:01:07 UTC

*** This bug has been marked as a duplicate of bug 1652233 ***

Comment 10 Red Hat Bugzilla 2023-09-14 04:43:43 UTC
The needinfo request(s) on this closed bug have been removed because they remained unresolved for 1000 days.