Bug 1655589

Summary: [ceph-metrics] "Capacity Utilization" shows the wrong value when an OSD is down
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Yogesh Mane <ymane>
Component: Ceph-Metrics Assignee: Christina Meno <gmeno>
Status: CLOSED DEFERRED QA Contact: Madhavi Kasturi <mkasturi>
Severity: medium Docs Contact:
Priority: medium    
Version: 3.2 CC: bniver, ceph-eng-bugs, hnallurv, jbrier, kdreyer, pasik, ymane
Target Milestone: z5   
Target Release: 3.3   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Known Issue
Doc Text:
._Capacity Utilization_ in the _Ceph - At Glance_ dashboard shows the wrong value when an OSD is down. This issue causes the Red Hat Ceph Dashboard to show a capacity utilization that is lower than what `ceph df` reports. There is no workaround at this time.
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-04-22 12:24:44 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1629656    
Attachments:
Description Flags
Screenshot of grafana output none

Description Yogesh Mane 2018-12-03 13:44:01 UTC
Created attachment 1510920 [details]
Screenshot of grafana output

Description of problem:
In "Ceph - At Glance" page,"Capacity Utilization" is showing wrong value when osd is down.


Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64
ceph-ansible-3.2.0-0.1.rc4.el7cp.noarch

How reproducible:
Always

Steps to Reproduce:
1. Bring up the cluster with ceph-ansible.
2. Bring up Grafana with cephmetrics-ansible.
3. Bring down an OSD (an example command sequence is sketched below).
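
A minimal sketch of step 3 and the follow-up check, assuming a systemd-managed OSD daemon; the OSD id 0 is only an example:

# on an OSD node, stop one OSD daemon (example id: 0)
sudo systemctl stop ceph-osd@0

# confirm the OSD is reported as down
ceph osd tree

# compare the cluster-wide utilization reported by ceph df with the
# "Capacity Utilization" panel on the "Ceph - At Glance" Grafana dashboard
ceph df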

Actual results:
"Capacity Utilization" value is showing wrong value.

Expected results:
"Capacity Utilization" should show correct value.

Additional info:
[ubuntu@magna049 ceph-ansible]$ ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED 
    2.73TiB     969GiB      1.78TiB         65.31 
POOLS:
    NAME                          ID     USED        %USED     MAX AVAIL     OBJECTS 
    cephfs_data                   1       633GiB     54.29        356GiB      162138 
    cephfs_metadata               2      16.5MiB         0        356GiB          61 
    .rgw.root                     3      1.09KiB         0        356GiB           4 
    default.rgw.control           4           0B         0        356GiB           8 
    default.rgw.meta              5      1.40KiB         0        356GiB           8 
    default.rgw.log               6           0B         0        356GiB         207 
    default.rgw.buckets.index     8           0B         0        356GiB           2 
    default.rgw.buckets.data      9       109GiB     16.93        356GiB       27825 
    p1                            11      146GiB     24.09        356GiB       37407 
    p2                            12
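
As a rough sanity check of the output above, the GLOBAL %RAW USED is simply RAW USED / SIZE; the small difference from 65.31 comes from the TiB values being rounded in the output:

# 1.78 TiB raw used out of 2.73 TiB total
python -c 'print(round(1.78 / 2.73 * 100, 2))'    # prints 65.2, close to the reported 65.31

The dashboard's "Capacity Utilization" panel reported a lower figure than this while the OSD was down (see the attached screenshot).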

Comment 4 John Brier 2018-12-17 22:13:41 UTC
Yogesh, I saw you set the Doc Text on this bug, but then removed it. Was that
intentional?

If so, why?

I am hoping to add this as a Known Issue to the Release Notes.

Comment 6 Giridhar Ramaraju 2019-08-05 13:11:08 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri

Comment 7 Giridhar Ramaraju 2019-08-05 13:12:09 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri