Bug 1655589 - [ceph-metrics]"Capacity Utilization" is showing wrong value when osd is down
Summary: [ceph-metrics]"Capacity Utilization" is showing wrong value when osd is down
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Metrics
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z5
Target Release: 3.3
Assignee: Christina Meno
QA Contact: Madhavi Kasturi
URL:
Whiteboard:
Depends On:
Blocks: 1629656
 
Reported: 2018-12-03 13:44 UTC by Yogesh Mane
Modified: 2020-09-18 13:20 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
._Capacity Utilization_ in _Ceph - At Glance_ dashboard shows the wrong value when an OSD is down
This issue causes the Red Hat Ceph Dashboard to show a capacity utilization that is lower than what `ceph df` reports. There is no workaround at this time.
Clone Of:
Environment:
Last Closed: 2020-04-22 12:24:44 UTC
Embargoed:


Attachments
Screenshot of Grafana output (197.32 KB, image/png)
2018-12-03 13:44 UTC, Yogesh Mane

Description Yogesh Mane 2018-12-03 13:44:01 UTC
Created attachment 1510920 [details]
Screenshot of Grafana output

Description of problem:
In "Ceph - At Glance" page,"Capacity Utilization" is showing wrong value when osd is down.


Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64
ceph-ansible-3.2.0-0.1.rc4.el7cp.noarch

How reproducible:
Always

Steps to Reproduce:
1. Bring up the cluster with ceph-ansible.
2. Bring up Grafana with cephmetrics-ansible.
3. Bring down an OSD (for example, as sketched below).
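
One way to take an OSD down for step 3, as a minimal sketch (it assumes systemd-managed OSD daemons; the OSD id 0 is a placeholder, and the commands are run on the node hosting that OSD):

# stop one OSD daemon; id 0 is only an example, pick any OSD in the cluster
systemctl stop ceph-osd@0
# confirm the cluster reports the OSD as down before checking the dashboard
ceph osd tree | grep -w down
ceph -s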

Actual results:
"Capacity Utilization" value is showing wrong value.

Expected results:
"Capacity Utilization" should show correct value.

Additional info:
[ubuntu@magna049 ceph-ansible]$ ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED 
    2.73TiB     969GiB      1.78TiB         65.31 
POOLS:
    NAME                          ID     USED        %USED     MAX AVAIL     OBJECTS 
    cephfs_data                   1       633GiB     54.29        356GiB      162138 
    cephfs_metadata               2      16.5MiB         0        356GiB          61 
    .rgw.root                     3      1.09KiB         0        356GiB           4 
    default.rgw.control           4           0B         0        356GiB           8 
    default.rgw.meta              5      1.40KiB         0        356GiB           8 
    default.rgw.log               6           0B         0        356GiB         207 
    default.rgw.buckets.index     8           0B         0        356GiB           2 
    default.rgw.buckets.data      9       109GiB     16.93        356GiB       27825 
    p1                            11      146GiB     24.09        356GiB       37407 
    p2                            12
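
For comparison with the dashboard panel, the GLOBAL raw utilization shown above (65.31 %RAW USED) can be recomputed from the JSON form of ceph df. A minimal sketch, assuming the RHCS 3.x (Luminous-era) JSON layout with stats.total_bytes and stats.total_used_bytes, and a Python interpreter on the node:

ceph df --format json | python -c 'import json,sys; s=json.load(sys.stdin)["stats"]; print("%.2f%% raw used" % (100.0 * s["total_used_bytes"] / s["total_bytes"]))'

While an OSD is down, a "Capacity Utilization" panel value noticeably lower than this number matches the behaviour reported here.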

Comment 4 John Brier 2018-12-17 22:13:41 UTC
Yogesh, I saw you set the Doc Text on this bug, but then removed it. Was that
intentional?

If so, why?

I am hoping to add this as a Known Issue to the Release Notes.

Comment 6 Giridhar Ramaraju 2019-08-05 13:11:08 UTC
Updating the QA Contact to Hemant. Hemant will reroute these bugs to the appropriate QE Associate.

Regards,
Giri


