Bug 1656820

Summary: [ceph-metrics] In the "Ceph Cluster" dashboard, the "Pool Capacity" graphs show values ~1% higher than what "df --cluster" shows.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Yogesh Mane <ymane>
Component: Ceph-Metrics
Assignee: ceph-eng-bugs <ceph-eng-bugs>
Status: CLOSED WONTFIX
QA Contact: Yogesh Mane <ymane>
Severity: high
Docs Contact:
Priority: medium
Version: 3.2
CC: ceph-eng-bugs, gmeno, hnallurv, jbrier, pasik, ymane
Target Milestone: rc
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
.In the _Ceph Cluster_ dashboard, the _Pool Capacity_ graphs display values higher than the actual capacity. This issue causes the _Pool Capacity_ graphs to display values approximately one percent higher than what `df --cluster` shows. There is no workaround at this time.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-04-05 15:04:30 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1629656    
Attachments:
Description Flags
Screenshot of output none

Description Yogesh Mane 2018-12-06 12:03:27 UTC
Created attachment 1512063 [details]
Screenshot of output

Description of problem:
In the "Ceph Cluster" dashboard, the "Pool Capacity" graphs show values increased by approximately 1%.

Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64
ceph-ansible-3.2.0-0.1.rc8.el7cp.noarch

How reproducible:
Always

Steps to Reproduce:
1. Install a Ceph cluster with ceph-ansible
2. Install ceph-metrics
3. Navigate to the "Ceph Cluster" dashboard
4. Check the "Pool Capacity" graph

Actual results:
The "pools used%" values are displayed approximately 1% higher than actual.

Expected results:
The "pools used%" values should display the correct values, matching `ceph df`.

Additional info:
# ceph df --cluster local
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED 
    7.83TiB     6.66TiB      1.17TiB         14.93 
POOLS:
    NAME                          ID     USED        %USED     MAX AVAIL     OBJECTS 
    cephfs_data                   1           0B         0       1.91TiB           0 
    cephfs_metadata               2      2.19KiB         0       1.91TiB          21 
    .rgw.root                     3      1.09KiB         0       1.91TiB           4 
    default.rgw.control           4           0B         0       1.91TiB           8 
    default.rgw.meta              5        1008B         0       1.91TiB           6 
    default.rgw.log               6           0B         0       1.91TiB         207 
    default.rgw.buckets.index     7           0B         0       1.91TiB           1 
    default.rgw.buckets.data      8       358GiB     15.47       1.91TiB       91656 
    p1                            9      37.2GiB      1.87       1.91TiB        9535
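For comparison, the per-pool %USED figures in the output above can be reproduced with the formula USED / (USED + MAX AVAIL). This is a sketch to illustrate the reference values, not the cephmetrics source; a dashboard that computes the percentage differently (for example, dividing by MAX AVAIL alone, or rounding up) would drift from these numbers:

```python
# Sketch (assumption): reproduce the %USED column of `ceph df` as
# USED / (USED + MAX AVAIL). Figures are taken from the output above,
# converted to GiB.

TIB = 1024.0  # GiB per TiB


def pct_used(used_gib, max_avail_gib):
    """Percentage of pool capacity used, rounded to two decimals like `ceph df`."""
    return round(100.0 * used_gib / (used_gib + max_avail_gib), 2)


# default.rgw.buckets.data: 358 GiB used, 1.91 TiB max avail -> 15.47
print(pct_used(358.0, 1.91 * TIB))

# p1: 37.2 GiB used, 1.91 TiB max avail -> 1.87
print(pct_used(37.2, 1.91 * TIB))
```

Both results match the `ceph df` columns above, so the dashboard's ~1% discrepancy would come from computing or rounding the ratio differently, not from the raw pool statistics.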

Comment 3 Christina Meno 2018-12-10 22:47:42 UTC
We'll address this in the future; it is not a blocker.

Comment 5 John Brier 2018-12-17 22:15:16 UTC
Yogesh, could you set the Doc Text for this one?