.Graphs on the _OSD Node Detail_ dashboard might appear incorrect when used with _All_
Graphs generated under _OSD Node Detail_ > _OSD Host Name_ > _All_ do not show all OSDs in the cluster. A graph plotting data for hundreds or thousands of OSDs would not be usable, so the _All_ setting is intended to show cluster-wide values. For some dashboards this aggregation does not make sense, and _All_ should not be used there.
There is no workaround at this time.
Created attachment 1514030 [details]
Screenshot of graph with OSDs representing values
Description of problem:
The number of OSDs represented in the graphs is lower than the number of OSDs actually present in the cluster.
The affected graphs are 'All Disk utilisation', 'All Disk IOPS', 'All Disk Latency', and 'All Throughput by Disk' in the "OSD Node Detail" dashboard.
Version-Release number of selected component (if applicable):
cephmetrics-ansible-2.0.1-1.el7cp.x86_64
ceph-ansible-3.2.0-0.1.rc8.el7cp.noarch
How reproducible:
Always
Steps to Reproduce:
1. Install a Ceph cluster
2. Install ceph-metrics
3. Go to the 'OSD Node Detail' dashboard
4. Check the number of OSDs represented in all four disk graphs
Actual results:
Fewer OSDs are represented in the graphs than exist in the cluster.
Expected results:
The number of OSDs represented in each graph should match the number of OSDs in the cluster.
Additional info:
sudo ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 7.82686 root default
-3 2.61719 host magna066
1 hdd 0.65430 osd.1 up 1.00000 1.00000
3 hdd 0.65430 osd.3 up 1.00000 1.00000
5 hdd 0.65430 osd.5 up 1.00000 1.00000
7 hdd 0.65430 osd.7 up 1.00000 1.00000
-5 2.61719 host magna087
0 hdd 0.65430 osd.0 up 1.00000 1.00000
4 hdd 0.65430 osd.4 up 1.00000 1.00000
8 hdd 0.65430 osd.8 up 1.00000 1.00000
10 hdd 0.65430 osd.10 up 1.00000 1.00000
-7 2.59248 host magna089
2 hdd 0.77309 osd.2 up 1.00000 1.00000
6 hdd 0.90970 osd.6 up 1.00000 1.00000
9 hdd 0.90970 osd.9 up 1.00000 1.00000
(11 OSDs present in the cluster)
(graph showing 9 OSDs)
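As a quick sanity check (a minimal Python sketch over the `ceph osd tree` output above, not part of the original report), the distinct OSD ids can be counted to confirm that the cluster has 11 OSDs while the graph shows only 9:

```python
import re

# `ceph osd tree` output pasted from the report above
tree = """\
-1       7.82686 root default
-3       2.61719     host magna066
 1   hdd 0.65430         osd.1         up  1.00000 1.00000
 3   hdd 0.65430         osd.3         up  1.00000 1.00000
 5   hdd 0.65430         osd.5         up  1.00000 1.00000
 7   hdd 0.65430         osd.7         up  1.00000 1.00000
-5       2.61719     host magna087
 0   hdd 0.65430         osd.0         up  1.00000 1.00000
 4   hdd 0.65430         osd.4         up  1.00000 1.00000
 8   hdd 0.65430         osd.8         up  1.00000 1.00000
10   hdd 0.65430         osd.10        up  1.00000 1.00000
-7       2.59248     host magna089
 2   hdd 0.77309         osd.2         up  1.00000 1.00000
 6   hdd 0.90970         osd.6         up  1.00000 1.00000
 9   hdd 0.90970         osd.9         up  1.00000 1.00000
"""

# Collect the distinct osd.N ids from the NAME column
osd_ids = set(re.findall(r"osd\.(\d+)", tree))
print(len(osd_ids))  # 11 OSDs in the tree, vs 9 shown in the graphs
```

The same count can be obtained live with `ceph osd ls | wc -l`; comparing it against the number of series in each graph reproduces the mismatch.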
I have Ceph 3.2 in a disk-based filestore scenario with the metrics dashboard installed.
I have a total of 9 OSDs running (3 OSD nodes, each running 3 OSDs). When I navigate to the OSD Node Detail page, I expect to see details for 9 OSDs, but all the graphs on the page show details for only 3 OSDs. Please refer to the attached screenshot (ceph osd node details issue.png).