Description of problem:
Host Utilization is not showing charts for all of the mounted disk partitions. Specifically, the graphs for the last two disk partitions are missing on every host.

Version-Release number of selected component (if applicable):
rhsc-3.0.0-0.12.el6rhs.noarch
rhsc-monitoring-uiplugin-0.1.1-1.el6rhs.noarch

How reproducible:
Always

My test setup is an RHSC + Nagios setup with 4 RHS 3.0 nodes in a cluster called "DocCluster". See Additional info for more details.

Steps to Reproduce:
1. Log in to RHSC.
2. In 'Tree' mode, select any host, open the "Trends" tab and view the graphs.

Actual results:
The Trends tab shows utilization graphs for all the physical entities (CPU, Memory, Swap, Network Interfaces and Disks), but of the 4 mounted disk partitions (/rhs/brick1, /rhs/brick2, /rhs/brick3 and /rhs/brick4), the last two (/rhs/brick3 and /rhs/brick4) are not shown at all. This is the case on all 4 nodes in the cluster. If that is the case, I believe the last 2 mounted disks on a host will never be shown. For example, if there are 8 mounted disks, only 6 will appear in Trends. Is that right?

Expected results:
The Trends tab should show utilization graphs for all the physical entities: CPU, Memory, Swap, Network Interfaces and ALL mounted disk partitions.

Additional info:
Screenshots attached. Some additional details about the test setup are given below for your reference:

##################################################
[root@dhcp42-241 /]# gluster peer status
Number of Peers: 3

Hostname: 10.70.43.90
Uuid: de710f3d-16b9-4089-a44c-4c9e2b2e7452
State: Peer in Cluster (Connected)

Hostname: 10.70.43.179
Uuid: 704a62ba-b918-4f2c-bbde-99c7849d0f0f
State: Peer in Cluster (Connected)

Hostname: 10.70.42.239
Uuid: 6906637d-2630-41bf-83c1-1dca3b52d1ff
State: Peer in Cluster (Connected)
_______________________________________________________

[root@dhcp42-241 /]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_dhcp42228-lv_root   47G  2.4G   43G   6% /
tmpfs                             4.0G     0  4.0G   0% /dev/shm
/dev/vda1                         485M   34M  426M   8% /boot
/dev/mapper/RHS_vg1-RHS_lv1        50G   49G  2.0G  97% /rhs/brick1
/dev/mapper/RHS_vg2-RHS_lv2        50G   33M   50G   1% /rhs/brick2
/dev/mapper/RHS_vg3-RHS_lv3        50G   49G  2.0G  97% /rhs/brick3
/dev/mapper/RHS_vg4-RHS_lv4        50G   49G  2.0G  97% /rhs/brick4
_______________________________________________________

[root@dhcp43-90 /]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_dhcp42228-lv_root   47G  2.4G   43G   6% /
tmpfs                             4.0G     0  4.0G   0% /dev/shm
/dev/vda1                         485M   34M  426M   8% /boot
/dev/mapper/RHS_vg1-RHS_lv1        50G   49G  2.0G  97% /rhs/brick1
/dev/mapper/RHS_vg2-RHS_lv2        50G   49G  2.0G  97% /rhs/brick2
/dev/mapper/RHS_vg3-RHS_lv3        50G   49G  2.0G  97% /rhs/brick3
/dev/mapper/RHS_vg4-RHS_lv4        50G   49G  2.0G  97% /rhs/brick4
_______________________________________________________

[root@dhcp43-179 /]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_dhcp42228-lv_root   47G  2.0G   43G   5% /
tmpfs                             4.0G     0  4.0G   0% /dev/shm
/dev/vda1                         485M   34M  426M   8% /boot
/dev/mapper/RHS_vg1-RHS_lv1        50G   49G  2.0G  97% /rhs/brick1
/dev/mapper/RHS_vg2-RHS_lv2        50G   49G  2.0G  97% /rhs/brick2
/dev/mapper/RHS_vg3-RHS_lv3        50G   49G  2.0G  97% /rhs/brick3
/dev/mapper/RHS_vg4-RHS_lv4        50G   49G  2.0G  97% /rhs/brick4
_______________________________________________________

[root@dhcp42-239 /]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_dhcp42228-lv_root   47G  2.4G   43G   6% /
tmpfs                             4.0G     0  4.0G   0% /dev/shm
/dev/vda1                         485M   34M  426M   8% /boot
/dev/mapper/RHS_vg1-RHS_lv1        50G   49G  2.0G  97% /rhs/brick1
/dev/mapper/RHS_vg2-RHS_lv2        50G   33M   50G   1% /rhs/brick2
/dev/mapper/RHS_vg3-RHS_lv3        50G   49G  2.0G  97% /rhs/brick3
/dev/mapper/RHS_vg4-RHS_lv4        50G   49G  2.0G  97% /rhs/brick4
##################################################
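For cross-checking the df output above against what the Trends tab renders, a small helper like the following could list the brick mount points a node actually has. This is a minimal sketch and not part of RHSC or rhsc-monitoring-uiplugin; the script and the /rhs/ prefix filter are assumptions for illustration only.

#!/usr/bin/env python
# Hypothetical helper (not part of RHSC or rhsc-monitoring-uiplugin):
# lists the brick mount points a node reports in /proc/mounts, so the set
# can be compared against what the Trends tab actually shows.

def brick_mounts(prefix="/rhs/"):
    mounts = []
    with open("/proc/mounts") as f:
        for line in f:
            fields = line.split()
            # /proc/mounts fields: device, mount point, fstype, options, dump, pass
            if len(fields) >= 2 and fields[1].startswith(prefix):
                mounts.append(fields[1])
    return sorted(mounts)

if __name__ == "__main__":
    for mountpoint in brick_mounts():
        print(mountpoint)

On the nodes above this should print the four /rhs/brickN mount points, i.e. the full set of graphs expected in the Trends tab.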
Created attachment 919609 [details] Host1-Trends graph
Created attachment 919610 [details] Host2-Trends graph
Created attachment 919611 [details] Host3-Trends graph
Created attachment 919612 [details] Host4-Trends graph
Please review edited doc text and sign off.
Looks good to me.
Currently it is not technically possible to get the list of all mount points in order to display a graph for each of them. Hence, moving this out of 3.0.2.
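Purely as an illustration of the kind of per-mount-point data such graphs would need (and assuming a host-side Nagios-style check could be used, which may not match the plugin's actual architecture), a hypothetical check is sketched below. The script name, the 80/90 thresholds and the statvfs-based calculation are all assumptions, not the shipped plugin code; it emits one perfdata entry per mount point passed on the command line, in standard Nagios plugin output format.

#!/usr/bin/env python
# Hypothetical Nagios-style check (not the shipped rhsc-monitoring-uiplugin
# code): given mount points as arguments, emit one perfdata entry per mount
# point so a graphing layer could draw a chart for every brick passed to it.
import os
import sys

def usage_percent(mountpoint):
    # Percentage used, computed from statvfs (may differ slightly from df).
    st = os.statvfs(mountpoint)
    total = st.f_blocks * st.f_frsize
    avail = st.f_bavail * st.f_frsize
    return 100.0 * (total - avail) / total if total else 0.0

if __name__ == "__main__":
    mounts = sys.argv[1:] or ["/"]
    usage = dict((mp, usage_percent(mp)) for mp in mounts)
    perfdata = " ".join("%s=%.1f%%;80;90" % (mp, pct)
                        for mp, pct in sorted(usage.items()))
    worst = max(usage.values())
    if worst >= 90:
        state, label = 2, "CRITICAL"
    elif worst >= 80:
        state, label = 1, "WARNING"
    else:
        state, label = 0, "OK"
    # Standard plugin output: "STATUS - message | perfdata", exit code 0/1/2.
    print("%s - %d mount point(s) checked | %s" % (label, len(mounts), perfdata))
    sys.exit(state)

It could be run as, for example: python check_brick_usage.py /rhs/brick1 /rhs/brick2 /rhs/brick3 /rhs/brick4 (the file name is hypothetical).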
Thank you for your report. This bug is filed against a component for which no further new development is being undertaken.