Bug 1704710 - Memory Usage under "Home -> Status -> Dashboard" is double counted for all pods
Summary: Memory Usage under "Home -> Status -> Dashboard" is double counted for all pods
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Samuel Padgett
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-04-30 12:04 UTC by Junqi Zhao
Modified: 2019-05-03 22:54 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-03 22:54:41 UTC
Target Upstream Version:
Embargoed:


Attachments
Memory Usage under "Home -> Status -> Dashboard" (259.19 KB, image/png)
2019-04-30 12:04 UTC, Junqi Zhao
memory usage correction (96.48 KB, image/png)
2019-05-03 22:51 UTC, Peter Ruan

Description Junqi Zhao 2019-04-30 12:04:14 UTC
Created attachment 1560235 [details]
Memory Usage under "Home -> Status -> Dashboard"

Description of problem:
For example: log in to the cluster console, select the openshift-monitoring project, and go to "Home -> Status -> Dashboard".
From the attached picture, the total Memory Usage is about 3.4G, but under "Memory Usage by Pod (Top 10)",
prometheus-k8s-0 and prometheus-k8s-1 together already exceed 3.4G.

Querying "pod_name:container_memory_usage_bytes:sum{namespace='openshift-monitoring'} / 1024 / 1024" in the Prometheus UI,
the memory usage shown in the console UI is about twice the value reported by Prometheus (a sketch of how such doubling can arise follows the table):
***********************************************************************
Element	Value (MiB)
{namespace="openshift-monitoring",pod_name="alertmanager-main-0"}	40.4375
{namespace="openshift-monitoring",pod_name="alertmanager-main-1"}	34.6796875
{namespace="openshift-monitoring",pod_name="alertmanager-main-2"}	33.09375
{namespace="openshift-monitoring",pod_name="cluster-monitoring-operator-574846786c-rprcb"}	31.859375
{namespace="openshift-monitoring",pod_name="grafana-587c569487-4m8j5"}	43.78125
{namespace="openshift-monitoring",pod_name="kube-state-metrics-67d6957cd6-4kbrl"}	37.15625
{namespace="openshift-monitoring",pod_name="node-exporter-227nz"}	26.91015625
{namespace="openshift-monitoring",pod_name="node-exporter-chkh2"}	33.3125
{namespace="openshift-monitoring",pod_name="node-exporter-fktrp"}	27.0390625
{namespace="openshift-monitoring",pod_name="node-exporter-g9mm6"}	32.59375
{namespace="openshift-monitoring",pod_name="node-exporter-jtg29"}	30.62890625
{namespace="openshift-monitoring",pod_name="node-exporter-nhfth"}	27.671875
{namespace="openshift-monitoring",pod_name="prometheus-adapter-849f77877f-7zcc7"}	19.4921875
{namespace="openshift-monitoring",pod_name="prometheus-adapter-849f77877f-qp7fr"}	19.265625
{namespace="openshift-monitoring",pod_name="prometheus-k8s-0"}	1375.32421875
{namespace="openshift-monitoring",pod_name="prometheus-k8s-1"}	1377.93359375
{namespace="openshift-monitoring",pod_name="prometheus-operator-5b895d4f9d-dxgfr"}	42.765625
{namespace="openshift-monitoring",pod_name="telemeter-client-7bddf7dcb8-xg5cn"}	22.6640625
***********************************************************************
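For reference, a common way this kind of doubling arises is summing the raw
cAdvisor metric without excluding the pod-level cgroup series: for each pod,
container_memory_usage_bytes exposes one series per container plus an
aggregate series with an empty container_name label, so an unfiltered sum
counts every byte twice. The PromQL below is a hypothetical sketch of that
pitfall and its correction, not necessarily the exact query the console
used (see comment 1 for the actual fix):

***********************************************************************
# Hypothetical sketch: an unfiltered sum also picks up the pod-level
# cgroup series (container_name=""), which already aggregates all
# containers in the pod, so each pod's usage is counted twice.
sum by (pod_name) (container_memory_usage_bytes{namespace="openshift-monitoring"})

# Excluding the pod-level aggregate counts each container once; this
# should match pod_name:container_memory_usage_bytes:sum above.
sum by (pod_name) (container_memory_usage_bytes{namespace="openshift-monitoring", container_name!=""})
***********************************************************************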
Version-Release number of selected component (if applicable):
4.1.0-0.nightly-2019-04-28-064010

How reproducible:
Always

Steps to Reproduce:
1. See the description above.

Actual results:
Memory Usage under "Home -> Status -> Dashboard" is double counted for all pods.

Expected results:
Memory Usage under "Home -> Status -> Dashboard" should match the per-pod values reported by Prometheus.

Additional info:

Comment 1 Samuel Padgett 2019-04-30 12:24:58 UTC
https://github.com/openshift/console/pull/1507

Comment 2 Peter Ruan 2019-05-03 22:51:55 UTC
Created attachment 1562813 [details]
memory usage correction

Comment 3 Peter Ruan 2019-05-03 22:53:15 UTC
Verified with 4.1.0-0.nightly-2019-05-03-145222 (see attachment for a screenshot of the correction).

