Description of problem:
The way memory usage is shown in different dashboards is causing some confusion for customers.

The first instance is from the 'grafana-dashboard-k8s-resources-cluster' config and uses a sum of container_memory_rss by namespace to show memory in the 'K8s / Compute Resources / Cluster' view:

      "targets": [
        {
          "expr": "sum(container_memory_rss{cluster=\"$cluster\", container_name!=\"\"}) by (namespace)",
          "format": "time_series",
          "intervalFactor": 2,
          "legendFormat": "{{namespace}}",
          "legendLink": null,
          "step": 10
        }
      ],

Code: https://github.com/openshift/cluster-monitoring-operator/blob/release-3.11/assets/grafana/dashboard-definitions.yaml#L4033

The second instance is from the 'grafana-dashboard-k8s-resources-namespace' config and uses a sum of container_memory_usage_bytes by pod in the 'K8s / Compute Resources / Namespace' view:

      "targets": [
        {
          "expr": "sum(container_memory_usage_bytes{namespace=\"$namespace\", container_name!=\"\"}) by (pod_name)",
          "format": "time_series",
          "intervalFactor": 2,
          "legendFormat": "{{pod_name}}",
          "legendLink": null,
          "step": 10
        }
      ],

Code: https://github.com/openshift/cluster-monitoring-operator/blob/release-3.11/assets/grafana/dashboard-definitions.yaml#L4845

container_memory_usage_bytes and container_memory_rss are two independent metrics generated by cAdvisor: container_memory_usage_bytes comes from the cgroup memory.usage_in_bytes value, which includes rss, cache and swap, whereas container_memory_rss is based on the rss value from memory.stat.

Within Grafana, the cluster view memory pane is titled "Memory Usage w/o Cache" whereas the namespace pane is titled "Memory Usage", but I don't think that is a clear enough distinction for end users that the two panels report different values from different metric sources.

Version-Release number of selected component (if applicable):
3.11

How reproducible:
View the resources in Grafana

Actual results:
Memory reporting differs between the dashboards and can confuse end users

Expected results:
Memory reporting is aligned, or we better explain what it is we are reporting on

Additional info:
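As a rough illustration of how far apart the two metrics can be, the following PromQL sketch subtracts the rss-based value from the usage-based value for a single namespace (the namespace name here is just a placeholder, not from the dashboards); per the cgroup sources described above, the difference is roughly page cache plus swap:

      # approximate cache + swap for an example namespace (placeholder name)
      sum(container_memory_usage_bytes{namespace="example-namespace", container_name!=""})
        - sum(container_memory_rss{namespace="example-namespace", container_name!=""})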
I created an issue on the upstream dependency that ships these dashboards; we'll need to discuss with the community what would be best to show consistently. I believe working-set-bytes is the best metric to show in aggregations, and when drilling down to pods/containers we can show the different types of memory metrics.
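For reference, an aggregation along those lines would look something like the sketch below, using cAdvisor's container_memory_working_set_bytes; this is only an illustration, the exact labels and filters will depend on what the upstream mixin settles on:

      # working-set memory summed per namespace (sketch, not the final dashboard query)
      sum(container_memory_working_set_bytes{cluster="$cluster", container_name!=""}) by (namespace)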
I believe we fixed it with https://github.com/openshift/cluster-monitoring-operator/pull/442 so it should be available in 4.2.
Fixed. Checked on: 4.2.0-0.nightly-2019-08-23-004712
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922