Bug 1729529 - Inconsistent presentation of memory statistics in Grafana creating confusion
Summary: Inconsistent presentation of memory statistics in Grafana creating confusion
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 3.11.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Pawel Krupa
QA Contact: Viacheslav
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-12 14:02 UTC by Matthew Robson
Modified: 2019-10-16 06:29 UTC
CC: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:29:43 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubernetes-monitoring kubernetes-mixin issues 227 0 None closed consistent metric use for memory 2020-11-18 20:14:49 UTC
Github openshift cluster-monitoring-operator pull 442 0 None closed Bug 1733830: Bump kubernetes-mixin 2020-11-18 20:14:50 UTC
Red Hat Product Errata RHBA-2019:2922 0 None None None 2019-10-16 06:29:58 UTC

Description Matthew Robson 2019-07-12 14:02:10 UTC
Description of problem:

The way memory usage is shown in different dashboards is causing some confusion for customers.

The first instance is from the 'grafana-dashboard-k8s-resources-cluster' config and uses a sum of container_memory_rss by namespace to show memory in the 'K8s / Compute Resource / Cluster' view:

                          "targets": [
                              {
                                  "expr": "sum(container_memory_rss{cluster=\"$cluster\", container_name!=\"\"}) by (namespace)",
                                  "format": "time_series",
                                  "intervalFactor": 2,
                                  "legendFormat": "{{namespace}}",
                                  "legendLink": null,
                                  "step": 10
                              }
                          ],

Code: https://github.com/openshift/cluster-monitoring-operator/blob/release-3.11/assets/grafana/dashboard-definitions.yaml#L4033

The second instance is from the 'grafana-dashboard-k8s-resources-namespace' config and uses a sum of container_memory_usage_bytes by pod in the 'K8s / Compute Resource / Namespace' view:

                          "targets": [
                              {
                                  "expr": "sum(container_memory_usage_bytes{namespace=\"$namespace\", container_name!=\"\"}) by (pod_name)",
                                  "format": "time_series",
                                  "intervalFactor": 2,
                                  "legendFormat": "{{pod_name}}",
                                  "legendLink": null,
                                  "step": 10
                              }
                          ],

Code: https://github.com/openshift/cluster-monitoring-operator/blob/release-3.11/assets/grafana/dashboard-definitions.yaml#L4845

container_memory_usage_bytes and container_memory_rss are two independent metrics generated by cAdvisor:

container_memory_usage_bytes is derived from the cgroup's memory.usage_in_bytes value, which includes rss, cache, and swap, whereas container_memory_rss is based on the rss value from the cgroup's memory.stat file.


Within Grafana, the cluster view memory pane is titled "Memory Usage w/o Cache" whereas the namespace pane is titled "Memory Usage", but that is not a clear enough distinction for end users to realize the panes report two different values from two different metric sources.
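
To illustrate how far apart the two panes can drift, the cache and swap portion that is included in container_memory_usage_bytes but excluded from container_memory_rss can be queried directly. A minimal PromQL sketch, assuming the 3.11 label names (container_name, namespace) used in the dashboard queries above; openshift-monitoring is only an example namespace:

    # Memory as the namespace dashboard reports it (rss + cache + swap)
    sum(container_memory_usage_bytes{namespace="openshift-monitoring", container_name!=""})

    # Memory as the cluster dashboard reports it for the same namespace (rss only)
    sum(container_memory_rss{namespace="openshift-monitoring", container_name!=""})

    # The gap between the two panes, i.e. roughly the cache + swap portion
    sum(container_memory_usage_bytes{namespace="openshift-monitoring", container_name!=""})
      - sum(container_memory_rss{namespace="openshift-monitoring", container_name!=""})

For a namespace running cache-heavy workloads, the third expression can be a large fraction of total usage, which is exactly the discrepancy users notice when switching between the two views.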


Version-Release number of selected component (if applicable):

3.11


How reproducible:

View the compute resource dashboards in Grafana.

Actual results:

Memory reporting differs between dashboards and can confuse end users.

Expected results:

Memory reporting is aligned across dashboards, or it is clearly explained which metric each dashboard reports.


Additional info:

Comment 1 Frederic Branczyk 2019-07-12 14:54:03 UTC
I created an issue on the upstream dependency that ships these dashboards; we'll need to discuss with the community what would be best to show consistently. I believe working-set bytes is the best metric to show in aggregations, and when drilling down to pods/containers, the different types of memory metrics should be shown.
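
For reference, a hedged sketch of what an aggregated working-set query could look like at the cluster level, mirroring the container_memory_rss query from the description; the exact expression adopted upstream may differ, and later releases rename the container_name label to container:

    # Aggregated working-set memory per namespace (3.11-style labels)
    sum(container_memory_working_set_bytes{cluster="$cluster", container_name!=""}) by (namespace)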

Comment 2 Pawel Krupa 2019-08-20 14:54:21 UTC
I believe we fixed it with https://github.com/openshift/cluster-monitoring-operator/pull/442 so it should be available in 4.2.

Comment 4 Viacheslav 2019-08-23 09:12:40 UTC
Fixed.
Checked on: 4.2.0-0.nightly-2019-08-23-004712

Comment 5 errata-xmlrpc 2019-10-16 06:29:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

