Description of problem:
Clusters don't report memory usage via telemetry.

How reproducible:
https://infogw-cache.api.openshift.com/graph?g0.range_input=1h&g0.expr=count(count%20by%20(_id)(cluster%3Acpu_usage_cores%3Asum))&g0.tab=1&g1.range_input=1h&g1.expr=count(count%20by%20(_id)(cluster%3Amemory_usage_bytes%3Asum))&g1.tab=1

Slack discussion:
https://coreos.slack.com/archives/CEG5ZJQ1G/p1569333490049400

Expected results:
Memory metrics are being reported.

Additional info:
https://github.com/openshift/cluster-monitoring-operator/pull/456 replaced the deprecated kube-prometheus node mixins with the node mixins from node-exporter. The latter do not include the `node:node_memory_bytes_total:sum` and `node:node_memory_bytes_available:sum` recording rules; the references have to be replaced with `node_memory_MemTotal_bytes{job="node-exporter"}` and `node_memory_MemAvailable_bytes{job="node-exporter"}`, respectively.
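For illustration, a sketch of what the corrected telemetry recording rule could look like after the substitution described above. The group name `telemetry.rules` and the exact expression are assumptions; only the metric name `cluster:memory_usage_bytes:sum` (from the query above) and the raw node-exporter metrics are taken from this report:

```yaml
# Hypothetical Prometheus rule file sketch, not the actual
# cluster-monitoring-operator manifest.
groups:
  - name: telemetry.rules
    rules:
      # Used memory per cluster = total minus available, summed over nodes,
      # using the raw node-exporter metrics instead of the removed
      # node:node_memory_bytes_* recording rules.
      - record: cluster:memory_usage_bytes:sum
        expr: |
          sum(
              node_memory_MemTotal_bytes{job="node-exporter"}
            - node_memory_MemAvailable_bytes{job="node-exporter"}
          )
```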
Fixed in 4.2.0-0.nightly-2019-10-04-015220.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0062