The OpenShift dashboard is displaying incorrect values for current memory and disk usage (and likely the same issue for CPU as well). For example, the current memory status comes from:
(sum(kube_node_status_capacity_memory_bytes) - sum(kube_node_status_allocatable_memory_bytes))[60m:5m]
This expression subtracts allocatable memory from capacity, which yields the system-reserved amount, a value that never changes over the life of the cluster. Allocatable memory is the total memory available for scheduling pods, not the amount of memory currently consumed by running pods.
The fix is either to use data coming directly from the nodes (cluster:memory_usage_bytes:sum), or to calculate the memory currently in use, for example by summing per-container memory usage.
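A possible replacement query, as a sketch: cluster:memory_usage_bytes:sum is the recording rule named above; the container_memory_working_set_bytes alternative is an assumption about which cAdvisor metrics are available in this cluster.

```promql
# Option 1: use the node-level recording rule directly
cluster:memory_usage_bytes:sum

# Option 2 (assumed metric): sum working-set memory across containers;
# the container!="" filter drops pod-level cgroup series to avoid
# double counting
sum(container_memory_working_set_bytes{container!=""})
```

Either query tracks actual usage over time, unlike capacity minus allocatable, which is constant.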
Was checked on 4.3.0-0.nightly-2019-10-25-015726
*** Bug 1774009 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.