Description of problem:
When memory pressure is hit, the message in the console states that the disk capacity is low instead of the memory. I will include a screenshot.

Version-Release number of selected component (if applicable):
Client Version: 4.6.16
Server Version: 4.7.0-rc.1
Kubernetes Version: v1.20.0+ba45583

How reproducible:
I was experimenting with the hardEviction kubelet argument and had it set to trigger when memory.available is < 10Gi. The node hit that threshold and started to report MemoryPressure. From the Nodes panel there is a link called "Memory Pressure". This is what it states:

Memory Pressure breakdown
This node's available disk capacity is low. Performance may be degraded.

Top pod consumers
prometheus-k8s-0                  3.43 GiB
prometheus-k8s-1                  3.24 GiB
thanos-querier-5bc4b94fdf-jgdgb   349.8 MiB
machine-config-daemon-b7tbz       292.8 MiB
thanos-querier-5bc4b94fdf-2mpbj   289.8 MiB

Steps to Reproduce:
1.
2.
3.

Actual results:
The message refers to disk capacity; it should state available memory, not disk. Also, I don't know where it is pulling the breakdown values from, as those pods seem to be using only about half the amount being reported, assuming the values are memory. If I need to open another defect, let me know. For example, prometheus-k8s-0 shows 1.5 GiB usage for memory, 300m usage for CPU, and 60 KiB usage for disk.

Expected results:
The message should refer to memory and show values that match the high-memory consumers.

Additional info:
I am not sure what logs to pull, so please let me know and I will add the additional info. Sorry about this; I am new and don't have access to the Bugzilla procedure info yet.
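For reference, here is how I cross-checked what the kubelet itself is reporting for the node conditions. This is just a rough sketch (it assumes the official kubernetes Python client and a kubeconfig with access to the cluster), not anything the console runs:

    # Sketch: list each node's MemoryPressure and DiskPressure conditions so the
    # console message can be compared against what the kubelet reports.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        for cond in node.status.conditions or []:
            if cond.type in ("MemoryPressure", "DiskPressure"):
                print(f"{node.metadata.name}: {cond.type}={cond.status} ({cond.message})")

The node in question showed MemoryPressure=True, which is why the "available disk capacity is low" wording looks wrong.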
(In reply to jhusta from comment #0)
> Should state available memory not disk. Also, I don't know where it is
> pulling the break down values as those pods seem to be using only half that
> amount being reported assuming memory. If I need to open another defect let
> me know.
> e.g prometheus-k8s-0 is showing 1.5GiB usage for mem 300m usage for CPU
> 60KiB usage for disk.

Please open a separate defect to address the queries. For the incorrect label, we simply swapped the two labels in the UI. For the queries, it will take more investigation.
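In the meantime, one way to sanity-check the breakdown numbers outside the console is to read pod memory usage straight from the metrics.k8s.io API. This is only a sketch (it assumes the metrics API is being served, e.g. by prometheus-adapter, and that the kubernetes Python client is installed); it is not the query the console itself uses:

    # Sketch: fetch pod metrics for the openshift-monitoring namespace (where the
    # prometheus-k8s pods run) and print each pod's per-container memory usage.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    pod_metrics = api.list_namespaced_custom_object(
        group="metrics.k8s.io",
        version="v1beta1",
        namespace="openshift-monitoring",
        plural="pods",
    )

    for pod in pod_metrics["items"]:
        # usage["memory"] is a Kubernetes quantity string, e.g. "1572864Ki".
        mem = [c["usage"]["memory"] for c in pod["containers"]]
        print(pod["metadata"]["name"], mem)

Comparing those values against the console's "Top pod consumers" list should make it easier to describe the discrepancy in the new defect.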
Thanks, Samuel. I will open a new defect for the queries.
Version: 4.8.0-0.nightly-2021-03-10-023820

Used the Kraken tool, which in turn uses Litmus, to create the memory pressure (https://github.com/cloud-bulldozer/kraken.git). Now the memory pressure message correctly indicates memory pressure. Screenshot attached.
Created attachment 1762516 [details] Screenshot showing the memory pressure
I just tested this on my system and it is fixed, thank you. Do I close this, or do you?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438