Description of problem:
This relates to the MemoryPressure popover opened from the Nodes panel. I cannot determine how the memory usage of the "top pod consumers" is calculated; it does not match any other pod memory display and is roughly 2x higher than the other panels. I will attach screenshots of what I am seeing.
Version-Release number of selected component (if applicable):
We have since moved to 4.8, but the problem was originally found in 4.7.
Set a MachineConfigPool configuration that defines a memory hard-eviction value, then create memory stress that exceeds the usage allowed by the hard-eviction threshold.
Steps to Reproduce:
1. Set a MachineConfigPool for the worker node that defines a hard eviction threshold for available memory.
2. Create memory stress that exceeds the memory usage defined for hard eviction.
3. Wait for the MemoryPressure alert, then click the MemoryPressure link on the Node panel and check the info in the popover.
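Step 1 can be sketched as a KubeletConfig targeting the worker pool. This is an illustrative example only: the resource name, the custom-kubelet=small-pods label (taken from the later reproduction comment), and the 500Mi threshold are assumptions, not the exact values used in this bug.

```yaml
# Illustrative KubeletConfig setting a memory hard-eviction threshold.
# Name, label selector, and the 500Mi value are placeholders.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-hard-eviction
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    evictionHard:
      memory.available: "500Mi"
```

The worker MachineConfigPool must carry the matching label (e.g. oc label machineconfigpool worker custom-kubelet=small-pods) for the selector to apply.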
Top pod consumers appears to show incorrect memory usage.
I would expect the memory usage to match all other memory usage displays for that pod.
I still cannot get access to Red Hat's outline of the defect-opening policy, so I am not sure which logs you need. Please let me know and I will attach anything you require, as this is easily reproducible on my smaller KVM environment.
Created attachment 1760291 [details]
Screen Shots from console
Opened per Samuel Padgett request from related MemoryPressure Defect.
Hi jhusta, the bug is reported against s390x hardware, and I doubt whether it is hardware related. By the way, could you share the image for the memstress pod shown in your screenshot?
Hi @email@example.com, my repos and image are in IBM Git and Artifactory, which you will not have access to. We are simply using an Ubuntu container and running stress-ng. Here is the command: ["stress-ng", "-v", "--vm", "1", "--vm-bytes", "$ALLOCATION", "--vm-method", "all", "--verify", "--temp-path", "/tmp"], with bytes equal to some value. I chose s390x as that is what I am testing on. I don't have access to an x86 machine so I make no assumptions.
Here is my Dockerfile (based on Ubuntu, as noted above):
FROM ubuntu
RUN apt-get update -y && apt-get install -y stress-ng iperf3
CMD stress-ng --mmap 1
Thanks jhusta, I built the image successfully with the Dockerfile.
Checked on an OCP 4.8 cluster with payload 4.8.0-0.nightly-2021-06-02-025513; the bug still reproduces. The fix pr9030 is not contained in the payload. Waiting for a new build with the fix.
The fix is still not contained in payload 4.8.0-0.nightly-2021-06-06-164529.
@firstname.lastname@example.org Thanks for keeping me posted!
Created attachment 1789785 [details]
In the test, I created a deployment with a pod that consumes 8G of memory, so that memory is used up.
Tested on ocp 4.11 cluster with payload 4.11.0-0.nightly-2022-02-16-211105.
1. $ oc label machineconfigpool worker custom-kubelet=small-pods
2. Create a kubeletconfig:
3. Create a deployment with pods that consume a large amount of memory:
- name: httpd
command: ["stress-ng", "-v", "--vm", "1", "--vm-bytes", "8G", "--vm-method", "all", "--verify", "--temp-path", "/tmp"]
- containerPort: 8080
4. Then check the nodes list page. When the node shows memory pressure info, check the top pod info in the popover and compare it with the pod memory info on the pods list page. The memory info is normal now.
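The deployment fragment in step 3 can be sketched as a complete manifest. This is an assumption-filled illustration: the image reference, namespace defaulting, replica count, and labels are placeholders; only the container name, command, and port come from the fragment above.

```yaml
# Illustrative Deployment wrapping the stress-ng command from step 3.
# The image reference "memstress:latest" is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memstress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memstress
  template:
    metadata:
      labels:
        app: memstress
    spec:
      containers:
      - name: httpd
        image: memstress:latest
        command: ["stress-ng", "-v", "--vm", "1", "--vm-bytes", "8G", "--vm-method", "all", "--verify", "--temp-path", "/tmp"]
        ports:
        - containerPort: 8080
```

With --vm-bytes 8G a single stress-ng worker allocates enough memory to push the node past the hard-eviction threshold and trigger the MemoryPressure condition.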
The bug is fixed.
Thank you @email@example.com. I am still testing 4.10 but will verify this fix once we move to 4.11.