Bug 1419873 - Node stays in MemoryPressure and cannot recover
Summary: Node stays in MemoryPressure and cannot recover
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Derek Carr
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-07 10:03 UTC by DeShuai Ma
Modified: 2017-07-24 14:11 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-12 19:11:57 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2017:0884 (normal, SHIPPED_LIVE): Red Hat OpenShift Container Platform 3.5 RPM Release Advisory, last updated 2017-04-12 22:50:07 UTC

Description DeShuai Ma 2017-02-07 10:03:57 UTC
Description of problem:
When MemoryPressure is triggered, the eviction manager can't find any starved resource. I waited more than 30 minutes, but the node stayed in MemoryPressure=True.
After 30 minutes I restarted the node service, and the node returned to MemoryPressure=False.

Version-Release number of selected component (if applicable):
openshift v3.5.0.17+c55cf2b
kubernetes v1.5.2+43a9be4
etcd 3.1.0
instance on ec2: m3.large

How reproducible:
Sometimes

Steps to Reproduce:
1. Configure soft memory eviction on the node and restart the node service:
kubeletArguments:
  eviction-soft:
  - "memory.available<3.5Gi"
  eviction-soft-grace-period:
  - "memory.available=30s"
  eviction-max-pod-grace-period:
  - "10"
  eviction-pressure-transition-period:
  - "1m0s"

2. Create some pods to trigger memory pressure (note: the newly created pods are scheduled to the same node, and other pods were already running on this node beforehand)
for i in {1..5}; do kubectl create -f https://raw.githubusercontent.com/derekwaynecarr/kubernetes/examples-eviction/demo/kubelet-eviction/besteffort-pod.yaml  -n dma ; done
for i in {1..5}; do kubectl create -f https://raw.githubusercontent.com/derekwaynecarr/kubernetes/examples-eviction/demo/kubelet-eviction/burstable-pod.yaml  -n dma ; done
for i in {1..5}; do kubectl create -f https://raw.githubusercontent.com/derekwaynecarr/kubernetes/examples-eviction/demo/kubelet-eviction/guaranteed-pod.yaml  -n dma ; done

3. Check node status and node logs
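
For step 3, a minimal sketch of how the condition and the eviction manager decisions can be watched (<nodename> is a placeholder; this assumes the node service logs to the journal as atomic-openshift-node):

# current MemoryPressure condition for the node
$ oc get node <nodename> -o jsonpath='{.status.conditions[?(@.type=="MemoryPressure")].status}'

# human-readable view, including LastTransitionTime
$ oc describe node <nodename> | grep MemoryPressure

# follow the eviction manager decisions on the node itself
$ journalctl -u atomic-openshift-node -f | grep 'eviction manager'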


Actual results:
3. The node stays in MemoryPressure and can't recover; the eviction manager can't find any pod to evict
//logs in node
Feb  7 04:40:53 ip-172-18-6-5 atomic-openshift-node: I0207 04:40:53.646773   29576 helpers.go:734] eviction manager: eviction criteria not yet met for threshold(signal=memory.available, operator=3584Mi, value=LessThan, gracePeriod=30s), duration: 0s
Feb  7 04:40:53 ip-172-18-6-5 atomic-openshift-node: I0207 04:40:53.646819   29576 eviction_manager.go:269] eviction manager: no resources are starved

Expected results:
3. Once enough memory has been reclaimed, the node should transition back to MemoryPressure=False within the configured eviction-pressure-transition-period (1m0s).

Additional info:

Comment 1 Derek Carr 2017-02-07 16:38:15 UTC
I will try and reproduce.

Comment 2 Derek Carr 2017-02-08 15:48:57 UTC
I attempted to reproduce this issue.  In my reproduction, memory pressure was raised, pods were evicted, and after ~1 min the memory pressure condition returned to False.

With the 3.5Gi eviction threshold, the kubelet just evicts pods, but depending on other conditions on the node that may not reclaim enough memory to get back above the specified threshold.  Restarting the kubelet may have actually allowed the kernel to reclaim additional memory.
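
As rough arithmetic only (not from the original comment; 7.5 GiB is the published memory size of an m3.large):

# MemoryPressure clears only if memory.available stays above the soft threshold
# for the full eviction-pressure-transition-period (1m0s):
#   memory.available = capacity - working_set
#   7.5Gi - working_set > 3.5Gi   =>   working_set < ~4Gi
# so if the OS, node daemons, and the surviving pods still hold more than
# roughly 4 GiB, evicting pods alone can never clear the condition.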

I am going to add additional logging to the eviction manager so we can capture better logging output for this issue, but I suspect that if we had run the following:

$ while true; do oc get --raw api/v1/nodes/<nodename>/proxy/stats/summary > `date "+%Y.%m.%d-%H.%M.%S"`-summary.log; sleep 10; done

We would have seen that memory.available was still causing the threshold to be met.
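
(Not part of the original comment.) Assuming jq is available, the node-level memory.available can be pulled straight out of those captured summaries and compared against the 3.5Gi threshold (3758096384 bytes):

# availableBytes is the Summary API's node-level memory.available
$ for f in *-summary.log; do echo -n "$f "; jq '.node.memory.availableBytes' "$f"; done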

Can we confirm that memory.available was actually above the specified value on the node, and that the kubelet still did not update the corresponding node condition?
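
For that confirmation, a sketch that approximates the kubelet's memory.available calculation directly on the node (capacity minus working set, where the working set excludes inactive file pages; assumes the cgroup v1 paths used on RHEL 7):

#!/bin/bash
# Approximate memory.available the way the eviction manager computes it.
memory_capacity_in_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
memory_capacity_in_bytes=$((memory_capacity_in_kb * 1024))
memory_usage_in_bytes=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
memory_total_inactive_file=$(grep total_inactive_file /sys/fs/cgroup/memory/memory.stat | awk '{print $2}')

# working set = usage minus inactive file pages (clamped at zero)
memory_working_set=$((memory_usage_in_bytes - memory_total_inactive_file))
if [ "$memory_working_set" -lt 0 ]; then memory_working_set=0; fi

memory_available_in_bytes=$((memory_capacity_in_bytes - memory_working_set))
echo "memory.available: $((memory_available_in_bytes / 1024 / 1024)) Mi"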

Comment 3 Derek Carr 2017-02-08 18:04:59 UTC
I have opened a new PR upstream to improve logging in the eviction manager so we can understand the state of the system better.  It will output what the kubelet observes for each signal, and which thresholds or node conditions are met, with their associated grace periods.  This will help us understand why, in some cases, the node is still not able to return above a specified threshold with pod eviction alone.

UPSTREAM PR:
https://github.com/kubernetes/kubernetes/pull/41147
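
(Not part of the original comment.) Once a build containing that PR is running, the new observations are ordinary kubelet log lines, so they can be surfaced on the node with something like (assuming the service still logs to the journal as atomic-openshift-node):

$ journalctl -u atomic-openshift-node -f | grep 'eviction manager:'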

Comment 4 Derek Carr 2017-02-08 20:09:27 UTC
ORIGIN PR:
https://github.com/openshift/origin/pull/12876

I am moving this bug to POST so that a subsequent validation, with the new logs, can evaluate whether the node was acting properly because eviction alone did not provide sufficient memory reclaim.

Comment 5 Troy Dawson 2017-02-10 22:45:38 UTC
This has been merged into OCP and is in OCP v3.5.0.19 or newer.

Comment 7 DeShuai Ma 2017-02-15 08:03:07 UTC
Verified the bug on openshift v3.5.0.20+87266c6. If this happens again, I'll reopen the bug.

[root@openshift-104 dma]# oc describe no openshift-104.lab.sjc.redhat.com
Name:			openshift-104.lab.sjc.redhat.com
Role:			
Labels:			beta.kubernetes.io/arch=amd64
			beta.kubernetes.io/os=linux
			kubernetes.io/hostname=openshift-104.lab.sjc.redhat.com
			role=node
Taints:			<none>
CreationTimestamp:	Wed, 15 Feb 2017 00:24:33 -0500
Phase:			
Conditions:
  Type			Status	LastHeartbeatTime			LastTransitionTime			Reason				Message
  ----			------	-----------------			------------------			------				-------
  OutOfDisk 		False 	Wed, 15 Feb 2017 02:48:31 -0500 	Wed, 15 Feb 2017 00:24:33 -0500 	KubeletHasSufficientDisk 	kubelet has sufficient disk space available
  MemoryPressure 	False 	Wed, 15 Feb 2017 02:48:31 -0500 	Wed, 15 Feb 2017 02:47:41 -0500 	KubeletHasSufficientMemory 	kubelet has sufficient memory available
  DiskPressure 		False 	Wed, 15 Feb 2017 02:48:31 -0500 	Wed, 15 Feb 2017 00:24:33 -0500 	KubeletHasNoDiskPressure 	kubelet has no disk pressure
  Ready 		True 	Wed, 15 Feb 2017 02:48:31 -0500 	Wed, 15 Feb 2017 02:36:31 -0500 	KubeletReady 			kubelet is posting ready status
Addresses:		10.14.6.104,10.14.6.104,openshift-104.lab.sjc.redhat.com
Capacity:
 alpha.kubernetes.io/nvidia-gpu:	0
 cpu:					2
 memory:				3881932Ki
 pods:					250
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:	0
 cpu:					2
 memory:				3881932Ki
 pods:					250
System Info:
 Machine ID:			acefceb9946d4bc59cee8c921f22116f
 System UUID:			690F4796-19EA-4E60-B8AD-E4C817AA9623
 Boot ID:			e1930b45-242b-486e-9710-228904137335
 Kernel Version:		3.10.0-514.10.1.el7.x86_64
 OS Image:			Red Hat Enterprise Linux Server 7.3 (Maipo)
 Operating System:		linux
 Architecture:			amd64
 Container Runtime Version:	docker://1.12.6
 Kubelet Version:		v1.5.2+43a9be4
 Kube-Proxy Version:		v1.5.2+43a9be4
ExternalID:			openshift-104.lab.sjc.redhat.com
Non-terminated Pods:		(5 in total)
  Namespace			Name				CPU Requests	CPU Limits	Memory Requests	Memory Limits
  ---------			----				------------	----------	---------------	-------------
  dma				guaranteed-dgwp8		100m (5%)	100m (5%)	300Mi (7%)	300Mi (7%)
  dma				guaranteed-fbcrc		100m (5%)	100m (5%)	300Mi (7%)	300Mi (7%)
  dma				guaranteed-j6l1z		100m (5%)	100m (5%)	300Mi (7%)	300Mi (7%)
  dma				guaranteed-kwhq7		100m (5%)	100m (5%)	300Mi (7%)	300Mi (7%)
  dma				guaranteed-rfqmx		100m (5%)	100m (5%)	300Mi (7%)	300Mi (7%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  ------------	----------	---------------	-------------
  500m (25%)	500m (25%)	1500Mi (39%)	1500Mi (39%)
Events:
  FirstSeen	LastSeen	Count	From						SubObjectPath	Type		Reason				Message
  ---------	--------	-----	----						-------------	--------	------				-------
  2h		31m		3	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeNotSchedulable		Node openshift-104.lab.sjc.redhat.com status is now: NodeNotSchedulable
  37m		26m		3	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeSchedulable			Node openshift-104.lab.sjc.redhat.com status is now: NodeSchedulable
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		Starting			Starting kubelet.
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Warning		ImageGCFailed			unable to find data for container /
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasSufficientDisk		Node openshift-104.lab.sjc.redhat.com status is now: NodeHasSufficientDisk
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasSufficientMemory		Node openshift-104.lab.sjc.redhat.com status is now: NodeHasSufficientMemory
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasNoDiskPressure		Node openshift-104.lab.sjc.redhat.com status is now: NodeHasNoDiskPressure
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeNotReady			Node openshift-104.lab.sjc.redhat.com status is now: NodeNotReady
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasInsufficientMemory	Node openshift-104.lab.sjc.redhat.com status is now: NodeHasInsufficientMemory
  24m		24m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeReady			Node openshift-104.lab.sjc.redhat.com status is now: NodeReady
  16m		12m		12	{kubelet openshift-104.lab.sjc.redhat.com}			Warning		EvictionThresholdMet		Attempting to reclaim memory
  12m		12m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		Starting			Starting kubelet.
  12m		12m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Warning		ImageGCFailed			unable to find data for container /
  12m		12m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasSufficientDisk		Node openshift-104.lab.sjc.redhat.com status is now: NodeHasSufficientDisk
  12m		12m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasNoDiskPressure		Node openshift-104.lab.sjc.redhat.com status is now: NodeHasNoDiskPressure
  12m		12m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeNotReady			Node openshift-104.lab.sjc.redhat.com status is now: NodeNotReady
  12m		12m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeReady			Node openshift-104.lab.sjc.redhat.com status is now: NodeReady
  5m		5m		1	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasInsufficientMemory	Node openshift-104.lab.sjc.redhat.com status is now: NodeHasInsufficientMemory
  5m		4m		7	{kubelet openshift-104.lab.sjc.redhat.com}			Warning		SystemOOM			System OOM encountered
  5m		2m		7	{kubelet openshift-104.lab.sjc.redhat.com}			Warning		EvictionThresholdMet		Attempting to reclaim memory
  12m		57s		3	{kubelet openshift-104.lab.sjc.redhat.com}			Normal		NodeHasSufficientMemory		Node openshift-104.lab.sjc.redhat.com status is now: NodeHasSufficientMemory

Comment 9 errata-xmlrpc 2017-04-12 19:11:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0884

