+++ This bug was initially created as a clone of Bug #1801824 +++
+++ This bug was initially created as a clone of Bug #1800319 +++

Creating a memory-hogger pod (which should be evicted or OOM-killed, i.e. safely handled by the node) causes the node to become unreachable for more than 10 minutes. On the node, the kubelet appears to be running but cannot heartbeat the apiserver. The node also appears to think that the apiserver deleted all the pods (DELETE("api") in the logs), which is not correct: no pods except the OOM-killed one should be evicted or deleted.

Steps to reproduce:
1. Create the attached kill-node.yaml on the cluster (oc create -f kill-node.yaml).
2. Wait 2-3 minutes while memory fills up on the worker.

Expected:
1. The memory-hog pod is OOM-killed and/or evicted (either would be acceptable).
2. The node remains Ready.

Actual:
1. The node is tainted as unreachable, heartbeats stop, and it takes more than 10 minutes for it to recover.
2. After recovery, events are delivered.

As part of fixing this, we need to add an e2e test to the origin disruptive suite that triggers this scenario (and add eviction tests, because this doesn't seem to evict anything).

--- Additional comment from Clayton Coleman on 2020-02-06 21:14:33 UTC ---

Once this is fixed we need to test against 4.3 and 4.2 and backport if it reproduces there - this can DoS a node.
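The kill-node.yaml attachment itself is not reproduced here. As a rough sketch only, a reproducer pod along these lines would fill memory on a worker; the image, command, and allocation size below are assumptions, not the contents of the actual attachment:

```yaml
# Hypothetical sketch of a memory-hogger pod; not the actual kill-node.yaml
# attachment. Image and allocation size are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog
spec:
  restartPolicy: Never
  containers:
  - name: hog
    # stress tool that allocates and touches memory until the node runs out
    image: polinux/stress
    command: ["stress", "--vm", "1", "--vm-bytes", "64G", "--vm-keep"]
    # Deliberately no resources.limits.memory: with no cgroup limit,
    # node-level eviction or the kernel OOM killer must handle the pressure.
```

The key property is the absence of a container memory limit, so the pressure has to be handled by kubelet eviction or the system OOM killer rather than a per-container cgroup kill, which is the path this bug exercises.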
*** Bug 1771016 has been marked as a duplicate of this bug. ***
*** Bug 1776185 has been marked as a duplicate of this bug. ***
*** Bug 1797828 has been marked as a duplicate of this bug. ***