+++ This bug was initially created as a clone of Bug #1800319 +++
Creating a memory-hogger pod (which should be evicted or OOM-killed rather than destabilizing the node) causes the node to become unreachable for more than 10 minutes. On the node, the kubelet appears to be running but cannot heartbeat to the apiserver. The node also appears to believe that the apiserver deleted all of its pods (DELETE("api") entries in the logs), which is incorrect: no pods other than the OOM-killed one should be evicted or deleted.
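For context, the node-level safety net that should have absorbed this is the kubelet's eviction thresholds and reserved memory. A minimal sketch of the relevant KubeletConfiguration fields follows; the threshold values are illustrative assumptions, not this cluster's actual settings:

```yaml
# Illustrative KubeletConfiguration fragment (values are assumed, not taken
# from the affected cluster). evictionHard makes the kubelet evict pods
# before the node itself runs out of memory; systemReserved keeps headroom
# for system daemons such as the kubelet so it can keep heartbeating.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"
systemReserved:
  memory: "1Gi"
```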
Steps to Reproduce:
1. Create the attached kill-node.yaml on the cluster (oc create -f kill-node.yaml).
2. Wait 2-3 minutes while memory fills up on the worker.
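The kill-node.yaml attachment is not reproduced here; a hypothetical stand-in (image and command are assumptions for illustration, not the attachment's contents) could be as simple as a pod that allocates memory without bound and without a memory limit, so the pressure lands on the node rather than on the container's cgroup:

```yaml
# Hypothetical equivalent of the attached kill-node.yaml (image and command
# are assumed). "tail /dev/zero" reads zeros into memory endlessly; with no
# memory limit set, the pod exhausts node memory instead of its own cgroup.
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog
spec:
  restartPolicy: Never
  containers:
  - name: hog
    image: registry.fedoraproject.org/fedora:latest
    command: ["sh", "-c", "tail /dev/zero"]
```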
Expected results:
1. The memory-hog pod is OOM-killed and/or evicted (either would be acceptable).
2. The node remains Ready.
Actual results:
1. The node is tainted as unreachable, heartbeats stop, and it takes more than 10 minutes for the node to recover.
2. After recovery, events are delivered.
As part of fixing this, we need to add an e2e test to the origin disruptive suite that triggers this scenario (and add eviction tests, because this does not seem to evict anything).
--- Additional comment from Clayton Coleman on 2020-02-06 21:14:33 UTC ---
Once this is fixed, we need to test against 4.3 and 4.2 and backport the fix if the bug reproduces there; this can DoS a node.
*** Bug 1811159 has been marked as a duplicate of this bug. ***
How does this relate to bug 1808429, which has the same subject and also targets 4.3.z? Is this one a dup of that one?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.