https://github.com/kubernetes/kubernetes/issues/55620
We saw a pod with 44 of these completely break; it simply stopped making forward progress. I ran `oc get --raw /debug/pprof/profile --server=https://172.31.71.195:10250 > profile` and it hung for about an hour. top showed the 'openshift' process using 100% CPU. I deleted the 'bad' sandboxes and restarted the node, and now the node seems largely OK...
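For reference, a minimal sketch of collecting and inspecting such a CPU profile; the server address and file names are placeholders taken from the report above, and this assumes `oc` and the Go toolchain are available on the machine running the commands:

```shell
# Pull a CPU profile from the kubelet's debug/pprof endpoint.
# The address below is the node from this report; substitute your own.
oc get --raw /debug/pprof/profile \
  --server=https://172.31.71.195:10250 > kubelet.pprof

# Show the functions consuming the most CPU time.
go tool pprof -top kubelet.pprof
```

If the endpoint hangs, as it did here, the profile request itself may be blocked by whatever is pinning the CPU, which is consistent with the symptom described.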
Upstream PR: https://github.com/kubernetes/kubernetes/pull/55641
Origin PR: https://github.com/openshift/origin/pull/17302
Checked with:
# openshift version
openshift v3.7.26
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8
Cannot reproduce this issue, so verifying this bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, please open a new bug report. https://access.redhat.com/errata/RHBA-2018:0636