Description of problem:
Pods with memory limits set too low often present as a CrashLoopBackOff with a termination message that is unhelpful, at least to the average user:

  invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:327: setting cgroup config for procHooks process caused \\\"failed to write 372500 to cpu.cfs_quota_us: write /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82054089_d9f9_11e7_8e35_0a46c474dfe0.slice/docker-79c4020e675c6b2c8299fd7be00647323e4d1eddaec8f356129cf55c009b8524.scope/cpu.cfs_quota_us: invalid argument\\\"\"\n"

See attachment for details of the failing docker-registry pod.

Version-Release number of selected component (if applicable):
v3.7.9

Expected results:
A message pointing the user to OOM issues and a course of action.
Created attachment 1363383 [details] Example failure
I thought this was a memory-limit issue because the pods started once I raised the memory limit to 3G. After filing this bug I tried to recreate the failure, so I set the limit back to 2G, but this time the pods deployed fine. sjennings just reminded me of this nearly identical behavior, which should already have been fixed: https://bugzilla.redhat.com/show_bug.cgi?id=1509467
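For reference, the limits described above are set in the pod's container spec. A minimal sketch of the relevant stanza follows; the container name and exact values are assumptions for illustration, not taken from the attachment:

```yaml
spec:
  containers:
  - name: registry          # hypothetical container name
    resources:
      limits:
        memory: "2Gi"       # raising this to 3Gi allowed the pods to start
        cpu: "3725m"        # assuming the default 100000us CFS period, a 3725m
                            # CPU limit maps to cpu.cfs_quota_us = 372500, the
                            # value in the quoted error
```

Note that although the symptom reported here is a memory limit, the quoted runtime error is about writing the CPU quota, which is part of what makes the message confusing.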
*** This bug has been marked as a duplicate of bug 1509467 ***