Description of problem:
Customer has noticed that there are over 100 containers when they run `docker ps -a`, even after lowering maximum-dead-containers and maximum-dead-containers-per-container. The containers have been confirmed to be OpenShift containers (names start with k8s_).

Version-Release number of selected component (if applicable):
OpenShift 3.1.1.6

How reproducible:
I was unable to reproduce it

Actual results:
Containers are not being deleted

Expected results:
Containers are deleted

Additional info:
Customer lowered maximum-dead-containers to 25 and maximum-dead-containers-per-container to 1
I tested this with a modified version of 3.1.1.6 that adds some debugging to the container GC logic so I could see what it was doing. It correctly deleted containers to get me down to the maximum number I had specified. Could we please get:
- docker ps -a
- oc get pod -o yaml
- for a container you expect to be deleted, docker inspect <container>
Eric, please see comment #1
Customer reported 104 total containers on the node, 4 of them running, which means 100 are dead. It sounds like the node has been configured with max dead containers = 100, and the GC looks like it is working properly against that limit. Could we confirm what's in node-config.yaml under kubeletArguments? Could they provide a copy of that section of the config file?
Hi Andy,

Customer was unable to provide the full file, but they did provide the following:

```yaml
kubletArguments:
  maximum-dead-containers-per-container:
  - "1"
  maximum-dead-containers:
  - "25"
```
As shown in comment #6, the customer had a typo in the config file: `kubletArguments` should be `kubeletArguments`. Rather than producing an error or warning message, the node simply skipped over that section of the config file and used the default GC settings (1m, 2, 100). I am closing this bug, and I have filed bug #1321622 as an RFE to provide an error/warning if something like this occurs.
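For reference, a corrected version of the customer's node-config.yaml section would look like the sketch below. The key names match the kubelet flags discussed in this bug; only the misspelled top-level key changes:

```yaml
kubeletArguments:
  maximum-dead-containers-per-container:
  - "1"
  maximum-dead-containers:
  - "25"
```

With the correctly spelled `kubeletArguments` key, the node passes these values through to the kubelet instead of silently falling back to the defaults.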