Description of problem:
Pod tmpfs mounts are not removed after the pod has shut down, for example the default secret or the docker login configs used by builds.

Version-Release number of selected component (if applicable):
3.1.1.-6-33-g81eabcc

How reproducible:
Start a build, then jump onto the node and run mount.

Steps to Reproduce:
1. Start a build and wait for the build pod to shut down.
2. Log on to the node that ran the build pod.
3. Run mount.

Actual results:
Running mount shows that the tmpfs mounts still exist.

Expected results:
The tmpfs mounts are unmounted.

Additional info:
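For reference, one way to spot the leftover mounts on the node (the grep pattern is just an example; secret volumes show up under a kubernetes.io~secret path in the kubelet's pod volume directory):

# On the node, after the build pod has shut down, list tmpfs mounts the
# kubelet created for pod secret volumes:
mount | grep tmpfs | grep 'kubernetes.io~secret'

# Or just count them, e.g. to compare before and after a build:
mount | grep tmpfs | grep -c 'kubernetes.io~secret'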
I was able to reproduce this locally by creating a BC and instantiating a build (a rough sketch of the commands is below):
- These tmpfs mounts are owned by builders (the majority in my reproduction; I cannot confirm this on the customer side just yet).
- They only get removed when the build is cancelled or deleted mid-build.
- They never get removed if the build runs to completion, whether it succeeds or fails.
- The only way to remove these "orphaned" tmpfs mounts is to delete the pods that use these secrets; the kubelet will then start unmounting them automatically.

Has anyone come across this behaviour before?
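For reference, a rough reproduction along those lines (the sample source repo and names are just placeholders, not the exact commands I ran):

# Create a BuildConfig from an example repo and start a build:
oc new-build https://github.com/openshift/ruby-hello-world.git --name=tmpfs-test
oc start-build tmpfs-test --follow

# After the build pod has completed, check the node that ran it:
mount | grep tmpfs | grep 'kubernetes.io~secret'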
This is working as designed / has been working this way forever afaik. Paul, can you confirm? We may need to consider unmounting volumes from pods once they're no longer running, but that would need to be part of the bigger attach/detach refactoring most likely.
*** Bug 1331998 has been marked as a duplicate of this bug. ***
Volumes are not unmounted until the pod is deleted. Any possible change to this behavior would come in Kubernetes 1.5 at the earliest.
This is working as designed.
This is working as currently designed, but there is upstream discussion about modifying the behavior here: https://github.com/kubernetes/kubernetes/issues/35406#issuecomment-256101016

I have asked for all memory-backed volumes to be removed when a pod is terminated (and not just when it is removed from the apiserver) in order to ensure the memory is actually reclaimed. Seth is driving this change in Kubernetes 1.5, and we can assess cherry-picking it earlier than OCP 3.5 once the fix is identified.
Upstream Origin PR to fix this: https://github.com/openshift/origin/pull/11939
Upstream Origin PR for release-1.4: https://github.com/openshift/origin/pull/12003
This has been merged into OCP and is in OCP v3.4.0.37 or newer.
Verified on openshift v3.4.0.38.

Steps:
1. Create a pod with emptyDir (medium=Memory), secret, and configmap volumes (a minimal example pod is sketched below).
2. When the pod becomes Complete/Failed, verify that the emptyDir (medium=Memory), secret, and configmap volumes are removed on the node.

Detailed steps are in the referenced cases: https://url.corp.redhat.com/f76d0c7
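For anyone re-running this check, a minimal pod along the lines of step 1 (the names, image, and the referenced secret/configmap are placeholders and must already exist in the project):

cat <<'EOF' | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-cleanup-test
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: registry.access.redhat.com/rhel7
    command: ["sleep", "10"]
    volumeMounts:
    - { name: mem, mountPath: /mem }
    - { name: sec, mountPath: /sec }
    - { name: cm,  mountPath: /cm }
  volumes:
  - name: mem
    emptyDir:
      medium: Memory
  - name: sec
    secret:
      secretName: test-secret     # placeholder; any existing secret
  - name: cm
    configMap:
      name: test-config           # placeholder; any existing configmap
EOF

# Once the pod is Complete/Failed, the corresponding mounts should no
# longer appear on the node:
mount | grep -E 'kubernetes.io~(secret|configmap|empty-dir)'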
Hi, will OpenShift v3.4.0.38 with this fix be released as the OpenShift v3.4 release planned for Winter 2016 (end of Q4 / start of Q1)? The customer behind this Bugzilla wants to confirm this.
I cannot verify the date, but it will be in the first release of OpenShift v3.4.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0066