Bug 1330648
| Summary: | Pods tmpfs mounts are never removed | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Ivan <imckinle> |
| Component: | Node | Assignee: | Seth Jennings <sjenning> |
| Status: | CLOSED ERRATA | QA Contact: | DeShuai Ma <dma> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.1.0 | CC: | agoldste, aos-bugs, bvincell, decarr, jokerman, mmccomas, pep, pmorie, tatanaka, tdawson, xtian |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-01-18 12:40:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1267746 | | |
Description

Ivan 2016-04-26 16:23:44 UTC

I was able to reproduce this locally. Create a BC and instantiate a build.

- These tmpfs mounts are owned by builders (the majority, but I cannot confirm on the customer side just yet).
- They only get removed when the build is canceled or deleted mid-build.
- They never get removed if a build is successful, or completed but failed.
- The only way to remove these "orphaned" mounted tmpfs filesystems is to delete the pods that use these secrets.
- The kubelet will then start unmounting them automatically.

Has anyone come across this behaviour before?

This is working as designed / has been working this way forever, afaik. Paul, can you confirm? We may need to consider unmounting volumes from pods once they're no longer running, but that would most likely need to be part of the bigger attach/detach refactoring.

*** Bug 1331998 has been marked as a duplicate of this bug. ***

Volumes are not unmounted until the pod is deleted. Any possible change to this behavior would come in Kubernetes 1.5 at the earliest. This is working as designed.

This is working as currently designed, but there is upstream discussion to modify the behavior here: https://github.com/kubernetes/kubernetes/issues/35406#issuecomment-256101016

I have asked for all memory-backed volumes to be removed when a pod is terminated (and not just removed from the apiserver) in order to ensure the memory is actually reclaimed. Seth is driving this change in Kubernetes 1.5, and we can assess cherry-picking earlier than OCP 3.5 when the fix is identified.

Upstream Origin PR to fix this: https://github.com/openshift/origin/pull/11939

Upstream Origin PR for release-1.4: https://github.com/openshift/origin/pull/12003

This has been merged into OCP and is in OCP v3.4.0.37 or newer.

Verified on openshift v3.4.0.38.

Steps:
1. Create a pod with emptyDir (medium=Memory), secret, and configmap volumes.
2. When the pod becomes Complete/Failed, make sure the emptyDir (medium=Memory), secret, and configmap volumes are removed on the node.
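The verification steps above could be exercised with a pod manifest along these lines. This is an illustrative sketch, not the exact manifest used by QA: the pod name, container command, and the referenced secret (`test-secret`) and configmap (`test-config`) are assumptions, and the secret and configmap are assumed to already exist in the project.

```yaml
# Illustrative pod for the verification steps; "test-secret" and
# "test-config" are assumed to already exist in the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-volume-test
spec:
  restartPolicy: Never        # let the pod reach Complete/Failed
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /mem /sec /cfg"]
    volumeMounts:
    - name: mem
      mountPath: /mem
    - name: sec
      mountPath: /sec
    - name: cfg
      mountPath: /cfg
  volumes:
  - name: mem
    emptyDir:
      medium: Memory          # memory-backed (tmpfs) emptyDir
  - name: sec
    secret:
      secretName: test-secret
  - name: cfg
    configMap:
      name: test-config
```

Once the pod has terminated, listing the tmpfs mounts on the node (for example with `findmnt -t tmpfs` or `mount | grep tmpfs`) should show no remaining entries for this pod's UID under the kubelet's pods directory on a fixed build, whereas on affected builds the entries linger until the pod object is deleted.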
Detailed step reference cases: https://url.corp.redhat.com/f76d0c7

Hi, will OpenShift v3.4.0.38 with this fix be released in the OpenShift v3.4 release planned for Winter 2016 (end of Q4 / start of Q1)? The customer behind this Bugzilla wants to confirm it.

I cannot verify the date, but it will be in the first release of OpenShift v3.4.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0066