Description of problem:
Secrets mounted at overlapping mount points like:
/etc/secret from volume-81n5p (rw)
/etc/secret/ca from volume-uvl3l (rw)
caused errors like the following:
Mar 3 16:14:37 node2 atomic-openshift-node: E0303 16:14:37.267568 1240 atomic_writer.go:444] pod stage/service-translation-21-k3j03 volume volume-lnt09: error pruning old user-visible path ca: remove /var/lib/origin/openshift.local.volumes/pods/da4ce4f9-0023-11e7-8d29-fa163eb5ccf9/volumes/kubernetes.io~secret/volume-lnt09/ca: device or resource busy
It is not explicitly stated anywhere that overlapping mount points are unsupported, so the customer requests that we "either disallow such a configuration or at least warn the user about this circumstance".
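For illustration, a minimal sketch of the kind of pod spec fragment that produces this overlap (container, volume, and secret names here are hypothetical, not taken from the customer's pod):

  containers:
  - name: app                      # hypothetical container name
    volumeMounts:
    - name: volume-one             # hypothetical volume name
      mountPath: /etc/secret
    - name: volume-two             # hypothetical volume name
      mountPath: /etc/secret/ca    # nested inside the mount above
  volumes:
  - name: volume-one
    secret:
      secretName: service-secret   # hypothetical secret
  - name: volume-two
    secret:
      secretName: service-ca       # hypothetical secret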
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Is there any way to get a copy of the pod that the customer expects to use this with? I would want to see the volumes in the pod spec.
This use case should be improved with the new volume projection support in 3.6; the rebase has not yet landed.
With the projected volume driver, one can now project multiple secrets into the same directory. The easiest way to handle overlapping scenarios is to set the mount path to the longest common path and then project the additional items as needed via their path fields. A tested example is as follows:
  containers:
  - name: atest
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/secret     # e.g. the longest common path
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: secret-one       # placeholder; secret names were lost from the original comment
      - secret:
          name: secret-two       # placeholder
          items:
          - key: username2
            path: ca/username2   # projected below the common mount path
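With a spec along these lines, the item projected with path ca/username2 would appear at /etc/secret/ca/username2 inside the container, so a separate mount at /etc/secret/ca is no longer needed.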
More documentation here: https://kubernetes.io/docs/tasks/configure-pod-container/projected-volume/
From reading the customer ticket, it sounds like this is resolved and not a bug.
It turns out there is a PR in flight for validating that volume mounts don't overlap.
Reopening to track it.
PR is in flight. Needs review/approval upstream. Should make next release.
Does "Next Release" mean you think we can and should fix this in 3.6.0, 3.6.1 or 3.7?
Poked the upstream PR for review. It did not make 1.7, so it would be 1.8 at best. We could pick it back to 3.7 if it lands.
*** Bug 1482450 has been marked as a duplicate of this bug. ***
Sending to Storage for sig-storage discussion and a path forward.
The upstream PR is blocked/dead by thockin on the grounds that pods that passed validation would no longer pass if they have overlapping mount points.
Other solutions discussed were to have the volume manager order the mounting of volumes such that mount points closer to / are mounted first (e.g., /etc/secret before /etc/secret/ca), with unmounting done in the reverse order.
I moved this to a Trello card: https://trello.com/c/qqrBplHi/554-ordering-of-unmount-operations-to-fix-projected-mounts
Per https://bugzilla.redhat.com/show_bug.cgi?id=1430322#c9 I'm moving this to 'low' severity/priority.
This seems to be fixed in the external-storage repo by https://github.com/kubernetes-incubator/external-storage/commit/8d4f6da5ee7624c38b6d8ffcf667e0591cc0a7d7
I am not sure if we released new images to quay.io.
Oops, sorry, wrong bug. Please ignore comment #17.
This appears to be very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1516569, and Joel Smith has opened https://github.com/kubernetes/kubernetes/pull/57422 to address it.
3.9 PR: https://github.com/openshift/origin/pull/18165
Verified on the version below.
Created a pod with two volumes mounted at overlapping paths; the pod runs well.
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dynamic
spec:
  containers:
  - name: dynamic
    image: <image>              # image name was lost from the original comment
    ports:
    - containerPort: 80
    volumeMounts:
    - name: volume1             # pairing of mounts to volumes is assumed from listing order
      mountPath: "/mnt/ocp"
    - name: volume2
      mountPath: "/mnt/ocp/ver39"   # overlaps the mount above
  volumes:
  - name: volume1
    secret:
      secretName: <secret>      # placeholder; the node output below shows both are secret volumes
  - name: volume2
    secret:
      secretName: <secret>      # placeholder
$ oc get pods dynamic
NAME READY STATUS RESTARTS AGE
dynamic 1/1 Running 0 21m
$ oc rsh dynamic touch /mnt/ocp/file1
$ oc rsh dynamic touch /mnt/ocp/ver39/file2
$ oc rsh dynamic ls -aR /mnt/ocp
On the node, verify the volumes while the pod is running:
Filesystem  Size  Used  Avail  Use%  Mounted on
tmpfs       1.8G     0   1.8G    0%  /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume2
tmpfs       1.8G     0   1.8G    0%  /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume1
Also verified that the volumes are cleaned up from the node when the pod is removed.
FYI, our fixes for BZ#1516569 have now been released and should address this bug. The bug goes back (in some form or another) to OpenShift 3.3. Fixes are available in the following (and later) versions:
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.