Description of problem:

Secrets mounted to overlapping mount points like:

  /etc/secret from volume-81n5p (rw)
  /etc/secret/ca from volume-uvl3l (rw)

caused errors:

Mar 3 16:14:37 node2 atomic-openshift-node: E0303 16:14:37.267568 1240 atomic_writer.go:444] pod stage/service-translation-21-k3j03 volume volume-lnt09: error pruning old user-visible path ca: remove /var/lib/origin/openshift.local.volumes/pods/da4ce4f9-0023-11e7-8d29-fa163eb5ccf9/volumes/kubernetes.io~secret/volume-lnt09/ca: device or resource busy

It is not explicitly stated anywhere that overlapping mount points are not supported, so the customer requests that we "either disallow such a configuration or at least warn the user about this circumstance".

Version-Release number of selected component (if applicable):
3.4

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
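For reference, a minimal pod spec that produces this kind of overlap would look roughly like the sketch below. The secret names, volume names, and image are illustrative assumptions, not taken from the customer's environment; the key point is simply two secret volumes whose mount paths nest inside one another.

# Illustrative sketch only -- secret/volume names and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: overlap-example
spec:
  containers:
  - name: overlap-example
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: volume-a
      mountPath: /etc/secret        # parent mount
    - name: volume-b
      mountPath: /etc/secret/ca     # nested inside the first mount
  volumes:
  - name: volume-a
    secret:
      secretName: some-secret
  - name: volume-b
    secret:
      secretName: some-ca-secret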
Is there any way to get a copy of the pod that the customer expects to use this with? I would want to see the volumes in the pod spec.
This use case should be improved with the system volume projection in 3.6.
rebase not yet landed.
With the projected volume driver, one can now project multiple secrets into the same directory. The easiest way to handle overlapping scenarios is to set the mount path to the longest common path and then project the additional items as needed via path. A tested example is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: atest
spec:
  containers:
  - name: atest
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: all-in-one
      mountPath: "/etc/secret"
  volumes:
  - name: all-in-one
    projected:
      defaultMode: 0666
      sources:
      - secret:
          name: mysecret
      - secret:
          name: mysecret2
          items:
          - key: username2
            path: ca/username2

More documentation here: https://kubernetes.io/docs/tasks/configure-pod-container/projected-volume/
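For clarity, with the example above everything ends up under the single /etc/secret mount. Assuming mysecret contains keys named username and password (illustrative names only), the container would see roughly:

  /etc/secret/username
  /etc/secret/password
  /etc/secret/ca/username2

so what used to be a second mount at /etc/secret/ca becomes an ordinary subdirectory of one projected volume rather than a separate mount point.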
From reading the customer ticket, it sounds like this is resolved and not a bug.
Turns out there is a PR in flight for validating that volume mounts don't overlap: https://github.com/kubernetes/kubernetes/pull/47456. Reopening to track it.
PR is in flight. Needs review/approval upstream. Should make next release.
Does "Next Release" mean you think we can and should fix this in 3.6.0, 3.6.1 or 3.7?
Poked the upstream PR for review. It did not make 1.7, so it would be 1.8 at best. We could pick it back to 3.7 if it lands.
*** Bug 1482450 has been marked as a duplicate of this bug. ***
Sending to Storage for sig-storage discussion and a path forward. The upstream PR is blocked/dead by thockin on the grounds that pods that passed validation will no longer pass if they have overlapping mount points. Another solution discussed was to have the volume manager order the mounting of volumes so that mount points closer to / are mounted first.
I moved this to a Trello card: https://trello.com/c/qqrBplHi/554-ordering-of-unmount-operations-to-fix-projected-mounts. Per https://bugzilla.redhat.com/show_bug.cgi?id=1430322#c9, I'm moving this to 'low' severity/priority.
This seems to be fixed in the external-storage repo by https://github.com/kubernetes-incubator/external-storage/commit/8d4f6da5ee7624c38b6d8ffcf667e0591cc0a7d7. I am not sure whether we have released new images to quay.io.
oops, sorry, wrong bug. Please ignore comment #17.
This appears to be very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1516569, and Joel Smith has opened https://github.com/kubernetes/kubernetes/pull/57422 to address this.
3.9 PR: https://github.com/openshift/origin/pull/18165
Verified on the version below:

openshift v3.9.0-0.22.0
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.8

Created a pod with 2 volumes, and the pod is running well.

$ cat pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: dynamic
spec:
  containers:
  - name: dynamic
    image: aosqe/hello-openshift
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/mnt/ocp"
      name: volume1
    - mountPath: "/mnt/ocp/ver39"
      name: volume2
  volumes:
  - name: volume1
    secret:
      defaultMode: 420
      secretName: secr1
  - name: volume2
    secret:
      defaultMode: 420
      secretName: secr2

$ oc get pods dynamic
NAME      READY     STATUS    RESTARTS   AGE
dynamic   1/1       Running   0          21m

$ oc rsh dynamic touch /mnt/ocp/file1
$ oc rsh dynamic touch /mnt/ocp/ver39/file2
$ oc rsh dynamic ls -aR /mnt/ocp
/mnt/ocp:
.                                file1
..                               testilfe
..2018_01_23_03_08_54.946838222  ver39
..data

/mnt/ocp/..2018_01_23_03_08_54.946838222:
.  ..

/mnt/ocp/ver39:
.                                ..data
..                               file2
..2018_01_23_03_08_54.375300715  testfile

/mnt/ocp/ver39/..2018_01_23_03_08_54.375300715:
.  ..

On the node, verified the volumes while the pod is running:

tmpfs 1.8G 0 1.8G 0% /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume2
tmpfs 1.8G 0 1.8G 0% /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume1

And verified that the volumes are cleaned up from the node when the pod is removed.
FYI, our fixes for BZ#1516569 have now been released and should address this bug. The bug goes back (in some form or another) to OpenShift 3.3. Fixes are available in the following (and later) versions:

3.3.1.46.11-1.git.4.e236015
3.4.1.44.38-1.git.4.bb8df08
3.5.5.31.48-1.git.4.ff6153e
3.6.173.0.96-1.git.4.e6301f8
3.7.23-1.git.5.83efd71
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489