Bug 1430322 - overlapped mount points
Summary: overlapped mount points
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 3.9.0
Assignee: Jan Safranek
QA Contact: Liang Xia
Duplicates: 1482450 (view as bug list)
Depends On:
Reported: 2017-03-08 11:22 UTC by Alexander Koksharov
Modified: 2018-03-28 14:05 UTC (History)
13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: DownwardAPI, Secrets, ConfigMap and Projected volumes fully managed their content and did not allow any other volumes to be mounted on top of them. Consequence: Users could not mount any volume on top of aforementioned volumes. Fix: The volumes now touch only the files they create. Result: Users can mount any volume on top of aforementioned volumes.
Clone Of:
Last Closed: 2018-03-28 14:05:01 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:0489 0 None None None 2018-03-28 14:05:44 UTC

Description Alexander Koksharov 2017-03-08 11:22:08 UTC
Description of problem:
Secrets mounted at overlapping mount points such as:
  /etc/secret from volume-81n5p (rw)
  /etc/secret/ca from volume-uvl3l (rw)
caused errors like:
Mar  3 16:14:37 node2 atomic-openshift-node: E0303 16:14:37.267568    1240 atomic_writer.go:444] pod stage/service-translation-21-k3j03 volume volume-lnt09: error pruning old user-visible path ca: remove /var/lib/origin/openshift.local.volumes/pods/da4ce4f9-0023-11e7-8d29-fa163eb5ccf9/volumes/kubernetes.io~secret/volume-lnt09/ca: device or resource busy

It is not explicitly stated anywhere that overlapping mount points are unsupported, so the customer requests that we "either disallow such a configuration or at least warn the user about this circumstance".
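For illustration, a minimal pod spec with the overlapping layout described above (pod, volume, and secret names here are hypothetical, not taken from the customer's environment):

apiVersion: v1
kind: Pod
metadata:
  name: overlap-test          # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "86400"]
    volumeMounts:
    - name: secret-root       # mounted at /etc/secret
      mountPath: /etc/secret
    - name: secret-ca         # mounted on top of the first volume
      mountPath: /etc/secret/ca
  volumes:
  - name: secret-root
    secret:
      secretName: mysecret    # hypothetical secret names
  - name: secret-ca
    secret:
      secretName: ca-secret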

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

Comment 2 Paul Morie 2017-03-09 21:01:14 UTC
Is there any way to get a copy of the pod that the customer expects to use this with?  I would want to see the volumes in the pod spec.

Comment 4 Derek Carr 2017-03-22 14:40:03 UTC
This use case should be improved with the system volume projection in 3.6.

Comment 5 Derek Carr 2017-04-12 14:43:59 UTC
rebase not yet landed.

Comment 7 Jeff Peeler 2017-05-12 17:01:17 UTC
With the projected volume driver, one can now project multiple secrets into the same directory. The easiest way for overlapping scenarios is to set the mount path to the longest path in common and then project the additional items as needed via path. A tested example is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: atest
spec:
  containers:
  - name: atest
    image: busybox
    command:
    - sleep
    - "86400"
    volumeMounts:
    - name: all-in-one
      mountPath: "/etc/secret"
  volumes:
  - name: all-in-one
    projected:
      defaultMode: 0666
      sources:
      - secret:
          name: mysecret
      - secret:
          name: mysecret2
          items:
            - key: username2
              path: ca/username2

More documentation here: https://kubernetes.io/docs/tasks/configure-pod-container/projected-volume/

Comment 9 Seth Jennings 2017-06-20 17:56:30 UTC
From reading the customer ticket, it sounds like this is resolved and not a bug.

Comment 10 Seth Jennings 2017-06-21 14:26:43 UTC
Turns out there is a PR in flight for validating that volume mounts don't overlap.

Reopening to track it.

Comment 11 Seth Jennings 2017-06-23 03:31:17 UTC
PR is in flight. Needs review/approval upstream. Should make next release.

Comment 12 Eric Paris 2017-06-23 11:58:49 UTC
Does "Next Release" mean you think we can and should fix this in 3.6.0, 3.6.1 or 3.7?

Comment 13 Derek Carr 2017-08-19 03:56:48 UTC
Poked the upstream PR for review.  It did not make 1.7, so it would be 1.8 at best.  We could pick it back to 3.7 if it lands.

Comment 14 Seth Jennings 2017-08-25 14:24:05 UTC
*** Bug 1482450 has been marked as a duplicate of this bug. ***

Comment 15 Seth Jennings 2017-09-06 21:36:22 UTC
sending to Storage for sig-storage discussion and path forward.

The upstream PR is blocked/dead by thockin on the grounds that pods that passed validation would no longer pass if they have overlapping mount points.

Other solutions discussed were to have the volume manager order the mounting of volumes such that mount points closer to / are mounted first.

Comment 16 Bradley Childs 2017-09-07 19:25:46 UTC
I moved this to a trello card https://trello.com/c/qqrBplHi/554-ordering-of-unmount-operations-to-fix-projected-mounts

Per https://bugzilla.redhat.com/show_bug.cgi?id=1430322#c9 I'm moving this to 'low' severity/priority.

Comment 17 Jan Safranek 2017-09-13 14:06:46 UTC
This seems to be fixed in external-storage repo by https://github.com/kubernetes-incubator/external-storage/commit/8d4f6da5ee7624c38b6d8ffcf667e0591cc0a7d7

I am not sure if we released new images to quay.io.

Comment 18 Jan Safranek 2017-09-13 14:15:55 UTC
oops, sorry, wrong bug. Please ignore comment #17.

Comment 23 Hemant Kumar 2018-01-17 22:24:22 UTC
This appears to be very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1516569 and Joel Smith has opened https://github.com/kubernetes/kubernetes/pull/57422 to address it.

Comment 26 Jan Safranek 2018-01-19 11:39:02 UTC
3.9 PR: https://github.com/openshift/origin/pull/18165

Comment 28 Liang Xia 2018-01-23 03:34:56 UTC
Verified on the versions below:
openshift v3.9.0-0.22.0
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.8

Created a pod with 2 volumes, and the pod runs well.

$ cat pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: dynamic
spec:
  containers:
    - name: dynamic
      image: aosqe/hello-openshift
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/mnt/ocp"
          name: volume1
        - mountPath: "/mnt/ocp/ver39"
          name: volume2
  volumes:
    - name: volume1
      secret:
        defaultMode: 420
        secretName: secr1
    - name: volume2
      secret:
        defaultMode: 420
        secretName: secr2

$ oc get pods dynamic
NAME      READY     STATUS    RESTARTS   AGE
dynamic   1/1       Running   0          21m

$ oc rsh dynamic touch /mnt/ocp/file1
$ oc rsh dynamic touch /mnt/ocp/ver39/file2
$ oc rsh dynamic ls -aR /mnt/ocp
.                                file1
..                               testilfe
..2018_01_23_03_08_54.946838222  ver39

.   ..

.                                ..data
..                               file2
..2018_01_23_03_08_54.375300715  testfile

.   ..

On the node, verified the volumes while the pod is running:
tmpfs                  1.8G     0  1.8G   0% /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume2
tmpfs                  1.8G     0  1.8G   0% /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume1

And verified that the volumes are cleaned up from the node when the pod is removed.

Comment 29 Joel Smith 2018-03-12 20:32:13 UTC
FYI, our fixes for BZ#1516569 have now been released and should address this bug. The bug goes back (in some form or another) to OpenShift 3.3. Fixes are available in the following (and later) versions:

Comment 32 errata-xmlrpc 2018-03-28 14:05:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

