Bug 1430322 - overlapped mount points
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 3.9.0
Assigned To: Jan Safranek
QA Contact: Liang Xia
Keywords: Reopened
Duplicates: 1482450
Depends On:
Blocks:
 
Reported: 2017-03-08 06:22 EST by Alexander Koksharov
Modified: 2018-03-28 10:05 EDT
CC List: 13 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: DownwardAPI, Secrets, ConfigMap, and Projected volumes fully managed their content and did not allow any other volumes to be mounted on top of them. Consequence: Users could not mount any volume on top of the aforementioned volumes. Fix: The volumes now touch only the files they create. Result: Users can mount any volume on top of the aforementioned volumes.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-03-28 10:05:01 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
Red Hat Product Errata RHBA-2018:0489 (last updated 2018-03-28 10:05 EDT)

Description Alexander Koksharov 2017-03-08 06:22:08 EST
Description of problem:
Secrets mounted at overlapping mount points, for example:
  /etc/secret from volume-81n5p (rw)
  /etc/secret/ca from volume-uvl3l (rw)
cause errors such as:
Mar  3 16:14:37 node2 atomic-openshift-node: E0303 16:14:37.267568    1240 atomic_writer.go:444] pod stage/service-translation-21-k3j03 volume volume-lnt09: error pruning old user-visible path ca: remove /var/lib/origin/openshift.local.volumes/pods/da4ce4f9-0023-11e7-8d29-fa163eb5ccf9/volumes/kubernetes.io~secret/volume-lnt09/ca: device or resource busy

It is not explicitly stated anywhere that overlapping mount points are unsupported, so the customer requests that we "either disallow such a configuration or at least warn the user about this circumstance".
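
For reference, a minimal pod spec that reproduces this kind of overlapping layout might look like the sketch below (the pod and secret names are illustrative placeholders, not the customer's actual objects):

apiVersion: v1
kind: Pod
metadata:
  name: overlap-repro
spec:
  containers:
  - name: overlap-repro
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    # outer secret volume
    - name: volume-outer
      mountPath: /etc/secret
    # second secret volume mounted inside the first one
    - name: volume-ca
      mountPath: /etc/secret/ca
  volumes:
  - name: volume-outer
    secret:
      secretName: my-secret      # placeholder secret
  - name: volume-ca
    secret:
      secretName: my-ca-secret   # placeholder secret

On affected versions, the atomic writer that refreshes the outer secret volume tries to prune /etc/secret/ca as stale content and fails with "device or resource busy" because that path is itself a mount point.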

Version-Release number of selected component (if applicable):
3.4

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 2 Paul Morie 2017-03-09 16:01:14 EST
Is there any way to get a copy of the pod that the customer expects to use this with?  I would want to see the volumes in the pod spec.
Comment 4 Derek Carr 2017-03-22 10:40:03 EDT
This use case should be improved with the system volume projection in 3.6.
Comment 5 Derek Carr 2017-04-12 10:43:59 EDT
rebase not yet landed.
Comment 7 Jeff Peeler 2017-05-12 13:01:17 EDT
With the projected volume driver, one can now project multiple secrets into the same directory. The easiest approach for overlapping scenarios is to set the mount path to the longest common path and then project the additional items as needed via path. A tested example is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: atest
spec:
  containers:
  - name: atest
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: all-in-one
      mountPath: "/etc/secret"
  volumes:
  - name: all-in-one
    projected:
      defaultMode: 0666
      sources:
      - secret:
          name: mysecret
      - secret:
          name: mysecret2
          items:
            - key: username2
              path: ca/username2

More documentation here: https://kubernetes.io/docs/tasks/configure-pod-container/projected-volume/
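
Assuming the secrets from the example above exist (mysecret, and mysecret2 with a key named username2), the resulting layout can be inspected with something like the following (the file name atest.yaml is just an assumption for where the pod spec is saved):

$ oc create -f atest.yaml
$ oc rsh atest ls -R /etc/secret

The keys of mysecret should appear directly under /etc/secret and username2 under /etc/secret/ca, all served by a single projected volume, so there is no nested mount point for the atomic writer to trip over.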
Comment 9 Seth Jennings 2017-06-20 13:56:30 EDT
From reading the customer ticket, it sounds like this is resolved and not a bug.
Comment 10 Seth Jennings 2017-06-21 10:26:43 EDT
It turns out there is a PR in flight for validating that volume mounts don't overlap.

https://github.com/kubernetes/kubernetes/pull/47456

Reopening to track it.
Comment 11 Seth Jennings 2017-06-22 23:31:17 EDT
PR is in flight. Needs review/approval upstream. Should make next release.
Comment 12 Eric Paris 2017-06-23 07:58:49 EDT
Does "Next Release" mean you think we can and should fix this in 3.6.0, 3.6.1 or 3.7?
Comment 13 Derek Carr 2017-08-18 23:56:48 EDT
Poked the upstream PR for review.  It did not make 1.7, so it would be 1.8 at best.  We could pick it back to 3.7 if it lands.
Comment 14 Seth Jennings 2017-08-25 10:24:05 EDT
*** Bug 1482450 has been marked as a duplicate of this bug. ***
Comment 15 Seth Jennings 2017-09-06 17:36:22 EDT
Sending to Storage for sig-storage discussion and a path forward.

The upstream PR is blocked/dead by thockin on the grounds that pods that previously passed validation would no longer pass if they have overlapping mount points.

Other solutions discussed were to have the volume manager order the mounting of volumes so that mount points closer to / are mounted first.
Comment 16 Bradley Childs 2017-09-07 15:25:46 EDT
I moved this to a trello card https://trello.com/c/qqrBplHi/554-ordering-of-unmount-operations-to-fix-projected-mounts

Per https://bugzilla.redhat.com/show_bug.cgi?id=1430322#c9 I'm moving this to 'low' severity/priority.
Comment 17 Jan Safranek 2017-09-13 10:06:46 EDT
This seems to be fixed in external-storage repo by https://github.com/kubernetes-incubator/external-storage/commit/8d4f6da5ee7624c38b6d8ffcf667e0591cc0a7d7

I am not sure if we released new images to quay.io.
Comment 18 Jan Safranek 2017-09-13 10:15:55 EDT
oops, sorry, wrong bug. Please ignore comment #17.
Comment 23 Hemant Kumar 2018-01-17 17:24:22 EST
This appears to be very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1516569 and Joel Smith has opened https://github.com/kubernetes/kubernetes/pull/57422 to address it.
Comment 26 Jan Safranek 2018-01-19 06:39:02 EST
3.9 PR: https://github.com/openshift/origin/pull/18165
Comment 28 Liang Xia 2018-01-22 22:34:56 EST
Verified on the following versions:
openshift v3.9.0-0.22.0
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.8

Created a pod with 2 volumes, and the pod runs well.

$ cat pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: dynamic
spec:
  containers:
    - name: dynamic
      image: aosqe/hello-openshift
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/mnt/ocp"
        name: volume1
      - mountPath: "/mnt/ocp/ver39"
        name: volume2
  volumes:
    - name: volume1
      secret:
        defaultMode: 420
        secretName: secr1
    - name: volume2
      secret:
        defaultMode: 420
        secretName: secr2

$ oc get pods dynamic
NAME      READY     STATUS    RESTARTS   AGE
dynamic   1/1       Running   0          21m

$ oc rsh dynamic touch /mnt/ocp/file1
$ oc rsh dynamic touch /mnt/ocp/ver39/file2
$ oc rsh dynamic ls -aR /mnt/ocp
/mnt/ocp:
.                                file1
..                               testilfe
..2018_01_23_03_08_54.946838222  ver39
..data

/mnt/ocp/..2018_01_23_03_08_54.946838222:
.   ..

/mnt/ocp/ver39:
.                                ..data
..                               file2
..2018_01_23_03_08_54.375300715  testfile

/mnt/ocp/ver39/..2018_01_23_03_08_54.375300715:
.   ..


On the node, verify the volumes while the pod is running:
tmpfs                  1.8G     0  1.8G   0% /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume2
tmpfs                  1.8G     0  1.8G   0% /var/lib/origin/openshift.local.volumes/pods/ab82c67c-ffea-11e7-971a-42010af0000a/volumes/kubernetes.io~secret/volume1

And verified that the volumes are cleaned up from the node when the pod is removed.
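
For reference, the node-side checks above boil down to something like the following sketch (the df filter is illustrative; the actual mount paths contain the pod's UID, and oc delete is run from a client while df is run on the node):

# while the pod is running, both secret volumes appear as tmpfs mounts
$ df -h | grep kubernetes.io~secret

# after deleting the pod, the same filter should eventually return nothing
$ oc delete pod dynamic
$ df -h | grep kubernetes.io~secret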
Comment 29 Joel Smith 2018-03-12 16:32:13 EDT
FYI, our fixes for BZ#1516569 have now been released and should address this bug. The bug goes back (in some form or another) to OpenShift 3.3. Fixes are available in the following (and later) versions:

3.3.1.46.11-1.git.4.e236015
3.4.1.44.38-1.git.4.bb8df08
3.5.5.31.48-1.git.4.ff6153e
3.6.173.0.96-1.git.4.e6301f8
3.7.23-1.git.5.83efd71
Comment 32 errata-xmlrpc 2018-03-28 10:05:01 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489
