+++ This bug was initially created as a clone of Bug #1555910 +++

Description of problem:

Originally reported upstream: https://github.com/kubernetes/kubernetes/issues/61178

The bug can be reproduced by creating a deployment in which an init container creates a file in an emptyDir volume and the main container then tries to mount that file as a subPath:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: subpath
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: subpath
    spec:
      initContainers:
      - name: init
        image: busybox
        command:
        - touch
        - /mount/test
        volumeMounts:
        - name: mount
          mountPath: /mount
      containers:
      - name: subtest
        image: busybox
        command:
        - ls
        - -l
        - /mount/test
        volumeMounts:
        - name: mount
          mountPath: /mount/test
          subPath: test
      volumes:
      - name: mount
        emptyDir: {}
---

--- Additional comment from Hemant Kumar on 2018-03-14 16:05:08 EDT ---

https://github.com/openshift/ose/pull/1135
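To reproduce with the manifest above, a minimal command sequence looks like this (the file name subpath.yaml is illustrative, not part of the original report):

# save the manifest above as subpath.yaml, then:
oc create -f subpath.yaml

# watch the pod; it should end up in CrashLoopBackOff
oc get pods -n kube-system -l app=subpath -w

# inspect the failure events
oc describe pods -n kube-system -l app=subpath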
In 3.5 (and maybe older releases) the reproducer YAML is different, because init containers are specified via annotations there. Use this file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: subpath
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        pod.beta.kubernetes.io/init-containers: '[{"name":"init-volume-hostpath","image":"busybox", "imagePullPolicy": "Always", "command":["touch","/mount/test"],"volumeMounts":[{"name":"mount","mountPath":"/mount"}]}]'
        pod.alpha.kubernetes.io/init-containers: '[{"name":"init-volume-hostpath","image":"busybox", "imagePullPolicy": "Always", "command":["touch","/mount/test"],"volumeMounts":[{"name":"mount","mountPath":"/mount"}]}]'
      labels:
        app: subpath
    spec:
      initContainers:
      - name: init
        image: busybox
        command:
        - touch
        - /mount/test
        volumeMounts:
        - name: mount
          mountPath: /mount
      containers:
      - name: subtest
        image: busybox
        command:
        - ls
        - -l
        - /mount/test
        volumeMounts:
        - name: mount
          mountPath: /mount/test
          subPath: test
      volumes:
      - name: mount
        emptyDir: {}

Without the fix, it fails with CrashLoopBackOff:

Error syncing pod, skipping: failed to "StartContainer" for "subtest" with RunContainerError: "GenerateRunContainerOptions: failed to prepare subPath for volumeMount \"mount\" of container \"subtest\""

and the node log contains:

E0315 11:10:20.203455   17434 mount_linux.go:521] Failed to clean subpath "/home/vagrant/ose/openshift.local.volumes/pods/4d8dd252-2841-11e8-bb03-08002743618a/volume-subpaths/mount/subtest/0": error checking /home/vagrant/ose/openshift.local.volumes/pods/4d8dd252-2841-11e8-bb03-08002743618a/volume-subpaths/mount/subtest/0 for mount: lstat /home/vagrant/ose/openshift.local.volumes/pods/4d8dd252-2841-11e8-bb03-08002743618a/volume-subpaths/mount/subtest/0/..: not a directory

and there is a leftover mount on the host.

With the fix, the deployment pod ends in CrashLoopBackOff too (the subtest container simply runs ls and exits, and a Deployment's restartPolicy is Always, so the container is restarted until it backs off), but the event is different:

Error syncing pod, skipping: failed to "StartContainer" for "subtest" with CrashLoopBackOff: "Back-off 10s restarting failed container=subtest pod=subpath-4043655688-tz57q_default(ca77b49a-2841-11e8-876b-08002743618a)"

and all mounts are unmounted when all pods (and the deployment) are deleted.
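To confirm the leftover mount on an unfixed node, the standard mount listing tools can be used on the host (the volume-subpaths pattern below is taken from the kubelet path in the log above; the exact pod UID differs per run):

# run on the node; on a broken kubelet this still shows the stale
# bind mount even after the pod is deleted
mount | grep volume-subpaths

# equivalent check with findmnt
findmnt | grep volume-subpaths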
OSE PR: https://github.com/openshift/ose/pull/1122
Tested with the version below:

openshift v3.5.5.31.66
kubernetes v1.5.2+43a9be4

Based on Jan's comment #1, I created that pod and got the output below:

# oc describe pods subpath-2452179313-mgdx3
...
Events:
  FirstSeen  LastSeen  Count  From                       SubObjectPath                              Type     Reason      Message
  ---------  --------  -----  ----                       -------------                              ----     ------      -------
  1m         1m        1      {default-scheduler }                                                  Normal   Scheduled   Successfully assigned subpath-2452179313-mgdx3 to ip.ec2.internal
  55s        55s       1      {kubelet ip.ec2.internal}  spec.initContainers{init-volume-hostpath}  Normal   Created     Created container with docker id 07858cc239cf; Security:[seccomp=unconfined]
  54s        54s       1      {kubelet ip.ec2.internal}  spec.initContainers{init-volume-hostpath}  Normal   Started     Started container with docker id 07858cc239cf
  49s        49s       1      {kubelet ip.ec2.internal}  spec.containers{subtest}                   Normal   Created     Created container with docker id 0a213a941fe2; Security:[seccomp=unconfined]
  48s        48s       1      {kubelet ip.ec2.internal}  spec.containers{subtest}                   Normal   Started     Started container with docker id 0a213a941fe2
  43s        43s       1      {kubelet ip.ec2.internal}  spec.containers{subtest}                   Normal   Started     Started container with docker id 346d1cff3f61
  43s        43s       1      {kubelet ip.ec2.internal}  spec.containers{subtest}                   Normal   Created     Created container with docker id 346d1cff3f61; Security:[seccomp=unconfined]
  42s        41s       2      {kubelet ip.ec2.internal}                                             Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "subtest" with CrashLoopBackOff: "Back-off 10s restarting failed container=subtest pod=subpath-2452179313-mgdx3_default(12c21c0a-3d33-11e8-ad57-0ead52c5d5fe)"
  1m         27s       4      {kubelet ip.ec2.internal}  spec.initContainers{init-volume-hostpath}  Normal   Pulling     pulling image "busybox"
  58s        25s       4      {kubelet ip.ec2.internal}  spec.initContainers{init-volume-hostpath}  Normal   Pulled      Successfully pulled image "busybox"
  24s        24s       1      {kubelet ip.ec2.internal}  spec.containers{subtest}                   Normal   Created     Created container with docker id 4cc5b12f6058; Security:[seccomp=unconfined]
  24s        24s       1      {kubelet ip.ec2.internal}  spec.containers{subtest}                   Normal   Started     Started container with docker id 4cc5b12f6058
  42s        10s       4      {kubelet ip.ec2.internal}  spec.containers{subtest}                   Warning  BackOff     Back-off restarting failed docker container
  23s        10s       2      {kubelet ip.ec2.internal}                                             Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "subtest" with CrashLoopBackOff: "Back-off 20s restarting failed container=subtest pod=subpath-2452179313-mgdx3_default(12c21c0a-3d33-11e8-ad57-0ead52c5d5fe)"

The pod now fails only with the expected CrashLoopBackOff and no subPath preparation error. This bug is fixed. Thanks
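As a final sanity check that the fix also cleans up the host mounts, the deployment can be deleted and the node re-checked (a sketch, assuming the same deployment name and node access as in the comments above):

# delete the reproducer deployment and wait for its pods to terminate
oc delete deployment subpath

# on the node: on a fixed build this should print nothing
mount | grep volume-subpaths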
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1106