Bug 1556838
| Summary: | [3.5] Mounting file in a subpath fails if file was created in initContainer | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Jan Safranek <jsafrane> |
| Component: | Storage | Assignee: | Jan Safranek <jsafrane> |
| Status: | CLOSED ERRATA | QA Contact: | Wenqi He <wehe> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.5.0 | CC: | aos-bugs, aos-storage-staff, atripath, bchilds, eparis, gpei, hekumar, jmalde, jsafrane, misalunk, wehe, xtian |
| Target Milestone: | --- | | |
| Target Release: | 3.5.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1555910 | Environment: | |
| Last Closed: | 2018-04-12 06:05:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1555910 | | |
| Bug Blocks: | 1555911 | | |
Description
Jan Safranek
2018-03-15 11:08:40 UTC
In 3.5 (and maybe older releases) the reproducer YAML is different. Use this file:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: subpath
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        pod.beta.kubernetes.io/init-containers: '[{"name":"init-volume-hostpath","image":"busybox", "imagePullPolicy": "Always", "command":["touch","/mount/test"],"volumeMounts":[{"name":"mount","mountPath":"/mount"}]}]'
        pod.alpha.kubernetes.io/init-containers: '[{"name":"init-volume-hostpath","image":"busybox", "imagePullPolicy": "Always", "command":["touch","/mount/test"],"volumeMounts":[{"name":"mount","mountPath":"/mount"}]}]'
      labels:
        app: subpath
    spec:
      initContainers:
      - name: init
        image: busybox
        command:
        - touch
        - /mount/test
        volumeMounts:
        - name: mount
          mountPath: /mount
      containers:
      - name: subtest
        image: busybox
        command:
        - ls
        - -l
        - /mount/test
        volumeMounts:
        - name: mount
          mountPath: /mount/test
          subPath: test
      volumes:
      - name: mount
        emptyDir: {}
```
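A minimal sketch of driving the reproducer, assuming the YAML above is saved as `subpath-repro.yaml` (the file name is an assumption, not from the original report):

```sh
# Create the deployment from the reproducer above:
oc create -f subpath-repro.yaml

# The subtest pod goes to CrashLoopBackOff; the interesting part is which
# error the pod events report:
oc get pods -l app=subpath
oc describe pods -l app=subpath
```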
It fails with CrashLoopBackOff:

```
Error syncing pod, skipping: failed to "StartContainer" for "subtest" with RunContainerError: "GenerateRunContainerOptions: failed to prepare subPath for volumeMount \"mount\" of container \"subtest\""
```

and the node log contains:

```
E0315 11:10:20.203455 17434 mount_linux.go:521] Failed to clean subpath "/home/vagrant/ose/openshift.local.volumes/pods/4d8dd252-2841-11e8-bb03-08002743618a/volume-subpaths/mount/subtest/0": error checking /home/vagrant/ose/openshift.local.volumes/pods/4d8dd252-2841-11e8-bb03-08002743618a/volume-subpaths/mount/subtest/0 for mount: lstat /home/vagrant/ose/openshift.local.volumes/pods/4d8dd252-2841-11e8-bb03-08002743618a/volume-subpaths/mount/subtest/0/..: not a directory
```

and there is a leftover mount on the host.
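As an illustration (not part of the original report) of what the node log is complaining about: the subPath target here is a regular file, so resolving a path that traverses *through* it, such as the trailing `/..` in the lstat call, fails with ENOTDIR. A quick sketch on the node; the demo file name is arbitrary:

```sh
# Leftover bind mounts from the failed pod show up on the node
# (the exact path contains the pod UID and will differ per run):
grep volume-subpaths /proc/mounts

# Why the cleanup fails: ".." cannot be resolved through a non-directory,
# so stat/lstat on "<regular file>/.." returns "Not a directory":
touch /tmp/subpath-demo
stat /tmp/subpath-demo/..
# stat: cannot stat '/tmp/subpath-demo/..': Not a directory
```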
With the fix, the deployment pod ends with CrashLoopBackOff too, but the event is different:

```
Error syncing pod, skipping: failed to "StartContainer" for "subtest" with CrashLoopBackOff: "Back-off 10s restarting failed container=subtest pod=subpath-4043655688-tz57q_default(ca77b49a-2841-11e8-876b-08002743618a)"
```

(The CrashLoopBackOff itself is expected: the `subtest` container just runs `ls -l /mount/test` and exits, so the deployment keeps restarting it; the point is that the subPath is now prepared successfully.) All mounts are unmounted when all pods (and the deployment) are deleted.
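A sketch of verifying the cleanup after deleting the reproducer; the deployment name comes from the YAML above, and checking `/proc/mounts` for `volume-subpaths` is an assumption based on the node log path shown earlier:

```sh
oc delete deployment subpath

# Once all pods are gone, no stale subpath bind mounts should remain:
grep volume-subpaths /proc/mounts    # expect no output
```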
Tested with the version below:

```
openshift v3.5.5.31.66
kubernetes v1.5.2+43a9be4
```

Based on Jan's comment #1, created that pod and got the output below:

```
# oc describe pods subpath-2452179313-mgdx3
...
Events:
  FirstSeen LastSeen Count From                      SubObjectPath                               Type    Reason     Message
  --------- -------- ----- ----                      -------------                               ----    ------     -------
  1m        1m       1     {default-scheduler }                                                  Normal  Scheduled  Successfully assigned subpath-2452179313-mgdx3 to ip.ec2.internal
  55s       55s      1     {kubelet ip.ec2.internal} spec.initContainers{init-volume-hostpath}   Normal  Created    Created container with docker id 07858cc239cf; Security:[seccomp=unconfined]
  54s       54s      1     {kubelet ip.ec2.internal} spec.initContainers{init-volume-hostpath}   Normal  Started    Started container with docker id 07858cc239cf
  49s       49s      1     {kubelet ip.ec2.internal} spec.containers{subtest}                    Normal  Created    Created container with docker id 0a213a941fe2; Security:[seccomp=unconfined]
  48s       48s      1     {kubelet ip.ec2.internal} spec.containers{subtest}                    Normal  Started    Started container with docker id 0a213a941fe2
  43s       43s      1     {kubelet ip.ec2.internal} spec.containers{subtest}                    Normal  Started    Started container with docker id 346d1cff3f61
  43s       43s      1     {kubelet ip.ec2.internal} spec.containers{subtest}                    Normal  Created    Created container with docker id 346d1cff3f61; Security:[seccomp=unconfined]
  42s       41s      2     {kubelet ip.ec2.internal}                                             Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "subtest" with CrashLoopBackOff: "Back-off 10s restarting failed container=subtest pod=subpath-2452179313-mgdx3_default(12c21c0a-3d33-11e8-ad57-0ead52c5d5fe)"
  1m        27s      4     {kubelet ip.ec2.internal} spec.initContainers{init-volume-hostpath}   Normal  Pulling    pulling image "busybox"
  58s       25s      4     {kubelet ip.ec2.internal} spec.initContainers{init-volume-hostpath}   Normal  Pulled     Successfully pulled image "busybox"
  24s       24s      1     {kubelet ip.ec2.internal} spec.containers{subtest}                    Normal  Created    Created container with docker id 4cc5b12f6058; Security:[seccomp=unconfined]
  24s       24s      1     {kubelet ip.ec2.internal} spec.containers{subtest}                    Normal  Started    Started container with docker id 4cc5b12f6058
  42s       10s      4     {kubelet ip.ec2.internal} spec.containers{subtest}                    Warning BackOff    Back-off restarting failed docker container
  23s       10s      2     {kubelet ip.ec2.internal}                                             Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "subtest" with CrashLoopBackOff: "Back-off 20s restarting failed container=subtest pod=subpath-2452179313-mgdx3_default(12c21c0a-3d33-11e8-ad57-0ead52c5d5fe)"
```

This bug is fixed. Thanks

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1106