Bug 1657735 - Failed to prepare subPath error with system container OCP
Summary: Failed to prepare subPath error with system container OCP
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.10.z
Assignee: Jan Safranek
QA Contact: Wenqi He
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-10 10:57 UTC by Wenqi He
Modified: 2019-01-10 09:27 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-10 09:27:12 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2019:0026 (last updated 2019-01-10 09:27:13 UTC)

Description Wenqi He 2018-12-10 10:57:46 UTC
Description of problem:
A pod with a subPath volume mount fails to start.

Version-Release number of selected component (if applicable):
openshift v3.10.83
kubernetes v1.10.0+b81c8f8

How reproducible:
Always

Steps to Reproduce:
1. Create a pod with a subPath volume mount backed by an emptyDir volume; a minimal sketch follows (the full reproducer is in comment #2).
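
A minimal sketch of such a pod spec, for illustration only (the pod name is taken from the node log below; the image and volume name follow the reproducer in comment #2, which additionally uses an init container to pre-create the subpath directory):

kind: Pod
apiVersion: v1
metadata:
  name: subpathpod
spec:
  containers:
    - name: testpod
      image: aosqe/hello-openshift
      volumeMounts:
      - mountPath: "/mnt/iaas"
        subPath: "subpath"
        name: dynamic
  volumes:
    - name: dynamic
      emptyDir: {}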

Actual results:
The pod fails to start with the error "Error: failed to prepare subPath for volumeMount".
The same failure occurs with both emptyDir and Cinder volumes.

# oc get pods
NAME                       READY     STATUS                       RESTARTS   AGE
emptypod                   0/1       CreateContainerConfigError   0          15m

# oc describe po emptypod
Events:
  Type     Reason     Age                From                                         Message
  ----     ------     ----               ----                                         -------
  Normal   Scheduled  16m                default-scheduler                            Successfully assigned emptypod to preserve-ah310stage4--node-infra-1
  Normal   Pulling    16m                kubelet, preserve-ah310stage4--node-infra-1  pulling image "aosqe/hello-openshift"
  Normal   Pulled     16m                kubelet, preserve-ah310stage4--node-infra-1  Successfully pulled image "aosqe/hello-openshift"
  Normal   Created    16m                kubelet, preserve-ah310stage4--node-infra-1  Created container
  Normal   Started    16m                kubelet, preserve-ah310stage4--node-infra-1  Started container
  Normal   Pulled     14m (x7 over 16m)  kubelet, preserve-ah310stage4--node-infra-1  Successfully pulled image "aosqe/hello-openshift"
  Warning  Failed     14m (x7 over 16m)  kubelet, preserve-ah310stage4--node-infra-1  Error: failed to prepare subPath for volumeMount "dynamic" of container "testpod"
  Normal   Pulling    1m (x58 over 16m)  kubelet, preserve-ah310stage4--node-infra-1  pulling image "aosqe/hello-openshift"


Expected results:
The pod starts successfully.

Master Log:
Nothing relevant was found in the master logs.

Node Log (of failed PODs):
Dec 10 10:26:20 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: I1210 10:26:20.754226   19428 kuberuntime_manager.go:570] computePodActions got {KillPod:false CreateSandbox:false SandboxID:df0bc615d7646432928f56818aea5f7c9b316ffa2a426a7e0f37a9ff4840ce66 Attempt:0 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "subpathpod_default(a8ebb857-fc61-11e8-a5bb-fa163edff5fc)"
Dec 10 10:26:20 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: I1210 10:26:20.754342   19428 kuberuntime_manager.go:724] Creating container &Container{Name:testpod,Image:aosqe/hello-openshift,Command:[],Args:[],WorkingDir:,Ports:[{http-server 0 80 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{dynamic false /mnt/iaas subpath <nil>} {default-token-ts4zs true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,VolumeDevices:[],} in pod subpathpod_default(a8ebb857-fc61-11e8-a5bb-fa163edff5fc)
Dec 10 10:26:20 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: I1210 10:26:20.756972   19428 server.go:440] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"subpathpod", UID:"a8ebb857-fc61-11e8-a5bb-fa163edff5fc", APIVersion:"v1", ResourceVersion:"67521", FieldPath:"spec.containers{testpod}"}): type: 'Normal' reason: 'Pulling' pulling image "aosqe/hello-openshift"
Dec 10 10:26:22 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: I1210 10:26:22.318986   19428 kuberuntime_container.go:100] Generating ref for container testpod: &v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"subpathpod", UID:"a8ebb857-fc61-11e8-a5bb-fa163edff5fc", APIVersion:"v1", ResourceVersion:"67521", FieldPath:"spec.containers{testpod}"}
Dec 10 10:26:22 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: I1210 10:26:22.319103   19428 kubelet_pods.go:172] container: default/subpathpod/testpod podIP: "10.130.0.24" creating hosts mount: true
Dec 10 10:26:22 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: I1210 10:26:22.319878   19428 server.go:440] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"subpathpod", UID:"a8ebb857-fc61-11e8-a5bb-fa163edff5fc", APIVersion:"v1", ResourceVersion:"67521", FieldPath:"spec.containers{testpod}"}): type: 'Normal' reason: 'Pulled' Successfully pulled image "aosqe/hello-openshift"
Dec 10 10:26:22 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: E1210 10:26:22.323632   19428 pod_workers.go:186] Error syncing pod a8ebb857-fc61-11e8-a5bb-fa163edff5fc ("subpathpod_default(a8ebb857-fc61-11e8-a5bb-fa163edff5fc)"), skipping: failed to "StartContainer" for "testpod" with CreateContainerConfigError: "failed to prepare subPath for volumeMount \"dynamic\" of container \"testpod\""
Dec 10 10:26:22 preserve-ah310stage4--node-infra-1 atomic-openshift-node[19418]: I1210 10:26:22.323812   19428 server.go:440] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"subpathpod", UID:"a8ebb857-fc61-11e8-a5bb-fa163edff5fc", APIVersion:"v1", ResourceVersion:"67521", FieldPath:"spec.containers{testpod}"}): type: 'Warning' reason: 'Failed' Error: failed to prepare subPath for volumeMount "dynamic" of container "testpod"


Additional info:
OCP is installed as a system container on Atomic Host (a way to check this is sketched below).
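
For reference, a quick way to check whether the node runs as a system container rather than from RPMs (a sketch; it assumes the Atomic Host `atomic` CLI and the atomic-openshift-node unit seen in the logs above):

# atomic containers list
# systemctl status atomic-openshift-node
# rpm -q atomic-openshift-node

On a system container install the node shows up in the `atomic containers list` output and the RPM query typically reports the package as not installed; on an RPM install it is the other way around.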

Comment 2 Wenqi He 2018-12-11 10:57:20 UTC
Adding the Atomic Host info:
# cat /etc/redhat-release 
Red Hat Enterprise Linux Atomic Host release 7.6 
# uname -a
Linux preserve-ah310stage4--master-etcd-1 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux


I also double-checked on an RPM-installed OCP today; it works fine there with a subPath volume.
# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.6 (Maipo)

# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
emptypod                   1/1       Running   0          4m

# cat empty.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: emptypod
spec:
  initContainers:
    - name: init
      image: aosqe/hello-openshift
      command: ["sh", "-c", "mkdir /mnt/iaas/subpath; cp /hello /mnt/iaas/subpath/hello"]
      volumeMounts:
        - mountPath: "/mnt/iaas"
          name: dynamic
  containers:
    - name: testpod
      image: aosqe/hello-openshift
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/mnt/iaas"
        subPath: "subpath"
        name: dynamic
  volumes:
    - name: dynamic
      emptyDir: {}
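
For completeness, the reproducer can be applied and checked the same way on both setups (a sketch):

# oc create -f empty.yaml
# oc get pods emptypod
# oc describe pod emptypod

On the system container / Atomic Host setup the describe output shows the "failed to prepare subPath for volumeMount" warning events from the description; on the RPM install the pod goes to Running as shown above.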

Comment 3 Tomas Smetana 2018-12-12 09:36:39 UTC
I failed to reproduce this with v3.10.86 on OpenStack. And seeing comment #2... Can we close this as NOTABUG, or is there still something to be fixed here?

Comment 4 Wenqi He 2018-12-12 11:45:08 UTC
(In reply to Tomas Smetana from comment #3)
> I failed to reproduce this with v3.10.86 on OpenStack. And seeing
> comment #2... Can we close this as NOTABUG, or is there still something to be
> fixed here?

Hi Tomas, I found this bug on a system container OCP installation on Atomic Host. I have sent you a separate mail sharing the environment. Thanks.

Comment 5 Tomas Smetana 2018-12-12 13:52:34 UTC
Thanks. I've looked around your cluster and there really is something broken. I suspect the init container fails to prepare the directory for the "testpod" container. I can see some image pull errors, but I'm not sure yet whether those are related.

Comment 6 Tomas Smetana 2018-12-12 14:49:27 UTC
Yes, the problem is specific to Atomic Host: the nsenter mounter is trying to access a nonexistent path (/var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0):

Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.715327   19505 kubelet_pods.go:172] container: default/emptypod/testpod podIP: "10.130.0.2" creating hosts mount: true
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.715340   19505 server.go:440] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"emptypod", UID:"378f2f7b-fe02-11e8-bcca-fa163edf5d52", APIVersion:"v1", ResourceVersion:"4338", FieldPath:"spec.containers{testpod}"}): type: 'Normal' reason: 'Pulled' Successfully pull
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.715498   19505 nsenter_mount.go:323] Kubelet's PID on the host is 19505
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.715538   19505 mount_linux.go:747] doBindSubPath "/var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volumes/kubernetes.io~empty-dir/dynamic/subpath", full subpath "/var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/v
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.715817   19505 nsenter_mount.go:167] findmnt: directory /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0 does not exist
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.715841   19505 nsenter.go:107] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- realpath -e /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0]
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.718299   19505 nsenter.go:151] failed to resolve symbolic links on /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0: exit status 1
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.718328   19505 mount_linux.go:767] doBindSubPath() failed for "/var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0", cleaning up subpath
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.718340   19505 mount_linux.go:890] Cleaning up subpath mounts for subpath 0
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.718353   19505 nsenter_mount.go:167] findmnt: directory /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0 does not exist
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.718360   19505 nsenter.go:107] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- realpath -e /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0]
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.720241   19505 nsenter.go:151] failed to resolve symbolic links on /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0: exit status 1
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: E1212 11:36:58.720261   19505 mount_linux.go:769] Failed to clean subpath "/var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0": error checking /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volu
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: E1212 11:36:58.720273   19505 kubelet_pods.go:245] failed to prepare subPath for volumeMount "dynamic" of container "testpod": error checking path /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0 for mount: exit status 1
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: E1212 11:36:58.720331   19505 kuberuntime_manager.go:733] container start failed: CreateContainerConfigError: failed to prepare subPath for volumeMount "dynamic" of container "testpod"
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: E1212 11:36:58.720415   19505 pod_workers.go:186] Error syncing pod 378f2f7b-fe02-11e8-bcca-fa163edf5d52 ("emptypod_default(378f2f7b-fe02-11e8-bcca-fa163edf5d52)"), skipping: failed to "StartContainer" for "testpod" with CreateContainerConfigError: "failed to prepare subPath for volumeMount \"
Dec 12 11:36:58 preserve-wehe310--node-1 atomic-openshift-node[19495]: I1212 11:36:58.720432   19505 server.go:440] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"emptypod", UID:"378f2f7b-fe02-11e8-bcca-fa163edf5d52", APIVersion:"v1", ResourceVersion:"4338", FieldPath:"spec.containers{testpod}"}): type: 'Warning' reason: 'Failed' Error: failed to
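
For reference, the failing probe can be repeated by hand on the affected node; this is just the realpath check from the kubelet log above, run from inside the node system container where /rootfs is the bind-mounted host root (a sketch):

# nsenter --mount=/rootfs/proc/1/ns/mnt -- realpath -e /var/lib/origin/openshift.local.volumes/pods/378f2f7b-fe02-11e8-bcca-fa163edf5d52/volume-subpaths/dynamic/testpod/0

It exits with status 1, matching the "failed to resolve symbolic links ... exit status 1" lines above: the volume-subpaths target directory cannot be resolved in the host mount namespace, and that is what surfaces as the "failed to prepare subPath" CreateContainerConfigError.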

Comment 9 Jan Safranek 2018-12-17 13:32:12 UTC
Origin PR: https://github.com/openshift/origin/pull/21672

Comment 12 Wenqi He 2018-12-19 06:05:08 UTC
Tested on the version below:
openshift v3.10.89
kubernetes v1.10.0+b81c8f8

# uname -a
Linux qe-wehe-master-etcd-1 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Atomic Host release 7.6


The pod runs fine with a subPath volume mount on a system container installation on Atomic Host.

# oc get pods emptypod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: anyuid
  creationTimestamp: 2018-12-19T05:58:47Z
  name: emptypod
  namespace: default
  resourceVersion: "21118"
  selfLink: /api/v1/namespaces/default/pods/emptypod
  uid: 26bce23f-0353-11e9-ae8a-fa163e9415bd
spec:
  containers:
  - image: aosqe/hello-openshift
    imagePullPolicy: Always
    name: testpod
    ports:
    - containerPort: 80
      name: http-server
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/iaas
      name: dynamic
      subPath: subpath
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-z5hp5
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-rw7c2
  initContainers:
  - command:
    - sh
    - -c
    - mkdir /mnt/iaas/subpath; cp /hello /mnt/iaas/subpath/hello
    image: aosqe/hello-openshift
    imagePullPolicy: Always
    name: init
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/iaas
      name: dynamic
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-z5hp5
      readOnly: true
  nodeName: qe-wehe-node-infra-1
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c1,c0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: dynamic
  - name: default-token-z5hp5
    secret:
      defaultMode: 420
      secretName: default-token-z5hp5
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-12-19T05:58:55Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-12-19T05:58:59Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-12-19T05:58:47Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://441daf160c1b236a68d5cc3f5ee4c16ec6b7190dc70be01355f93ec2e2392f5e
    image: docker.io/aosqe/hello-openshift:latest
    imageID: docker-pullable://docker.io/aosqe/hello-openshift@sha256:a2d509d3d5164f54a2406287405b2d114f952dca877cc465129f78afa858b31a
    lastState: {}
    name: testpod
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-12-19T05:58:59Z
  hostIP: 172.16.122.56
  initContainerStatuses:
  - containerID: docker://fcc9913ba8c3b0080681c61c0e5e0f2e52af59adc784de2c6447f3dc06b22f70
    image: docker.io/aosqe/hello-openshift:latest
    imageID: docker-pullable://docker.io/aosqe/hello-openshift@sha256:a2d509d3d5164f54a2406287405b2d114f952dca877cc465129f78afa858b31a
    lastState: {}
    name: init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://fcc9913ba8c3b0080681c61c0e5e0f2e52af59adc784de2c6447f3dc06b22f70
        exitCode: 0
        finishedAt: 2018-12-19T05:58:55Z
        reason: Completed
        startedAt: 2018-12-19T05:58:55Z
  phase: Running
  podIP: 10.130.0.7
  qosClass: BestEffort
  startTime: 2018-12-19T05:58:48Z

Comment 14 errata-xmlrpc 2019-01-10 09:27:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0026

