Bug 1663260 - [3.9.57 Regression] subpath volume mounts do not work with secret, configmap, projected, or downwardAPI volumes
Summary: [3.9.57 Regression] subpath volume mounts do not work with secret, configmap, projected, or downwardAPI volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.9.0
Assignee: Jan Safranek
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On: 1554670
Blocks: 1554864 1554865 1554866 1554868 1554869 1554871 1623426
 
Reported: 2019-01-03 15:12 UTC by Jack Ottofaro
Modified: 2019-01-30 15:10 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1554670
Environment:
Last Closed: 2019-01-30 15:10:24 UTC
Target Upstream Version:
Embargoed:


Links:
Red Hat Product Errata RHBA-2019:0098, last updated 2019-01-30 15:10:36 UTC

Comment 8 Thomas Schilling 2019-01-14 14:07:42 UTC
Hi, I see that the fix for 3.10 has been released. Is there an ETA for the 3.9 bugfix release?

Comment 10 Liang Xia 2019-01-17 08:41:25 UTC
All tests on the non-containerized kubelet cluster PASSED.

However, most tests on the containerized kubelet cluster FAILED with the same version.
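
For reference, the failure class in the bug title can be reproduced with any pod that mounts a single key of a secret (or configmap/projected/downwardAPI volume) via subPath. Below is a hypothetical sketch using client-go from the OCP 3.9 / Kubernetes 1.9 era (pre-1.17, so Pods().Create() takes no context); the kubeconfig path, secret name, namespace, and pod name are placeholders, not the exact fixtures used by the test suite:

// Hypothetical reproducer sketch: a pod mounting one key of a secret via
// subPath, the pattern that fails on containerized kubelets. Built against
// a client-go contemporary with OCP 3.9 (pre-1.17 Create() signature).
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-repro"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "aosqe/hello-openshift",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/mnt/secret/username",
					// subPath is what triggers the bind-mount preparation
					// that fails on containerized kubelets.
					SubPath: "username",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					// Placeholder secret; it must exist and contain a
					// "username" key.
					Secret: &corev1.SecretVolumeSource{SecretName: "test-secret"},
				},
			}},
		},
	}

	// Pre-1.17 signature; newer client-go takes a context and CreateOptions.
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	// On an affected node the pod then sits in Pending with
	// CreateContainerConfigError: failed to prepare subPath for volumeMount.
}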

Containerized kubelet cluster info:
    "msg": [
        "atomic-openshift version: v3.9.64", 
        "Operation System: Red Hat Enterprise Linux Atomic Host release 7.5", 
        "Cluster Install Method: docker container", 
        "Docker Version: docker-1.13.1-58.git87f2fab.el7.x86_64", 
        "Docker Storage Driver:  overlay2", 
        "OpenvSwitch Version: openvswitch-2.9.0-83.el7fdp.1.x86_64", 
        "etcd Version: etcd-3.2.22-1.el7.x86_64", 
        "Network Plugin: redhat/openshift-ovs-subnet", 
        "Auth Method: allowall", 
        "Registry Deployment Method: deploymentconfig", 
        "Secure Registry: True", 
        "Registry Backend Storage: cinder", 
        "Load Balancer: None", 
        "Docker System Container: False", 
        "CRI-O Enable: False", 
        "Firewall Service: iptables"
    ]

INFO> oc get pods pod-m7uf6 --output=yaml --config=/home/jenkins/workspace/Runner-v3/workdir/cucushift-oc39-standard-slave-4bnj1-0/ose_lxia1.kubeconfig --namespace=m7uf6
INFO> After 530 iterations and 900 seconds:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged
  creationTimestamp: 2019-01-17T06:43:25Z
  name: pod-m7uf6
  namespace: m7uf6
  resourceVersion: "32197"
  selfLink: /api/v1/namespaces/m7uf6/pods/pod-m7uf6
  uid: 30c90069-1a23-11e9-9d6a-fa163eae747e
spec:
  containers:
  - image: aosqe/hello-openshift
    imagePullPolicy: Always
    name: sockfile
    ports:
    - containerPort: 80
      name: http-server
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/run/test.sock
      name: file
      subPath: run/test.sock
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-l2frq
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-blhvl
  nodeName: qe-lxia-container-39-nrr-1
  nodeSelector:
    node-role.kubernetes.io/compute: "true"
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - hostPath:
      path: /
      type: ""
    name: file
  - name: default-token-l2frq
    secret:
      defaultMode: 420
      secretName: default-token-l2frq
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-17T06:43:26Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-17T06:43:26Z
    message: 'containers with unready status: [sockfile]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-01-17T06:43:25Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: aosqe/hello-openshift
    imageID: ""
    lastState: {}
    name: sockfile
    ready: false
    restartCount: 0
    state:
      waiting:
        message: failed to prepare subPath for volumeMount "file" of container "sockfile"
        reason: CreateContainerConfigError
  hostIP: 172.16.122.27
  phase: Pending
  podIP: 10.129.0.42
  qosClass: BestEffort
  startTime: 2019-01-17T06:43:26Z


INFO> oc get pods pod-dapi-volume --output=yaml --config=/home/jenkins/workspace/Runner-v3/workdir/cucushift-oc39-standard-slave-4bnj1-0/ose_lxia1.kubeconfig --namespace=b-ps8
INFO> After 527 iterations and 901 seconds:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    build: one
    builder: qe-one
    openshift.io/scc: restricted
  creationTimestamp: 2019-01-17T06:11:07Z
  labels:
    rack: a111
    region: r1
    zone: z11
  name: pod-dapi-volume
  namespace: b-ps8
  resourceVersion: "27990"
  selfLink: /api/v1/namespaces/b-ps8/pods/pod-dapi-volume
  uid: ad66ca76-1a1e-11e9-9d6a-fa163eae747e
spec:
  containers:
  - image: aosqe/hello-openshift
    imagePullPolicy: Always
    name: client-container
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      privileged: false
      runAsUser: 1000140000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/tmp/podinfo
      name: podinfo
      subPath: subpath
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-678vt
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-n5lfr
  nodeName: qe-lxia-container-39-nrr-1
  nodeSelector:
    node-role.kubernetes.io/compute: "true"
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000140000
    seLinuxOptions:
      level: s0:c12,c4
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels
        path: labels
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        path: annotations
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
        path: name
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
        path: namespace
    name: podinfo
  - name: default-token-678vt
    secret:
      defaultMode: 420
      secretName: default-token-678vt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-17T06:11:08Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-17T06:11:08Z
    message: 'containers with unready status: [client-container]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-01-17T06:11:07Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: aosqe/hello-openshift
    imageID: ""
    lastState: {}
    name: client-container
    ready: false
    restartCount: 0
    state:
      waiting:
        message: failed to prepare subPath for volumeMount "podinfo" of container
          "client-container"
        reason: CreateContainerConfigError
  hostIP: 172.16.122.27
  phase: Pending
  podIP: 10.129.0.26
  qosClass: BestEffort
  startTime: 2019-01-17T06:11:08Z


Node logs:
Jan 17 06:58:10 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:10.442209   13993 prober.go:165] HTTP-Probe Host: https://10.129.0.6, Port: 1338, Path: /healthz
Jan 17 06:58:10 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:10.442262   13993 prober.go:168] HTTP-Probe Headers: map[]
Jan 17 06:58:10 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:10.448778   13993 http.go:96] Probe succeeded for https://10.129.0.6:1338/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Type:[text/plain; charset=utf-8] Content-Length:[2] Date:[Thu, 17 Jan 2019 06:58:10 GMT]] 0xc4228593c0 2 [] false false map[] 0xc42188d300 0xc42111b080}
Jan 17 06:58:10 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:10.448824   13993 prober.go:118] Liveness probe for "asb-1-x7frt_openshift-ansible-service-broker(d713476a-1a02-11e9-9d6a-fa163eae747e):asb" succeeded
Jan 17 06:58:10 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:10.631625   13993 generic.go:183] GenericPLEG: Relisting
--
Jan 17 06:58:20 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:20.453518   13993 prober.go:118] Liveness probe for "asb-1-x7frt_openshift-ansible-service-broker(d713476a-1a02-11e9-9d6a-fa163eae747e):asb" succeeded
Jan 17 06:58:20 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:20.714196   13993 generic.go:183] GenericPLEG: Relisting
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.169367   13993 cloud_request_manager.go:89] Requesting node addresses from cloud provider for node "qe-lxia-container-39-nrr-1"
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.169442   13993 openstack_instances.go:39] openstack.Instances() called
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.169458   13993 openstack_instances.go:46] Claiming to support Instances
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.169463   13993 openstack_instances.go:69] NodeAddresses(qe-lxia-container-39-nrr-1) called
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.315378   13993 ovs.go:145] Executing: ovs-ofctl -O OpenFlow13 dump-flows br0 table=253
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.319462   13993 healthcheck.go:98] SDN healthcheck succeeded
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.336303   13993 prober.go:165] HTTP-Probe Host: https://10.129.0.9, Port: 5000, Path: /healthz
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.336322   13993 prober.go:168] HTTP-Probe Headers: map[]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.341256   13993 http.go:96] Probe succeeded for https://10.129.0.9:5000/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Cache-Control:[no-cache] Content-Type:[text/plain; charset=utf-8] Content-Length:[0] Date:[Thu, 17 Jan 2019 06:58:21 GMT]] 0xc420e5d9c0 0 [] false false map[] 0xc4221e2000 0xc42298c630}
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.341289   13993 prober.go:118] Liveness probe for "docker-registry-2-6wjw4_default(fb4d472e-1a02-11e9-9d6a-fa163eae747e):registry" succeeded
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.461600   13993 openstack_instances.go:76] NodeAddresses(qe-lxia-container-39-nrr-1) => [{InternalIP 172.16.122.27} {ExternalIP 10.8.247.209}]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.461622   13993 cloud_request_manager.go:108] Node addresses from cloud provider for node "qe-lxia-container-39-nrr-1" collected
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.684072   13993 kubelet.go:2123] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.722242   13993 generic.go:183] GenericPLEG: Relisting
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.850492   13993 kubelet.go:1913] SyncLoop (SYNC): 1 pods; pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.850541   13993 kubelet.go:1936] SyncLoop (housekeeping)
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.851625   13993 kubelet_pods.go:1381] Generating status for "pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)"
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.851697   13993 kubelet_pods.go:1346] pod waiting > 0, pending
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.851840   13993 status_manager.go:353] Ignoring same status for pod "pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)", status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-01-17 06:43:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-01-17 06:43:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [sockfile]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-01-17 06:43:25 +0000 UTC Reason: Message:}] Message: Reason: HostIP:172.16.122.27 PodIP:10.129.0.42 StartTime:2019-01-17 06:43:26 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:sockfile State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerConfigError,Message:failed to prepare subPath for volumeMount "file" of container "sockfile",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:aosqe/hello-openshift ImageID: ContainerID:}] QOSClass:BestEffort}
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.852069   13993 volume_manager.go:344] Waiting for volumes to attach and mount for pod "pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)"
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.875467   13993 desired_state_of_world_populator.go:298] Added volume "file" (volSpec="file") for pod "30c90069-1a23-11e9-9d6a-fa163eae747e" to desired state.
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.875487   13993 desired_state_of_world_populator.go:298] Added volume "default-token-l2frq" (volSpec="default-token-l2frq") for pod "30c90069-1a23-11e9-9d6a-fa163eae747e" to desired state.
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.935747   13993 prober.go:165] HTTP-Probe Host: http://10.129.0.14, Port: 8080, Path: /pagecount
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.935762   13993 prober.go:168] HTTP-Probe Headers: map[]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.937556   13993 http.go:96] Probe succeeded for http://10.129.0.14:8080/pagecount, Response: {200 OK 200 HTTP/1.1 1 1 map[Content-Type:[text/html; charset=utf-8] Content-Length:[15] Etag:[W/"f-FTwbMvrX67q9WDqBOD+0aFA4iUY"] Date:[Thu, 17 Jan 2019 06:58:21 GMT] X-Powered-By:[Express]] 0xc42319b780 15 [] true false map[] 0xc4221e3900 <nil>}
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.937586   13993 prober.go:118] Readiness probe for "nodejs-mongodb-example-1-8ghwz_install-test(42e9d3c6-1a03-11e9-9d6a-fa163eae747e):nodejs-mongodb-example" succeeded
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.968082   13993 operation_executor.go:895] Starting operationExecutor.MountVolume for volume "default-token-l2frq" (UniqueName: "kubernetes.io/secret/30c90069-1a23-11e9-9d6a-fa163eae747e-default-token-l2frq") pod "pod-m7uf6" (UID: "30c90069-1a23-11e9-9d6a-fa163eae747e") Volume is already mounted to pod, but remount was requested.
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.968125   13993 volume_host.go:218] using default mounter/exec for kubernetes.io/secret
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.968176   13993 reconciler.go:264] operationExecutor.MountVolume started for volume "default-token-l2frq" (UniqueName: "kubernetes.io/secret/30c90069-1a23-11e9-9d6a-fa163eae747e-default-token-l2frq") pod "pod-m7uf6" (UID: "30c90069-1a23-11e9-9d6a-fa163eae747e") Volume is already mounted to pod, but remount was requested.
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.968233   13993 secret.go:186] Setting up volume default-token-l2frq for pod 30c90069-1a23-11e9-9d6a-fa163eae747e at /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.968263   13993 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.968268   13993 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.968292   13993 nsenter.go:107] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- realpath -e /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.969860   13993 nsenter_mount.go:182] nsenter findmnt args: [-o target,fstype --noheadings --first-only --target /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.969884   13993 nsenter.go:107] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target,fstype --noheadings --first-only --target /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.972889   13993 nsenter_mount.go:196] IsLikelyNotMountPoint findmnt output for path /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq: /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq:
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.972905   13993 nsenter_mount.go:199] IsLikelyNotMountPoint: /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq is a mount point
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.975200   13993 secret.go:216] Received secret m7uf6/default-token-l2frq containing (4) pieces of data, 4101 total bytes
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.975285   13993 atomic_writer.go:332] /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq: current paths:   [ca.crt namespace service-ca.crt token]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.975299   13993 atomic_writer.go:344] /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq: new paths:       [ca.crt namespace service-ca.crt token]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.975306   13993 atomic_writer.go:347] /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq: paths to remove: map[]
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.975390   13993 atomic_writer.go:156] pod m7uf6/pod-m7uf6 volume default-token-l2frq: no update required for target directory /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volumes/kubernetes.io~secret/default-token-l2frq
Jan 17 06:58:21 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:21.975421   13993 operation_generator.go:551] MountVolume.SetUp succeeded for volume "default-token-l2frq" (UniqueName: "kubernetes.io/secret/30c90069-1a23-11e9-9d6a-fa163eae747e-default-token-l2frq") pod "pod-m7uf6" (UID: "30c90069-1a23-11e9-9d6a-fa163eae747e")
Jan 17 06:58:22 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:22.152286   13993 volume_manager.go:372] All volumes are attached and mounted for pod "pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)"
Jan 17 06:58:22 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:22.152317   13993 kuberuntime_manager.go:442] Syncing Pod "pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:pod-m7uf6,GenerateName:,Namespace:m7uf6,SelfLink:/api/v1/namespaces/m7uf6/pods/pod-m7uf6,UID:30c90069-1a23-11e9-9d6a-fa163eae747e,ResourceVersion:32074,Generation:0,CreationTimestamp:2019-01-17 06:43:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubernetes.io/config.seen: 2019-01-17T06:43:26.704172898Z,kubernetes.io/config.source: api,openshift.io/scc: privileged,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{file {HostPathVolumeSource{Path:/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-l2frq {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-l2frq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{sockfile aosqe/hello-openshift [] []  [{http-server 0 80 TCP }] [] [] {map[] map[]} [{file false /mnt/run/test.sock run/test.sock <nil>} {default-token-l2frq true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File Always nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/compute: true,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:qe-lxia-container-39-nrr-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[{default-dockercfg-blhvl}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-01-17 06:43:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-01-17 06:43:26 +0000 UTC ContainersNotReady containers with unready status: [sockfile]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-01-17 06:43:25 +0000 UTC  }],Message:,Reason:,HostIP:172.16.122.27,PodIP:10.129.0.42,StartTime:2019-01-17 06:43:26 +0000 UTC,ContainerStatuses:[{sockfile {ContainerStateWaiting{Reason:CreateContainerConfigError,Message:failed to prepare subPath for volumeMount "file" of container "sockfile",} nil nil} {nil nil nil} false 0 aosqe/hello-openshift  }],QOSClass:BestEffort,InitContainerStatuses:[],},}
Jan 17 06:58:22 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:22.152536   13993 kuberuntime_manager.go:514] Container {Name:sockfile Image:aosqe/hello-openshift Command:[] Args:[] WorkingDir: Ports:[{Name:http-server HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:file ReadOnly:false MountPath:/mnt/run/test.sock SubPath:run/test.sock MountPropagation:<nil>} {Name:default-token-l2frq ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jan 17 06:58:22 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:22.152547   13993 kuberuntime_manager.go:571] computePodActions got {KillPod:false CreateSandbox:false SandboxID:a3f62c85c548ed4b4f7618b08febc05542705fc9a7982e54b07495aad4ed2b23 Attempt:6 NextInitContainerToStart:nil ContainersToStart:[0] ContainersToKill:map[]} for pod "pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)"
Jan 17 06:58:22 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:22.152625   13993 kuberuntime_manager.go:725] Creating container &Container{Name:sockfile,Image:aosqe/hello-openshift,Command:[],Args:[],WorkingDir:,Ports:[{http-server 0 80 TCP }],Env:[],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[{file false /mnt/run/test.sock run/test.sock <nil>} {default-token-l2frq true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}],LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[],TerminationMessagePolicy:File,VolumeDevices:[],} in pod pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)
Jan 17 06:58:22 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:22.155171   13993 kuberuntime_image.go:46] Pulling image "aosqe/hello-openshift:latest" without credentials
Jan 17 06:58:22 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:22.155231   13993 server.go:290] Event(v1.ObjectReference{Kind:"Pod", Namespace:"m7uf6", Name:"pod-m7uf6", UID:"30c90069-1a23-11e9-9d6a-fa163eae747e", APIVersion:"v1", ResourceVersion:"32074", FieldPath:"spec.containers{sockfile}"}): type: 'Normal' reason: 'Pulling' pulling image "aosqe/hello-openshift"
--
Jan 17 06:58:23 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:23.937381   13993 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 20740541, capacity: 20328Ki, time: 2019-01-17 06:58:09.973254847 +0000 UTC m=+15003.263498496
Jan 17 06:58:23 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:23.937391   13993 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 36747680Ki, capacity: 40646Mi, time: 2019-01-17 06:58:09.973254847 +0000 UTC m=+15003.263498496
Jan 17 06:58:23 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:23.937409   13993 helpers.go:827] eviction manager: observations: signal=imagefs.inodesFree, available: 20740541, capacity: 20328Ki, time: 2019-01-17 06:58:09.973254847 +0000 UTC m=+15003.263498496
Jan 17 06:58:23 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:23.937420   13993 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3584552Ki, capacity: 3880944Ki
Jan 17 06:58:23 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:23.937444   13993 eviction_manager.go:325] eviction manager: no resources are starved
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.659028   13993 kube_docker_client.go:348] Stop pulling image "aosqe/hello-openshift:latest": "Status: Image is up to date for docker.io/aosqe/hello-openshift:latest"
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.660753   13993 kuberuntime_container.go:101] Generating ref for container sockfile: &v1.ObjectReference{Kind:"Pod", Namespace:"m7uf6", Name:"pod-m7uf6", UID:"30c90069-1a23-11e9-9d6a-fa163eae747e", APIVersion:"v1", ResourceVersion:"32074", FieldPath:"spec.containers{sockfile}"}
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.660817   13993 kubelet_pods.go:175] container: m7uf6/pod-m7uf6/sockfile podIP: "10.129.0.42" creating hosts mount: true
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.660939   13993 nsenter_mount.go:318] Kubelet's PID on the host is 13993
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.660954   13993 mount_linux.go:698] doBindSubPath "/run/test.sock", full subpath "/run/test.sock" for volumepath "/"
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.660980   13993 nsenter_mount.go:167] findmnt: directory /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0 does not exist
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.661003   13993 nsenter.go:107] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- realpath -e /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0]
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.661013   13993 server.go:290] Event(v1.ObjectReference{Kind:"Pod", Namespace:"m7uf6", Name:"pod-m7uf6", UID:"30c90069-1a23-11e9-9d6a-fa163eae747e", APIVersion:"v1", ResourceVersion:"32074", FieldPath:"spec.containers{sockfile}"}): type: 'Normal' reason: 'Pulled' Successfully pulled image "aosqe/hello-openshift"
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.662876   13993 nsenter.go:151] failed to resolve symbolic links on /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0: exit status 1
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.662892   13993 mount_linux.go:718] doBindSubPath() failed for "/var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0", cleaning up subpath
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.662907   13993 mount_linux.go:841] Cleaning up subpath mounts for subpath 0
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.662919   13993 nsenter_mount.go:167] findmnt: directory /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0 does not exist
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.662925   13993 nsenter.go:107] Running nsenter command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- realpath -e /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0]
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.664092   13993 nsenter.go:151] failed to resolve symbolic links on /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0: exit status 1
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: E0117 06:58:24.664111   13993 mount_linux.go:720] Failed to clean subpath "/var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0": error checking /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0 for mount: exit status 1
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: E0117 06:58:24.664122   13993 kubelet_pods.go:248] failed to prepare subPath for volumeMount "file" of container "sockfile": error checking path /var/lib/origin/openshift.local.volumes/pods/30c90069-1a23-11e9-9d6a-fa163eae747e/volume-subpaths/file/sockfile/0 for mount: exit status 1
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: E0117 06:58:24.664151   13993 kuberuntime_manager.go:734] container start failed: CreateContainerConfigError: failed to prepare subPath for volumeMount "file" of container "sockfile"
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: E0117 06:58:24.664175   13993 pod_workers.go:186] Error syncing pod 30c90069-1a23-11e9-9d6a-fa163eae747e ("pod-m7uf6_m7uf6(30c90069-1a23-11e9-9d6a-fa163eae747e)"), skipping: failed to "StartContainer" for "sockfile" with CreateContainerConfigError: "failed to prepare subPath for volumeMount \"file\" of container \"sockfile\""
Jan 17 06:58:24 qe-lxia-container-39-nrr-1 atomic-openshift-node[13968]: I0117 06:58:24.664189   13993 server.go:290] Event(v1.ObjectReference{Kind:"Pod", Namespace:"m7uf6", Name:"pod-m7uf6", UID:"30c90069-1a23-11e9-9d6a-fa163eae747e", APIVersion:"v1", ResourceVersion:"32074", FieldPath:"spec.containers{sockfile}"}): type: 'Warning' reason: 'Failed' Error: failed to prepare subPath for volumeMount "file" of container "sockfile"
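
The failing step is visible in the trail above: before bind-mounting the subPath target, the containerized kubelet resolves the per-container directory .../volume-subpaths/file/sockfile/0 in the host mount namespace via nsenter + realpath -e. Because that directory does not exist in the host namespace yet (see the findmnt line), realpath -e exits 1, doBindSubPath() aborts and cleanup fails the same way, and the error surfaces on the pod as CreateContainerConfigError. A minimal sketch of that check follows (not the actual nsenter.go/mount_linux.go code; evalSymlinksOnHost is a made-up helper name):

// Minimal sketch of the path check seen in the logs above, NOT the kubelet
// source. A containerized kubelet resolves host paths by running realpath(1)
// inside the host mount namespace via nsenter.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// evalSymlinksOnHost (hypothetical name) mirrors the
// "nsenter --mount=/rootfs/proc/1/ns/mnt -- realpath -e <path>" call from
// the logs. realpath -e fails with exit status 1 when any component of the
// path is missing, which is the case for the not-yet-created
// volume-subpaths directory.
func evalSymlinksOnHost(path string) (string, error) {
	out, err := exec.Command(
		"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--",
		"realpath", "-e", path,
	).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("failed to resolve symbolic links on %s: %v (%s)",
			path, err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Same path shape as in the logs, with the pod UID elided. The directory
	// the kubelet is about to bind-mount onto has not been created in the
	// host namespace yet, so the check fails before the bind mount ever
	// happens; the error then propagates up as CreateContainerConfigError.
	dir := "/var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volume-subpaths/file/sockfile/0"
	if _, err := evalSymlinksOnHost(dir); err != nil {
		fmt.Println(err)
	}
}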

Comment 20 errata-xmlrpc 2019-01-30 15:10:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0098

