Bug 1658037 - Cannot use persistent volume hostPath if the path contains ':'
Summary: Cannot use persistent volume hostPath if the path contains ':'
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 3.9.z
Assignee: Hemant Kumar
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-12-11 05:50 UTC by Liang Xia
Modified: 2019-11-15 15:45 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-15 15:45:27 UTC
Target Upstream Version:
Embargoed:



Description Liang Xia 2018-12-11 05:50:26 UTC
Description of problem:
Creating a pod with a persistent volume hostPath where the path is /tmp/2018-1208-12:13:14-dir fails with the error:
Error: Error response from daemon: invalid bind mount spec "/tmp/2018-1208-12:13:14-dir:/mnt/ocp": invalid volume specification: '/tmp/2018-1208-12:13:14-dir:/mnt/ocp'
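The error originates in Docker's bind-mount specification, which uses ':' to separate the host path, container path, and options, so a colon inside the host path produces extra fields and the spec is rejected. A minimal Python sketch of this naive colon-splitting (an illustration of the failure mode, not Docker's actual parser):

```python
def parse_bind_spec(spec: str):
    """Naive colon-separated bind-mount parser (illustration only).

    Docker's real parsing is more involved, but on Linux it likewise
    treats ':' as the field separator, so a colon inside the host
    path yields too many fields and an invalid spec.
    """
    parts = spec.split(":")
    if len(parts) not in (2, 3):  # expected: host:container[:options]
        raise ValueError(f"invalid volume specification: '{spec}'")
    return parts

# A plain path splits into the expected two fields:
parse_bind_spec("/tmp/data:/mnt/ocp")

# The path from this bug contains two extra colons, so splitting
# yields four fields and the spec is rejected:
# parse_bind_spec("/tmp/2018-1208-12:13:14-dir:/mnt/ocp")  # ValueError
```

This is why the failure happens at the container runtime rather than at PV/PVC validation: the hostPath is a legal filesystem path, but it cannot be expressed unambiguously in a colon-delimited bind-mount string.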


Version-Release number of selected component (if applicable):
# oc version
oc v3.9.58
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://preserve-qe-lxia-39-master-etcd-1:8443
openshift v3.9.58
kubernetes v1.9.1+a0ce1bc657


How reproducible:
Always

Steps to Reproduce:
1. Create a hostpath pv.
2. Create a project, add privilege scc to the user.
3. Create a pvc/privilege pod.
4. Check the pod.

Actual results:
  Warning  Failed  2m (x6 over 2m)  kubelet, preserve-qe-lxia-39-nrr-1  Error: Error response from daemon: invalid bind mount spec "/tmp/2018-1208-12:13:14-dir:/mnt/ocp": invalid volume specification: '/tmp/2018-1208-12:13:14-dir:/mnt/ocp'


Expected results:
Pod is up and running.


PV Dump:
# oc get pv pv-yuhk4 -o yaml --export
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: null
  name: pv-yuhk4
  selfLink: /api/v1/persistentvolumes/pv-yuhk4
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc0001
    namespace: 0-dvq
    resourceVersion: "199933"
    uid: 40fc2cf9-fcfd-11e8-9607-fa163ef30df0
  hostPath:
    path: /tmp/2018-1208-12:13:14-dir
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Delete
status: {}

PVC Dump:
# oc get pvc pvc0001 -n 0-dvq -o yaml --export
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: null
  name: pvc0001
  selfLink: /api/v1/namespaces/0-dvq/persistentvolumeclaims/pvc0001
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeName: pv-yuhk4
status: {}


Additional info:
# oc get pod mypod -o yaml -n 0-dvq --export
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged
  creationTimestamp: null
  name: mypod
  selfLink: /api/v1/namespaces/0-dvq/pods/mypod
spec:
  containers:
  - image: aosqe/hello-openshift
    imagePullPolicy: Always
    name: mycontainer
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/ocp
      name: my-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-hqqsb
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-hhhmt
  nodeName: preserve-qe-lxia-39-nrr-1
  nodeSelector:
    node-role.kubernetes.io/compute: "true"
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pvc0001
  - name: default-token-hqqsb
    secret:
      defaultMode: 420
      secretName: default-token-hqqsb
status:
  phase: Pending
  qosClass: BestEffort

