Bug 1225874 - [origin_infrastructure_265] Pod does not mount volumes using persistent volumes and claims
Summary: [origin_infrastructure_265] Pod does not mount volumes using persistent volumes and claims
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OKD
Classification: Red Hat
Component: Storage
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Mark Turansky
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-28 11:29 UTC by Jianwei Hou
Modified: 2015-07-07 23:47 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-07 23:47:37 UTC
Target Upstream Version:
Embargoed:



Description Jianwei Hou 2015-05-28 11:29:15 UTC
Description of problem:
Create an NFS persistent volume and a persistent volume claim. After the claim binds to the volume, create a pod that references the claim. The data on the NFS volume cannot be seen from the pod, suggesting the pod did not actually mount the volume.

Version-Release number of selected component (if applicable):
Client Version: version.Info{Major:"0", Minor:"3+", GitVersion:"v0.3-9563-gb7caedeedb74f9-dirty", GitCommit:"b7caedeedb74f954e42d6c4bf957b0cdc5e3c3a5", GitTreeState:"dirty"}
Server Version: version.Info{Major:"0", Minor:"3+", GitVersion:"v0.3-9563-gb7caedeedb74f9-dirty", GitCommit:"b7caedeedb74f954e42d6c4bf957b0cdc5e3c3a5", GitTreeState:"dirty"}


How reproducible:
Always

Steps to Reproduce:
1. Create an NFS persistent volume
apiVersion: v1beta3
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /nfsshare
    server: 10.66.79.155
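
(Assuming the definition above is saved as nfs-pv.yaml, a file name chosen here for illustration:)
# cluster/kubectl.sh create -f nfs-pv.yaml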


2. Create a claim
kind: PersistentVolumeClaim
apiVersion: v1beta3
metadata:
  name: nfsclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
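
(The 3Gi ReadWriteOnce request can bind to the 5Gi RWO volume above, since the volume's capacity and access modes satisfy the claim. Created the same way, again with an illustrative file name:)
# cluster/kubectl.sh create -f nfsclaim.yaml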


3. Create a pod that specifies the claim
kind: Pod
apiVersion: v1beta3
metadata:
  name: mypod
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  volumes:
    - name: mypd
      source:
        persistentVolumeClaim:
          claimName: "nfsclaim"
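
(Created the same way; mypod.yaml is an illustrative name:)
# cluster/kubectl.sh create -f mypod.yaml
# cluster/kubectl.sh get pods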

4. After the pod is created successfully, list the files under /usr/share/nginx/html in the pod:
cluster/kubectl.sh exec -p mypod ls /usr/share/nginx/html

Actual results:
After step 2: PV and PVC are bound
# cluster/kubectl.sh get pv          
NAME      LABELS    CAPACITY     ACCESSMODES   STATUS    CLAIM
nfs       <none>    5368709120   RWO           Bound     default/nfsclaim

# cluster/kubectl.sh get pvc
NAME       LABELS    STATUS    VOLUME
nfsclaim   map[]     Bound     nfs


After step 4: Nothing is listed (an index.html was created on that volume before the pod mounted it); the pod probably did not mount the volume.

The Pod detail:
------------------------
apiVersion: v1beta3
items:
- apiVersion: v1beta3
  kind: Pod
  metadata:
    creationTimestamp: 2015-05-28T09:29:54Z
    labels:
      name: frontendhttp
    name: mypod
    namespace: default
    resourceVersion: "51"
    selfLink: /api/v1beta3/namespaces/default/pods/mypod
    uid: 19684fd1-051c-11e5-93da-3c970e22ab19
  spec:
    containers:
    - capabilities: {}
      image: nginx
      imagePullPolicy: IfNotPresent
      name: myfrontend
      ports:
      - containerPort: 80
        name: http-server
        protocol: TCP
      resources: {}
      securityContext:
        capabilities: {}
        privileged: false
      terminationMessagePath: /dev/termination-log
      volumeMounts:
      - mountPath: /usr/share/nginx/html
        name: mypd
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-ik5ce
        readOnly: true
    dnsPolicy: ClusterFirst
    host: 127.0.0.1
    restartPolicy: Always
    serviceAccount: default
    volumes:
    - emptyDir: {}
      name: mypd
      rbd: null
    - name: default-token-ik5ce
      rbd: null
      secret:
        secretName: default-token-ik5ce
  status:
    Condition:
    - status: "True"
      type: Ready
    containerStatuses:
    - containerID: docker://5dbd256be69d77645affedbea603cda0923ee255f9ed223a083b10e027b009fe
      image: nginx
      imageID: docker://42a3cf88f3f0cce2b4bfb2ed714eec5ee937525b4c7e0a0f70daff18c3f2ee92
      lastState: {}
      name: myfrontend
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2015-05-28T09:29:59Z
    hostIP: 127.0.0.1
    phase: Running
    podIP: 172.17.0.12
    startTime: 2015-05-28T09:29:59Z
kind: List
metadata: {}
------------------------
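
Note that in the pod detail above, the mypd volume was materialized as "emptyDir: {}" rather than an NFS source, which matches the suspicion that the claim was never resolved to the NFS volume. One way to confirm from inside the pod (a sketch, assuming the nginx image ships a mount binary; same exec syntax as step 4):

# cluster/kubectl.sh exec -p mypod mount

If the claim had been mounted, an nfs entry for /usr/share/nginx/html would appear in the output.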


Expected results:
After step 4: The index.html that was prepared on the NFS volume should be listed.

Additional info:

Comment 1 Mark Turansky 2015-05-28 13:14:40 UTC
An NFS bug was recently fixed by https://github.com/GoogleCloudPlatform/kubernetes/pull/8688

The pod details don't have an NFS volume, making me think the above fix is what you need.

Comment 2 Jianwei Hou 2015-05-29 09:43:51 UTC
@mturansk, thank you. I've tested with the latest Kubernetes code, making sure the fix is in, but the result is the same: the data on the volume can't be seen from the pod.

If the pod spec defines the NFS volume directly (server, path, readOnly), the volume data can be seen from the pod; see the sketch below. It does not work when the pod spec only references a persistent volume claim.
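
A minimal sketch of the direct definition that does work, using the same server and path as the PV above (volumes section of the pod spec only; readOnly is shown with an assumed default of false):

  volumes:
    - name: mypd
      nfs:
        server: 10.66.79.155
        path: /nfsshare
        readOnly: false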

Comment 3 Mark Turansky 2015-06-02 02:40:57 UTC
Other recent upstream fixes:

A nil pointer for NFS as a PV:
https://github.com/GoogleCloudPlatform/kubernetes/pull/9069

An issue with the NFS server pod by Jan Safranek:
https://github.com/GoogleCloudPlatform/kubernetes/pull/9019

Comment 4 Jianwei Hou 2015-06-03 08:14:40 UTC
Tested with the latest Kubernetes including the above fixes; pods created with the claim specified still don't mount the NFS volume successfully.

Comment 5 Mark Turansky 2015-06-04 19:13:56 UTC
Hou,

What does your NFS export look like? I've seen an issue where the export has to use "no_root_squash"; otherwise the pod would not have the correct permissions to write.

Comment 6 Jianwei Hou 2015-06-05 06:48:41 UTC
@mturansk, my export is:

[root@openshift-shared-nfs ~]# cat /etc/exports
/nfsshare *(rw,sync,no_root_squash,insecure)

Comment 7 Mark Turansky 2015-06-09 17:44:10 UTC
Should be fixed by: https://github.com/openshift/origin/pull/3002

Comment 8 Jianwei Hou 2015-06-10 10:25:03 UTC
Verified it with:
openshift v0.6.0.0-1-gd63e998
kubernetes v0.17.1-804-g496be63

The NFS volume is mounted successfully when the pod is created using the claim:

# oc exec -p mypod ls /usr/share/nginx/html
index.html
test

