Bug 1418977 - Failed to create dynamic persistent volume with nfs-provisioner on Azure and OpenStack
Summary: Failed to create dynamic persistent volume with nfs-provisioner on Azure and Opens...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Matthew Wong
QA Contact: Wenqi He
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-03 10:31 UTC by Wenqi He
Modified: 2017-04-13 15:16 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-13 15:16:44 UTC
Target Upstream Version:
Embargoed:
mawong: needinfo-



Description Wenqi He 2017-02-03 10:31:12 UTC
Description of problem:
Failed to create a dynamically provisioned volume with nfs-provisioner.

Version-Release number of selected component (if applicable):
openshift v3.5.0.14+20b49d0
kubernetes v1.5.2+43a9be4


How reproducible:
Always

Steps to Reproduce:
1. Create a project
2. Create an SCC
$ oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/deploy/kube-config/openshift-scc.yaml
3. Add the user to this SCC
$ oadm policy add-scc-to-user nfs-provisioner <user> 
4. Create an nfs-provisioner pod
$ oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/deploy/kube-config/pod.yaml
5. Create a storage class
$ oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/deploy/kube-config/class.yaml
6. Create a PVC
$ oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/deploy/kube-config/claim.yaml
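
For reference, the class and claim above amount to roughly the following. This is a minimal sketch inferred from the describe output below, not the exact upstream files; the size and access mode are assumptions, and the beta storage-class annotation is used because this is kubernetes v1.5:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfspvc
  annotations:
    # kubernetes 1.5 selects the class via the beta annotation rather than spec.storageClassName
    volume.beta.kubernetes.io/storage-class: example-nfs
spec:
  accessModes:
  - ReadWriteMany   # assumed; the demo claim later in this bug binds as RWX
  resources:
    requests:
      storage: 1Mi  # assumed small request for the example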

Actual results:
The PVC stays Pending with the errors below:
# oc get pods
nfs-provisioner            1/1       Running   0          17m

# oc get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfspvc    Pending                                     5s

# oc describe pvc
Name:		nfspvc
Namespace:	default
StorageClass:	example-nfs
Status:		Pending
Volume:		
Labels:		<none>
Capacity:	
Access Modes:	
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  31s		12s		4	{persistentvolume-controller }			Normal		ExternalProvisioning	cannot find provisioner "example.com/nfs", expecting that a volume for the claim is provisioned either manually or via external software


Expected results:
The PVC is bound to a PV created by nfs-provisioner.

Additional info:
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.404912   10582 pv_controller_base.go:579] storeObjectUpdate: adding claim "default/nfspvc", version 5310
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.404935   10582 pv_controller.go:192] synchronizing PersistentVolumeClaim[default/nfspvc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.404965   10582 pv_controller.go:216] synchronizing unbound PersistentVolumeClaim[default/nfspvc]: no volume found
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.404970   10582 pv_controller.go:1201] provisionClaim[default/nfspvc]: started
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.404975   10582 pv_controller.go:1397] scheduleOperation[provision-default/nfspvc[145babb1-e9fa-11e6-9a17-000d3a184128]]
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.404989   10582 pv_controller.go:1220] provisionClaimOperation [default/nfspvc] started, class: "example-nfs"
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.418731   10582 panics.go:76] PUT /api/v1/namespaces/default/persistentvolumeclaims/nfspvc: (11.825803ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:pv-binder-controller] 13.92.193.118:45296]
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419486   10582 pv_controller_base.go:607] storeObjectUpdate updating claim "default/nfspvc" with version 5311
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419527   10582 pv_controller.go:192] synchronizing PersistentVolumeClaim[default/nfspvc]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419562   10582 pv_controller.go:216] synchronizing unbound PersistentVolumeClaim[default/nfspvc]: no volume found
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419568   10582 pv_controller.go:1201] provisionClaim[default/nfspvc]: started
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419564   10582 pv_controller_base.go:607] storeObjectUpdate updating claim "default/nfspvc" with version 5311
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419596   10582 pv_controller.go:1249] provisioning claim "default/nfspvc": cannot find provisioner "example.com/nfs", expecting that a volume for the claim is provisioned either manually or via external software
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419572   10582 pv_controller.go:1397] scheduleOperation[provision-default/nfspvc[145babb1-e9fa-11e6-9a17-000d3a184128]]
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.419654   10582 pv_controller.go:1220] provisionClaimOperation [default/nfspvc] started, class: "example-nfs"
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.433966   10582 panics.go:76] PUT /api/v1/namespaces/default/persistentvolumeclaims/nfspvc: (13.655934ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:pv-binder-controller] 13.92.193.118:45296]
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.434479   10582 pv_controller_base.go:607] storeObjectUpdate updating claim "default/nfspvc" with version 5311
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.434523   10582 pv_controller.go:1249] provisioning claim "default/nfspvc": cannot find provisioner "example.com/nfs", expecting that a volume for the claim is provisioned either manually or via external software
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.435861   10582 panics.go:76] GET /apis/storage.k8s.io/v1beta1/watch/storageclasses?resourceVersion=4201&timeoutSeconds=448: (7m28.006117156s) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:pv-binder-controller] 13.92.193.118:45296]
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.436141   10582 reflector.go:392] pkg/controller/volume/persistentvolume/pv_controller_base.go:159: Watch close - *storage.StorageClass total 0 items received
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.436933   10582 panics.go:76] POST /api/v1/namespaces/default/events: (16.91279ms) 201 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:pv-binder-controller] 13.92.193.118:45296]
Feb  3 18:18:17 wehe-master atomic-openshift-master: I0203 18:18:17.452119   10582 panics.go:76] PATCH /api/v1/namespaces/default/events/nfspvc.149fbfa05654385b: (14.220545ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:pv-binder-controller] 13.92.193.118:45296]

Comment 1 Matthew Wong 2017-02-03 18:14:03 UTC
What is the output of `oc logs nfs-provisioner`?

In step 3, is the <user> you add to the SCC the serviceaccount that runs the pod?

Here is what should work:
0. Create a serviceaccount "nfs-provisioner"
$ cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
EOF
$ oc create -f /tmp/serviceaccount.yaml
serviceaccount "nfs-provisioner" created

...

3. Add the serviceaccount user to this SCC
$ oadm policy add-scc-to-user nfs-provisioner system:serviceaccount:$PROJECT:nfs-provisioner

4. Create an nfs-provisioner pod with the serviceaccount "nfs-provisioner"
$ oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/deploy/kube-config/pod-sa.yaml
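
For reference, pod-sa.yaml is the same provisioner pod run under that serviceaccount. A minimal sketch of the relevant part, with ports, capabilities, and volumes omitted and the image/args taken from the pod spec pasted later in this bug:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-provisioner
spec:
  serviceAccountName: nfs-provisioner   # must match the account granted the SCC in step 3
  containers:
  - name: nfs-provisioner
    image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3
    args:
    - "-provisioner=example.com/nfs"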

Sorry, I neglected to address OpenShift authentication; I will review the test cases again.

Comment 2 Eric Paris 2017-02-03 18:24:53 UTC
Reducing priority to low because the NFS provisioner is not a supported piece of OCP code.

Comment 3 Matthew Wong 2017-02-03 18:42:58 UTC
Also see: https://trello.com/c/ihX3mRPJ/383-5-documentation-nfs-provisioner-external-provisioners#comment-5894cece9bf9b6417610d679

If you know which serviceaccount the pods will be running as (default?), you don't need to create one, and you can use pod.yaml instead of pod-sa.yaml.

Comment 4 Wenqi He 2017-02-04 08:38:19 UTC
(In reply to Matthew Wong from comment #3)
> Also see:
> https://trello.com/c/ihX3mRPJ/383-5-documentation-nfs-provisioner-external-
> provisioners#comment-5894cece9bf9b6417610d679
> 
> If you know what serviceaccount the pods  will be running as (default?) you
> don't need to create one and you can use pod.yaml, not pod-sa.yaml

I got this working today by adding a normal user to "storage-admin":
1. Create the SCC as you mentioned
2. Add the serviceaccount user to this SCC
$ oadm policy add-scc-to-user nfs-provisioner system:serviceaccount:wehe:default
3. Add the user to the storage-admin role:
$ oadm policy add-cluster-role-to-user storage-admin wehe

Comment 5 Bradley Childs 2017-02-06 16:31:56 UTC
Working per last comment.

Comment 6 Wenqi He 2017-02-08 10:21:45 UTC
I'd like to re-open this based on today's testing; I hit another error with the nfs-provisioner deployment:

1. Create NFS provisioner deployment
oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/demo/deployment.yaml 
2. Create a storage class
oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/demo/class.yaml 
3. Create a pvc
oc create -f https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/demo/claim.yaml

$ oc get pods
NAME                              READY     STATUS    RESTARTS   AGE
nfs-provisioner-770926304-pnfk2   1/1       Running   0          11m
$ oc get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs       Pending                                      11m

[wehe@dhcp-136-45 octest]$ oc describe pvc nfs
Name:		nfs
Namespace:	wehe
StorageClass:	example-nfs
Status:		Pending
Volume:		
Labels:		<none>
Capacity:	
Access Modes:	
Events:
  FirstSeen	LastSeen	Count	From											SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----											-------------	--------	------			-------
  11m		9m		10	{example.com/nfs nfs-provisioner-770926304-pnfk2 5e96b006-ede6-11e6-bb75-562ab5031b7f }			Warning		ProvisioningFailed	Failed to provision volume with StorageClass "example-nfs": error creating export for volume: error exporting export block 
EXPORT
{
	Export_Id = 1;
	Path = /export/pvc-79eb980e-ede6-11e6-94a4-000d3a179c12;
	Pseudo = /export/pvc-79eb980e-ede6-11e6-94a4-000d3a179c12;
	Access_Type = RW;
	Squash = no_root_squash;
	SecType = sys;
	Filesystem_id = 1.1;
	FSAL {
		Name = VFS;
	}
}
: error getting dbus session bus: dial unix /var/run/dbus/system_bus_socket: connect: permission denied
  11m	3s	59	{persistentvolume-controller }		Normal	ExternalProvisioning	cannot find provisioner "example.com/nfs", expecting that a volume for the claim is provisioned either manually or via external software

$ oc logs nfs-provisioner-770926304-pnfk2
I0208 10:07:16.349710       1 main.go:58] Provisioner example.com/nfs specified
I0208 10:07:16.349812       1 main.go:71] Starting NFS server!
I0208 10:07:16.658412       1 controller.go:256] Starting provisioner controller 5e96b006-ede6-11e6-bb75-562ab5031b7f!
I0208 10:08:02.436869       1 controller.go:841] scheduleOperation[lock-provision-wehe/nfs[79eb980e-ede6-11e6-94a4-000d3a179c12]]
I0208 10:08:02.452287       1 controller.go:841] scheduleOperation[lock-provision-wehe/nfs[79eb980e-ede6-11e6-94a4-000d3a179c12]]
I0208 10:08:02.496389       1 leaderelection.go:157] attempting to acquire leader lease...
I0208 10:08:02.561434       1 leaderelection.go:179] sucessfully acquired lease to provision for pvc wehe/nfs
I0208 10:08:02.561557       1 controller.go:841] scheduleOperation[provision-wehe/nfs[79eb980e-ede6-11e6-94a4-000d3a179c12]]
I0208 10:08:02.601652       1 provision.go:363] using service SERVICE_NAME=nfs-provisioner cluster IP 172.30.98.7 as NFS server IP
E0208 10:08:02.612964       1 controller.go:572] Failed to provision volume for claim "wehe/nfs" with StorageClass "example-nfs": error creating export for volume: error exporting export block 
EXPORT
{
	Export_Id = 1;
	Path = /export/pvc-79eb980e-ede6-11e6-94a4-000d3a179c12;
	Pseudo = /export/pvc-79eb980e-ede6-11e6-94a4-000d3a179c12;
	Access_Type = RW;
	Squash = no_root_squash;
	SecType = sys;
	Filesystem_id = 1.1;
	FSAL {
		Name = VFS;
	}
}

Comment 7 Matthew Wong 2017-02-08 16:35:13 UTC
Can you also provide the output of `oc get pod -o yaml $nfs-provisioner-pod` and `docker inspect $nfs-provisioner-pod-container`? Thanks.

Comment 8 Matthew Wong 2017-02-08 18:49:23 UTC
Also, what is the output of `ls -lZ /run/dbus/system_bus_socket` inside the container?
e.g. mine is:
srwxrwxrwx. 1 root root system_u:object_r:container_share_t:s0 0 Feb  8 18:45 /run/dbus/system_bus_socket
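
(If no shell is open in the pod, something like the following should show the same thing:)
$ oc exec nfs-provisioner-770926304-pnfk2 -- ls -lZ /run/dbus/system_bus_socket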

Comment 9 Wenqi He 2017-02-09 02:36:00 UTC
$ oc get pods nfs-provisioner-770926304-pnfk2 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"wehe","name":"nfs-provisioner-770926304","uid":"e5456b61-edca-11e6-94a4-000d3a179c12","apiVersion":"extensions","resourceVersion":"7644"}}
    openshift.io/scc: nfs-provisioner
  creationTimestamp: 2017-02-08T10:07:15Z
  generateName: nfs-provisioner-770926304-
  labels:
    app: nfs-provisioner
    pod-template-hash: "770926304"
  name: nfs-provisioner-770926304-pnfk2
  namespace: wehe
  resourceVersion: "8908"
  selfLink: /api/v1/namespaces/wehe/pods/nfs-provisioner-770926304-pnfk2
  uid: 5dc12dbf-ede6-11e6-94a4-000d3a179c12
spec:
  containers:
  - args:
    - -provisioner=example.com/nfs
    - -grace-period=10
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: SERVICE_NAME
      value: nfs-provisioner
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3
    imagePullPolicy: IfNotPresent
    name: nfs-provisioner
    ports:
    - containerPort: 2049
      name: nfs
      protocol: TCP
    - containerPort: 20048
      name: mountd
      protocol: TCP
    - containerPort: 111
      name: rpcbind
      protocol: TCP
    - containerPort: 111
      name: rpcbind-udp
      protocol: UDP
    resources: {}
    securityContext:
      capabilities:
        add:
        - DAC_READ_SEARCH
        drop:
        - KILL
        - MKNOD
        - SYS_CHROOT
      privileged: false
      seLinuxOptions:
        level: s0:c8,c2
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /export
      name: export-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-6lff3
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-6mjr0
  nodeName: wehe-node-1.eastus.cloudapp.azure.com
  restartPolicy: Always
  securityContext:
    fsGroup: 1000060000
    seLinuxOptions:
      level: s0:c8,c2
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - hostPath:
      path: /tmp/nfs-provisioner
    name: export-volume
  - name: default-token-6lff3
    secret:
      defaultMode: 420
      secretName: default-token-6lff3
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2017-02-08T10:07:15Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2017-02-08T10:07:16Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2017-02-08T10:07:15Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1b68467495792876d7e07f9e3c017c6e88a72e92657cae4d410af367f2a24f44
    image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3
    imageID: docker-pullable://quay.io/kubernetes_incubator/nfs-provisioner@sha256:ee2900e758c36214aad5bd4d3a7974bf43bbf3d21174a54d62567ca99c69d9e4
    lastState: {}
    name: nfs-provisioner
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2017-02-08T10:07:16Z
  hostIP: 172.27.17.5
  phase: Running
  podIP: 10.129.0.18
  startTime: 2017-02-08T10:07:15Z
=====================================================================
# docker inspect 1b6846749579
[
    {
        "Id": "1b68467495792876d7e07f9e3c017c6e88a72e92657cae4d410af367f2a24f44",
        "Created": "2017-02-08T10:07:16.096598773Z",
        "Path": "/nfs-provisioner",
        "Args": [
            "-provisioner=example.com/nfs",
            "-grace-period=10"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 11538,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-02-08T10:07:16.269364942Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:c12625ede8fd43271ae0117ec714b0ab2203e6a3177c4f5136f9aa791098d2ea",
        "ResolvConfPath": "/var/lib/docker/containers/609f212dc85d7c40899441595553911a94df5d3924ec4165e81ca1aedb291653/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/609f212dc85d7c40899441595553911a94df5d3924ec4165e81ca1aedb291653/hostname",
        "HostsPath": "/var/lib/origin/openshift.local.volumes/pods/5dc12dbf-ede6-11e6-94a4-000d3a179c12/etc-hosts",
        "LogPath": "",
        "Name": "/k8s_nfs-provisioner.d375cb8_nfs-provisioner-770926304-pnfk2_wehe_5dc12dbf-ede6-11e6-94a4-000d3a179c12_b8244be6",
        "RestartCount": 0,
        "Driver": "overlay",
        "MountLabel": "system_u:object_r:svirt_sandbox_file_t:s0:c8,c2",
        "ProcessLabel": "system_u:system_r:svirt_lxc_net_t:s0:c8,c2",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/tmp/nfs-provisioner:/export",
                "/var/lib/origin/openshift.local.volumes/pods/5dc12dbf-ede6-11e6-94a4-000d3a179c12/volumes/kubernetes.io~secret/default-token-6lff3:/var/run/secrets/kubernetes.io/serviceaccount:ro,Z",
                "/var/lib/origin/openshift.local.volumes/pods/5dc12dbf-ede6-11e6-94a4-000d3a179c12/etc-hosts:/etc/hosts:Z",
                "/var/lib/origin/openshift.local.volumes/pods/5dc12dbf-ede6-11e6-94a4-000d3a179c12/containers/nfs-provisioner/b8244be6:/dev/termination-log:Z"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "journald",
                "Config": {}
            },
            "NetworkMode": "container:609f212dc85d7c40899441595553911a94df5d3924ec4165e81ca1aedb291653",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": [
                "DAC_READ_SEARCH"
            ],
            "CapDrop": [
                "KILL",
                "MKNOD",
                "SYS_CHROOT"
            ],
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": [
                "1000060000"
            ],
            "IpcMode": "container:609f212dc85d7c40899441595553911a94df5d3924ec4165e81ca1aedb291653",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 1000,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "label:level:s0:c8,c2",
                "label=user:system_u",
                "label=role:system_r",
                "label=type:svirt_lxc_net_t",
                "label=level:s0:c8,c2"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "docker-runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 2,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": -1,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/var/lib/docker/overlay/a0e956c6c67742911e5a7de615e17cf02b46ced26ba911efb792da078cae63ff/root",
                "MergedDir": "/var/lib/docker/overlay/f0d49402a5a0710d99dcf34f471cfeaedf03217f126d234aec5dd13948a631e3/merged",
                "UpperDir": "/var/lib/docker/overlay/f0d49402a5a0710d99dcf34f471cfeaedf03217f126d234aec5dd13948a631e3/upper",
                "WorkDir": "/var/lib/docker/overlay/f0d49402a5a0710d99dcf34f471cfeaedf03217f126d234aec5dd13948a631e3/work"
            }
        },
        "Mounts": [
            {
                "Source": "/tmp/nfs-provisioner",
                "Destination": "/export",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/origin/openshift.local.volumes/pods/5dc12dbf-ede6-11e6-94a4-000d3a179c12/volumes/kubernetes.io~secret/default-token-6lff3",
                "Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
                "Mode": "ro,Z",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/origin/openshift.local.volumes/pods/5dc12dbf-ede6-11e6-94a4-000d3a179c12/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "Z",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/origin/openshift.local.volumes/pods/5dc12dbf-ede6-11e6-94a4-000d3a179c12/containers/nfs-provisioner/b8244be6",
                "Destination": "/dev/termination-log",
                "Mode": "Z",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "nfs-provisioner-770926304-pnfk2",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "111/tcp": {},
                "111/udp": {},
                "20048/tcp": {},
                "2049/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "POD_IP=10.129.0.18",
                "SERVICE_NAME=nfs-provisioner",
                "POD_NAMESPACE=wehe",
                "NFS_PROVISIONER_PORT_20048_TCP_PORT=20048",
                "NFS_PROVISIONER_PORT_111_UDP_ADDR=172.30.98.7",
                "KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443",
                "KUBERNETES_PORT_443_TCP_PROTO=tcp",
                "NFS_PROVISIONER_SERVICE_PORT_MOUNTD=20048",
                "KUBERNETES_SERVICE_HOST=172.30.0.1",
                "KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53",
                "NFS_PROVISIONER_PORT_111_TCP_ADDR=172.30.98.7",
                "NFS_PROVISIONER_PORT_111_UDP=udp://172.30.98.7:111",
                "KUBERNETES_SERVICE_PORT_DNS=53",
                "KUBERNETES_PORT=tcp://172.30.0.1:443",
                "NFS_PROVISIONER_SERVICE_PORT_NFS=2049",
                "NFS_PROVISIONER_SERVICE_PORT_RPCBIND=111",
                "NFS_PROVISIONER_PORT_2049_TCP_PORT=2049",
                "NFS_PROVISIONER_PORT_20048_TCP=tcp://172.30.98.7:20048",
                "KUBERNETES_PORT_53_UDP_PORT=53",
                "NFS_PROVISIONER_PORT_111_UDP_PORT=111",
                "KUBERNETES_SERVICE_PORT_HTTPS=443",
                "KUBERNETES_PORT_53_TCP_PORT=53",
                "KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53",
                "KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1",
                "NFS_PROVISIONER_PORT=tcp://172.30.98.7:2049",
                "NFS_PROVISIONER_PORT_2049_TCP_ADDR=172.30.98.7",
                "NFS_PROVISIONER_PORT_20048_TCP_PROTO=tcp",
                "NFS_PROVISIONER_PORT_111_TCP=tcp://172.30.98.7:111",
                "NFS_PROVISIONER_SERVICE_PORT=2049",
                "NFS_PROVISIONER_PORT_2049_TCP_PROTO=tcp",
                "NFS_PROVISIONER_PORT_20048_TCP_ADDR=172.30.98.7",
                "KUBERNETES_PORT_443_TCP_PORT=443",
                "KUBERNETES_PORT_53_UDP_PROTO=udp",
                "NFS_PROVISIONER_SERVICE_HOST=172.30.98.7",
                "NFS_PROVISIONER_SERVICE_PORT_RPCBIND_UDP=111",
                "NFS_PROVISIONER_PORT_111_TCP_PROTO=tcp",
                "KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1",
                "KUBERNETES_SERVICE_PORT_DNS_TCP=53",
                "KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1",
                "KUBERNETES_PORT_53_TCP_PROTO=tcp",
                "NFS_PROVISIONER_PORT_2049_TCP=tcp://172.30.98.7:2049",
                "NFS_PROVISIONER_PORT_111_TCP_PORT=111",
                "NFS_PROVISIONER_PORT_111_UDP_PROTO=udp",
                "KUBERNETES_SERVICE_PORT=443",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "DISTTAG=f24docker",
                "FGC=f24"
            ],
            "Cmd": [
                "-provisioner=example.com/nfs",
                "-grace-period=10"
            ],
            "Image": "quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/nfs-provisioner"
            ],
            "OnBuild": null,
            "Labels": {
                "io.kubernetes.container.hash": "d375cb8",
                "io.kubernetes.container.name": "nfs-provisioner",
                "io.kubernetes.container.ports": "[{\"name\":\"nfs\",\"containerPort\":2049,\"protocol\":\"TCP\"},{\"name\":\"mountd\",\"containerPort\":20048,\"protocol\":\"TCP\"},{\"name\":\"rpcbind\",\"containerPort\":111,\"protocol\":\"TCP\"},{\"name\":\"rpcbind-udp\",\"containerPort\":111,\"protocol\":\"UDP\"}]",
                "io.kubernetes.container.restartCount": "0",
                "io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
                "io.kubernetes.pod.name": "nfs-provisioner-770926304-pnfk2",
                "io.kubernetes.pod.namespace": "wehe",
                "io.kubernetes.pod.terminationGracePeriod": "30",
                "io.kubernetes.pod.uid": "5dc12dbf-ede6-11e6-94a4-000d3a179c12"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": null
        }
    }
]
===========================================================================
[root@nfs-provisioner-770926304-pnfk2 /]# ls -lZ /run/dbus/system_bus_socket                                                                     
srwxrwxrwx. 1 root root system_u:object_r:svirt_sandbox_file_t:s0:c2,c8 0 Feb  8 10:07 /run/dbus/system_bus_socket

Please contact me if you need more info, thanks.

Comment 10 Wenqi He 2017-02-09 08:35:30 UTC
Got this working again with today's build... Not sure what went wrong in my last test. Sorry about that.

$ oc version
openshift v3.5.0.18+9a5d1aa
kubernetes v1.5.2+43a9be4

$ oc get pods
NAME                              READY     STATUS    RESTARTS   AGE
nfs-provisioner-770926304-sz6kt   1/1       Running   0          1m

$ oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
nfs       Bound     pvc-2a190044-eea0-11e6-9a4b-42010af00018   1Mi        RWX           4s

Comment 11 Matthew Wong 2017-02-09 18:30:25 UTC
OK, no problem, thank you for reporting. It had to have been an SELinux issue, but my SCC has the same seLinuxContext setting as the default restricted SCC, and the categories (c2,c8) look fine to me in the output you provided. So it seems there was an SELinux-related issue in that OpenShift build that has since been fixed.

Comment 12 Wenqi He 2017-03-31 08:49:09 UTC
I'd like to re-open this bug since I found two different issues on Azure and OpenStack; on AWS and GCE it works well:

$ oc version
openshift v3.5.5
kubernetes v1.5.2+43a9be4

On OpenStack:

[wehe@dhcp-136-45 octest]$ oc get pods
NAME                    READY     STATUS    RESTARTS   AGE
nfs-provisioner-toqd2   1/1       Running   0          1m
[wehe@dhcp-136-45 octest]$ oc get pvc
NAME        STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfsdynpvc   Pending                                      1m
[wehe@dhcp-136-45 octest]$ oc logs nfs-provisioner-toqd2
I0331 08:36:11.167448       1 main.go:58] Provisioner example.com/nfs specified
I0331 08:36:11.167579       1 main.go:71] Starting NFS server!
I0331 08:36:11.377529       1 controller.go:256] Starting provisioner controller 1818361a-15ed-11e7-b411-529c47630055!
I0331 08:36:19.641133       1 controller.go:841] scheduleOperation[lock-provision-pf93k/nfsdynpvc[1cf74c96-15ed-11e7-9cc6-fa163ece4ef6]]
I0331 08:36:19.652247       1 controller.go:841] scheduleOperation[lock-provision-pf93k/nfsdynpvc[1cf74c96-15ed-11e7-9cc6-fa163ece4ef6]]
I0331 08:36:19.657827       1 controller.go:641] cannot start watcher for PVC pf93k/nfsdynpvc: User "system:serviceaccount:pf93k:nfs-provisioner" cannot list events in project "pf93k"
E0331 08:36:19.657850       1 controller.go:493] Error watching for provisioning success, can't provision for claim "pf93k/nfsdynpvc": User "system:serviceaccount:pf93k:nfs-provisioner" cannot list events in project "pf93k"
I0331 08:36:19.657858       1 leaderelection.go:157] attempting to acquire leader lease...
I0331 08:36:19.667265       1 leaderelection.go:179] sucessfully acquired lease to provision for pvc pf93k/nfsdynpvc
I0331 08:36:19.667326       1 controller.go:841] scheduleOperation[provision-pf93k/nfsdynpvc[1cf74c96-15ed-11e7-9cc6-fa163ece4ef6]]
I0331 08:36:19.672231       1 provision.go:312] using potentially unstable pod IP POD_IP=10.129.0.20 as NFS server IP (because neither service env SERVICE_NAME nor node env NODE_NAME are set)
E0331 08:36:19.694625       1 controller.go:572] Failed to provision volume for claim "pf93k/nfsdynpvc" with StorageClass "nfs-provisioner-pf93k": error creating export for volume: error exporting export block 
EXPORT
{
	Export_Id = 1;
	Path = /export/pvc-1cf74c96-15ed-11e7-9cc6-fa163ece4ef6;
	Pseudo = /export/pvc-1cf74c96-15ed-11e7-9cc6-fa163ece4ef6;
	Access_Type = RW;
	Squash = no_root_squash;
	SecType = sys;
	Filesystem_id = 1.1;
	FSAL {
		Name = VFS;
	}
}
: error calling org.ganesha.nfsd.exportmgr.AddExport: 0 export entries in /export/vfs.conf added because (invalid param value) errors. Details:

[root@nfs-provisioner-toqd2 /]# ls -lZd /export/                                                                                       
drwxr-xr-x. 1 root root system_u:object_r:container_share_t:s0 82 Mar 31 08:38 /export/
[root@nfs-provisioner-toqd2 /]# ls /export/
nfs-provisioner.identity  v4old  v4recov  vfs.conf

[root@host-8-175-74 ~]# docker inspect 9517589f7f59
[
    {
        "Id": "9517589f7f59486a70318372100f582b17b81c8ab060234d676c74c3ae78c414",
        "Created": "2017-03-31T08:36:11.015596579Z",
        "Path": "/nfs-provisioner",
        "Args": [
            "-provisioner=example.com/nfs",
            "-grace-period=0"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 14960,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-03-31T08:36:11.10522218Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:c12625ede8fd43271ae0117ec714b0ab2203e6a3177c4f5136f9aa791098d2ea",
        "ResolvConfPath": "/var/lib/docker/containers/46b6654a478a924310b51ade0cb3c63596b5766f64ed0970124b0a5de32a7674/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/46b6654a478a924310b51ade0cb3c63596b5766f64ed0970124b0a5de32a7674/hostname",
        "HostsPath": "/var/lib/origin/openshift.local.volumes/pods/1765cde9-15ed-11e7-9cc6-fa163ece4ef6/etc-hosts",
        "LogPath": "",
        "Name": "/k8s_nfs-provisioner.944baba3_nfs-provisioner-toqd2_pf93k_1765cde9-15ed-11e7-9cc6-fa163ece4ef6_150e0c14",
        "RestartCount": 0,
        "Driver": "overlay",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/var/lib/origin/openshift.local.volumes/pods/1765cde9-15ed-11e7-9cc6-fa163ece4ef6/volumes/kubernetes.io~secret/nfs-provisioner-token-rj0c0:/var/run/secrets/kubernetes.io/serviceaccount:ro,Z",
                "/var/lib/origin/openshift.local.volumes/pods/1765cde9-15ed-11e7-9cc6-fa163ece4ef6/etc-hosts:/etc/hosts:Z",
                "/var/lib/origin/openshift.local.volumes/pods/1765cde9-15ed-11e7-9cc6-fa163ece4ef6/containers/nfs-provisioner/150e0c14:/dev/termination-log:Z"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "journald",
                "Config": {}
            },
            "NetworkMode": "container:46b6654a478a924310b51ade0cb3c63596b5766f64ed0970124b0a5de32a7674",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": [
                "DAC_READ_SEARCH"
            ],
            "CapDrop": [
                "KILL",
                "MKNOD",
                "SYS_CHROOT"
            ],
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": [
                "1000120000"
            ],
            "IpcMode": "container:46b6654a478a924310b51ade0cb3c63596b5766f64ed0970124b0a5de32a7674",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 1000,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "label=level:s0:c11,c5"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "docker-runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 2,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": -1,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/var/lib/docker/overlay/07c1a11104cdea0ad606e0e525fc8af1dff548a2531a495746879044eec30efa/root",
                "MergedDir": "/var/lib/docker/overlay/d5b875f17bbfcd8582c68c1c4e60ea248f0e19f1cca687aca380e109c5f29e5c/merged",
                "UpperDir": "/var/lib/docker/overlay/d5b875f17bbfcd8582c68c1c4e60ea248f0e19f1cca687aca380e109c5f29e5c/upper",
                "WorkDir": "/var/lib/docker/overlay/d5b875f17bbfcd8582c68c1c4e60ea248f0e19f1cca687aca380e109c5f29e5c/work"
            }
        },
        "Mounts": [
            {
                "Source": "/var/lib/origin/openshift.local.volumes/pods/1765cde9-15ed-11e7-9cc6-fa163ece4ef6/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "Z",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/origin/openshift.local.volumes/pods/1765cde9-15ed-11e7-9cc6-fa163ece4ef6/containers/nfs-provisioner/150e0c14",
                "Destination": "/dev/termination-log",
                "Mode": "Z",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/origin/openshift.local.volumes/pods/1765cde9-15ed-11e7-9cc6-fa163ece4ef6/volumes/kubernetes.io~secret/nfs-provisioner-token-rj0c0",
                "Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
                "Mode": "ro,Z",
                "RW": false,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "nfs-provisioner-toqd2",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "111/tcp": {},
                "111/udp": {},
                "20048/tcp": {},
                "2049/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "POD_IP=10.129.0.20",
                "KUBERNETES_SERVICE_HOST=172.30.0.1",
                "KUBERNETES_PORT_443_TCP_PROTO=tcp",
                "KUBERNETES_PORT_53_UDP_PORT=53",
                "KUBERNETES_SERVICE_PORT_DNS=53",
                "KUBERNETES_SERVICE_PORT_DNS_TCP=53",
                "KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443",
                "KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1",
                "KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1",
                "KUBERNETES_SERVICE_PORT_HTTPS=443",
                "KUBERNETES_PORT_443_TCP_PORT=443",
                "KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53",
                "KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53",
                "KUBERNETES_PORT_53_TCP_PROTO=tcp",
                "KUBERNETES_PORT_53_TCP_PORT=53",
                "KUBERNETES_SERVICE_PORT=443",
                "KUBERNETES_PORT=tcp://172.30.0.1:443",
                "KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1",
                "KUBERNETES_PORT_53_UDP_PROTO=udp",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "DISTTAG=f24docker",
                "FGC=f24"
            ],
            "Cmd": [
                "-provisioner=example.com/nfs",
                "-grace-period=0"
            ],
            "Image": "quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/nfs-provisioner"
            ],
            "OnBuild": null,
            "Labels": {
                "io.kubernetes.container.hash": "944baba3",
                "io.kubernetes.container.name": "nfs-provisioner",
                "io.kubernetes.container.ports": "[{\"name\":\"nfs\",\"containerPort\":2049,\"protocol\":\"TCP\"},{\"name\":\"mountd\",\"containerPort\":20048,\"protocol\":\"TCP\"},{\"name\":\"rpcbind\",\"containerPort\":111,\"protocol\":\"TCP\"},{\"name\":\"rpcbind-udp\",\"containerPort\":111,\"protocol\":\"UDP\"}]",
                "io.kubernetes.container.restartCount": "0",
                "io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
                "io.kubernetes.pod.name": "nfs-provisioner-toqd2",
                "io.kubernetes.pod.namespace": "pf93k",
                "io.kubernetes.pod.terminationGracePeriod": "30",
                "io.kubernetes.pod.uid": "1765cde9-15ed-11e7-9cc6-fa163ece4ef6"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": null
        }
    }
]


And on Azure it is the same issue as in Comment 6 and Comment 9.

Comment 13 Matthew Wong 2017-03-31 16:25:31 UTC
I don't see a mount to /export in this docker inspect; it should look like this:
{
                "Source": "/tmp/nfs-provisioner",
                "Destination": "/export",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
},
since https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/master/demo/deployment.yaml mounts /tmp/nfs-provisioner to /export. I think https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/deploy/kubernetes/pod.yaml or another pod.yaml was used here instead; is that correct?

So this issue occurs because https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/deploy/kubernetes/pod.yaml does not mount anything to /export AND RHEL 7.3 now defaults to the overlay docker storage driver: exporting does not work from an overlay fs, which is a known issue.

So IMO the next course of action is to remove https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/deploy/kubernetes/pod.yaml (documentation pointing to it has already been removed) and use https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/deploy/kubernetes/pod_emptydir.yaml instead in all test cases, to accommodate the new RHEL default.
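
For illustration, the emptyDir variant backs /export with a pod-scoped directory on the node filesystem instead of the container's overlay layer. A minimal sketch, assuming the same image and args as above (the upstream pod_emptydir.yaml also carries ports and capabilities):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-provisioner
spec:
  containers:
  - name: nfs-provisioner
    image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.3
    args:
    - "-provisioner=example.com/nfs"
    volumeMounts:
    - name: export-volume
      mountPath: /export
  volumes:
  - name: export-volume
    emptyDir: {}   # lives on the node's backing fs, so exports do not come from the overlay layer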

Comment 15 Wenqi He 2017-04-01 09:56:02 UTC
(In reply to Matthew Wong from comment #14)
> Correction: I didn't remove
> https://raw.githubusercontent.com/kubernetes-incubator/external-storage/
> master/nfs/deploy/kubernetes/pod.yaml, i renamed pod_emptydir.yaml ->
> pod.yaml
> 
> So if we replace links to
> https://raw.githubusercontent.com/kubernetes-incubator/nfs-provisioner/
> master/nfs/deploy/kubernetes/pod.yaml to
> https://raw.githubusercontent.com/kubernetes-incubator/external-storage/
> master/nfs/deploy/kubernetes/pod.yaml, it should work.

So we are not going to maintain https://github.com/kubernetes-incubator/nfs-provisioner/tree/master/deploy/kube-config and will move everything to "external-storage", right?

You are right: using the pod yaml under /external-storage/ on an overlay-backed OCP node, the nfs-provisioner pod works well on OpenStack.
But it still has the problem from Comment 6 and Comment 9 on Azure; I think that might also be caused by "overlay". I will try a new environment with "devicemapper" to see whether it reproduces.

Comment 16 Wenqi He 2017-04-05 06:51:52 UTC
I have tried running on Azure with the "devicemapper" docker storage driver, and the nfs-provisioner works well. So the issue in Comment 6 and Comment 9 should be caused by "overlay".
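
For reference, the driver in use can be confirmed on the node, e.g.:

# docker info | grep -i 'storage driver'
Storage Driver: devicemapper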

Comment 17 Matthew Wong 2017-04-13 15:16:44 UTC
The YAMLs have been amended to use emptyDir only; this is a known issue that cannot be fixed otherwise. Closing.

