Bug 1287016 - OpenShift running in a Docker container fails to mount NFS
Summary: OpenShift running in a Docker container fails to mount NFS
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OKD
Classification: Red Hat
Component: Storage
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Paul Morie
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-01 10:57 UTC by Liang Xia
Modified: 2016-05-12 17:09 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-12 17:09:57 UTC
Target Upstream Version:
Embargoed:
sdodson: needinfo-



Description Liang Xia 2015-12-01 10:57:12 UTC
Description of problem:
Trying to mount an NFS export inside the Docker container in which OpenShift is running fails:
# mount <ip-of-the-nfs>:<export-path> /mnt
mount: wrong fs type, bad option, bad superblock on <secret-hide-here>:<hide>,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Version-Release number of selected component (if applicable):
openshift/origin:fe92a885f059
openshift v1.1-264-gfdff20d
kubernetes v1.1.0-origin-1107-g4c8e6f4
etcd 2.1.2

How reproducible:
Always

Steps to Reproduce:
1. Pull the latest openshift/origin image
docker pull openshift/origin
2. Start OpenShift
sudo docker run -d --name "origin" \
        --privileged --pid=host --net=host \
        -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
        -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
        openshift/origin start
3. Open a console inside the container
sudo docker exec -it origin bash
4. Mount NFS
mount <ip-of-the-nfs>:<export-path> /mnt

Actual results:
# mount <ip-of-the-nfs>:<export-path> /mnt
mount: wrong fs type, bad option, bad superblock on <secret-hide-here>:<hide>,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Expected results:
NFS can be mounted.

Additional info:
In the container,
    # rpm -qa | grep nfs
    # which mount.nfs mount.nfs4
/usr/bin/which: no mount.nfs in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
/usr/bin/which: no mount.nfs4 in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)

On the host,
    # rpm -qa | grep nfs
nfs4-acl-tools-0.3.3-14.el7.x86_64
nfs-utils-1.3.0-0.21.el7.x86_64
libnfsidmap-0.25-12.el7.x86_64
    # which mount.nfs mount.nfs4
/usr/sbin/mount.nfs
/usr/sbin/mount.nfs4

After the nfs-utils package is installed in the container, NFS can be mounted.
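
For reference, this workaround, applied inside the origin container, amounts to the following (the CentOS base image is visible in the docker inspect output in comment 7; the placeholders follow the report's convention):

# inside the origin container
yum install -y nfs-utils
mount <ip-of-the-nfs>:<export-path> /mnt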

Comment 1 Paul Morie 2015-12-01 16:37:12 UTC
I believe the problem here is that you need to use the `--containerized` option for mounts to work correctly when OpenShift runs in a container.

Comment 2 Liang Xia 2015-12-02 06:00:37 UTC
Hi Paul,

Sorry, I didn't get your point.
Could you give more detail? 

Thanks,
Liang

Comment 3 Scott Dodson 2015-12-02 14:50:54 UTC
Paul,

The origin image sets OPENSHIFT_CONTAINERIZED=true, which I believe propagates to the kubelet's containerized option; see

https://github.com/openshift/origin/blob/106821c7f3065eee1ed6452d71eda3aa195a630f/pkg/cmd/server/kubernetes/node_config.go#L142-L144

And the logs when I start my containerized node indicate it's containerized.

Dec 02 09:47:39 ose3-atomic.example.com docker[2532]: I1202 09:47:39.874915    2586 server.go:326] Running kubelet in containerized mode (experimental)

However, I don't see anywhere in the kube code where this would affect how it chooses to mount NFS volumes; it just calls `mount` and lets mount figure out which binaries are needed to mount a given fstype. I imagine we should add nfs-utils to the Dockerfile for the origin image so we get mount.nfs. I'm not entirely sure about the interactions between mount.nfs and the kernel, but I imagine as long as the container is privileged it should work.
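
As a quick sanity check, both facts can be verified from the host (these commands are not in the original report, but rely only on output already shown in this bug):

docker exec origin env | grep OPENSHIFT_CONTAINERIZED   # expect OPENSHIFT_CONTAINERIZED=true
docker logs origin 2>&1 | grep -i containerized         # expect the kubelet "containerized mode" line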

Comment 4 Scott Dodson 2015-12-02 14:53:25 UTC
Hmm, now I see that the flag switches the mounter to NsenterMounter...

Comment 5 Liang Xia 2015-12-09 03:12:00 UTC
Tried again on openshift/origin:63b205d14836.

I can see the log line:
I1209 03:00:41.896674    9518 kubelet.go:833] Running in container "/kubelet"

But mounting still failed:
# mount <ip-of-the-nfs>:<export-path> /mnt
mount: wrong fs type, bad option, bad superblock on <secret-hide-here>:<hide>,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Comment 6 Paul Morie 2016-01-05 14:54:42 UTC
The way this is supposed to work is that the NsenterMounter execs mount(8) in the host's mount namespace.  This means that mount(8) will search the host's fs for mount helpers.
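
Concretely, the mount invocation issued in containerized mode has this shape (the exact command appears verbatim in the loglevel=5 logs in comment 18; server and path are placeholders here):

nsenter --mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t nfs <server>:<export-path> <pod-volume-dir>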

Can we establish whether this is working now, or still broken?

Comment 7 Liang Xia 2016-01-06 06:23:09 UTC
Still failed on openshift/origin:80b10b73a2c9

# mount <ip-of-the-nfs>:<export-path> /mnt
mount: wrong fs type, bad option, bad superblock on <secret-hide-here>:<hide>,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.


$ docker inspect 80b10b73a2c9
[
{
    "Id": "80b10b73a2c95b046f038632bcddcb459600dd7c9f0867f0b3020fb6f9878c7e",
    "RepoTags": [
        "openshift/origin:latest"
    ],
    "RepoDigests": [],
    "Parent": "1c1fb0f55b292fb57a3d54adb964854133617e51826b458f395c897c1427341f",
    "Comment": "",
    "Created": "2016-01-05T19:39:10.814669866Z",
    "Container": "878b66fdb76ef034056599464768927f338ba1afe6c17fd89fc6add2fc9b6be6",
    "ContainerConfig": {
        "Hostname": "f77e60ad5dfc",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "53/tcp": {},
            "8443/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "HOME=/root",
            "OPENSHIFT_CONTAINERIZED=true",
            "KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig"
        ],
        "Cmd": [
            "/bin/sh",
            "-c",
            "#(nop) ENTRYPOINT \u0026{[\"/usr/bin/openshift\"]}"
        ],
        "Image": "1c1fb0f55b292fb57a3d54adb964854133617e51826b458f395c897c1427341f",
        "Volumes": null,
        "WorkingDir": "/var/lib/origin",
        "Entrypoint": [
            "/usr/bin/openshift"
        ],
        "OnBuild": [],
        "Labels": {
            "build-date": "2015-12-23",
            "license": "GPLv2",
            "name": "CentOS Base Image",
            "vendor": "CentOS"
        }
    },
    "DockerVersion": "1.8.2-el7",
    "Author": "",
    "Config": {
        "Hostname": "f77e60ad5dfc",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "53/tcp": {},
            "8443/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "HOME=/root",
            "OPENSHIFT_CONTAINERIZED=true",
            "KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig"
        ],
        "Cmd": null,
        "Image": "1c1fb0f55b292fb57a3d54adb964854133617e51826b458f395c897c1427341f",
        "Volumes": null,
        "WorkingDir": "/var/lib/origin",
        "Entrypoint": [
            "/usr/bin/openshift"
        ],
        "OnBuild": [],
        "Labels": {
            "build-date": "2015-12-23",
            "license": "GPLv2",
            "name": "CentOS Base Image",
            "vendor": "CentOS"
        }
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 0,
    "VirtualSize": 485894020,
    "GraphDriver": {
        "Name": "devicemapper",
        "Data": {
            "DeviceId": "839",
            "DeviceName": "docker-253:0-266312-80b10b73a2c95b046f038632bcddcb459600dd7c9f0867f0b3020fb6f9878c7e",
            "DeviceSize": "107374182400"
        }
    }
}
]

Comment 8 Andy Goldstein 2016-01-07 02:49:26 UTC
I wouldn't necessarily expect a manual invocation of "mount" to work in the container (since we don't have nfs-utils installed)... but have you tried creating a pod with an NFS volume - does that work?
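
For reference, a minimal pod of the kind used in the following comments could look like this. This is a hypothetical sketch assembled from the `oc describe` output below (the container name, image, volume name, and the /opt mount path all appear later in this bug); it is not the reporter's actual pod.json:

cat > pod.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "mypod", "labels": { "name": "frontendhttp" } },
  "spec": {
    "containers": [
      {
        "name": "myfrontend",
        "image": "aosqe/hello-openshift",
        "volumeMounts": [ { "name": "pvol", "mountPath": "/opt" } ]
      }
    ],
    "volumes": [
      {
        "name": "pvol",
        "nfs": { "server": "<ip-of-the-nfs>", "path": "<export-path>" }
      }
    ]
  }
}
EOF
oc create -f pod.json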

Comment 9 Paul Morie 2016-01-07 04:23:48 UTC
Andy is correct; manual invocation of mount in the container is not equivalent to how the mount operation is performed when containerized and isn't a valid test for this case.  In order to diagnose what happened during your specific test, I will need the log output from the openshift-node log.  Will you attach that to this bug or post in a comment?  In the meantime, I will try to recreate locally.

Comment 10 Liang Xia 2016-01-07 06:22:35 UTC
Tried on openshift/origin:80b10b73a2c9 with the following steps:

1. Pull the latest openshift/origin image
docker pull openshift/origin
2. Start OpenShift
sudo docker run -d --name "origin" \
        --privileged --pid=host --net=host \
        -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
        -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
        openshift/origin start
3. Open a console inside the container
sudo docker exec -it origin bash
4. Create a pod that uses NFS
oc create -f pod.json
5. Check the pod
oc describe pods

# oc describe pods
Name:				mypod
Namespace:			default
Image(s):			aosqe/hello-openshift
Node:				<hide...here>
Start Time:			Thu, 07 Jan 2016 03:03:52 +0000
Labels:				name=frontendhttp
Status:				Running
Reason:				
Message:			
IP:				172.17.0.2
Replication Controllers:	<none>
Containers:
  myfrontend:
    Container ID:	docker://156cc7117fc255732e20e5f589a8232f3dd61b24ada1063df56a203e5c8ecd5e
    Image:		aosqe/hello-openshift
    Image ID:		docker://cddcd4ab363acd31256ed7880d4b669fa45227e49eec41429f80a4f252dfb0da
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Thu, 07 Jan 2016 03:36:17 +0000
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True 
Volumes:
  pvol:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	<hide...here>
    Path:	<hide...here>
    ReadOnly:	false
  default-token-i88id:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-i88id
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath	Reason		Message
  ─────────	────────	─────	────					─────────────	──────		───────
  2h		2m		386	{kubelet <hide...here>}			FailedMount	Unable to mount volumes for pod "mypod_default": exit status 32
  2h		2m		386	{kubelet <hide...here>}			FailedSync	Error syncing pod, skipping: exit status 32


Check on the host,
$ docker logs origin
......
E0107 04:16:10.590332   32509 kubelet.go:1461] Unable to mount volumes for pod "mypod_default": exit status 32; skipping pod
E0107 04:16:10.592617   32509 pod_workers.go:113] Error syncing pod 48331ec8-b4eb-11e5-a7b2-54ee75528544, skipping: exit status 32
E0107 04:16:30.596308   32509 kubelet.go:1461] Unable to mount volumes for pod "mypod_default": exit status 32; skipping pod
E0107 04:16:30.598881   32509 pod_workers.go:113] Error syncing pod 48331ec8-b4eb-11e5-a7b2-54ee75528544, skipping: exit status 32
......

Comment 11 Paul Morie 2016-01-07 20:22:19 UTC
We need to get the openshift log at a higher log level to diagnose what is going on here.  Please tack `--loglevel=5` onto the end of the command you use to start openshift and look for log lines containing nsenter_mount.go (please post the whole log anyway, pastebin or gist is fine with me).
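
In other words, the start command from step 2 of comment 10 becomes (only the last line changes):

sudo docker run -d --name "origin" \
        --privileged --pid=host --net=host \
        -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
        -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
        openshift/origin start --loglevel=5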

Personally, I tried to reproduce this using an E2E test that runs an NFS server in a container and tries to make a client pod that does an NFS mount from that server.  I got an error 32 from mount, but it was due to the connection timing out, and the mount.nfs binary on the host was invoked correctly.

So, I don't think the issue here is that the nfs-utils package isn't installed in the container, but I would like to understand what _is_ happening (especially if we can make the debug experience better when some nfs mount goes sideways).

For clarity, the needinfo here is for logs at an increased log level.

Comment 12 Liang Xia 2016-01-08 02:37:57 UTC
Created attachment 1112722 [details]
Logs via command 'docker logs origin'

All the output of command "docker logs origin" when the container origin is started with loglevel 5

Comment 13 Liang Xia 2016-01-08 02:40:48 UTC
[root@dhcp-14-110 origin]# oc create -f pod.json 
pod "mypod" created

[root@dhcp-14-110 origin]# oc get pods
NAME      READY     STATUS    RESTARTS   AGE
mypod     1/1       Running   0          3s

[root@dhcp-14-110 origin]# oc describe pods
Name:				mypod
Namespace:			default
Image(s):			aosqe/hello-openshift
Node:				dhcp-14-110.nay.redhat.com/10.66.136.64
Start Time:			Fri, 08 Jan 2016 02:29:09 +0000
Labels:				name=frontendhttp
Status:				Running
Reason:				
Message:			
IP:				172.17.0.2
Replication Controllers:	<none>
Containers:
  myfrontend:
    Container ID:	docker://1370952a6ebe966292b7518e95794263a2a1b4079b0bf57e188a45223a03801a
    Image:		aosqe/hello-openshift
    Image ID:		docker://cddcd4ab363acd31256ed7880d4b669fa45227e49eec41429f80a4f252dfb0da
    QoS Tier:
      memory:		BestEffort
      cpu:		BestEffort
    State:		Running
      Started:		Fri, 08 Jan 2016 02:29:10 +0000
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True 
Volumes:
  pvol:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	10.66.79.133
    Path:	/jhou
    ReadOnly:	false
  default-token-wt4ch:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-wt4ch
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath				Reason		Message
  ─────────	────────	─────	────					─────────────				──────		───────
  10s		10s		1	{scheduler }									Scheduled	Successfully assigned mypod to dhcp-14-110.nay.redhat.com
  10s		10s		1	{kubelet dhcp-14-110.nay.redhat.com}	implicitly required container POD	Created		Created with docker id 951444bb1837
  10s		10s		1	{kubelet dhcp-14-110.nay.redhat.com}	implicitly required container POD	Pulled		Container image "openshift/origin-pod:v1.1" already present on machine
  9s		9s		1	{kubelet dhcp-14-110.nay.redhat.com}	spec.containers{myfrontend}		Started		Started with docker id 1370952a6ebe
  9s		9s		1	{kubelet dhcp-14-110.nay.redhat.com}	spec.containers{myfrontend}		Pulled		Container image "aosqe/hello-openshift" already present on machine
  9s		9s		1	{kubelet dhcp-14-110.nay.redhat.com}	implicitly required container POD	Started		Started with docker id 951444bb1837
  9s		9s		1	{kubelet dhcp-14-110.nay.redhat.com}	spec.containers{myfrontend}		Created		Created with docker id 1370952a6ebe
  8s		8s		1	{kubelet dhcp-14-110.nay.redhat.com}						FailedMount	Unable to mount volumes for pod "mypod_default": exit status 32
  8s		8s		1	{kubelet dhcp-14-110.nay.redhat.com}						FailedSync	Error syncing pod, skipping: exit status 32

Comment 14 Liang Xia 2016-01-08 03:02:12 UTC
[lxia@dhcp-14-110 ~]$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=8016132k,nr_inodes=2004033,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/fedora_dhcp--14--110-root on / type ext4 (rw,relatime,data=ordered)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tmpfs on /tmp type tmpfs (rw)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/sda3 on /boot type ext4 (rw,relatime,data=ordered)
/dev/mapper/fedora_dhcp--14--110-home on /home type ext4 (rw,relatime,data=ordered)
tmpfs on /run/user/42 type tmpfs (rw,nosuid,nodev,relatime,size=1605428k,mode=700,uid=42,gid=42)
gvfsd-fuse on /run/user/42/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=42,group_id=42)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1605428k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/98e7072c-b5af-11e5-baac-54ee75528544/volumes/kubernetes.io~secret/default-token-wt4ch type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/98e7072c-b5af-11e5-baac-54ee75528544/volumes/kubernetes.io~secret/default-token-wt4ch type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/98e7072c-b5af-11e5-baac-54ee75528544/volumes/kubernetes.io~secret/default-token-wt4ch type tmpfs (rw,relatime)
........


I unmounted all the token and volume mounts and removed /var/lib/openshift/openshift.local.volumes (rm -rf) before starting the container, but it now looks like the above, and the number of mount entries is still increasing:
[lxia@dhcp-14-110 ~]$ mount | wc -l
100
[lxia@dhcp-14-110 ~]$ mount | wc -l
108
[lxia@dhcp-14-110 ~]$ mount | wc -l
117
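
One way to watch the count grow continuously (a suggestion, not part of the original report):

watch -n 10 'mount | wc -l'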

Comment 18 Liang Xia 2016-01-14 04:40:08 UTC
# oc get events
FIRSTSEEN   LASTSEEN   COUNT     NAME                         KIND      SUBOBJECT   REASON           SOURCE                                    MESSAGE
1h          1h         1         dhcp-14-110.nay.redhat.com   Node                  Starting         {kube-proxy dhcp-14-110.nay.redhat.com}   Starting kube-proxy.
1h          1h         1         dhcp-14-110.nay.redhat.com   Node                  Starting         {kubelet dhcp-14-110.nay.redhat.com}      Starting kubelet.
1h          1h         1         dhcp-14-110.nay.redhat.com   Node                  NodeReady        {kubelet dhcp-14-110.nay.redhat.com}      Node dhcp-14-110.nay.redhat.com status is now: NodeReady
1h          1h         1         dhcp-14-110.nay.redhat.com   Node                  RegisteredNode   {controllermanager }                      Node dhcp-14-110.nay.redhat.com event: Registered Node dhcp-14-110.nay.redhat.com in NodeController
1h          1h         1         mypod                        Pod                   Scheduled        {scheduler }                              Successfully assigned mypod to dhcp-14-110.nay.redhat.com
1h          55s        31        mypod                        Pod                   FailedMount      {kubelet dhcp-14-110.nay.redhat.com}      Unable to mount volumes for pod "mypod_default": exit status 32
1h          55s        31        mypod                        Pod                   FailedSync       {kubelet dhcp-14-110.nay.redhat.com}      Error syncing pod, skipping: exit status 32




I0114 04:35:07.076507   18416 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/297699cc-ba6f-11e5-ad71-54ee75528544/volumes/kubernetes.io~nfs/pvol]
I0114 04:35:07.086035   18416 nsenter_mount.go:185] IsLikelyNotMountPoint findmnt output: /
E0114 04:35:07.086144   18416 kubelet.go:1461] Unable to mount volumes for pod "mypod_default": exit status 32; skipping pod
I0114 04:35:07.086167   18416 kubelet.go:2772] Generating status for "mypod_default"
I0114 04:35:07.086196   18416 server.go:734] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mypod", UID:"297699cc-ba6f-11e5-ad71-54ee75528544", APIVersion:"v1", ResourceVersion:"232", FieldPath:""}): reason: 'FailedMount' Unable to mount volumes for pod "mypod_default": exit status 32
I0114 04:35:07.086932   18416 kubelet.go:2683] pod waiting > 0, pending
I0114 04:35:07.087013   18416 manager.go:231] Ignoring same status for pod "mypod_default", status: {Phase:Pending Conditions:[{Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-01-14 03:32:35.521256369 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [myfrontend]}] Message: Reason: HostIP:10.66.136.64 PodIP: StartTime:2016-01-14 03:32:35.521256639 +0000 UTC ContainerStatuses:[{Name:myfrontend State:{Waiting:0xc20fcc32c0 Running:<nil> Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:aosqe/hello-openshift ImageID: ContainerID:}]}
E0114 04:35:07.087056   18416 pod_workers.go:113] Error syncing pod 297699cc-ba6f-11e5-ad71-54ee75528544, skipping: exit status 32
I0114 04:35:07.087114   18416 server.go:734] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mypod", UID:"297699cc-ba6f-11e5-ad71-54ee75528544", APIVersion:"v1", ResourceVersion:"232", FieldPath:""}): reason: 'FailedSync' Error syncing pod, skipping: exit status 32
I0114 04:35:07.087148   18416 volumes.go:109] Used volume plugin "kubernetes.io/nfs" for pvol
I0114 04:35:07.087167   18416 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/297699cc-ba6f-11e5-ad71-54ee75528544/volumes/kubernetes.io~nfs/pvol]
I0114 04:35:07.096908   18416 nsenter_mount.go:185] IsLikelyNotMountPoint findmnt output: /
I0114 04:35:07.096939   18416 nfs.go:161] NFS mount set up: /var/lib/origin/openshift.local.volumes/pods/297699cc-ba6f-11e5-ad71-54ee75528544/volumes/kubernetes.io~nfs/pvol false <nil>
I0114 04:35:07.096993   18416 nsenter_mount.go:114] nsenter Mounting 10.66.79.133:/jhou /var/lib/origin/openshift.local.volumes/pods/297699cc-ba6f-11e5-ad71-54ee75528544/volumes/kubernetes.io~nfs/pvol nfs []
I0114 04:35:07.097006   18416 nsenter_mount.go:117] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t nfs 10.66.79.133:/jhou /var/lib/origin/openshift.local.volumes/pods/297699cc-ba6f-11e5-ad71-54ee75528544/volumes/kubernetes.io~nfs/pvol]

Comment 19 Paul Morie 2016-02-04 14:38:29 UTC
The correct mounter is being used here -- the exit code is actually being passed back by the nfs mount helper from the host.  Exit code 32 means the mountpoint is busy or already mounted.

Can you show the output of `mount` after the nfs mount fails?

To be clear -- the problem here is the nfs mount failing; the mechanics of how the mount is being run are working correctly.
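
For reference, mount(8) exits with status 32 on a mount failure; the busy/already-mounted case, reproduced by hand, looks like this (a generic illustration, not from the original exchange):

mount -t nfs <server>:<export-path> <already-mounted-dir>; echo $?
# mount.nfs: <already-mounted-dir> is busy or already mounted
# 32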

Comment 20 Liang Xia 2016-02-05 10:49:35 UTC
The mount info is already shown in comment 14.

Comment 21 Liang Xia 2016-02-16 10:59:12 UTC
# oc describe pods
Name:		mypod
Namespace:	default
Image(s):	aosqe/hello-openshift
Node:		dhcp-14-110.nay.redhat.com/10.66.136.64
Start Time:	Tue, 16 Feb 2016 10:52:26 +0000
Labels:		name=frontendhttp
Status:		Running
Reason:		
Message:	
IP:		172.17.0.2
Controllers:	<none>
Containers:
  myfrontend:
    Container ID:	docker://318c84a18d18c443e60a27c10069de36e9352443c7ad73689869f884bba7db23
    Image:		aosqe/hello-openshift
    Image ID:		docker://sha256:caa46d03cf599cd2e98f40accd8256efa362e2212e70a903beb5b6380d2c461c
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Tue, 16 Feb 2016 10:52:27 +0000
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True 
Volumes:
  pvol:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	10.66.79.133
    Path:	/home/data/lxia
    ReadOnly:	false
  default-token-u8r8l:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-u8r8l
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath			Type		Reason		Message
  ---------	--------	-----	----					-------------			--------	------		-------
  1m		1m		1	{default-scheduler }							Normal		Scheduled	Successfully assigned mypod to dhcp-14-110.nay.redhat.com
  1m		1m		1	{kubelet dhcp-14-110.nay.redhat.com}	spec.containers{myfrontend}	Normal		Pulled		Container image "aosqe/hello-openshift" already present on machine
  1m		1m		1	{kubelet dhcp-14-110.nay.redhat.com}	spec.containers{myfrontend}	Normal		Created		Created container with docker id 318c84a18d18
  1m		1m		1	{kubelet dhcp-14-110.nay.redhat.com}	spec.containers{myfrontend}	Normal		Started		Started container with docker id 318c84a18d18
  20s		20s		1	{kubelet dhcp-14-110.nay.redhat.com}					Warning		FailedMount	Unable to mount volumes for pod "mypod_default(5da6e128-d49b-11e5-acb5-54ee75528544)": exit status 32
  19s		19s		1	{kubelet dhcp-14-110.nay.redhat.com}					Warning		FailedSync	Error syncing pod, skipping: exit status 32



On the host (the last two lines appear _after_ the pod is created):
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=8016020k,nr_inodes=2004005,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/fedora_dhcp--14--110-root on / type ext4 (rw,relatime,data=ordered)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tmpfs on /tmp type tmpfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/sda3 on /boot type ext4 (rw,relatime,data=ordered)
/dev/mapper/fedora_dhcp--14--110-home on /home type ext4 (rw,relatime,data=ordered)
tmpfs on /run/user/42 type tmpfs (rw,nosuid,nodev,relatime,size=1605416k,mode=700,uid=42,gid=42)
gvfsd-fuse on /run/user/42/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=42,group_id=42)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1605416k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l type tmpfs (rw,relatime)
10.66.79.133:/home/data/lxia on /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~nfs/pvol type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.66.136.44,local_lock=none,addr=10.66.79.133)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l type tmpfs (rw,relatime)


$ docker logs origin
I0216 10:56:02.911480    8513 volumes.go:127] Used volume plugin "kubernetes.io/nfs" for pvol
I0216 10:56:02.911509    8513 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~nfs/pvol]
I0216 10:56:02.917988    8513 nsenter_mount.go:186] IsLikelyNotMountPoint findmnt output: /
I0216 10:56:02.918007    8513 nfs.go:167] NFS mount set up: /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~nfs/pvol false <nil>
I0216 10:56:02.918064    8513 nsenter_mount.go:114] nsenter Mounting 10.66.79.133:/home/data/lxia /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~nfs/pvol nfs []
I0216 10:56:02.918079    8513 nsenter_mount.go:117] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t nfs 10.66.79.133:/home/data/lxia /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~nfs/pvol]
I0216 10:56:02.983216    8513 generic.go:149] GenericPLEG: Relisting
I0216 10:56:02.983288    8513 volumes.go:127] Used volume plugin "kubernetes.io/secret" for default-token-u8r8l
I0216 10:56:02.983342    8513 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l]
I0216 10:56:02.987042    8513 docker.go:344] Docker Container: /origin is not managed by kubelet.
I0216 10:56:02.991399    8513 nsenter_mount.go:186] IsLikelyNotMountPoint findmnt output: /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
/var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
/var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
I0216 10:56:02.991443    8513 secret.go:152] Setting up volume default-token-u8r8l for pod 5da6e128-d49b-11e5-acb5-54ee75528544 at /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
I0216 10:56:02.991475    8513 volumes.go:127] Used volume plugin "kubernetes.io/empty-dir" for wrapped_default-token-u8r8l
I0216 10:56:02.991489    8513 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l]
I0216 10:56:02.998210    8513 nsenter_mount.go:186] IsLikelyNotMountPoint findmnt output: /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
/var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
/var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
I0216 10:56:02.998270    8513 empty_dir_linux.go:39] Determining mount medium of /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
I0216 10:56:02.998282    8513 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l]
I0216 10:56:03.004669    8513 nsenter_mount.go:186] IsLikelyNotMountPoint findmnt output: /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
/var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
/var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l
I0216 10:56:03.004703    8513 empty_dir_linux.go:49] Statfs_t of %v: %+v/var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l{61267 4096 12868767 2363838 1704382 3276800 3033939 {[-26315010 605263591]} 255 4096 4128 [0 0 0 0]}
I0216 10:56:03.004729    8513 empty_dir.go:230] pod 5da6e128-d49b-11e5-acb5-54ee75528544: mounting tmpfs for volume wrapped_default-token-u8r8l with opts []
I0216 10:56:03.004737    8513 nsenter_mount.go:114] nsenter Mounting tmpfs /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l tmpfs []
I0216 10:56:03.004747    8513 nsenter_mount.go:117] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t tmpfs tmpfs /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l]
I0216 10:56:03.013572    8513 secret.go:178] Received secret default/default-token-u8r8l containing (2) pieces of data, 1912 total bytes
I0216 10:56:03.013591    8513 secret.go:183] Writing secret data default/default-token-u8r8l/token (846 bytes) to host file /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l/token
I0216 10:56:03.013600    8513 writer.go:62] Command to write data to file: nsenter [--mount=/rootfs/proc/1/ns/mnt -- sh -c cat > /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l/token]
I0216 10:56:03.019981    8513 writer.go:72] Command to change permissions to file: nsenter [--mount=/rootfs/proc/1/ns/mnt -- chmod 444 /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l/token]
I0216 10:56:03.025228    8513 secret.go:183] Writing secret data default/default-token-u8r8l/ca.crt (1066 bytes) to host file /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l/ca.crt
I0216 10:56:03.025256    8513 writer.go:62] Command to write data to file: nsenter [--mount=/rootfs/proc/1/ns/mnt -- sh -c cat > /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l/ca.crt]
I0216 10:56:03.030855    8513 writer.go:72] Command to change permissions to file: nsenter [--mount=/rootfs/proc/1/ns/mnt -- chmod 444 /var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l/ca.crt]
I0216 10:56:03.037039    8513 manager.go:342] Container inspect result: {ID:318c84a18d18c443e60a27c10069de36e9352443c7ad73689869f884bba7db23 Created:2016-02-16 10:52:26.814449063 +0000 UTC Path:/hello-openshift Args:[] Config:0xc20b265500 State:{Running:true Paused:false Restarting:false OOMKilled:false Pid:16990 ExitCode:0 Error: StartedAt:2016-02-16 10:52:27.293343352 +0000 UTC FinishedAt:0001-01-01 00:00:00 +0000 UTC} Image:sha256:caa46d03cf599cd2e98f40accd8256efa362e2212e70a903beb5b6380d2c461c Node:<nil> NetworkSettings:0xc20bc1ba00 SysInitPath: ResolvConfPath:/var/lib/docker/containers/65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b/resolv.conf HostnamePath:/var/lib/docker/containers/65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b/hostname HostsPath:/var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/etc-hosts LogPath:/var/lib/docker/containers/318c84a18d18c443e60a27c10069de36e9352443c7ad73689869f884bba7db23/318c84a18d18c443e60a27c10069de36e9352443c7ad73689869f884bba7db23-json.log Name:/k8s_myfrontend.9bfbb0b6_mypod_default_5da6e128-d49b-11e5-acb5-54ee75528544_400cb4c2 Driver:devicemapper Mounts:[{Name: Source:/var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~nfs/pvol Destination:/opt Driver: Mode: RW:true} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l Destination:/var/run/secrets/kubernetes.io/serviceaccount Driver: Mode:ro,Z RW:false} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/etc-hosts Destination:/etc/hosts Driver: Mode: RW:true} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/containers/myfrontend/318c84a18d18c443e60a27c10069de36e9352443c7ad73689869f884bba7db23 Destination:/dev/termination-log Driver: Mode: RW:true}] Volumes:map[] VolumesRW:map[] HostConfig:0xc209232a00 ExecIDs:[] RestartCount:0 AppArmorProfile:}
I0216 10:56:03.037648    8513 manager.go:342] Container inspect result: {ID:65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b Created:2016-02-16 10:52:26.242276211 +0000 UTC Path:/pod Args:[] Config:0xc20ef33180 State:{Running:true Paused:false Restarting:false OOMKilled:false Pid:16848 ExitCode:0 Error: StartedAt:2016-02-16 10:52:26.803989149 +0000 UTC FinishedAt:0001-01-01 00:00:00 +0000 UTC} Image:sha256:9bd7e84a07a79b4f358f0f39bb1f60bd00f0ae2b66938ea0b3c9a16cedf892bc Node:<nil> NetworkSettings:0xc20db55800 SysInitPath: ResolvConfPath:/var/lib/docker/containers/65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b/resolv.conf HostnamePath:/var/lib/docker/containers/65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b/hostname HostsPath:/var/lib/docker/containers/65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b/hosts LogPath:/var/lib/docker/containers/65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b/65e6adc15f98770b7214cf0f9d4fee6eb6f9eca24fc25f6200794b6626594f4b-json.log Name:/k8s_POD.ba1101ee_mypod_default_5da6e128-d49b-11e5-acb5-54ee75528544_a4d9ad76 Driver:devicemapper Mounts:[] Volumes:map[] VolumesRW:map[] HostConfig:0xc2096f4780 ExecIDs:[] RestartCount:0 AppArmorProfile:}
I0216 10:56:03.039362    8513 manager.go:1698] Syncing Pod "mypod_default(5da6e128-d49b-11e5-acb5-54ee75528544)": &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:mypod GenerateName: Namespace:default SelfLink:/api/v1/namespaces/default/pods/mypod UID:5da6e128-d49b-11e5-acb5-54ee75528544 ResourceVersion:243 Generation:0 CreationTimestamp:2016-02-16 10:52:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:frontendhttp] Annotations:map[openshift.io/scc:anyuid kubernetes.io/config.source:api kubernetes.io/config.seen:2016-02-16T10:52:26.117879408Z]} Spec:{Volumes:[{Name:pvol VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:<nil> NFS:0xc20baa9b30 ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> FlexVolume:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil>}} {Name:default-token-u8r8l VolumeSource:{HostPath:<nil> EmptyDir:<nil> GCEPersistentDisk:<nil> AWSElasticBlockStore:<nil> GitRepo:<nil> Secret:0xc20c326a50 NFS:<nil> ISCSI:<nil> Glusterfs:<nil> PersistentVolumeClaim:<nil> RBD:<nil> FlexVolume:<nil> Cinder:<nil> CephFS:<nil> Flocker:<nil> DownwardAPI:<nil> FC:<nil>}}] Containers:[{Name:myfrontend Image:aosqe/hello-openshift Command:[] Args:[] WorkingDir: Ports:[{Name:http-server HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:pvol ReadOnly:false MountPath:/opt} {Name:default-token-u8r8l ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:0xc20baa9b60 Stdin:false StdinOnce:false TTY:false}] RestartPolicy:Always TerminationGracePeriodSeconds:0xc20c326978 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName:default NodeName:dhcp-14-110.nay.redhat.com SecurityContext:0xc20da3d640 ImagePullSecrets:[{Name:default-dockercfg-apyab}]} Status:{Phase:Running Conditions:[{Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-02-16 10:52:27 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.66.136.64 PodIP:172.17.0.2 StartTime:2016-02-16 10:52:26 +0000 UTC ContainerStatuses:[{Name:myfrontend State:{Waiting:<nil> Running:0xc20c7eadc0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:true RestartCount:0 Image:aosqe/hello-openshift ImageID:docker://sha256:caa46d03cf599cd2e98f40accd8256efa362e2212e70a903beb5b6380d2c461c ContainerID:docker://318c84a18d18c443e60a27c10069de36e9352443c7ad73689869f884bba7db23}]}}

Comment 22 Paul Morie 2016-02-23 01:28:31 UTC
Were you able to do the mount operation successfully manually on the host, outside of a docker container, as we discussed?

Comment 23 Liang Xia 2016-02-23 02:06:09 UTC
$ mount
......
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l type tmpfs (rw,relatime)
10.66.79.133:/home/data/lxia on /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~nfs/pvol type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.66.136.44,local_lock=none,addr=10.66.79.133)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/5da6e128-d49b-11e5-acb5-54ee75528544/volumes/kubernetes.io~secret/default-token-u8r8l type tmpfs (rw,relatime)

From the output of the mount command on the host, I can see that the volume actually is mounted.
I am just not sure why the pod is still complaining:
FailedMount	Unable to mount volumes for pod "mypod_default(5da6e128-d49b-11e5-acb5-54ee75528544)": exit status 32
  19s		19s		1	{kubelet dhcp-14-110.nay.redhat.com}					Warning		FailedSync	Error syncing pod, skipping: exit status 32

Comment 24 Liang Xia 2016-02-23 02:58:38 UTC
On the host, manually run:
$ sudo mount -t nfs 10.66.79.133:/home/data/lxia /var/lib/origin/openshift.local.volumes/pods/ae70bd9f-d9d6-11e5-88b9-54ee75528544/volumes/kubernetes.io~nfs/pvol
mount.nfs: /var/lib/openshift/openshift.local.volumes/pods/ae70bd9f-d9d6-11e5-88b9-54ee75528544/volumes/kubernetes.io~nfs/pvol is busy or already mounted

The volume actually is mounted on the host via nsenter, but the pod is not aware of it / cannot see it, and just complains:
  17m		11s		13	{kubelet dhcp-14-110.nay.redhat.com}					Warning		FailedMount	Unable to mount volumes for pod "pod-with-volume_default(ae70bd9f-d9d6-11e5-88b9-54ee75528544)": exit status 32
  17m		11s		13	{kubelet dhcp-14-110.nay.redhat.com}					Warning		FailedSync	Error syncing pod, skipping: exit status 32
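
The probe the kubelet relies on (see the nsenter_mount.go lines in comment 18) can be replayed directly on the host to see which mountpoint findmnt resolves this directory to (a suggested check, not from the original comment):

findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/ae70bd9f-d9d6-11e5-88b9-54ee75528544/volumes/kubernetes.io~nfs/pvol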

Comment 25 Liang Xia 2016-02-23 05:34:16 UTC
# oc rsh pod-with-volume df -h
Filesystem                Size      Used Available Use% Mounted on
......
10.66.79.133:/home/data/lxia  17.5G      4.1G     13.4G  23% /mnt/nfs

# oc rsh pod-with-volume mount
......
10.66.79.133:/home/data/lxia on /mnt/nfs type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.66.136.219,local_lock=none,addr=10.66.79.133)

The storage is actually mounted, and reads and writes to it work.

The problem here is that describe still shows the mount as failed:
# oc describe pods pod-with-volume
Name:		pod-with-volume
Namespace:	default
Image(s):	aosqe/hello-openshift
Node:		dhcp-14-110.nay.redhat.com/10.66.136.219
Start Time:	Tue, 23 Feb 2016 02:39:37 +0000
Labels:		name=frontendhttp
Status:		Running
Reason:		
Message:	
IP:		172.17.0.2
Controllers:	<none>
Containers:
  myfrontend:
    Container ID:	docker://b05f96b18a7f1772abb1351b30633d442b004af070461c4dfd3661da08649c62
    Image:		aosqe/hello-openshift
    Image ID:		docker://sha256:caa46d03cf599cd2e98f40accd8256efa362e2212e70a903beb5b6380d2c461c
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Tue, 23 Feb 2016 02:39:38 +0000
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True 
Volumes:
  pvol:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	10.66.79.133
    Path:	/home/data/lxia
    ReadOnly:	false
  default-token-n3en0:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-n3en0
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----					-------------	--------	------		-------
  2h		35s		118	{kubelet dhcp-14-110.nay.redhat.com}			Warning		FailedMount	Unable to mount volumes for pod "pod-with-volume_default(ae70bd9f-d9d6-11e5-88b9-54ee75528544)": exit status 32
  2h		35s		118	{kubelet dhcp-14-110.nay.redhat.com}			Warning		FailedSync	Error syncing pod, skipping: exit status 32

Comment 26 Paul Morie 2016-02-23 07:50:25 UTC
I talked with Liang tonight and we debugged together on the test system.  This is indeed a bug.  For context, the kubelet runs the volume setup functions over and over again in a loop.  Each plugin, in this function, has to figure out whether any work needs to be done.  The NFS plugin uses a findmnt command to determine whether the volume is already mounted.  What is happening on the test system is that the findmnt check erroneously reports that the mountpoint isn't already present when it is, so the mount command is run over and over again, and correctly gets exit code 32 ("mount point busy").

When I ran this command manually on the test system, from inside the origin container, it ran correctly and showed that the mountpoint already existed.  Further debugging is needed.

I think people _must_ be hitting this in the wild and not realizing it.  The mount works correctly from a functional standpoint, so perhaps most users don't dig any further or don't notice the errors in the describe output.
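
Paraphrasing the loop described above as shell, using the paths from comment 18 (an illustrative sketch, not actual kubelet code):

dir=/var/lib/origin/openshift.local.volumes/pods/297699cc-ba6f-11e5-ad71-54ee75528544/volumes/kubernetes.io~nfs/pvol
src=10.66.79.133:/jhou
while true; do
    # IsLikelyNotMountPoint: ask findmnt (in the host mount namespace) which
    # mountpoint the directory belongs to
    target=$(nsenter --mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target "$dir")
    if [ "$target" != "$dir" ]; then
        # On the affected system findmnt erroneously prints "/", so this branch
        # is taken on every pass and mount exits 32 (busy or already mounted).
        nsenter --mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t nfs "$src" "$dir"
    fi
    sleep 10
done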

Comment 27 Andy Goldstein 2016-03-04 18:27:24 UTC
I am still able to mount successfully in my test environment (Fedora 23 VM). I do not get any "exit code 32" errors.

Liang, what OS are you using?

Comment 28 Chao Yang 2016-03-10 06:15:52 UTC
I could not reproduce this on the server below.
[root@ip-10-166-135-19 /]# uname -a
Linux ip-10-166-135-19.ec2.internal 3.10.0-229.7.2.el7.x86_64 #1 SMP Fri May 15 21:38:46 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux


1. docker run -d --name="origin" --privileged --pid=host --net=host -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes  openshift/origin start

2. docker exec -ti origin bash
3. Create a pod using NFS
4. [root@ip-10-166-135-19 origin]# oc describe pods mypod2
Name:		mypod2
Namespace:	default
Image(s):	aosqe/hello-openshift
Node:		ip-10-166-135-19.ec2.internal/10.166.135.19
Start Time:	Thu, 10 Mar 2016 05:36:02 +0000
Labels:		name=frontendhttp
Status:		Running
Reason:		
Message:	
IP:		172.17.0.3
Controllers:	<none>
Containers:
  myfrontend:
    Container ID:	docker://4abf2f2f41bfd237f62222def63950457cffcb50e29d3a6ee42942bf2eeb4c17
    Image:		aosqe/hello-openshift
    Image ID:		docker://cddcd4ab363acd31256ed7880d4b669fa45227e49eec41429f80a4f252dfb0da
    Port:		80/TCP
    QoS Tier:
      memory:		BestEffort
      cpu:		BestEffort
    State:		Running
      Started:		Thu, 10 Mar 2016 05:36:04 +0000
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True 
Volumes:
  pvol:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	10.166.135.19
    Path:	/tmp/test
    ReadOnly:	false
  default-token-v9lxq:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-v9lxq
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath			Type		Reason		Message
  ---------	--------	-----	----					-------------			--------	------		-------
  2m		2m		1	{default-scheduler }							Normal		Scheduled	Successfully assigned mypod2 to ip-10-166-135-19.ec2.internal
  2m		2m		1	{kubelet ip-10-166-135-19.ec2.internal}	spec.containers{myfrontend}	Normal		Pulled		Container image "aosqe/hello-openshift" already present on machine
  2m		2m		1	{kubelet ip-10-166-135-19.ec2.internal}	spec.containers{myfrontend}	Normal		Created		Created container with docker id 4abf2f2f41bf
  2m		2m		1	{kubelet ip-10-166-135-19.ec2.internal}	spec.containers{myfrontend}	Normal		Started		Started container with docker id 4abf2f2f41bf

Comment 29 Andy Goldstein 2016-03-10 14:49:39 UTC
I'm closing this for now. Please reopen if it's still an issue.

Comment 30 Wenqi He 2016-04-01 11:40:49 UTC
This issue still reproduces in my latest regression testing.

[wehe@dhcp-137-221 ~]$ uname -a
Linux dhcp-137-221.nay.redhat.com 4.4.6-300.fc23.x86_64 #1 SMP Wed Mar 16 22:10:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

[root@dhcp-137-221 origin]# openshift version
openshift v1.1.5-38-gea6653c
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

1. sudo docker pull openshift/origin
2. sudo docker run -d --name "origin"  --privileged --pid=host --net=host  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw  -v /var/lib/openshift/openshift.local.volumes:/var/lib/openshift/openshift.local.volumes  openshift/origin start  --loglevel=5
3. sudo docker exec -it origin bash
4. Create the PV, PVC, and pod
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/nfs/nfs-recycle-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/nfs/claim-rwo.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/nfs/pod.json
5. Check the pod status
oc describe pod nfs
which shows the error info below:
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----					-------------	--------	------		-------
  40m		40m		1	{default-scheduler }					Normal		Scheduled	Successfully assigned nfs to dhcp-137-221.nay.redhat.com
  40m		30m		11	{kubelet dhcp-137-221.nay.redhat.com}			Warning		FailedSync	Error syncing pod, skipping: mkdir /mnt/nfs/pods/3ba8c134-f7f8-11e5-a11a-6c0b849ad3e9: no such file or directory
  38m		6s		179	{kubelet dhcp-137-221.nay.redhat.com}			Warning		FailedMount	Unable to mount volumes for pod "nfs_default(3ba8c134-f7f8-11e5-a11a-6c0b849ad3e9)": exit status 32
  38m		6s		179	{kubelet dhcp-137-221.nay.redhat.com}			Warning		FailedSync	Error syncing pod, skipping: exit status 32

I even installed nfs-utils in the container, and mount.nfs works well outside the pods.

Comment 31 Andy Goldstein 2016-04-01 14:34:35 UTC
Your command to start the origin container is slightly incorrect. You are bind mounting -v /var/lib/openshift/openshift.local.volumes:/var/lib/openshift/openshift.local.volumes, which is not right. It needs to be -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes.
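
That is, the volume flag in step 2 of comment 30 should read (everything else unchanged):

-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes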

I am able to successfully mount an nfs volume using the sample files you provided. The only slightly weird thing was that the first time through the sync loop for the pod, it failed to mount the nfs volume due to a timeout, but the second time through the loop, it succeeded.

I0401 14:01:10.840354   24982 nsenter_mount.go:114] nsenter Mounting 172.30.152.92:/ /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs nfs []
I0401 14:01:10.840380   24982 nsenter_mount.go:117] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t nfs 172.30.152.92:/ /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs]
[snip]
I0401 14:06:09.919607   24982 nsenter_mount.go:121] Output from mount command: mount.nfs: Connection timed out
I0401 14:06:09.919646   24982 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs]
I0401 14:06:09.929245   24982 nsenter_mount.go:187] IsLikelyNotMountPoint findmnt output: /
E0401 14:06:09.929417   24982 kubelet.go:1796] Unable to mount volumes for pod "nfs_default(304c0780-f812-11e5-baa4-001c42de2c3c)": exit status 32; skipping pod
E0401 14:06:09.929434   24982 pod_workers.go:138] Error syncing pod 304c0780-f812-11e5-baa4-001c42de2c3c, skipping: exit status 32

I didn't do anything else and just waited, and eventually the mount succeeded:

I0401 14:06:22.070743   24982 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs]
I0401 14:06:22.080091   24982 nsenter_mount.go:187] IsLikelyNotMountPoint findmnt output: /
I0401 14:06:22.080154   24982 nfs.go:167] NFS mount set up: /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs false <nil>
I0401 14:06:22.080384   24982 nsenter_mount.go:114] nsenter Mounting 172.30.152.92:/ /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs nfs []
I0401 14:06:22.080426   24982 nsenter_mount.go:117] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t nfs 172.30.152.92:/ /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs]
I0401 14:06:29.069550   24982 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs]
I0401 14:06:29.094197   24982 nsenter_mount.go:187] IsLikelyNotMountPoint findmnt output: /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs
I0401 14:06:29.094743   24982 nfs.go:167] NFS mount set up: /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~nfs/nfs true <nil>
I0401 14:06:29.095101   24982 nsenter_mount.go:174] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target --noheadings --target /var/lib/origin/openshift.local.volumes/pods/304c0780-f812-11e5-baa4-001c42de2c3c/volumes/kubernetes.io~secret/default-token-hzjgb]
I0401 14:06:29.106089   24982 controller.go:346] Pod nfs updated.


Could you please retest with the corrected origin start command?

Comment 33 Liang Xia 2016-04-05 02:48:42 UTC
Retested with the updated command; the pod can mount, read, and write to the NFS storage.

[root@dhcp-136-78 origin]# oc get pods
NAME      READY     STATUS    RESTARTS   AGE
nfs       1/1       Running   0          7m

[root@dhcp-136-78 origin]# oc describe pods nfs
Name:		nfs
Namespace:	default
Node:		dhcp-136-78.nay.redhat.com/10.66.136.206
Start Time:	Tue, 05 Apr 2016 02:37:28 +0000
Labels:		name=frontendhttp
Status:		Running
IP:		172.17.0.2
Controllers:	<none>
Containers:
  myfrontend:
    Container ID:	docker://8ac734d9704f2b7cf3eebb7901c3fb3b5c72f937e76edf1557b3f63858a67540
    Image:		aosqe/hello-openshift
    Image ID:		docker://sha256:caa46d03cf599cd2e98f40accd8256efa362e2212e70a903beb5b6380d2c461c
    Port:		80/TCP
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Tue, 05 Apr 2016 02:37:29 +0000
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True 
Volumes:
  pvol:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfsc
    ReadOnly:	false
  default-token-391kl:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-391kl
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath			Type		Reason	Message
  ---------	--------	-----	----					-------------			--------	------	-------
  8m		8m		1	{default-scheduler }							Normal		Scheduled	Successfully assigned nfs to dhcp-136-78.nay.redhat.com
  7m		7m		1	{kubelet dhcp-136-78.nay.redhat.com}	spec.containers{myfrontend}	Normal		Pulled	Container image "aosqe/hello-openshift" already present on machine
  7m		7m		1	{kubelet dhcp-136-78.nay.redhat.com}	spec.containers{myfrontend}	Normal		Created	Created container with docker id 8ac734d9704f
  7m		7m		1	{kubelet dhcp-136-78.nay.redhat.com}	spec.containers{myfrontend}	Normal		Started	Started container with docker id 8ac734d9704f


[root@dhcp-136-78 origin]# oc rsh nfs    
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-202308629-b8a017e0dbe57bb854f33334594ad8438c30d4286c57a0fe544a098bf774182e
                         98.3G     66.7M     93.2G   0% /
tmpfs                     7.6G         0      7.6G   0% /dev
tmpfs                     7.6G         0      7.6G   0% /sys/fs/cgroup
10.66.79.133:/jhou       17.5G      4.0G     13.4G  23% /mnt/nfs
/dev/mapper/rhel_dhcp--136--78-root
                         50.0G     22.4G     27.6G  45% /dev/termination-log
tmpfs                     7.6G     12.0K      7.6G   0% /var/run/secrets/kubernetes.io/serviceaccount
/dev/mapper/rhel_dhcp--136--78-root
                         50.0G     22.4G     27.6G  45% /etc/resolv.conf
/dev/mapper/rhel_dhcp--136--78-root
                         50.0G     22.4G     27.6G  45% /etc/hostname
/dev/mapper/rhel_dhcp--136--78-root
                         50.0G     22.4G     27.6G  45% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     7.6G         0      7.6G   0% /proc/kcore
tmpfs                     7.6G         0      7.6G   0% /proc/timer_stats
/ # ls /mnt/nfs/
file
/ # touch /mnt/nfs/myfile
/ # ls /mnt/nfs/
file    myfile

Comment 35 Wenqi He 2016-04-06 04:28:10 UTC
I have tested this again with the corrected path, and the pod runs well. Changing the status to VERIFIED.

