Bug 1313391 - Node of a pod using an NFS PVC successfully mounts the volume but immediately unmounts it.
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.1.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Assigned To: Bradley Childs
QA Contact: Jianwei Hou
Duplicates: 1314924
Blocks: 1267746
Reported: 2016-03-01 09:20 EST by Christophe Augello
Modified: 2016-05-12 12:30 EDT
CC: 12 users

Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-05-12 12:30:56 EDT
Description Christophe Augello 2016-03-01 09:20:05 EST
Description of problem:

When a pod that uses a persistent volume claim backed by an NFS-based persistent volume is started, the node mounts the NFS PV successfully, but immediately afterwards unmounts it.

Version-Release number of selected component (if applicable):
3.1.1

How reproducible:


Steps to Reproduce:
1. Deploy Jenkins with a persistent PV (NFS-backed PVC).

Actual results:

The node makes four mount attempts during the process:

  ===== /sbin/mount.nfs called at Fri Feb 19 16:54:00 CET 2016 =====
  ===== /sbin/mount.nfs exiting at Fri Feb 19 16:54:00 CET 2016 with status 0 ===== 

  ===== /sbin/mount.nfs called at Fri Feb 19 16:54:40 CET 2016 =====
  ===== /sbin/mount.nfs exiting at Fri Feb 19 16:54:45 CET 2016 with status 32 ===== 

  ===== /sbin/mount.nfs called at Fri Feb 19 16:54:45 CET 2016 =====
  ===== /sbin/mount.nfs exiting at Fri Feb 19 16:54:50 CET 2016 with status 0 ===== 

  ===== /sbin/mount.nfs called at Fri Feb 19 16:55:35 CET 2016 =====
  ===== /sbin/mount.nfs exiting at Fri Feb 19 16:55:40 CET 2016 with status 0 ===== 

All of them were made with the very same arguments:

  $ grep 'with arguments' exportfs__20160219171506.log  | cut -d: -f4- | sort -u
  nfs.example.com:/srv/nfs/pv0003 /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 -o rw
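The attempts can be summarized from the same wrapper log with a short awk pass (a sketch; the log path is hypothetical, and the sample lines are copied from the excerpts in this report):

```shell
# Sketch: number each mount.nfs attempt and extract its exit status from the
# wrapper log. The sample lines below are copied from the excerpts above;
# point the awk command at the real wrapper log instead.
cat > /tmp/mount_nfs_wrapper.log <<'EOF'
===== /sbin/mount.nfs called at Fri Feb 19 16:54:00 CET 2016 =====
===== /sbin/mount.nfs exiting at Fri Feb 19 16:54:00 CET 2016 with status 0 =====
===== /sbin/mount.nfs called at Fri Feb 19 16:54:40 CET 2016 =====
===== /sbin/mount.nfs exiting at Fri Feb 19 16:54:45 CET 2016 with status 32 =====
EOF

# One output line per attempt: attempt number and mount.nfs exit status.
awk '/called at/  {n++}
     /exiting at/ {print "attempt " n ": status " $(NF-1)}' /tmp/mount_nfs_wrapper.log
```

For the excerpt above this prints `attempt 1: status 0` and `attempt 2: status 32` (status 32 being the generic mount failure seen in the second attempt).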


1st mount attempt
-----------------

Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.201096  106648 config.go:383] Receiving a new pod "jenkins-3-3clo3_infra-test"

Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.211456  106648 volumes.go:109] Used volume plugin "kubernetes.io/persistent-claim" for jenkins-data
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.211512  106648 nfs.go:161] NFS mount set up: /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 false stat /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003: no such file or directory
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.211658  106648 mount_linux.go:97] Mounting nfs.example.com:/srv/nfs/pv0003 /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 nfs []

Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.312515  106648 manager.go:1720] Need to restart pod infra container for "jenkins-3-3clo3_infra-test" because it is not found
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.312612  106648 manager.go:1739] Container {Name:jenkins Image:registry.access.redhat.com/openshift3/jenkins-1-rhel7:latest Command:[] Args:[] WorkingDir: Ports:[] Env:[{Name:JENKINS_PASSWORD Value:password ValueFrom:<nil>}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:jenkins-data ReadOnly:false MountPath:/var/lib/jenkins} {Name:default-token-9tgmb ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:0xc20b54fa70 Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.312701  106648 manager.go:1825] Got container changes for pod "jenkins-3-3clo3_infra-test": {StartInfraContainer:true InfraContainerId: ContainersToStart:map[0:{}] ContainersToKeep:map[]}
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.312739  106648 manager.go:1831] Killing Infra Container for "jenkins-3-3clo3_infra-test", will start new one
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.312761  106648 manager.go:1864] Creating pod infra container for "jenkins-3-3clo3_infra-test"
Feb 19 16:54:00 node.example.com docker[106610]: time="2016-02-19T16:54:00.315842950+01:00" level=info msg="GET /images/openshift3/ose-pod:v3.1.1.6/json"
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.331824  106648 config.go:253] Setting pods for source api
Feb 19 16:54:00 node.example.com atomic-openshift-node[106648]: I0219 16:54:00.333436  106648 manager.go:306] Status for pod "jenkins-3-3clo3_infra-test" updated successfully

Feb 19 16:54:02 node.example.com atomic-openshift-node[106648]: I0219 16:54:02.332311  106648 volumes.go:205] Making a volume.Cleaner for volume kubernetes.io~nfs/pv0003 of pod fdcd0853-d720-11e5-b3dc-005056bf24e7
Feb 19 16:54:02 node.example.com atomic-openshift-node[106648]: I0219 16:54:02.332419  106648 volumes.go:241] Used volume plugin "kubernetes.io/nfs" for fdcd0853-d720-11e5-b3dc-005056bf24e7/kubernetes.io~nfs
Feb 19 16:54:02 node.example.com atomic-openshift-node[106648]: I0219 16:54:02.332458  106648 volumes.go:205] Making a volume.Cleaner for volume kubernetes.io~secret/default-token-9tgmb of pod fdcd0853-d720-11e5-b3dc-005056bf24e7
Feb 19 16:54:02 node.example.com atomic-openshift-node[106648]: I0219 16:54:02.332485  106648 volumes.go:241] Used volume plugin "kubernetes.io/secret" for fdcd0853-d720-11e5-b3dc-005056bf24e7/kubernetes.io~secret
Feb 19 16:54:02 node.example.com atomic-openshift-node[106648]: W0219 16:54:02.332512  106648 kubelet.go:1750] Orphaned volume "fdcd0853-d720-11e5-b3dc-005056bf24e7/pv0003" found, tearing down volume
Feb 19 16:54:02 node.example.com atomic-openshift-node[106648]: I0219 16:54:02.334270  106648 mount_linux.go:129] Unmounting /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003


2nd mount attempt
-----------------

Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: E0219 16:54:40.369663  106648 manager.go:1867] Failed to create pod infra container: impossible: cannot find the mounted volumes for pod "jenkins-3-3clo3_infra-test"; Skipping pod "jenkins-3-3clo3_infra-test"
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.369873  106648 kubelet.go:2836] Generating status for "jenkins-3-3clo3_infra-test"
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.370143  106648 server.go:736] Event(api.ObjectReference{Kind:"Pod", Namespace:"infra-test", Name:"jenkins-3-3clo3", UID:"fdcd0853-d720-11e5-b3dc-005056bf24e7", APIVersion:"v1", ResourceVersion:"3326540", FieldPath:"implicitly required container POD"}): reason: 'Pulled' Container image "openshift3/ose-pod:v3.1.1.6" already present on machine
Feb 19 16:54:40 node.example.com docker[106610]: time="2016-02-19T16:54:40.372222828+01:00" level=info msg="GET /containers/json?all=1"
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.375659  106648 kubelet.go:2747] pod waiting > 0, pending
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: E0219 16:54:40.375745  106648 pod_workers.go:113] Error syncing pod fdcd0853-d720-11e5-b3dc-005056bf24e7, skipping: impossible: cannot find the mounted volumes for pod "jenkins-3-3clo3_infra-test"
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.376133  106648 server.go:736] Event(api.ObjectReference{Kind:"Pod", Namespace:"infra-test", Name:"jenkins-3-3clo3", UID:"fdcd0853-d720-11e5-b3dc-005056bf24e7", APIVersion:"v1", ResourceVersion:"3326540", FieldPath:""}): reason: 'FailedSync' Error syncing pod, skipping: impossible: cannot find the mounted volumes for pod "jenkins-3-3clo3_infra-test"
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.427587  106648 volumes.go:109] Used volume plugin "kubernetes.io/nfs" for pv0003
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.427622  106648 volumes.go:109] Used volume plugin "kubernetes.io/persistent-claim" for jenkins-data
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.427686  106648 nfs.go:161] NFS mount set up: /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 false stat /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003: no such file or directory
Feb 19 16:54:40 node.example.com atomic-openshift-node[106648]: I0219 16:54:40.428209  106648 mount_linux.go:97] Mounting nfs.example.com:/srv/nfs/pv0003 /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 nfs []

Feb 19 16:54:42 node.example.com atomic-openshift-node[106648]: I0219 16:54:42.435945  106648 volumes.go:205] Making a volume.Cleaner for volume kubernetes.io~nfs/pv0003 of pod fdcd0853-d720-11e5-b3dc-005056bf24e7
Feb 19 16:54:42 node.example.com atomic-openshift-node[106648]: I0219 16:54:42.435995  106648 volumes.go:241] Used volume plugin "kubernetes.io/nfs" for fdcd0853-d720-11e5-b3dc-005056bf24e7/kubernetes.io~nfs
Feb 19 16:54:42 node.example.com atomic-openshift-node[106648]: I0219 16:54:42.436015  106648 volumes.go:205] Making a volume.Cleaner for volume kubernetes.io~secret/default-token-9tgmb of pod fdcd0853-d720-11e5-b3dc-005056bf24e7
Feb 19 16:54:42 node.example.com atomic-openshift-node[106648]: I0219 16:54:42.436027  106648 volumes.go:241] Used volume plugin "kubernetes.io/secret" for fdcd0853-d720-11e5-b3dc-005056bf24e7/kubernetes.io~secret
Feb 19 16:54:42 node.example.com atomic-openshift-node[106648]: W0219 16:54:42.436041  106648 kubelet.go:1750] Orphaned volume "fdcd0853-d720-11e5-b3dc-005056bf24e7/pv0003" found, tearing down volume
Feb 19 16:54:42 node.example.com atomic-openshift-node[106648]: I0219 16:54:42.451130  106648 manager.go:315] Global Housekeeping(1455897282) took 123.5603ms


Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: E0219 16:54:45.625556  106648 nfs.go:178] IsLikelyNotMountPoint check failed: stat /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003: no such file or directory
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: E0219 16:54:45.625621  106648 kubelet.go:1521] Unable to mount volumes for pod "jenkins-3-3clo3_infra-test": Mount failed: exit status 32
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: Mounting arguments: nfs.example.com:/srv/nfs/pv0003 /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 nfs []
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: Output: mount.nfs.old: access denied by server while mounting nfs.example.com:/srv/nfs/pv0003
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: ; skipping pod
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: I0219 16:54:45.625644  106648 kubelet.go:2836] Generating status for "jenkins-3-3clo3_infra-test"
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: I0219 16:54:45.625987  106648 server.go:736] Event(api.ObjectReference{Kind:"Pod", Namespace:"infra-test", Name:"jenkins-3-3clo3", UID:"fdcd0853-d720-11e5-b3dc-005056bf24e7", APIVersion:"v1", ResourceVersion:"3326540", FieldPath:""}): reason: 'FailedMount' Unable to mount volumes for pod "jenkins-3-3clo3_infra-test": Mount failed: exit status 32
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: Mounting arguments: nfs.example.com:/srv/nfs/pv0003 /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 nfs []
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: Output: mount.nfs.old: access denied by server while mounting nfs.example.com:/srv/nfs/pv0003


3rd mount attempt
-----------------

Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: I0219 16:54:45.653980  106648 volumes.go:109] Used volume plugin "kubernetes.io/nfs" for pv0003
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: I0219 16:54:45.654011  106648 volumes.go:109] Used volume plugin "kubernetes.io/persistent-claim" for jenkins-data
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: I0219 16:54:45.654055  106648 nfs.go:161] NFS mount set up: /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 false stat /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003: no such file or directory
Feb 19 16:54:45 node.example.com atomic-openshift-node[106648]: I0219 16:54:45.654576  106648 mount_linux.go:97] Mounting nfs.example.com:/srv/nfs/pv0003 /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003 nfs []

Feb 19 16:54:50 node.example.com atomic-openshift-node[106648]: I0219 16:54:50.775648  106648 manager.go:1720] Need to restart pod infra container for "jenkins-3-3clo3_infra-test" because it is not found
Feb 19 16:54:50 node.example.com atomic-openshift-node[106648]: I0219 16:54:50.775728  106648 manager.go:1739] Container {Name:jenkins Image:registry.access.redhat.com/openshift3/jenkins-1-rhel7:latest Command:[] Args:[] WorkingDir: Ports:[] Env:[{Name:JENKINS_PASSWORD Value:password ValueFrom:<nil>}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:jenkins-data ReadOnly:false MountPath:/var/lib/jenkins} {Name:default-token-9tgmb ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:0xc20b54fa70 Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Feb 19 16:54:50 node.example.com atomic-openshift-node[106648]: I0219 16:54:50.775806  106648 manager.go:1825] Got container changes for pod "jenkins-3-3clo3_infra-test": {StartInfraContainer:true InfraContainerId: ContainersToStart:map[0:{}] ContainersToKeep:map[]}
Feb 19 16:54:50 node.example.com atomic-openshift-node[106648]: I0219 16:54:50.775841  106648 manager.go:1831] Killing Infra Container for "jenkins-3-3clo3_infra-test", will start new one
Feb 19 16:54:50 node.example.com atomic-openshift-node[106648]: I0219 16:54:50.775856  106648 manager.go:1864] Creating pod infra container for "jenkins-3-3clo3_infra-test"
Feb 19 16:54:50 node.example.com docker[106610]: time="2016-02-19T16:54:50.779163041+01:00" level=info msg="GET /images/openshift3/ose-pod:v3.1.1.6/json"
Feb 19 16:54:52 node.example.com docker[106610]: time="2016-02-19T16:54:52.327578635+01:00" level=info msg="GET /containers/json"
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: I0219 16:54:52.327133  106648 kubelet.go:2183] SyncLoop (periodic sync)
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: I0219 16:54:52.327241  106648 kubelet.go:2149] SyncLoop (housekeeping)
Feb 19 16:54:52 node.example.com docker[106610]: time="2016-02-19T16:54:52.330318020+01:00" level=info msg="GET /containers/json"
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: I0219 16:54:52.332242  106648 volumes.go:205] Making a volume.Cleaner for volume kubernetes.io~nfs/pv0003 of pod fdcd0853-d720-11e5-b3dc-005056bf24e7
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: I0219 16:54:52.332281  106648 volumes.go:241] Used volume plugin "kubernetes.io/nfs" for fdcd0853-d720-11e5-b3dc-005056bf24e7/kubernetes.io~nfs
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: I0219 16:54:52.332297  106648 volumes.go:205] Making a volume.Cleaner for volume kubernetes.io~secret/default-token-9tgmb of pod fdcd0853-d720-11e5-b3dc-005056bf24e7
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: I0219 16:54:52.332310  106648 volumes.go:241] Used volume plugin "kubernetes.io/secret" for fdcd0853-d720-11e5-b3dc-005056bf24e7/kubernetes.io~secret
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: W0219 16:54:52.332323  106648 kubelet.go:1750] Orphaned volume "fdcd0853-d720-11e5-b3dc-005056bf24e7/pv0003" found, tearing down volume
Feb 19 16:54:52 node.example.com atomic-openshift-node[106648]: I0219 16:54:52.333654  106648 mount_linux.go:129] Unmounting /var/lib/origin/openshift.local.volumes/pods/fdcd0853-d720-11e5-b3dc-005056bf24e7/volumes/kubernetes.io~nfs/pv0003

Expected results:



Additional info:

PV details:
~~~
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /srv/nfs/pv0003
    server: nfs.example.com
  persistentVolumeReclaimPolicy: Recycle
~~~
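For reference, a claim binding such a PV might look like the following (a sketch only; the claim name is an assumption based on the pod's `jenkins-data` volume, and is not taken from the report):

~~~
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data        # assumed name, matching the pod's volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
~~~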


Exports:
~~~
/srv/nfs/pv0001         10.212.10.0/23(rw,all_squash)
/srv/nfs/pv0002         10.212.10.0/23(rw,all_squash)
/srv/nfs/pv0003         10.212.10.0/23(rw,all_squash)
~~~

Content of the PV:
~~~
[root@CPYI0019 ~]# mount nfs.example.com:/srv/nfs/pv0003 /tmp/test
[root@CPYI0019 ~]# ls -l /tmp/test
total 68
-rw-r--r--.  1 nfsnobody nfsnobody    0 Feb 10 17:13 configured
-rw-r--r--.  1 nfsnobody nfsnobody 2564 Feb 10 17:13 config.xml
-rw-r--r--.  1 nfsnobody nfsnobody 2577 Feb 10 17:13 config.xml.tpl
-rw-r--r--.  1 nfsnobody nfsnobody 1752 Feb 11 10:35 Download metadata.log
-rw-r--r--.  1 nfsnobody nfsnobody  159 Feb 11 10:34 hudson.model.UpdateCenter.xml
-rw-------.  1 nfsnobody nfsnobody 1680 Feb 10 17:16 identity.key.enc
drwxr-xr-x.  3 nfsnobody nfsnobody 4096 Feb 10 17:13 jobs
-rw-r--r--.  1 nfsnobody nfsnobody  907 Feb 11 10:34 nodeMonitors.xml
drwxr-xr-x.  2 nfsnobody nfsnobody 4096 Feb 10 17:16 nodes
-rw-r--r--.  1 nfsnobody nfsnobody   72 Feb 11 10:34 password
drwxr-xr-x. 22 nfsnobody nfsnobody 4096 Feb 10 17:16 plugins
-rw-r--r--.  1 nfsnobody nfsnobody  129 Feb 11 10:37 queue.xml
-rw-r--r--.  1 nfsnobody nfsnobody  129 Feb 11 09:56 queue.xml.bak
-rw-r--r--.  1 nfsnobody nfsnobody   64 Feb 10 17:14 secret.key
-rw-r--r--.  1 nfsnobody nfsnobody    0 Feb 10 17:14 secret.key.not-so-secret
drwxr-xr-x.  4 nfsnobody nfsnobody 4096 Feb 10 17:18 secrets
drwxr-xr-x.  2 nfsnobody nfsnobody 4096 Feb 10 17:16 userContent
drwxr-xr-x.  3 nfsnobody nfsnobody 4096 Feb 10 17:13 users
drwxr-xr-x.  9 nfsnobody nfsnobody 4096 Feb 10 17:14 war
~~~
Comment 1 Iuliia Ievstignieieva 2016-03-02 02:55:40 EST
Hi all,

Please prioritize this bug. It is quite urgent for the customer, and the temporary workaround does not seem like a viable option for them.


Thanks,


Julia

Team Lead

GSS EMEA
Comment 3 Bradley Childs 2016-03-02 18:17:57 EST
Can you provide more details about the user and SCC settings being used here?
Comment 4 Christophe Augello 2016-03-03 04:43:48 EST
@Bradley

Defaults are used.
Comment 6 Bradley Childs 2016-03-04 11:10:46 EST
It looks like this issue:

https://github.com/kubernetes/kubernetes/issues/20734

Which is fixed upstream and will be in 3.2 but is not in 3.1.
Comment 7 hchen 2016-03-04 15:03:31 EST
Kubernetes #19600 is ported to OpenShift Origin 1.1.3:
https://github.com/openshift/origin/commit/3aa75a49ff71a38dcb128d5165d417afc4758568
Comment 8 hchen 2016-03-04 15:24:01 EST
It is also in OSE v3.1.1.901:
https://github.com/openshift/ose/commit/3aa75a49ff71a38dcb128d5165d417afc4758568
Comment 9 Troy Dawson 2016-03-07 18:24:54 EST
Should be in OSE v3.1.1.911 which was pushed to QE today.
Comment 10 Josep 'Pep' Turro Mauri 2016-03-08 04:43:33 EST
(In reply to Bradley Childs from comment #6)
> It looks like this issue:
> 
> https://github.com/kubernetes/kubernetes/issues/20734

For reference: also tracked in Origin as bug 1298284
Comment 11 Josep 'Pep' Turro Mauri 2016-03-08 10:42:46 EST
*** Bug 1314924 has been marked as a duplicate of this bug. ***
Comment 12 Jianwei Hou 2016-03-09 03:07:28 EST
Verified on
openshift v3.1.1.911
kubernetes v1.2.0-alpha.7-703-gbc4550d
etcd 2.2.5

According to bug 1298284, verification steps are:

1. create a PV and a claim (I use Cinder volumes, but I saw it on AWS and GCE too)
2. create a pod that uses the claim
3. In a loop:
  3.1 create the pod
  3.2 wait until it's running
  3.3 run 'kubectl describe pods'
  3.4 delete it
  3.5 wait until the volume is unmounted and detached from the node (this is important!)

At step 3.3, I have not seen the pod being restarted.
I've written a script to repeat the test 20 times and cannot reproduce the issue. This issue is fixed.
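The loop in the steps above can be sketched as a small shell script (hypothetical: the pod name, manifest path, and settle delay are assumptions, and the `kubectl` invocations are illustrative rather than taken from the report):

```shell
#!/bin/sh
# Sketch of the verification loop from the steps above.
# POD and SETTLE are assumptions; adjust for the actual environment.
POD=${POD:-jenkins-test}
SETTLE=${SETTLE:-30}    # seconds to wait for unmount/detach after delete

repeat_pod_test() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        kubectl create -f pod.yaml                        # 3.1 create the pod
        until kubectl get pod "$POD" -o jsonpath='{.status.phase}' \
                | grep -q Running; do                     # 3.2 wait until Running
            sleep 5
        done
        kubectl describe pod "$POD" | grep -i restart     # 3.3 look for restarts
        kubectl delete pod "$POD"                         # 3.4 delete it
        sleep "$SETTLE"                                   # 3.5 wait for unmount/detach
        i=$((i + 1))
    done
}
```

Step 3.5 is the important part: starting the next iteration before the volume is unmounted and detached is exactly what exposes the orphaned-volume teardown race.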
Comment 13 Josep 'Pep' Turro Mauri 2016-03-31 04:57:15 EDT
(In reply to Bradley Childs from comment #6)
> It looks like this issue:
> 
> https://github.com/kubernetes/kubernetes/issues/20734
> 
> Which is fixed upstream and will be in 3.2 but is not in 3.1.

Apparently this was backported to 3.1 via bug 1318472.
Comment 15 errata-xmlrpc 2016-05-12 12:30:56 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:1064
