Bug 1471630 - [vSphere][containerized] VMDK not unmounted after deleting Pod
Summary: [vSphere][containerized] VMDK not unmounted after deleting Pod
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.7.0
Assignee: Jan Safranek
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks: 1473338
 
Reported: 2017-07-17 06:32 UTC by Jianwei Hou
Modified: 2017-11-28 22:01 UTC (History)
4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1473338 (view as bug list)
Environment:
Last Closed: 2017-11-28 22:01:28 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:3188 0 normal SHIPPED_LIVE Moderate: Red Hat OpenShift Container Platform 3.7 security, bug, and enhancement update 2017-11-29 02:34:54 UTC

Description Jianwei Hou 2017-07-17 06:32:19 UTC
Description of problem:
On containerized OCP 3.6 on vSphere, the VMDK cannot be unmounted from the node when the Pod is deleted.
On RPM-installed OCP this is not reproducible; the unmount is successful and immediate.

Version-Release number of selected component (if applicable):
openshift v3.6.140
kubernetes v1.6.1+5115d708d7
etcd 3.2.1

How reproducible:
Always

Steps to Reproduce:
1. Setup containerized OCP 3.6 on vSphere.
2. Ensure each VM is configured with 'disk.enableUUID=1', and configure the cloud provider
3. Create StorageClass, PVC, Pod
4. Delete Pod
5. Verify the VMDK is unmounted from the node

Actual results:
After step 5, the VMDK is still mounted on the node:
[root@ocp36 ~]# mount|grep vsphere
/dev/sdb on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk type ext4 (rw,relatime,seclabel,data=ordered)

Expected results:
The VMDK should be unmounted.

Master Log:

Node Log (of failed PODs):
Jul 17 14:26:52 ocp36 atomic-openshift-node: I0717 14:26:52.741075   24244 nsenter_mount.go:202] IsLikelyNotMountPoint: /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk is not a mount point
Jul 17 14:26:52 ocp36 atomic-openshift-node: W0717 14:26:52.741086   24244 util.go:85] Warning: "/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk" is not a mountpoint, deleting
Jul 17 14:26:52 ocp36 atomic-openshift-node: E0717 14:26:52.741183   24244 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/vsphere-volume/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk\"" failed. No retries permitted until 2017-07-17 14:28:52.741137521 +0800 CST (durationBeforeRetry 2m0s). Error: UnmountDevice failed for volume "kubernetes.io/vsphere-volume/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk" (spec.Name: "pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82") with: remove /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk: device or resource busy
Jul 17 14:26:52 ocp36 journal: I0717 14:26:52.741075   24244 nsenter_mount.go:202] IsLikelyNotMountPoint: /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk is not a mount point
Jul 17 14:26:52 ocp36 journal: W0717 14:26:52.741086   24244 util.go:85] Warning: "/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk" is not a mountpoint, deleting
Jul 17 14:26:52 ocp36 journal: E0717 14:26:52.741183   24244 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/vsphere-volume/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk\"" failed. No retries permitted until 2017-07-17 14:28:52.741137521 +0800 CST (durationBeforeRetry 2m0s). Error: UnmountDevice failed for volume "kubernetes.io/vsphere-volume/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk" (spec.Name: "pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82") with: remove /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk: device or resource busy



PV Dump:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: vsphere-volume-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume
  creationTimestamp: null
  name: pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: c4
    namespace: jhou
    resourceVersion: "105830"
    uid: 0a6b27db-6ab5-11e7-8f64-0050569f1b82
  persistentVolumeReclaimPolicy: Delete
  storageClassName: vsphere-thin
  vsphereVolume:
    fsType: ext4
    volumePath: '[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk'

PVC Dump:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: vsphere-thin
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
  creationTimestamp: null
  name: c4
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82

StorageClass Dump (if StorageClass used by PV/PVC):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: null
  name: vsphere-thin
parameters:
  diskformat: thin
provisioner: kubernetes.io/vsphere-volume
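For completeness, a minimal Pod of the kind used in step 3 might look like this. This is an illustrative sketch, not the Pod actually used in the report; only the claim name `c4` and namespace `jhou` come from the dumps above, and the image and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vsphere-test        # assumed name
  namespace: jhou
spec:
  containers:
  - name: busybox
    image: busybox          # assumed image
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: vol
      mountPath: /mnt/vsphere   # assumed mount path
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: c4             # from the PVC dump above
```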

Additional info:

Comment 1 Jan Safranek 2017-07-18 08:13:26 UTC
Digging into the node:

* /proc/mounts says that /dev/sdb is mounted:

/dev/sdb /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1]\040kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk ext4 rw,seclabel,relatime,data=ordered 0 0

Notice a space (\040) in the mount path. There is a directory named "[datastore1] kubevols".
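The \040 escaping can be decoded mechanically: the kernel octal-escapes spaces, tabs, newlines, and backslashes in /proc/mounts fields. A minimal Go sketch of that unescaping (illustrative only, not the Kubernetes implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeOctal decodes the \040-style octal escapes that the kernel
// uses in /proc/mounts for characters such as space, tab, and newline.
func unescapeOctal(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		// A backslash followed by three octal digits is one escaped byte.
		if s[i] == '\\' && i+3 < len(s) {
			if n, err := strconv.ParseUint(s[i+1:i+4], 8, 8); err == nil {
				b.WriteByte(byte(n))
				i += 3
				continue
			}
		}
		b.WriteByte(s[i])
	}
	return b.String()
}

func main() {
	// The mount path from /proc/mounts above, with \040 for the space.
	fmt.Println(unescapeOctal(`/mounts/[datastore1]\040kubevols`))
	// prints "/mounts/[datastore1] kubevols"
}
```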


* IsLikelyNotMountPoint thinks that it's not a mount point:

IsLikelyNotMountPoint findmnt output for path /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk: /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1]:

IsLikelyNotMountPoint: /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/vsphere-volume/mounts/[datastore1] kubevols/kubernetes-dynamic-pvc-0a6b27db-6ab5-11e7-8f64-0050569f1b82.vmdk is not a mount point


Notice that the findmnt output is cut at the first space -> a bug in the Kubernetes NsenterMounter.
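The failure mode can be reproduced in isolation. A minimal Go sketch (function names are mine, not the actual NsenterMounter code): splitting the findmnt output on whitespace truncates any target containing a space, while comparing the whole trimmed output line survives embedded spaces:

```go
package main

import (
	"fmt"
	"strings"
)

// isMountPointBroken mimics the bug: the findmnt output is split on
// whitespace and only the first field is compared, so a target such as
// ".../mounts/[datastore1] kubevols/foo.vmdk" is cut at the space and
// never matches.
func isMountPointBroken(findmntOut, target string) bool {
	fields := strings.Fields(findmntOut)
	return len(fields) > 0 && fields[0] == target
}

// isMountPointFixed compares the whole trimmed output line instead,
// which is robust to spaces in the mount target.
func isMountPointFixed(findmntOut, target string) bool {
	return strings.TrimSpace(findmntOut) == target
}

func main() {
	target := "/mounts/[datastore1] kubevols/pvc.vmdk"
	out := target + "\n" // one output line naming the mount target
	fmt.Println(isMountPointBroken(out, target)) // false: cut at the space
	fmt.Println(isMountPointFixed(out, target))  // true
}
```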

Comment 2 Jan Safranek 2017-07-18 11:56:59 UTC
Posted a PR upstream: https://github.com/kubernetes/kubernetes/pull/49111

Comment 3 Eric Paris 2017-07-20 14:13:52 UTC
moved back to ASSIGNED until we have a PR against origin/master (for 3.7)

Comment 4 Eric Paris 2017-07-20 14:29:56 UTC
https://github.com/openshift/origin/pull/15371

Comment 6 Jianwei Hou 2017-09-26 10:36:55 UTC
Verified on v3.7.0-0.127.0

Volume is immediately unmounted after Pod is deleted on containerized OCP.

Comment 9 errata-xmlrpc 2017-11-28 22:01:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188

