Bug 1339051 - AWS EBS Volume and GCE PD should be unmounted after the pod is deleted, containerised environment only
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: hchen
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2016-05-24 04:10 UTC by Weihua Meng
Modified: 2016-09-27 09:33 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-27 09:33:10 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1933 normal SHIPPED_LIVE Red Hat OpenShift Container Platform 3.3 Release Advisory 2016-09-27 13:24:36 UTC

Description Weihua Meng 2016-05-24 04:10:52 UTC
Description of problem:
When a pod with an attached AWS EBS volume is deleted, the volume should be unmounted from the host and become available for reuse. This does not happen in a containerised environment; an rpm-installed environment works fine.

Version-Release number of selected component (if applicable):
oc v3.2.0.44
kubernetes v1.2.0-36-g4a3f9c5

How reproducible:
Always

Steps to Reproduce:
1. Set up OpenShift in AWS with the cloud provider configured:
https://docs.openshift.org/latest/install_config/configuring_aws.html
2. Create a pod with an EBS volume:
pod-ebs.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: aws-web
spec:
  containers:
    - name: web
      image: aosqe/hello-openshift
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      volumeMounts:
        - name: html-volume
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: html-volume
      awsElasticBlockStore:
        volumeID: aws://us-east-1d/vol-xxxxxxxx
        fsType: ext4

3. Get the node name:
oc get pods -o wide
4. Delete the pod.
5. SSH to the node and check the mount state.
6. Check the volume state on AWS.
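Steps 5-6 amount to two checks. A rough sketch (the ssh/aws invocations are illustrative and assume AWS CLI access; the mount line and volume ID below are taken from the actual results pasted in this report):

```shell
# On a real cluster (illustrative; substitute your node and volume ID):
#   ssh <node> mount | grep kubernetes.io/aws-ebs                 # step 5
#   aws ec2 describe-volumes --volume-ids vol-5068e9f5 \
#       --query 'Volumes[0].State' --output text                  # step 6

# The same step-5 check run against the mount line from this report's
# actual results, so the grep itself is concrete:
mount_output='/dev/xvdbb on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-5068e9f5 type ext4 (rw,relatime,seclabel,data=ordered)'

# After pod deletion there should be NO lines under the aws-ebs plugin dir.
if printf '%s\n' "$mount_output" | grep -q 'kubernetes.io/aws-ebs'; then
    echo "volume still mounted (bug reproduced)"
else
    echo "volume unmounted (expected)"
fi
```

On an affected containerised node the grep still matches after the pod is gone, and `describe-volumes` reports "in-use" rather than "available".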

Actual results:
5. The volume is still mounted on the host:
/dev/xvdbb on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-5068e9f5 type ext4 (rw,relatime,seclabel,data=ordered)
tmpfs on /var/lib/origin/openshift.local.volumes/pods/53d1fc60-2157-11e6-8849-0efb0d21fb5d/volumes/kubernetes.io~secret/default-token-qwtgw type tmpfs (rw,relatime,rootcontext=system_u:object_r:svirt_sandbox_file_t:s0,seclabel)

6. The volume is in the "in-use" state.

Expected results:
5. The volume should no longer be mounted on the host.
6. The volume should be in the "available" state.

Additional info:
Works fine in an rpm-installed environment.

Comment 1 Chao Yang 2016-05-24 09:17:33 UTC
Some logs from /var/log/messages: using a dynamically provisioned EBS volume, I created a pod; after deleting the pod and the PVC, the PV went to Released and then Failed status.
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.455883       1 persistentvolume_claim_binder_controller.go:194] Synchronizing PersistentVolume[pv-aws-bx6cj], current phase: Released
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476849       1 persistentvolume_recycler_controller.go:149] PersistentVolume[pv-aws-bx6cj] retrying recycle after timeout
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476862       1 persistentvolume_recycler_controller.go:168] Reclaiming PersistentVolume[pv-aws-bx6cj]
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476868       1 persistentvolume_recycler_controller.go:271] Deleting PersistentVolume[pv-aws-bx6cj]
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476923       1 log_handler.go:33] AWS request: ec2 DeleteVolume
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.497749       1 reflector.go:366] /builddir/build/BUILD/atomic-openshift-git-0.a4463d9/_thirdpartyhacks/src/github.com/openshift/openshift-sdn/plugins/osdn/registry.go:448: Watch close - *api.Node total 60 items received
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.693880       1 aws_util.go:117] Error deleting EBS Disk volume aws://us-east-1d/vol-bc28a919: error deleting EBS volumes: VolumeInUse: Volume vol-bc28a919 is currently attached to i-afa01f35

Comment 2 hchen 2016-06-10 19:41:44 UTC
Is it the same as https://bugzilla.redhat.com/show_bug.cgi?id=1335293? The following log suggests so:


Error deleting EBS Disk volume aws://us-east-1d/vol-bc28a919: error deleting EBS volumes: VolumeInUse: Volume vol-bc28a919 is currently attached to i-afa01f35

Comment 3 Weihua Meng 2016-06-12 10:40:57 UTC
This bug concerns releasing the EBS volume, while bug 1335293 seems to be about allocating volumes. From the user's point of view they are different; the root cause could be the same, but may also differ.

Comment 6 Weihua Meng 2016-06-14 10:58:56 UTC
GCE PD has the same issue.

Comment 8 hchen 2016-06-14 17:57:07 UTC
A fix has been proposed to Kubernetes:
https://github.com/kubernetes/kubernetes/pull/27380

Comment 10 Troy Dawson 2016-07-22 19:50:49 UTC
This has been merged and is in OSE v3.3.0.9 or newer.

Comment 11 Weihua Meng 2016-07-24 15:19:24 UTC
Fixed.
openshift v3.3.0.9
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git

Set up containerized OpenShift and followed the reproduction steps; got the expected results:
5. The volume is not mounted on the host.
6. The volume is in the "available" state.

Comment 13 errata-xmlrpc 2016-09-27 09:33:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1933

