Description of problem:
When a pod with an attached AWS EBS volume is deleted, the volume should be unmounted from the host and become available for reuse. This does not happen in a containerized environment; an RPM-install environment works fine.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. set up OpenShift in AWS with the cloud provider configured.
2. create a pod with an EBS volume.
(the pod spec defines a container named web that mounts a volume named html-volume)
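A minimal sketch of such a pod spec, reconstructed for illustration (the pod name, image, and volume ID are placeholders, not values from the original report):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ebs-pod              # illustrative name
spec:
  containers:
  - name: web
    image: nginx             # illustrative image
    volumeMounts:
    - name: html-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html-volume
    awsElasticBlockStore:
      volumeID: vol-xxxxxxxx # placeholder; use a real EBS volume ID
      fsType: ext4
```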
3. get the node name
oc get pods -o wide
4. delete the pod
5. ssh to the node and check the mount state
6. check the volume state on AWS
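A hedged sketch of the checks in steps 5 and 6: on the node, `mount | grep aws-ebs` lists any EBS mounts left behind. The snippet below only demonstrates pulling the volume ID out of such a mount line (the sample line is taken from the actual results; on a real node, run `mount` itself):

```shell
# Sample mount line captured from the node (step 5); on a real node use:
#   mount | grep aws-ebs
sample='/dev/xvdbb on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-5068e9f5 type ext4 (rw,relatime,seclabel,data=ordered)'

# Extract the EBS volume ID so its state can be checked on AWS (step 6),
# e.g. with: aws ec2 describe-volumes --volume-ids vol-5068e9f5
echo "$sample" | grep -o 'vol-[0-9a-f]*'
```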
Actual results:
5. the volume is still mounted on the host:
/dev/xvdbb on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-5068e9f5 type ext4 (rw,relatime,seclabel,data=ordered)
tmpfs on /var/lib/origin/openshift.local.volumes/pods/53d1fc60-2157-11e6-8849-0efb0d21fb5d/volumes/kubernetes.io~secret/default-token-qwtgw type tmpfs (rw,relatime,rootcontext=system_u:object_r:svirt_sandbox_file_t:s0,seclabel)
6. the volume is in the in-use state on AWS
Expected results:
5. the volume should not be mounted on the host.
6. the volume is in the available state.
The same steps work fine in an RPM-install environment.
Some logs from /var/log/messages when using a dynamically provisioned EBS volume: after creating a pod, then deleting the pod and the PVC, the PV went to Released and then Failed status.
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.455883 1 persistentvolume_claim_binder_controller.go:194] Synchronizing PersistentVolume[pv-aws-bx6cj], current phase: Released
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476849 1 persistentvolume_recycler_controller.go:149] PersistentVolume[pv-aws-bx6cj] retrying recycle after timeout
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476862 1 persistentvolume_recycler_controller.go:168] Reclaiming PersistentVolume[pv-aws-bx6cj]
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476868 1 persistentvolume_recycler_controller.go:271] Deleting PersistentVolume[pv-aws-bx6cj]
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.476923 1 log_handler.go:33] AWS request: ec2 DeleteVolume
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.497749 1 reflector.go:366] /builddir/build/BUILD/atomic-openshift-git-0.a4463d9/_thirdpartyhacks/src/github.com/openshift/openshift-sdn/plugins/osdn/registry.go:448: Watch close - *api.Node total 60 items received
May 24 01:34:36 ip-172-18-8-74 docker: I0524 01:34:36.693880 1 aws_util.go:117] Error deleting EBS Disk volume aws://us-east-1d/vol-bc28a919: error deleting EBS volumes: VolumeInUse: Volume vol-bc28a919 is currently attached to i-afa01f35
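For context on the dynamically provisioned volume in the log above: at that time, dynamic provisioning was requested through the alpha annotation on the PVC (pre-StorageClass). A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim            # illustrative name
  annotations:
    # alpha-era dynamic provisioning annotation (pre-StorageClass)
    volume.alpha.kubernetes.io/storage-class: "foo"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```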
Is it the same as https://bugzilla.redhat.com/show_bug.cgi?id=1335293? The following log line suggests so:
Error deleting EBS Disk volume aws://us-east-1d/vol-bc28a919: error deleting EBS volumes: VolumeInUse: Volume vol-bc28a919 is currently attached to i-afa01f35
This bug concerns releasing the EBS volume, while bug 1335293 seems to concern volume allocation problems. From the user's point of view they are different; the root cause could be the same, but may also differ.
GCE PD has the same issue. A fix has been proposed to Kubernetes; it has been merged and is in OSE v220.127.116.11 or newer.
Verified: set up containerized OpenShift and followed the reproduction steps. Got the expected results:
5. the volume is not mounted on the host.
6. the volume is in the available state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.