Description of problem:
Failed to delete a dynamically provisioned PV after deleting the pod and PVC.

Version-Release number of selected component (if applicable):
openshift v3.1.0.4
kubernetes v1.1.0-origin-1107-g4c8e6f4
etcd 2.1.2

How reproducible:
80%

Steps to Reproduce:
1. Create a PVC:

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "foo"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    }
  }
}

2. After the PV and PVC are bound, create a pod:

kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: jhou/hello-openshift
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/tmp"
          name: aws
  volumes:
    - name: aws
      persistentVolumeClaim:
        claimName: claim1

3. Delete the pod and the PVC, then check the PV status. The PV is Failed.
4. Check the volume status from the provider console. In the AWS web console, the volume is 'available'.

Actual results:
The PV is in Failed status:

pv-aws-0pcyt   <none>   3Gi   RWO   Failed   default/claim1   35m

Expected results:
The PV should be successfully released and deleted. The volume should also be deleted from its provider.

Additional info:
1. [root@ip-172-18-6-204 ec2-user]# oc describe pv pv-aws-0pcyt
Name:            pv-aws-0pcyt
Labels:          <none>
Status:          Failed
Claim:           default/claim1
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        3Gi
Message:         Deletion error: error delete EBS volumes: VolumeInUse: Volume vol-18c790e5 is currently attached to i-5417fbe7
                 status code: 400, request id:
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://us-east-1d/vol-18c790e5
    FSType:     ext4
    Partition:  0
    ReadOnly:   false

2. This issue is also reproducible with Cinder.
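The VolumeInUse error above can be reproduced in miniature. The sketch below is illustrative only: `attached_volumes` and `delete_ebs_volume` are stand-in shell constructs, not real AWS CLI calls, but they mimic how a delete issued while the pod still holds the volume attached fails in the same way the controller's delete does.

```shell
# Stand-ins for the cloud API; names here are illustrative, not real AWS calls.
attached_volumes="vol-18c790e5"   # pod still terminating -> volume still attached

delete_ebs_volume() {
  # Refuse to delete while the volume is attached, mimicking AWS VolumeInUse.
  case " $attached_volumes " in
    *" $1 "*) echo "VolumeInUse: Volume $1 is currently attached" >&2; return 1 ;;
    *)        echo "deleted $1"; return 0 ;;
  esac
}

delete_ebs_volume vol-18c790e5 2>/dev/null || echo "deletion failed, PV marked Failed"
```

Once the pod is fully gone (the volume drops out of `attached_volumes`), the same delete call succeeds, which matches the observation that the volume later shows as 'available' in the AWS console.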
I think we hit a race here: the pod is still running (or is being slowly deleted) at the point when the volume controller deletes the volume. We need some way to retry the volume deletion if the first attempt does not succeed, or to synchronize pod and PVC deletion.
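The retry idea could look something like the sketch below. This is a rough illustration only, not the controller's actual code: `delete_volume` is a placeholder for the provider's delete call, and the attempt count and intervals are made up. It simply retries with exponential backoff until the detach has completed and the delete goes through.

```shell
# Hedged sketch: retry volume deletion with exponential backoff.
# `delete_volume` is a placeholder for the real provider API call.
delete_with_retry() {
  vol="$1"; max_attempts="${2:-5}"; delay="${3:-1}"; attempt=1
  while ! delete_volume "$vol"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up on $vol after $max_attempts attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))      # back off: 1s, 2s, 4s, ...
    attempt=$((attempt + 1))
  done
}

# Demo stub: fails twice (volume still attached), then succeeds.
tries=0
delete_volume() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }

delete_with_retry vol-18c790e5 5 0 && echo "volume deleted after $tries attempts"
```

Synchronizing pod and PVC deletion would avoid the wasted retries, but a retry loop also covers the case where the detach itself is slow on the provider side.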
> Expected results:
> pv should be successfully released and deleted. The volume should also be deleted from its provider

Just to clarify the use case... The PV should be deleted (that is the bug), but not the actual volume and data. If you have a PV pointing at an AWS volume, the PV should be deleted but not the physical AWS volume and its data.
Thanks. Will update the test case results
(In reply to Bradley Childs from comment #2)
> > Expected results:
> > pv should be successfully released and deleted. The volume should also be deleted from its provider
>
> Just to clarify the use case... The PV should be deleted (bug) but not the
> actual volume & data. If you have a PV pointing at an AWS volume the PV
> should delete but not the physical AWS volume and data.

No, dynamically created AWS EBS volumes _should_ be deleted when the user deletes the claim that created them. IMO that's the point of dynamic provisioning: create and _delete_ volumes on demand.
(In reply to Jan Safranek from comment #4)
> No, dynamically created AWS EBS volumes _should_ be deleted when the user
> deletes the claim that created them. IMO that's the point of dynamic
> provisioning: create and _delete_ volumes on demand.

Yes, I think so. When I was testing Cinder for this feature, deleting the PVC deleted the PV too, and the physical Cinder volume was deleted from OpenStack as well.
Yes, this was my mistake: the volume is deleted when the PV is not set to retain. The default/unspecified value should be to retain, though.
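For reference, this behavior is controlled per PV by the `persistentVolumeReclaimPolicy` field (`Retain` or `Delete`); dynamically provisioned PVs, like the one in this bug, are created with `Delete`. A hedged sketch of a PV that keeps its backing volume when the claim goes away (the name and size below are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                          # illustrative name
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain     # keep the backing volume and data
  awsElasticBlockStore:
    volumeID: aws://us-east-1d/vol-18c790e5
    fsType: ext4
```

With `Retain`, deleting the PVC releases the PV but leaves the AWS volume and its data untouched, which is the behavior described in comment #2.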
Reassigning to Jan for EBS testing.
Kubernetes PR: https://github.com/kubernetes/kubernetes/pull/19365
Origin PR merged
Verification passed on oc v1.1.2-274-g6187dc3, kubernetes v1.2.0-origin.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2016:1064