Bug 1293850 - Failed to delete dynamically provisioned PV when PVC is deleted
Summary: Failed to delete dynamically provisioned PV when PVC is deleted
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage   
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jan Safranek
QA Contact: Liang Xia
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2015-12-23 09:32 UTC by Chao Yang
Modified: 2016-05-12 16:26 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-12 16:26:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:1064 normal SHIPPED_LIVE Important: Red Hat OpenShift Enterprise 3.2 security, bug fix, and enhancement update 2016-05-12 20:19:17 UTC

Description Chao Yang 2015-12-23 09:32:16 UTC
Description of problem:
Failed to delete dynamically provisioned PV after deleting pod and PVC


Version-Release number of selected component (if applicable):
openshift v3.1.0.4
kubernetes v1.1.0-origin-1107-g4c8e6f4
etcd 2.1.2

How reproducible:
80%

Steps to Reproduce:
1. Create a PVC:
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "foo"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    }
  }
}

2. After the PV and PVC are bound, create a pod:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: jhou/hello-openshift
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/tmp"
        name: aws
  volumes:
    - name: aws
      persistentVolumeClaim:
        claimName: claim1
3. Delete the pod and the PVC, then check the PV status.
The PV ends up in Failed state.
4. Check the volume status from the provider console.
In the AWS web console, this volume is 'available'.

Actual results:
The PV is in Failed status:
pv-aws-0pcyt   <none>       3Gi        RWO           Failed      default/claim1             35m


Expected results:
The PV should be successfully released and deleted. The volume should also be deleted from its provider.

Additional info:
1. [root@ip-172-18-6-204 ec2-user]# oc describe pv pv-aws-0pcyt
Name:		pv-aws-0pcyt
Labels:		<none>
Status:		Failed
Claim:		default/claim1
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	3Gi
Message:	Deletion error: error delete EBS volumes: VolumeInUse: Volume vol-18c790e5 is currently attached to i-5417fbe7
		status code: 400, request id: 
Source:
    Type:	AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:	aws://us-east-1d/vol-18c790e5
    FSType:	ext4
    Partition:	0
    ReadOnly:	false
2. This issue is also reproducible for Cinder

Comment 1 Jan Safranek 2016-01-04 14:01:29 UTC
I think we hit a race here: the pod is still running (or is being slowly deleted) at the point when the volume controller deletes the volume. We need some way to retry the volume deletion when the first attempt does not succeed, or to synchronize pod and PVC deletion.
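The retry idea above can be sketched as a small loop; this is only an illustration of the behavior, not the actual controller code, and `delete_volume` is a placeholder standing in for the cloud-provider delete call (here simulated as succeeding once the volume is detached after the third attempt):

```shell
#!/bin/sh
# Sketch: keep re-attempting volume deletion until the provider stops
# reporting VolumeInUse, instead of marking the PV Failed on the first try.
delete_volume() {
  # Placeholder for the provider API call; we simulate the volume
  # becoming detached (and the delete succeeding) after three attempts.
  [ "$attempt" -ge 3 ]
}

attempt=0
until delete_volume; do
  attempt=$((attempt + 1))
  echo "delete attempt $attempt failed: VolumeInUse, retrying"
  # A real controller would wait with exponential backoff here.
done
echo "volume deleted after $attempt failed attempts"
```

With a retry like this, a pod that is still detaching its volume only delays the deletion instead of permanently failing the PV.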

Comment 2 Bradley Childs 2016-01-05 01:20:15 UTC
> Expected results:
> pv should be sucessfully released and deleted. The volume should also be deleted from its provider

Just to clarify the use case... The PV object should be deleted (that's the bug), but not the actual volume and data. If you have a PV pointing at an AWS volume, the PV should be deleted but not the physical AWS volume and its data.

Comment 3 Chao Yang 2016-01-05 07:51:06 UTC
Thanks.
Will update the test case results

Comment 4 Jan Safranek 2016-01-05 09:07:08 UTC
(In reply to Bradley Childs from comment #2)
> > Expected results:
> > pv should be sucessfully released and deleted. The volume should also be deleted from its provider
> 
> Just to clarify the use case... The PV should be deleted (bug) but not the
> the actual volume & data. If you have a PV pointing at an AWS volume the PV
> should delete but not the physical AWS volume and data.

No, dynamically created AWS EBS volumes _should_ be deleted when the user deletes the claim that created them. IMO that's the point of dynamic provisioning: create and _delete_ volumes on demand.

Comment 5 Jianwei Hou 2016-01-05 11:07:08 UTC
(In reply to Jan Safranek from comment #4)
> (In reply to Bradley Childs from comment #2)
> > > Expected results:
> > > pv should be sucessfully released and deleted. The volume should also be deleted from its provider
> > 
> > Just to clarify the use case... The PV should be deleted (bug) but not the
> > the actual volume & data. If you have a PV pointing at an AWS volume the PV
> > should delete but not the physical AWS volume and data.
> 
> No, dynamically created AWS EBS volumes _should_ be deleted when user
> deletes appropriate claim that created id. IMO that's the point of dynamic
> provisioning - create and _delete_ volumes on demand.

Yes, I think so. When I was testing Cinder for this feature, if the PVC is deleted, the PV is deleted too, and the physical Cinder volume is deleted from OpenStack as well.

Comment 6 Bradley Childs 2016-01-05 14:50:52 UTC
Yes, this was my mistake: the volume is deleted when its reclaim policy is not set to Retain. The default/unspecified value should be Retain, though.
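For reference, the reclaim behavior being discussed is controlled by the `persistentVolumeReclaimPolicy` field on the PV. A minimal illustrative fragment (the name, volume ID, and sizes below are examples, not values from this bug):

```yaml
# Illustrative PV showing where the reclaim policy lives.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-aws-example        # hypothetical name
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Delete would also remove the EBS volume
  awsElasticBlockStore:
    volumeID: aws://us-east-1d/vol-xxxxxxxx   # placeholder volume ID
    fsType: ext4
```

With `Retain`, deleting the claim releases the PV but leaves the backing volume and its data in place; dynamically provisioned PVs in this bug use `Delete`, which is what makes the provider-side deletion (and this race) happen.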

Comment 7 Mark Turansky 2016-01-11 15:33:23 UTC
Reassigning to Jan for EBS testing.

Comment 8 Jan Safranek 2016-01-12 08:35:00 UTC
Kubernetes PR: https://github.com/kubernetes/kubernetes/pull/19365

Comment 9 Jan Safranek 2016-02-09 14:39:23 UTC
Origin PR merged

Comment 10 Chao Yang 2016-02-16 03:01:33 UTC
Verification passed on
oc v1.1.2-274-g6187dc3
kubernetes v1.2.0-origin

Comment 12 errata-xmlrpc 2016-05-12 16:26:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:1064

