Bug 1652478 - [heketi]: Volume does not get deleted from gluster after pvc delete
Summary: [heketi]: Volume does not get deleted from gluster after pvc delete
Keywords:
Status: CLOSED DUPLICATE of bug 1642034
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: John Mulligan
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-22 08:27 UTC by Rochelle
Modified: 2019-12-03 08:41 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-24 19:19:27 UTC
Embargoed:


Attachments

Description Rochelle 2018-11-22 08:27:58 UTC
Description of problem:
======================
I deleted an app pod before deleting the PVC it was connected to.
The PVC was deleted, but I can still see the volumes on the gluster nodes.

[root@dhcp35-60 ~]# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
new       Bound     pvc-918310b4-ee26-11e8-b832-525400bb3330   20Gi       RWO            glusterfs-new   1h


'new' is the one I created for the current testing. Otherwise, there were no PVCs.

The PV is still present:
========================
[root@dhcp35-60 ~]# oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS     REASON    AGE
pvc-7b79b71d-eca7-11e8-b832-525400bb3330   4Gi        RWO            Delete           Failed    app-storage/claim        glusterfs-test             1d
pvc-918310b4-ee26-11e8-b832-525400bb3330   20Gi       RWO            Delete           Bound     app-storage/new          glusterfs-new              1h
pvc-da51e86d-ec87-11e8-b832-525400bb3330   3Gi        RWO            Delete           Failed    app-storage/claim1       glusterfs-test             2d
registry-volume                            5Gi        RWX            Retain           Bound     default/registry-claim                              3d


Please note: 'registry-volume' and 'pvc-918310b4-ee26-11e8-b832-525400bb3330' are new volumes I've created to carry on with my testing.

From the gluster node:
----------------------
[root@dhcp35-216 ~]# gluster v list
gluster_shared_storage
glusterfs-registry-volume
heketidbstorage
vol_app-storage_claim1_da54bad0-ec87-11e8-ac79-525400bb3330
vol_app-storage_claim_7b7d0cf9-eca7-11e8-ac79-525400bb3330
vol_app-storage_new_918603c6-ee26-11e8-ac79-525400bb3330
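As the listing shows, heketi names dynamically provisioned volumes following a vol_<namespace>_<pvc-name>_<id> pattern. A minimal sketch of a helper to check for leftover volumes (the helper name is hypothetical; the trailing id is assigned by heketi at provision time, so only the prefix is predictable):

```shell
# Hypothetical helper: build the prefix heketi uses for a PVC's gluster
# volume, following the vol_<namespace>_<pvc-name>_<id> pattern above.
# The <id> suffix is generated at provision time, so only the prefix is known.
gluster_vol_prefix() {
  local namespace=$1 pvc=$2
  printf 'vol_%s_%s_' "$namespace" "$pvc"
}

# e.g. on a gluster node, check whether a deleted PVC still has a volume:
# gluster volume list | grep "^$(gluster_vol_prefix app-storage claim)"
gluster_vol_prefix app-storage claim   # → vol_app-storage_claim_
```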



Version-Release number of selected component (if applicable):
============================================================
[root@dhcp35-60 ~]# rpm -qa | grep heketi
heketi-client-7.0.0-15.el7rhgs.x86_64

How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create a pvc (3x3)
2. Create an app pod linked to the pvc
3. Delete the app pod before deleting the pvc
4. Delete the pvc
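The steps above can be sketched as oc commands (a hedged sketch only; pvc.yaml, app-pod.yaml, and the object names are placeholders, not taken from this report):

```shell
# Sketch of the reproduction; file and object names are hypothetical.
oc create -f pvc.yaml        # 1. PVC against a glusterfs storage class (3x3)
oc create -f app-pod.yaml    # 2. app pod that mounts the PVC
oc delete pod app-pod        # 3. delete the app pod first
oc delete pvc claim          # 4. then delete the PVC

# On a gluster node, the backing volume should now be gone:
gluster volume list
```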


I understand that we should delete the PVC before deleting the app pod it was linked with. However, even after deleting the app pod before deleting the PVC, the PV should still be deleted, and the volumes should not remain on the gluster nodes.

Actual results:
===============
The volumes backing the deleted PVCs are still present on the gluster nodes.

Expected results:
=================
The volumes for the deleted PVCs should no longer be visible from gluster either.


Additional info:
================
Will attach sosreports along with events.log and heketi.log.

