Bug 1853384 - Attempting to delete a pv with a dependent dv hangs without returning an error
Summary: Attempting to delete a pv with a dependent dv hangs without returning an error
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 2.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 2.5.0
Assignee: Adam Litke
QA Contact: Ying Cui
URL:
Whiteboard:
Duplicates: 1853429
Depends On:
Blocks:
 
Reported: 2020-07-02 14:26 UTC by Lars Kellogg-Stedman
Modified: 2020-07-22 12:26 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-22 12:26:08 UTC
Target Upstream Version:
Embargoed:



Description Lars Kellogg-Stedman 2020-07-02 14:26:44 UTC
Description of problem:

After uploading an image with "virtctl image-upload dv ...", as in:

./virtctl image-upload dv larsdv2 --image-path cirros-0.5.1-x86_64-disk.img --storage-class hostpath-provisioner --insecure --size 1Gi

The following resources are created:

- a persistent volume claim:

NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
larsdv2   Bound    pvc-9f321068-1a38-4957-80bb-1aa8f655291e   557Gi      RWO            hostpath-provisioner   40s

- a persistent volume:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS           REASON   AGE
pvc-9f321068-1a38-4957-80bb-1aa8f655291e   557Gi      RWO            Delete           Bound    default/larsdv2   hostpath-provisioner            58s

- a data volume:

NAME      PHASE       PROGRESS   AGE
larsdv2   Succeeded              77s

Attempting to delete the pv returns a "success" message:

$ oc delete pv pvc-9f321068-1a38-4957-80bb-1aa8f655291e
persistentvolume "pvc-9f321068-1a38-4957-80bb-1aa8f655291e" deleted

...but the command hangs at that point and never exits.

The pv goes into the "Terminating" state, and stays there:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM             STORAGECLASS           REASON   AGE
pvc-9f321068-1a38-4957-80bb-1aa8f655291e   557Gi      RWO            Delete           Terminating   default/larsdv2   hostpath-provisioner            3m1s

Expected results:

I expect the "oc delete pv" command to return an error ("pv <pvname> cannot be deleted because it is in use by another resource").

Comment 1 Adam Litke 2020-07-07 11:11:19 UTC
As far as I understand it, the expected behavior is for the command to complete but the PV to remain in Terminating state until the bound PVC is deleted.

@awels, can you try to reproduce this and see if anything strange is happening in the PV deletion path that would prevent the command from completing?

Comment 2 Adam Litke 2020-07-07 11:14:46 UTC
*** Bug 1853429 has been marked as a duplicate of this bug. ***

Comment 3 Alexander Wels 2020-07-07 11:58:38 UTC
There is a kubernetes.io/pv-protection finalizer on the PV to stop people from deleting a PV that is bound to a PVC. Since kubectl delete waits for the deletion to complete by default, the command blocks until the PV is actually deleted, which will not happen while the finalizer is present. The deletionTimestamp is set on the PV as well, so it will be deleted whenever Kubernetes is able to do so (i.e., once the finalizer is gone). The finalizer is removed when the PV becomes unbound.

All of this is normal kubernetes behavior and not a bug.
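The delete/finalizer ordering described above can be sketched with a toy object store (plain Python, no Kubernetes client; `FakeStore` and `example-pv` are illustrative names, not real API objects): a delete request on an object that still carries a finalizer only stamps `deletionTimestamp`, and the object is actually removed only once its finalizer list is empty.

```python
# Toy illustration of Kubernetes finalizer semantics: deletion is
# requested (deletionTimestamp set), but the object persists in a
# "Terminating" state until the last finalizer is removed.
from datetime import datetime, timezone


class FakeStore:
    """A toy object store mimicking the API server's delete behavior."""

    def __init__(self):
        self.objects = {}

    def create(self, name, finalizers=()):
        self.objects[name] = {
            "metadata": {
                "name": name,
                "finalizers": list(finalizers),
                "deletionTimestamp": None,
            }
        }

    def delete(self, name):
        # With finalizers present, deletion is only *requested*:
        # the object stays, now marked as Terminating.
        meta = self.objects[name]["metadata"]
        meta["deletionTimestamp"] = datetime.now(timezone.utc).isoformat()
        self._maybe_remove(name)

    def remove_finalizer(self, name, finalizer):
        meta = self.objects[name]["metadata"]
        meta["finalizers"].remove(finalizer)
        self._maybe_remove(name)

    def _maybe_remove(self, name):
        # The object is actually removed only when deletion has been
        # requested AND no finalizers remain.
        meta = self.objects[name]["metadata"]
        if meta["deletionTimestamp"] and not meta["finalizers"]:
            del self.objects[name]


store = FakeStore()
store.create("example-pv", finalizers=["kubernetes.io/pv-protection"])

store.delete("example-pv")                # like `oc delete pv ...`
print("example-pv" in store.objects)      # True: stuck in Terminating

store.remove_finalizer("example-pv", "kubernetes.io/pv-protection")
print("example-pv" in store.objects)      # False: now actually gone
```

This is why the PV in the report sits in Terminating: its deletion is recorded, but pv-protection keeps it alive until the bound PVC (and hence the finalizer) goes away.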

