Bug 2108716 - Deletion of Application deleting the pods but not PVCs
Summary: Deletion of Application deleting the pods but not PVCs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.11
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: Shyamsundar
QA Contact: kmanohar
URL:
Whiteboard:
Depends On: 2110026
Blocks: 2094357 2107226
 
Reported: 2022-07-19 18:49 UTC by Amarnath
Modified: 2023-12-08 04:29 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
.Deletion of Application now deletes pods and PVCs correctly
Previously, when deleting an application from the RHACM console, the DRPC did not get deleted. An undeleted DRPC in turn leaves the VRG and the VR behind. If the VRG/VR is not deleted, the PVC finalizer list is not cleaned up, causing the PVC to stay in a `Terminating` state. With this update, deleting an application from the RHACM console deletes the dependent DRPC and related resources on the managed clusters, freeing the PVCs for garbage collection.
Clone Of:
: 2110026 (view as bug list)
Environment:
Last Closed: 2023-02-08 14:06:28 UTC
Embargoed:



Description Amarnath 2022-07-19 18:49:44 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Delete the application from the Multicloud (ACM hub) console.
The pods are deleted, but the PVCs go to a Terminating state and remain stuck there.

The storage is still consumed and mirroring remains active for these PVCs.
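
One way to confirm this state (a sketch, not output captured from this cluster; the namespace is the workload namespace used below, and the VolumeReplication resource is assumed to be named after its PVC) is to check the csi-addons VolumeReplication resources alongside the PVCs:

$ oc get volumereplication,pvc -n busybox-workloads-4
$ oc get volumereplication busybox-pvc-61 -n busybox-workloads-4 -o jsonpath='{.status.state}{"\n"}'

A VolumeReplication still reporting Primary while its PVC is Terminating matches the behaviour described here.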



Version of all relevant components (if applicable):
OCP : 4.11.0-0.nightly-2022-07-06-145812
ODF : 4.11.0-110

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
2/2

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Navigate to the Multicloud hub console and select the application
2. Delete the application
3. Check that the pods and PVCs related to the application are removed (see the oc get pods,pvc output under Actual results)


Actual results:
Pods are deleted.
PVCs go to a Terminating state and stay there:
[amk@amk ~]$ oc get pods,pvc -n busybox-workloads-4
NAME                                   STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                         AGE
persistentvolumeclaim/busybox-pvc-61   Terminating   pvc-4b393e92-0faf-4a37-8163-921188673730   46Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-62   Terminating   pvc-d251f5ee-6e01-440b-a1be-f04b9822a19b   20Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-63   Terminating   pvc-006f13c0-a750-4eda-a2b4-178983e97c42   94Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-64   Terminating   pvc-27695da9-0f90-401e-ab60-a3111ae846b6   47Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-65   Terminating   pvc-e5f3168a-9ec8-483d-b57b-835748ad3f1d   139Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-66   Terminating   pvc-f7c27b09-9df1-409a-867d-3447c9fcdcf1   133Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-67   Terminating   pvc-67c85338-87a8-44f9-b7b9-e591915e0864   50Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-68   Terminating   pvc-9d6c53bb-c07f-431f-b59b-a8452ab13101   61Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-69   Terminating   pvc-6bb7a113-58ec-4fc3-b928-f1ce6083fbc0   116Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-70   Terminating   pvc-c1eef446-d85b-4ca2-b0b9-1ed4e1d49191   63Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-71   Terminating   pvc-da964ad9-03f4-4e6b-9c05-51025133b001   131Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-72   Terminating   pvc-5f04fd74-6ef3-41e1-9a5c-03537c2de1cd   27Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-73   Terminating   pvc-2858078a-5570-4c41-b8e5-cd18d9f96306   43Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-74   Terminating   pvc-25d1938d-ccb0-4201-9747-64c04341ee3f   101Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-75   Terminating   pvc-fadd2961-34e5-4693-85d1-752ecd1d0345   131Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-76   Terminating   pvc-e2a168a4-ae11-4925-a0bc-9f40d4ad6e2b   107Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-77   Terminating   pvc-a02dcd58-c60e-4561-abfc-a5de745a556a   72Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-78   Terminating   pvc-e7d4c711-4984-4328-9b8d-80ef12eb9e18   44Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-79   Terminating   pvc-bde7a8ac-704a-4083-b802-d118c322d0b8   81Gi       RWO            ocs-storagecluster-ceph-rbd-mirror   24h
persistentvolumeclaim/busybox-pvc-80   Terminating   pvc-25b2293b-fc28-4370-912f-edb4b9d76305   113Gi      RWO            ocs-storagecluster-ceph-rbd-mirror   24h
[amk@amk ~]$ 
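
To see why a PVC is held in Terminating, its finalizer list can be inspected, for example (a sketch; the exact finalizer string added by the DR controllers may vary between releases):

$ oc get pvc busybox-pvc-61 -n busybox-workloads-4 -o jsonpath='{.metadata.finalizers}{"\n"}'

Any entry beyond the default kubernetes.io/pvc-protection indicates that an external controller still owns the PVC, and deletion will not complete until that controller removes its finalizer.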


Expected results:
Both the pods and PVCs should get deleted.

Additional info:

Comment 4 Shyamsundar 2022-07-19 19:28:09 UTC
@amk we require must-gather output for ODF and the ACM hub to debug the problem.

@gshanmug I suspect this is because the UI does not delete the DRPC created for the application (due to a lack of integration for this). As a result, the VRG, which has not been deleted yet, holds back the PVC on the cluster. Which component should this BZ be moved to? (Should we add UI as another sub-component here?)
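
A quick way to test this conjecture (a sketch, assuming the usual layout where the DRPlacementControl lives in the application namespace on the hub) is to run, against the hub cluster:

$ oc get drplacementcontrol -n busybox-workloads-4

and, against the managed cluster where the workload ran:

$ oc get volumereplicationgroup -n busybox-workloads-4

Leftover DRPC/VRG resources after the application has been deleted would confirm that the UI is not cleaning up the DR resources.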

Comment 5 Amarnath 2022-07-21 07:05:10 UTC
Hi Shyam,

I have collected the must-gather logs, but our file server is down.
Can you point me to a file server where I can post them?

Regards,
Amarnath

Comment 6 Shyamsundar 2022-07-21 15:56:29 UTC
(In reply to Amarnath from comment #5)
> Hi Shyam,
> 
> I have collected the must-gather logs, but our file server is down.
> Can you point me to a file server where I can post them?

I really do not have any location handy.

Can you let us know whether a DRPlacementControl resource still exists on the hub cluster for the application that was deleted? If yes, please paste its YAML output.
Further, if it exists, please also post the VolumeReplicationGroup resource from the ManagedCluster where the workload was deleted.

Both resources are to be found in the same namespace as the application.

The above would be enough to prove the conjecture that the issue is non-deletion of the DRPC by the application UI.
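
The requested resources could be gathered with something like the following (a sketch; the namespace is the workload namespace shown earlier in this report):

# On the hub cluster
$ oc get drplacementcontrol -n busybox-workloads-4 -o yaml

# On the managed cluster where the workload was deleted
$ oc get volumereplicationgroup -n busybox-workloads-4 -o yaml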

Comment 11 Amarnath 2022-07-25 04:34:57 UTC
Hi @srangsrangana,

As shown in the attached screenshots, I tried deleting the application from the web console of ACM (the Multicloud cluster console).

Regards,
Amarnath

Comment 14 Mudit Agarwal 2022-08-03 08:16:15 UTC
Moving this out to 4.12 after discussing with Karthick; we don't want to risk a regression at this point in time.

Please provide the doc text.

Comment 33 Red Hat Bugzilla 2023-12-08 04:29:42 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

