Who When What Removed Added
Aman Agrawal 2023-01-10 18:17:05 UTC Keywords Regression
Shyamsundar 2023-01-10 20:49:57 UTC Assignee srangana mrajanna
Sub Component ramen volume-replication-operator
Aman Agrawal 2023-01-11 09:23:12 UTC CC mrajanna
Flags needinfo?(mrajanna)
Madhu Rajanna 2023-01-11 10:55:18 UTC Flags needinfo?(mrajanna)
Madhu Rajanna 2023-01-11 11:37:08 UTC Flags needinfo?(idryomov)
CC idryomov
Shyamsundar 2023-01-11 12:43:23 UTC CC srangana
Ilya Dryomov 2023-01-11 20:43:02 UTC Flags needinfo?(idryomov)
Madhu Rajanna 2023-01-12 08:25:09 UTC Flags needinfo?(idryomov)
Ilya Dryomov 2023-01-12 21:24:26 UTC Flags needinfo?(idryomov)
Mudit Agarwal 2023-01-17 02:13:03 UTC Status NEW ASSIGNED
Madhu Rajanna 2023-01-20 06:30:36 UTC Flags needinfo?(srangana)
Shyamsundar 2023-01-20 12:49:27 UTC Flags needinfo?(srangana) needinfo?(mrajanna)
Aman Agrawal 2023-01-20 12:58:05 UTC Flags needinfo?(srangana) needinfo?(mrajanna)
Shyamsundar 2023-01-23 13:47:38 UTC Flags needinfo?(srangana)
Madhu Rajanna 2023-01-24 07:12:30 UTC Flags needinfo?(mrajanna) needinfo?(mrajanna)
Shyamsundar 2023-01-25 15:00:48 UTC Doc Type If docs needed, set a value Known Issue
Doc Text Cause: Potentially stuck processes within a container in an uninterruptible state

Consequence: When deleting a workload from a cluster, the corresponding pods may not terminate, with events such as FailedKillPod. This may cause a delay or failure in garbage collecting dependent DR resources such as the PVC, VolumeReplication, and VolumeReplicationGroup. It would also prevent a future deployment of the same workload to the cluster, as the stale resources are not yet garbage collected.

Workaround (if any): Reboot the worker node on which the pod is currently running and stuck in a terminating state

Result: Pod termination is successful and, subsequently, the related DR API resources are also garbage collected
Aman Agrawal 2023-01-30 09:03:10 UTC Blocks 2107226
Aman Agrawal 2023-01-30 10:01:53 UTC Keywords Regression
Sunil Kumar Acharya 2023-01-30 10:46:42 UTC CC sheggodu
Olive Lakra 2023-01-30 14:05:06 UTC Doc Text Cause: Potentially stuck processes within a container in an uninterruptible state

Consequence: When deleting a workload from a cluster, the corresponding pods may not terminate, with events such as FailedKillPod. This may cause a delay or failure in garbage collecting dependent DR resources such as the PVC, VolumeReplication, and VolumeReplicationGroup. It would also prevent a future deployment of the same workload to the cluster, as the stale resources are not yet garbage collected.

Workaround (if any): Reboot the worker node on which the pod is currently running and stuck in a terminating state

Result: Pod termination is successful and, subsequently, the related DR API resources are also garbage collected
.DR workloads remain stuck when deleted

When deleting a workload from a cluster, the corresponding pods may not terminate with events such as `FailedKillPod`. This may cause delay or failure in garbage collecting dependent DR resources such as the `PVC`, `VolumeReplication`, and `VolumeReplicationGroup`. It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected.

To work around this issue, reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination, and subsequently the related DR API resources are also garbage collected.
CC olakra
Olive Lakra 2023-01-30 14:07:54 UTC Doc Text .DR workloads remain stuck when deleted

When deleting a workload from a cluster, the corresponding pods may not terminate with events such as `FailedKillPod`. This may cause delay or failure in garbage collecting dependent DR resources such as the `PVC`, `VolumeReplication`, and `VolumeReplicationGroup`. It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected.

To work around this issue, reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination, and subsequently the related DR API resources are also garbage collected.
.Disaster recovery workloads remain stuck when deleted

When deleting a workload from a cluster, the corresponding pods may not terminate with events such as `FailedKillPod`. This may cause delay or failure in garbage collecting dependent DR resources such as the `PVC`, `VolumeReplication`, and `VolumeReplicationGroup`. It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected.

To work around this issue, reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination, and subsequently the related DR API resources are also garbage collected.
Olive Lakra 2023-01-30 16:16:27 UTC Doc Text .Disaster recovery workloads remain stuck when deleted

When deleting a workload from a cluster, the corresponding pods may not terminate with events such as `FailedKillPod`. This may cause delay or failure in garbage collecting dependent DR resources such as the `PVC`, `VolumeReplication`, and `VolumeReplicationGroup`. It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected.

To work around this issue, reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination, and subsequently the related DR API resources are also garbage collected.
.Disaster recovery workloads remain stuck when deleted

When deleting a workload from a cluster, the corresponding pods may not terminate with events such as `FailedKillPod`. This may cause delay or failure in garbage collecting dependent DR resources such as the `PVC`, `VolumeReplication`, and `VolumeReplicationGroup`. It would also prevent a future deployment of the same workload to the cluster as the stale resources are not yet garbage collected.

Workaround: Reboot the worker node on which the pod is currently running and stuck in a terminating state. This results in successful pod termination, and subsequently the related DR API resources are also garbage collected.
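For reference, pods stuck as described in the Doc Text can be located before applying the node-reboot workaround. The following is a minimal sketch, not taken from this bug, assuming the official `kubernetes` Python client and a kubeconfig with access to the affected cluster; it only lists pods whose deletion is pending, together with the node each one runs on:

```python
# Minimal sketch (assumption: the official `kubernetes` Python client is
# installed and a kubeconfig for the affected cluster is available).
# Lists pods whose deletion is pending, together with the worker node each
# one runs on, so the node named in the workaround can be identified.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    # A pod that still appears in the API with a deletion timestamp set has
    # not finished terminating; long-lived entries here match the
    # FailedKillPod symptom described in the Doc Text.
    if pod.metadata.deletion_timestamp is not None:
        print(f"{pod.metadata.namespace}/{pod.metadata.name} "
              f"is stuck terminating on node {pod.spec.node_name}")
```

Any pod that remains in this list for an extended period matches the symptom above, and its node is the one to reboot.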
Red Hat Bugzilla 2023-01-31 23:37:13 UTC CC madam
Sunil Kumar Acharya 2023-03-19 17:58:34 UTC Flags needinfo?(mrajanna)
Madhu Rajanna 2023-03-22 11:08:15 UTC Flags needinfo?(mrajanna) needinfo?(amagrawa)
Aman Agrawal 2023-03-23 09:06:59 UTC Flags needinfo?(amagrawa)
Karolin Seeger 2023-05-02 11:44:25 UTC CC kseeger
Sidhant Agrawal 2023-05-11 12:00:30 UTC CC sagrawal
Shyamsundar 2023-05-11 16:44:23 UTC Flags needinfo?(idryomov)
krishnaram Karthick 2023-05-16 05:16:05 UTC CC kramdoss
Ilya Dryomov 2023-05-16 10:32:49 UTC Flags needinfo?(idryomov)
Shyamsundar 2023-05-17 20:29:33 UTC Flags needinfo?(idryomov)
Ilya Dryomov 2023-05-19 21:13:00 UTC Flags needinfo?(idryomov)
Ilya Dryomov 2023-05-19 21:15:19 UTC Flags needinfo?(ypadia)
CC ypadia
yati padia 2023-05-22 03:57:10 UTC Flags needinfo?(ypadia)
Shyamsundar 2023-05-23 12:12:30 UTC Blocks 2209298
Red Hat Bugzilla 2023-05-31 23:37:45 UTC CC mrajanna
Assignee mrajanna srangana
Karolin Seeger 2023-06-06 06:56:19 UTC Flags needinfo?(kramdoss) needinfo?(kramdoss)
Karolin Seeger 2023-06-06 06:58:48 UTC Flags needinfo?(kramdoss)
Red Hat Bugzilla 2023-08-03 08:29:45 UTC CC ocs-bugs
Elad 2023-08-09 17:00:43 UTC CC odf-bz-bot
Raghavendra Talur 2023-08-09 18:25:35 UTC Flags needinfo?(kramdoss)
CC rtalur
Aman Agrawal 2023-08-10 16:47:46 UTC Flags needinfo?(mrajanna) needinfo?(idryomov)
CC mrajanna
Ilya Dryomov 2023-08-10 20:29:01 UTC Flags needinfo?(idryomov)
Mudit Agarwal 2023-08-11 15:26:12 UTC Flags needinfo?(amagrawa)
krishnaram Karthick 2023-08-14 04:41:28 UTC QA Contact amagrawa
Madhu Rajanna 2023-08-14 07:00:54 UTC Flags needinfo?(mrajanna)
Aman Agrawal 2023-08-14 08:00:30 UTC Flags needinfo?(amagrawa)
krishnaram Karthick 2023-08-14 14:30:19 UTC Flags needinfo?(kramdoss) needinfo?(kramdoss) needinfo+ needinfo+
Ilya Dryomov 2023-08-14 15:38:16 UTC Flags needinfo+ needinfo+
Mudit Agarwal 2023-08-16 09:49:22 UTC Blocks 2209298
