Bug 2061311 - Cleanup of installed spoke clusters hangs on deletion of the spoke namespace
Summary: Cleanup of installed spoke clusters hangs on deletion of the spoke namespace
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Advanced Cluster Management for Kubernetes
Classification: Red Hat
Component: Cluster Lifecycle
Version: rhacm-2.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: rhacm-2.5
Assignee: Jian Qiu
QA Contact: Hui Chen
Docs Contact: Christopher Dawson
URL:
Whiteboard:
Duplicates: 2062743
Depends On:
Blocks:
 
Reported: 2022-03-07 11:20 UTC by Yona First
Modified: 2023-09-15 01:52 UTC (History)
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-09 02:09:07 UTC
Target Upstream Version:
Embargoed:
bot-tracker-sync: rhacm-2.5+




Links
Github stolostron backlog issues 20474 (last updated 2022-03-07 13:55:13 UTC)
Red Hat Product Errata RHSA-2022:4956 (last updated 2022-06-09 02:09:17 UTC)

Description Yona First 2022-03-07 11:20:33 UTC
Description of the problem: When trying to delete a spoke cluster that was successfully installed with AI + RHACM 2.5, deletion of the spoke cluster's namespace hangs.

OCP version on hub: 4.10.0-0.nightly-2022-03-05-023708 
RHACM snapshot: 2.5.0-DOWNSTREAM-2022-03-06-15-33-16
MCE Snapshot: quay.io/acm-d/mce-custom-registry/2.5.0-DOWNSTREAM-2022-03-06-15-33-16

Steps to reproduce:
1. Deploy a 4.10 Hub cluster with RHACM 2.5 bundled with AI.
2. Deploy a spoke cluster using ZTP, applying the relevant custom resources (CRs).
3. Delete the spoke cluster by deleting the spoke CRs in reverse order of creation, finishing with the spoke namespace.
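
For reference, the deletion sequence in step 3 might look like the following. The spoke name and the exact set of resource kinds are illustrative (a typical AI/ZTP deployment); adjust them to match the CRs that were actually applied:

```shell
# Hypothetical spoke named "spoke1", with its CRs in the "spoke1" namespace.
SPOKE=spoke1

# Delete the spoke CRs in reverse order of creation.
oc delete managedcluster "$SPOKE"
oc delete agentclusterinstall -n "$SPOKE" "$SPOKE"
oc delete clusterdeployment -n "$SPOKE" "$SPOKE"
oc delete infraenv -n "$SPOKE" "$SPOKE"

# Finally, delete the spoke namespace; this is the step that hangs.
oc delete namespace "$SPOKE"
```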

Actual results:
Deletion of spoke cluster namespace hangs, stuck in "Terminating".

Expected results:
Spoke cluster namespace should delete normally.

Additional info:

From the conditions in the spoke cluster namespace:

  - lastTransitionTime: "2022-03-07T04:53:56Z"
    message: 'Some resources are remaining: managedclusteraddons.addon.open-cluster-management.io
      has 1 resource instances'
    reason: SomeResourcesRemain
    status: "True"
    type: NamespaceContentRemaining
  - lastTransitionTime: "2022-03-07T04:53:56Z"
    message: 'Some content in the namespace has finalizers remaining: cluster.open-cluster-management.io/addon-pre-delete
      in 1 resource instances'
    reason: SomeFinalizersRemain
    status: "True"
    type: NamespaceFinalizersRemaining
  phase: Terminating

Workaround: Deleting the cluster.open-cluster-management.io/addon-pre-delete finalizer in the ManagedClusterAddOn resource allows the namespace to finish deletion.
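
A minimal sketch of that workaround, assuming a spoke namespace named `spoke1`; the namespace and addon names here are illustrative, and note that a JSON patch on `/metadata/finalizers` removes all finalizers on the object, not only `addon-pre-delete`:

```shell
NS=spoke1   # hypothetical spoke namespace

# Find the ManagedClusterAddOn still holding the namespace open.
oc get managedclusteraddons -n "$NS"

# Clear the finalizers from the stuck addon (replace "work-manager"
# with the addon actually listed by the command above).
oc patch managedclusteraddon work-manager -n "$NS" --type=json \
  -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
```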

Comment 1 Yona First 2022-03-09 09:21:49 UTC
Relevant Slack thread: https://coreos.slack.com/archives/CTDEY6EEA/p1646745063651159

Comment 2 bjacot 2022-03-10 13:33:53 UTC
*** Bug 2062743 has been marked as a duplicate of this bug. ***

Comment 3 bot-tracker-sync 2022-03-14 20:47:15 UTC
G2Bsync 1066623295 comment
zhiweiyin318 Mon, 14 Mar 2022 10:35:48 UTC:
This has been fixed in the latest snapshot. Please verify. Thanks.

Comment 4 Shelly Miron 2022-03-20 10:52:49 UTC
Hi, I'm facing a similar issue, and tested it with this snapshot: 2.5.0-DOWNSTREAM-2022-03-14-18-18-07

The problem is that, although the namespace no longer hangs forever, termination still takes a long time, as much as 10 minutes. This can cause issues when, for example, another spoke with the same name is created while the resources of the previous one have not yet been deleted.

This is the message I'm seeing:

  - lastTransitionTime: "2022-03-20T10:24:10Z"
    message: 'Some resources are remaining: rolebindings.authorization.openshift.io
      has 1 resource instances, rolebindings.rbac.authorization.k8s.io has 1 resource
      instances'
    reason: SomeResourcesRemain
    status: "True"
    type: NamespaceContentRemaining
  - lastTransitionTime: "2022-03-20T10:24:10Z"
    message: 'Some content in the namespace has finalizers remaining: cluster.open-cluster-management.io/manifest-work-cleanup
      in 2 resource instances'
    reason: SomeFinalizersRemain
    status: "True"
    type: NamespaceFinalizersRemaining
  phase: Terminating
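
When a namespace is stuck like this, the objects still holding it open can be enumerated with something along these lines (a generic sketch, not specific to this bug; `spoke1` is a placeholder):

```shell
NS=spoke1   # hypothetical spoke namespace stuck in Terminating

# List every namespaced resource type and print any leftover instances;
# in the case above these would be the RoleBindings and the ManifestWorks
# carrying the manifest-work-cleanup finalizer.
oc api-resources --verbs=list --namespaced -o name \
  | xargs -n1 -I{} oc get {} -n "$NS" --ignore-not-found --no-headers
```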

Comment 5 zyin@redhat.com 2022-03-22 09:39:37 UTC
There is another PR to reduce detach time.

Comment 6 Yuanyuan He 2022-04-11 07:31:33 UTC
@yfirst could you please verify if the latest PR solved your problem? Thanks!

Comment 7 Yona First 2022-04-11 14:13:59 UTC
@yuhe Looks like it's fixed as of ACM snapshot 2.5.0-DOWNSTREAM-2022-04-08-18-34-34. Thank you!

Comment 11 errata-xmlrpc 2022-06-09 02:09:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:4956

Comment 12 Red Hat Bugzilla 2023-09-15 01:52:32 UTC
The needinfo request(s) on this closed bug have been removed, as they have been unresolved for 365 days.

