Bug 2061311

Summary: Cleanup of installed spoke clusters hangs on deletion of the spoke namespace
Product: Red Hat Advanced Cluster Management for Kubernetes
Reporter: Yona First <yfirst>
Component: Cluster Lifecycle
Assignee: Jian Qiu <jqiu>
Status: CLOSED ERRATA
QA Contact: Hui Chen <huichen>
Severity: high
Docs Contact: Christopher Dawson <cdawson>
Priority: unspecified
Version: rhacm-2.5
CC: ccrum, dhuynh, jqiu, smiron, trwest, yfirst, yuhe, zyin
Target Milestone: ---
Flags: bot-tracker-sync: rhacm-2.5+
Target Release: rhacm-2.5
Hardware: Unspecified
OS: Unspecified
Last Closed: 2022-06-09 02:09:07 UTC
Type: Bug

Description Yona First 2022-03-07 11:20:33 UTC
Description of the problem: When trying to delete a spoke cluster that was successfully installed with AI + RHACM 2.5, deletion of the spoke cluster's namespace hangs.

OCP version on hub: 4.10.0-0.nightly-2022-03-05-023708 
RHACM snapshot: 2.5.0-DOWNSTREAM-2022-03-06-15-33-16
MCE Snapshot: quay.io/acm-d/mce-custom-registry/2.5.0-DOWNSTREAM-2022-03-06-15-33-16

Steps to reproduce:
1. Deploy a 4.10 Hub cluster with RHACM 2.5 bundled with AI.
2. Deploy a spoke cluster using ZTP, applying the relevant CRs.
3. Delete the spoke cluster by deleting the spoke CRs in reverse order of creation, finishing with the spoke namespace.
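The teardown in step 3 can be sketched roughly as follows. The cluster/namespace name `spoke1` and the exact resource kinds and ordering are illustrative assumptions based on a typical AI + ZTP deployment, not details taken from this report:

```shell
# Hypothetical teardown of a ZTP-installed spoke named "spoke1".
# Resource kinds and ordering are assumptions for a typical AI+ZTP spoke.
oc delete managedcluster spoke1
oc delete agentclusterinstall spoke1 -n spoke1
oc delete clusterdeployment spoke1 -n spoke1
oc delete infraenv spoke1 -n spoke1
# Finally, delete the namespace itself; this is the step that hangs.
oc delete namespace spoke1
```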

Actual results:
Deletion of the spoke cluster namespace hangs; the namespace remains stuck in "Terminating".

Expected results:
The spoke cluster namespace should be deleted normally.

Additional info:

From the conditions in the spoke cluster namespace:

  - lastTransitionTime: "2022-03-07T04:53:56Z"
    message: 'Some resources are remaining: managedclusteraddons.addon.open-cluster-management.io
      has 1 resource instances'
    reason: SomeResourcesRemain
    status: "True"
    type: NamespaceContentRemaining
  - lastTransitionTime: "2022-03-07T04:53:56Z"
    message: 'Some content in the namespace has finalizers remaining: cluster.open-cluster-management.io/addon-pre-delete
      in 1 resource instances'
    reason: SomeFinalizersRemain
    status: "True"
    type: NamespaceFinalizersRemaining
  phase: Terminating

Workaround: Deleting the cluster.open-cluster-management.io/addon-pre-delete finalizer in the ManagedClusterAddOn resource allows the namespace to finish deletion.
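The workaround can be applied with `oc patch`. The addon name `application-manager` and the namespace `spoke1` below are placeholders (list the ManagedClusterAddOns in the stuck namespace first), and note that a merge patch that nulls `metadata.finalizers` removes all finalizers on the resource, not only addon-pre-delete:

```shell
# List the ManagedClusterAddOns still present in the stuck namespace
# ("spoke1" is a placeholder).
oc get managedclusteraddons -n spoke1

# Clear the finalizers on a stuck addon ("application-manager" is a
# placeholder name). This removes ALL finalizers on the resource; use
# "oc edit" instead to remove only addon-pre-delete.
oc patch managedclusteraddon application-manager -n spoke1 --type=merge \
  -p '{"metadata":{"finalizers":null}}'
```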

Comment 1 Yona First 2022-03-09 09:21:49 UTC
Relevant Slack thread: https://coreos.slack.com/archives/CTDEY6EEA/p1646745063651159

Comment 2 bjacot 2022-03-10 13:33:53 UTC
*** Bug 2062743 has been marked as a duplicate of this bug. ***

Comment 3 bot-tracker-sync 2022-03-14 20:47:15 UTC
G2Bsync 1066623295 comment 
 zhiweiyin318 Mon, 14 Mar 2022 10:35:48 UTC 
 G2Bsync
This has been fixed in the latest snapshot. Please verify. Thanks.

Comment 4 Shelly Miron 2022-03-20 10:52:49 UTC
Hi, I'm facing a similar issue, and tested it with snapshot 2.5.0-DOWNSTREAM-2022-03-14-18-18-07.

The problem is that, although the namespace no longer hangs forever, it still takes a long time to terminate, up to roughly 10 minutes. This can cause issues when trying to create another spoke with the same name while the resources of the previous one have not yet been deleted.

This is the message I'm seeing:

  - lastTransitionTime: "2022-03-20T10:24:10Z"
    message: 'Some resources are remaining: rolebindings.authorization.openshift.io
      has 1 resource instances, rolebindings.rbac.authorization.k8s.io has 1 resource
      instances'
    reason: SomeResourcesRemain
    status: "True"
    type: NamespaceContentRemaining
  - lastTransitionTime: "2022-03-20T10:24:10Z"
    message: 'Some content in the namespace has finalizers remaining: cluster.open-cluster-management.io/manifest-work-cleanup
      in 2 resource instances'
    reason: SomeFinalizersRemain
    status: "True"
    type: NamespaceFinalizersRemaining
  phase: Terminating
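Conditions like the ones above can be inspected directly to see what is still holding the namespace open; `spoke1` is a placeholder name:

```shell
# Show the namespace's termination conditions.
oc get namespace spoke1 -o jsonpath='{.status.conditions}'

# List ManifestWorks whose manifest-work-cleanup finalizer may still be pending.
oc get manifestwork -n spoke1 \
  -o custom-columns=NAME:.metadata.name,FINALIZERS:.metadata.finalizers
```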

Comment 5 zyin@redhat.com 2022-03-22 09:39:37 UTC
There is another PR to reduce the detach time.

Comment 6 Yuanyuan He 2022-04-11 07:31:33 UTC
@yfirst could you please verify if the latest PR solved your problem? Thanks!

Comment 7 Yona First 2022-04-11 14:13:59 UTC
@yuhe Looks like it's fixed as of ACM snapshot 2.5.0-DOWNSTREAM-2022-04-08-18-34-34. Thank you!

Comment 11 errata-xmlrpc 2022-06-09 02:09:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:4956

Comment 12 Red Hat Bugzilla 2023-09-15 01:52:32 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days