Description of the problem:
When trying to delete a spoke cluster that was successfully installed with AI + RHACM 2.5, deletion of the spoke cluster's namespace hangs.

OCP version on hub: 4.10.0-0.nightly-2022-03-05-023708
RHACM snapshot: 2.5.0-DOWNSTREAM-2022-03-06-15-33-16
MCE snapshot: quay.io/acm-d/mce-custom-registry/2.5.0-DOWNSTREAM-2022-03-06-15-33-16

Steps to reproduce:
1. Deploy a 4.10 hub cluster with RHACM 2.5 bundled with AI.
2. Deploy a spoke cluster using ZTP, applying the relevant custom resources (CRs).
3. Delete the spoke cluster by deleting the spoke CRs in reverse order of creation, finishing with the spoke namespace.

Actual results:
Deletion of the spoke cluster namespace hangs, stuck in "Terminating".

Expected results:
The spoke cluster namespace should delete normally.

Additional info:
From the conditions in the spoke cluster namespace:

- lastTransitionTime: "2022-03-07T04:53:56Z"
  message: 'Some resources are remaining: managedclusteraddons.addon.open-cluster-management.io has 1 resource instances'
  reason: SomeResourcesRemain
  status: "True"
  type: NamespaceContentRemaining
- lastTransitionTime: "2022-03-07T04:53:56Z"
  message: 'Some content in the namespace has finalizers remaining: cluster.open-cluster-management.io/addon-pre-delete in 1 resource instances'
  reason: SomeFinalizersRemain
  status: "True"
  type: NamespaceFinalizersRemaining
phase: Terminating

Workaround:
Removing the cluster.open-cluster-management.io/addon-pre-delete finalizer from the ManagedClusterAddOn resource allows the namespace to finish deletion.
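A minimal sketch of the workaround, assuming the spoke namespace is named "spoke1" (both the namespace and the add-on name below are placeholders; list the stuck add-ons first and substitute your own names):

```shell
# List the ManagedClusterAddOn resources remaining in the stuck namespace.
oc get managedclusteraddons -n spoke1

# Strip the finalizers from the stuck add-on so the namespace can finish
# terminating. "work-manager" is a placeholder; use the name reported above.
oc patch managedclusteraddon work-manager -n spoke1 \
  --type=merge -p '{"metadata":{"finalizers":null}}'
```

Note that force-removing finalizers skips the add-on's pre-delete cleanup, so this should only be used when the controller is never going to reconcile the deletion itself.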
Relevant Slack thread: https://coreos.slack.com/archives/CTDEY6EEA/p1646745063651159
*** Bug 2062743 has been marked as a duplicate of this bug. ***
G2Bsync 1066623295 comment zhiweiyin318 Mon, 14 Mar 2022 10:35:48 UTC
This has been fixed in the latest snapshot. Please verify. Thanks.
Hi, I'm facing a similar issue, and tested it with snapshot 2.5.0-DOWNSTREAM-2022-03-14-18-18-07. The problem is that, although the namespace no longer hangs forever, it still takes a long time to terminate (as much as 10 minutes), and this can cause issues when, for example, trying to create another spoke with the same name while the resources of the previous one have not yet been deleted. This is the message I'm seeing:

- lastTransitionTime: "2022-03-20T10:24:10Z"
  message: 'Some resources are remaining: rolebindings.authorization.openshift.io has 1 resource instances, rolebindings.rbac.authorization.k8s.io has 1 resource instances'
  reason: SomeResourcesRemain
  status: "True"
  type: NamespaceContentRemaining
- lastTransitionTime: "2022-03-20T10:24:10Z"
  message: 'Some content in the namespace has finalizers remaining: cluster.open-cluster-management.io/manifest-work-cleanup in 2 resource instances'
  reason: SomeFinalizersRemain
  status: "True"
  type: NamespaceFinalizersRemaining
phase: Terminating
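To watch what is still blocking the termination, the namespace conditions and the remaining ManifestWork resources can be inspected directly. A sketch, assuming the spoke namespace is named "spoke1" (a placeholder):

```shell
# Show the namespace conditions that report remaining resources/finalizers.
oc get namespace spoke1 -o jsonpath='{.status.conditions}'

# The manifest-work-cleanup finalizer lives on ManifestWork resources;
# list them to see which ones are still pending cleanup.
oc get manifestwork -n spoke1 -o name
```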
There is another PR that reduces the detach time.
@yfirst could you please verify if the latest PR solved your problem? Thanks!
@yuhe Looks like it's fixed as of ACM snapshot 2.5.0-DOWNSTREAM-2022-04-08-18-34-34. Thank you!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:4956
The needinfo request(s) on this closed bug have been removed, as they have been unresolved for 365 days.