Description of the problem: After uninstalling RHACM as per the documentation, there are many leftovers in the cluster.

Release version: 2.4
OCP version: 4.7
Browser Info:

Steps to reproduce:
1. Install RHACM as per the documentation.
2. Remove RHACM with the command line as per https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/install/installing#uninstalling (using either the console or the command-line instructions).

Expected results: All RHACM-related components have been removed.

Actual results: Many leftovers are found:
- CRDs, e.g.:
  channels.apps.open-cluster-management.io 2022-01-14T08:00:19Z
  clustermanagers.operator.open-cluster-management.io 2022-01-14T08:00:27Z
  deployables.apps.open-cluster-management.io 2022-01-14T08:00:30Z
  gitopsclusters.apps.open-cluster-management.io 2022-01-14T08:00:25Z
  helmreleases.apps.open-cluster-management.io 2022-01-14T08:00:24Z
  multiclusterhubs.operator.open-cluster-management.io 2022-01-14T08:00:23Z
  multiclusterobservabilities.observability.open-cluster-management.io 2022-01-14T08:00:27Z
  observabilityaddons.observability.open-cluster-management.io 2022-01-14T08:00:22Z
  placementrules.apps.open-cluster-management.io 2022-01-14T08:00:22Z
  submarinerconfigs.submarineraddon.open-cluster-management.io 2022-01-14T08:00:27Z
  subscriptions.apps.open-cluster-management.io 2022-01-14T08:00:21Z
- ClusterRoles
- ClusterRoleBindings

Additional info: Also used the shell script in the Troubleshooting section; it is still not sufficient.
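A quick way to confirm the leftovers reported above is to query for them directly. This is an illustrative sketch; the grep patterns are assumptions based on the resource names listed in this report and may need adjusting:

```sh
# List CRDs left behind after the RHACM uninstall
oc get crd -o name | grep open-cluster-management

# List ClusterRoles and ClusterRoleBindings that still reference ACM
# (pattern is an assumption; adjust to your environment)
oc get clusterroles,clusterrolebindings -o name | grep open-cluster-management
```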
The Operator Lifecycle Manager that deploys ACM intentionally preserves CRDs after uninstall to prevent data loss. See https://github.com/operator-framework/operator-lifecycle-manager/issues/1326. You can delete the CRDs afterward, but we do not attempt to clean them up automatically, because doing so would risk deleting existing instances of those resources.
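If you decide the preserved CRDs are safe to remove, a minimal sketch of the manual cleanup looks like this. Note that deleting a CRD cascades to every remaining custom resource of that type, and the grep pattern below is an assumption based on the CRD names in this report:

```sh
# WARNING: deleting a CRD also deletes all remaining instances of that
# resource type. Verify nothing needs to survive before running this.
oc get crd -o name | grep 'open-cluster-management.io' | xargs oc delete
```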
There are many more leftovers, e.g. ClusterRoles and ClusterRoleBindings. More harmful are the gatekeeper ConstraintTemplates and Constraints, which gatekeeper continues to process after the RHACM uninstall. A cleanup sketch follows below.
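Assuming the leftover gatekeeper resources were created only by RHACM policies, a hedged sketch for stopping their enforcement is to remove the ConstraintTemplates; gatekeeper is expected to clean up the constraint kinds generated from a template once that template is deleted:

```sh
# Inspect what is still being enforced before deleting anything
oc get constrainttemplates
oc get constraints

# Remove the leftover templates; gatekeeper should then remove the
# constraint kinds (and their instances) generated from those templates
oc delete constrainttemplates --all
```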
Moving to Modified; talked to @cwall about new leaks appearing after testing with the old leaks cleared up.
G2Bsync 1123171357 comment nelsonjean Wed, 11 May 2022 04:26:36 UTC G2Bsync @kurwang , are we planning to continue fixing this in 2.5 or is there a workaround that can be documented?
G2Bsync 1124237509 comment cameronmwall Wed, 11 May 2022 20:04:29 UTC G2Bsync @nelsonjean We have made all the fixes we intend to make for 2.5. The majority of leaks were cleaned up, and any leftovers should be possible to work around using the [documented cleanup script](https://github.com/stolostron/rhacm-docs/blob/2.4_stage/install/uninstall.adoc). Persistent leaks, and new ones that have sprung up over the development cycle, will be handled in 2.6 as part of [this epic](https://app.zenhub.com/workspaces/engineering-backlog-do-not-delete-604fab62d4b98d00150a2854/issues/stolostron/backlog/22481). I believe this issue can be closed.
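For leaks that remain after running the documented script, the same pattern can be extended manually. This is an illustrative sketch, not the documented script itself, and the grep pattern is an assumption:

```sh
# Delete ACM-related ClusterRoleBindings first, then the ClusterRoles
# (review the output of the grep before deleting anything)
oc get clusterrolebindings -o name | grep open-cluster-management | xargs -r oc delete
oc get clusterroles -o name | grep open-cluster-management | xargs -r oc delete
```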
Discussed and will be closed; QE has verified the bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:4956