Bug 1977298 - Unable to Destroy Failed Install on vSphere ACM 2.2.5
Summary: Unable to Destroy Failed Install on vSphere ACM 2.2.5
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Advanced Cluster Management for Kubernetes
Classification: Red Hat
Component: Cluster Lifecycle
Version: rhacm-2.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: rhacm-2.2.6
Assignee: cahl
QA Contact: Derek Ho
Docs Contact: Christopher Dawson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-29 12:26 UTC by Todd Wardzinski
Modified: 2024-10-01 18:49 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-10 18:33:12 UTC
Target Upstream Version:
Embargoed:
ming: rhacm-2.2.z+
ming: needinfo+


Attachments
Failed Job (627.08 KB, image/png), 2021-06-29 12:26 UTC, Todd Wardzinski
Perpetual Failed State on Lifecycle Management (279.33 KB, image/png), 2021-06-29 12:27 UTC, Todd Wardzinski


Links
Github open-cluster-management backlog issue 13784, last updated 2021-06-29 16:23:22 UTC
Red Hat Product Errata RHBA-2021:3126, last updated 2021-08-10 18:33:22 UTC

Description Todd Wardzinski 2021-06-29 12:26:26 UTC
Created attachment 1795764 [details]
Failed Job

Description of the problem:
A customer attempted to install an OCP 4.7 cluster on vSphere 7.0.2, and the install failed. When the failed install is removed via the Destroy Cluster action in the ACM console, the cluster sits in a perpetual 'Destroying' status instead of being cleaned up.

Release version:
release-2.2

Operator snapshot version:
rhacm-2.2.5

OCP version:
Client Version: 4.7.13
Server Version: 4.7.13

Browser Info:
Chromium & CLI

Steps to reproduce:
1. Log into ACM, go to Cluster Lifecycle.
2. Attempt to create a cluster. In the customer's environment the install fails on vSphere.
3. Attempt a 'Destroy Cluster' from the Cluster Lifecycle.

Actual results:
On the Cluster Lifecycle page, the destroyed cluster sits in a perpetual "Destroying" status.

Expected results:
The cluster should be removed from the Cluster Lifecycle page fairly quickly.

Current workaround:
As kubeadmin, remove the namespace left behind by the failed install.
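
A rough sketch of that cleanup from the hub CLI, assuming the failed cluster's resources live in a namespace named after the cluster (the ACM default); the name below is a placeholder:

# on the ACM hub, logged in as kubeadmin
oc delete namespace <failed-cluster-name>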

Additional info:
More detailed screenshots available via internal email.

The issue seems to stem from a failed uninstall job. Because nothing was actually provisioned on the vSphere side during the failed install, the job errors out with "Object References is Empty".
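
For triage, the failed uninstall job and its error can be inspected from the hub CLI; a hedged example, where the namespace and job names are placeholders that depend on the cluster:

# list the jobs created for the destroy attempt
oc get jobs -n <failed-cluster-name>
# dump the logs of the failing uninstall job
oc logs job/<uninstall-job-name> -n <failed-cluster-name>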

Comment 1 Todd Wardzinski 2021-06-29 12:27:27 UTC
Created attachment 1795765 [details]
Perpetual Failed State on Lifecycle Management

Comment 2 vbirsan 2021-06-29 13:13:33 UTC
This seems to be a Cluster Lifecycle issue.

Comment 3 cahl 2021-06-29 17:05:55 UTC
This is a known issue in 2.2. Please see https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/red-hat-advanced-cluster-management-for-kubernetes-release-notes#cluster-management-issues, section 1.3.3.12, "Process to destroy a cluster does not complete", for information on properly cleaning up the Kubernetes resources.
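
For reference only (the linked release note is the authoritative procedure), one common shape of that manual cleanup is to check whether the Hive ClusterDeployment is stuck on a finalizer and, once it is confirmed that nothing was provisioned on vSphere, clear it so the namespace can be removed; names below are placeholders:

# inspect the ClusterDeployment left over from the failed install
oc get clusterdeployment <failed-cluster-name> -n <failed-cluster-name> -o yaml
# only if nothing was provisioned and the release note's steps call for it
oc patch clusterdeployment <failed-cluster-name> -n <failed-cluster-name> --type merge -p '{"metadata":{"finalizers":[]}}'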

Comment 12 errata-xmlrpc 2021-08-10 18:33:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Advanced Cluster Management 2.2.6 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3126

