Bug 2055601 - installer did not destroy *.apps DNS record in an IPI on ASH install
Summary: installer did not destroy *.apps DNS record in an IPI on ASH install
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.11.0
Assignee: Aditya Narayanaswamy
QA Contact: Mike Gahagan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-17 10:50 UTC by Johnny Liu
Modified: 2022-08-10 10:50 UTC
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The *.apps DNS record did not carry the "kubernetes.io_cluster.<infraID>" tag that the installer's destroy code uses to identify all resources created for a given cluster and delete them. Consequence: The *.apps record was not cleaned up by destroy. Fix: The cluster ingress operator now adds the tag when it creates the record. Result: The record is visible to the destroy code, which deletes it.
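The destroy-by-tag mechanism described above can be sketched as follows. This is a minimal illustration, not the installer's actual code: the `dnsRecord` type, field names, and sample record names are hypothetical, assuming only that destroy keeps a record when the per-cluster ownership tag is absent.

```go
package main

import "fmt"

// dnsRecord is a hypothetical stand-in for an Azure DNS record set
// and the tags attached to it.
type dnsRecord struct {
	name string
	tags map[string]string
}

// ownedByCluster reports whether a record carries the per-cluster
// ownership tag ("kubernetes.io_cluster.<infraID>") that the
// installer's destroy code filters on.
func ownedByCluster(r dnsRecord, infraID string) bool {
	_, ok := r.tags["kubernetes.io_cluster."+infraID]
	return ok
}

func main() {
	infraID := "jialiub4-kfrm8" // illustrative infra ID
	records := []dnsRecord{
		{name: "api.jialiub4", tags: map[string]string{"kubernetes.io_cluster." + infraID: "owned"}},
		{name: "*.apps.jialiub4", tags: map[string]string{}}, // missing tag: skipped, which was the bug
	}
	for _, r := range records {
		if ownedByCluster(r, infraID) {
			fmt.Println("would delete", r.name)
		} else {
			fmt.Println("skipping", r.name)
		}
	}
}
```

With the fix, the *.apps record is created with the tag already in place, so it falls into the "would delete" branch like api and api-int.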
Clone Of:
Environment:
Last Closed: 2022-08-10 10:50:22 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-ingress-operator pull 737 0 None open Bug 2055601: Add cluster tag to *.apps domain record 2022-04-28 16:44:14 UTC
Red Hat Product Errata RHSA-2022:5069 0 None None None 2022-08-10 10:50:35 UTC

Description Johnny Liu 2022-02-17 10:50:00 UTC
Trigger an IPI install on AzureStack wwt.
Destroy the cluster.
Check the public zone; the *.apps record is still there.

Version:
OpenShift Installer 4.10.0-0.nightly-2022-02-11-123954

Platform:
AzureStack

Please specify:
* IPI + customer vnet

What happened?
Installer destroy log:
02-15 14:14:23.531  level=debug msg=OpenShift Installer 4.10.0-0.nightly-2022-02-11-123954
02-15 14:14:23.531  level=debug msg=Built from commit 1ddc64b523042f450f21cc45f1150d29cb01ecc1
02-15 14:14:23.806  level=info msg=Credentials loaded from file "/home/installer1/workspace/ocp-common/Flexy-destroy/flexy/workdir/azurecreds20220215-393107-1ixbji8"
02-15 14:14:23.806  level=debug msg=deleting public records
02-15 14:14:24.776  level=debug msg=<nil>
02-15 14:14:25.733  level=info msg=deleted record=api.jialiub4
02-15 14:14:25.733  level=info msg=deleted record=api-int.jialiub4
02-15 14:14:26.337  level=debug msg=deleting resource group
02-15 14:25:04.423  level=info msg=deleted resource group=jialiub4-kfrm8-rg
02-15 14:25:04.423  level=debug msg=deleting application registrations
02-15 14:25:04.423  level=debug msg=Purging asset "Metadata" from disk
02-15 14:25:04.423  level=debug msg=Purging asset "Master Ignition Customization Check" from disk
02-15 14:25:04.423  level=debug msg=Purging asset "Worker Ignition Customization Check" from disk
02-15 14:25:04.423  level=debug msg=Purging asset "Terraform Variables" from disk
02-15 14:25:04.423  level=debug msg=Purging asset "Kubeconfig Admin Client" from disk
02-15 14:25:04.423  level=debug msg=Purging asset "Kubeadmin Password" from disk
02-15 14:25:04.423  level=debug msg=Purging asset "Certificate (journal-gatewayd)" from disk
02-15 14:25:04.423  level=debug msg=Purging asset "Cluster" from disk


After the installer destroy step completes, the *.apps record is still listed there.

Comment 1 Patrick Dillon 2022-03-22 17:40:01 UTC
The logs show a <nil> output, which may be a red herring for this particular bug (but should be fixed). Deletion of DNS records is handled by tags, so check that the records created by the ingress controller are correctly tagged.

Comment 2 Mike Gahagan 2022-03-24 13:30:36 UTC
This is occurring on 4.11 as well.

Comment 3 Mike Gahagan 2022-03-24 13:33:02 UTC
I can confirm there is no kubernetes.io_cluster.$cluster_name tag on the apps wildcard record, whereas the api and api-int records appear to have it.

Comment 8 Mike Gahagan 2022-06-23 21:11:08 UTC
confirmed this is fixed in 4.11.0-0.nightly-2022-06-23-092832

Comment 9 errata-xmlrpc 2022-08-10 10:50:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

