Bug 1875511
| Field | Value |
|---|---|
| Summary | openshift-install destroy cluster fails to delete a network in GCP |
| Product | OpenShift Container Platform |
| Component | Installer |
| Installer sub component | openshift-installer |
| Status | CLOSED DEFERRED |
| Severity | medium |
| Priority | low |
| Version | 4.5 |
| Target Release | 4.7.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Keywords | UpcomingSprint |
| Reporter | Petr Muller <pmuller> |
| Assignee | aos-install |
| QA Contact | To Hung Sze <tsze> |
| CC | aaleman, adahiya, bleanhar, tsze, wking, yanyang |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| Last Closed | 2020-11-02 19:05:53 UTC |
Description

Petr Muller, 2020-09-03 16:13:46 UTC

The health checks are created with random names, and the only way the installer can associate them with a cluster is to look up which LB -> which machines -> which cluster. So once the machines are gone, there is no way for us to re-associate them. Secondly, the de-provision script runs multiple times against the same project with previously deleted / left-around clusters, which makes this problem more apparent. There is no good way to circumvent this unless we involve upstream to tag them appropriately.

Will need a lot more work and planning; moving to 4.7.

*** Bug 1801968 has been marked as a duplicate of this bug. ***

https://bugzilla.redhat.com/show_bug.cgi?id=1801968 was closed as a duplicate of this. https://issues.redhat.com/browse/CORS-1573 should be good enough to also include this fix. Thanks.

We'll track the work for this in Jira.

*** Bug 1906172 has been marked as a duplicate of this bug. ***
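The association chain described above (health check -> LB -> machines -> cluster) can be sketched roughly as follows. This is an illustrative model only, not the installer's actual code; all names, data structures, and labels here are hypothetical, chosen to show why a randomly named health check becomes orphaned once its backing machines are deleted.

```python
def cluster_for_health_check(hc_name, lb_backends, machine_labels):
    """Resolve a health check to its cluster by walking
    health check -> load balancer backends -> machines -> cluster label."""
    machines = lb_backends.get(hc_name)        # which LB backends use this check
    if not machines:
        return None                            # chain broken: no backends remain
    for machine in machines:
        cluster = machine_labels.get(machine)  # the machine carries the cluster tag
        if cluster:
            return cluster
    return None                                # machines gone, cannot re-associate


# While the cluster's machines exist, the chain resolves:
lb_backends = {"hc-a1b2c3": ["mycluster-worker-0"]}
machine_labels = {"mycluster-worker-0": "mycluster-xyz"}
print(cluster_for_health_check("hc-a1b2c3", lb_backends, machine_labels))  # mycluster-xyz

# After the machines are deleted, the health check is orphaned:
print(cluster_for_health_check("hc-a1b2c3", lb_backends, {}))  # None
```

Tagging the health checks themselves at creation time (as the comment suggests, via upstream) would remove the dependency on this fragile lookup chain entirely.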