Bug 1906172
Summary: | CI cleaner for GCP regularly fails on GCP networks | | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | aaleman |
Component: | Installer | Assignee: | aos-install |
Installer sub component: | openshift-installer | QA Contact: | Gaoyun Pei <gpei> |
Status: | CLOSED DUPLICATE | Docs Contact: | |
Severity: | unspecified | | |
Priority: | unspecified | CC: | jstuever, mstaeble |
Version: | 4.6 | | |
Target Milestone: | --- | | |
Target Release: | --- | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2020-12-11 17:46:07 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description aaleman 2020-12-09 20:01:04 UTC
@aaleman Can you estimate how often "very regularly" is? I am struggling to find other examples of this beyond the sample linked. For the sample, this was from a cluster installation that was aborted. This could be what is causing the difficulty in the clean up. Unfortunately, with an aborted install, the install logs are not captured, so it is hard to confirm where the install was at the time it was aborted.

About daily, I guess? You can go through the job history to find out: https://prow.ci.openshift.org/job-history/gs/origin-ci-test/logs/periodic-ipi-deprovision It is annoying to pin down because the job runs very frequently and one failure might make many jobs fail.

I found a few occurrences of this from Dec 4 [1]. One of those clusters was created recently enough that I could find the logs [2]. The install failed because it could not create one of the IAM members. We may have a situation where a failed installation leaves some resources in a state where the destroyer cannot find them, which in this case is blocking the deletion of another resource.

[1] https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ipi-deprovision/1334754122909356032
[2] https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/13961/rehearse-13961-pull-ci-openshift-cluster-authentication-operator-master-e2e-agnostic-upgrade/1334514961300328448/build-log.txt

The install error from the previous comment is the following.

level=error msg=Error: Request "Create IAM Members roles/storage.admin serviceAccount:ci-op-8j7scnqg-28b57-ztn4z-w.gserviceaccount.com for \"project \\\"openshift-gce-devel-ci\\\"\"" returned error: Error applying IAM policy for project "openshift-gce-devel-ci": Error setting IAM policy for project "openshift-gce-devel-ci": googleapi: Error 400: Service account ci-op-st8y14-openshift-g-pcl7k.gserviceaccount.com does not exist., badRequest
level=error
level=error msg= on ../tmp/openshift-install-220613468/iam/main.tf line 11, in resource "google_project_iam_member" "worker-storage-admin":
level=error msg= 11: resource "google_project_iam_member" "worker-storage-admin" {
level=error
level=error
level=fatal msg=failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply Terraform: failed to complete the change

The destroy error is the following.

time="2020-12-04T07:05:30Z" level=debug msg="failed to delete network ci-op-6t11kq1y-5f4c2-hjrkp-network with error: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE: The network resource 'projects/openshift-gce-devel-ci/global/networks/ci-op-6t11kq1y-5f4c2-hjrkp-network' is already being used by 'projects/openshift-gce-devel-ci/global/firewalls/k8s-fw-a976e64f33c9b4a1299d1e565af87c0f'"

For context: this has been discussed in prior bugs, and we have a JIRA card as well.

https://bugzilla.redhat.com/show_bug.cgi?id=1801968
https://bugzilla.redhat.com/show_bug.cgi?id=1875511
https://bugzilla.redhat.com/show_bug.cgi?id=1788708
https://issues.redhat.com/browse/CORS-1573

*** This bug has been marked as a duplicate of bug 1875511 ***
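For illustration only, here is a minimal sketch of the kind of pre-delete sweep a cleaner could run for the destroy error above: delete any firewall rules still attached to the network (such as the cloud-provider-created k8s-fw-* rule, which carries no cluster-ID prefix) before deleting the network. This is not the installer's actual destroy code; it assumes the google.golang.org/api/compute/v1 Go client and Application Default Credentials, and deleteNetworkWithDependents is a hypothetical helper with the project and network names copied from the failing job for the example.

```go
// Hypothetical sketch, not openshift-install's destroyer: remove firewall
// rules that still reference a network, then delete the network itself.
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	compute "google.golang.org/api/compute/v1"
)

func deleteNetworkWithDependents(ctx context.Context, project, network string) error {
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		return fmt.Errorf("create compute client: %w", err)
	}

	// Find firewall rules still attached to the network. Rules created by the
	// in-cluster cloud provider (e.g. "k8s-fw-...") do not follow the
	// installer's naming scheme, so a cleaner matching only installer-named
	// resources will miss them and the network delete then fails with
	// RESOURCE_IN_USE_BY_ANOTHER_RESOURCE.
	err = svc.Firewalls.List(project).Pages(ctx, func(page *compute.FirewallList) error {
		for _, fw := range page.Items {
			if !strings.HasSuffix(fw.Network, "/networks/"+network) {
				continue
			}
			log.Printf("deleting firewall %s blocking network %s", fw.Name, network)
			// Delete returns an asynchronous Operation; a real cleaner would
			// wait for it to finish (or retry) before deleting the network.
			if _, err := svc.Firewalls.Delete(project, fw.Name).Do(); err != nil {
				return fmt.Errorf("delete firewall %s: %w", fw.Name, err)
			}
		}
		return nil
	})
	if err != nil {
		return err
	}

	// With the dependent firewalls gone, the network delete should succeed.
	if _, err := svc.Networks.Delete(project, network).Do(); err != nil {
		return fmt.Errorf("delete network %s: %w", network, err)
	}
	return nil
}

func main() {
	ctx := context.Background()
	// Names taken from the failing job above, purely as an example.
	if err := deleteNetworkWithDependents(ctx, "openshift-gce-devel-ci", "ci-op-6t11kq1y-5f4c2-hjrkp-network"); err != nil {
		log.Fatal(err)
	}
}
```

Because the firewall and network deletes are asynchronous operations, an actual cleaner would also poll the returned Operations and retry on transient errors rather than issuing the calls back to back.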