Bug 1906172

Summary: CI cleaner for GCP regularly fails on GCP networks
Product: OpenShift Container Platform
Reporter: aaleman
Component: Installer
Assignee: aos-install
Installer sub component: openshift-installer
QA Contact: Gaoyun Pei <gpei>
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
CC: jstuever, mstaeble
Version: 4.6
Last Closed: 2020-12-11 17:46:07 UTC
Type: Bug

Comment 1 Matthew Staebler 2020-12-09 22:14:10 UTC
@aaleman Can you estimate how often "very regularly" is? I am struggling to find other examples of this beyond the sample linked.

For the sample, this was from a cluster installation that was aborted, which could be what is causing the difficulty in the cleanup. Unfortunately, with an aborted install, the install logs are not captured, so it is hard to confirm how far the install had progressed when it was aborted.

Comment 2 aaleman 2020-12-10 15:13:53 UTC
About daily, I guess? You can go through the job history to find out: https://prow.ci.openshift.org/job-history/gs/origin-ci-test/logs/periodic-ipi-deprovision
It is hard to pin down exactly, because the job runs very frequently and a single failure can show up across many job runs.

Comment 3 Matthew Staebler 2020-12-10 16:50:33 UTC
I found a few occurrences of this from Dec 4 [1]. One of those clusters was created recently enough that I could find the logs [2]. The install failed because it could not create one of the IAM members. We may have a situation where a failed installation leaves some resources in a state where the destroyer cannot find them, which in this case is blocking the deletion of another resource.

[1] https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ipi-deprovision/1334754122909356032
[2] https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/13961/rehearse-13961-pull-ci-openshift-cluster-authentication-operator-master-e2e-agnostic-upgrade/1334514961300328448/build-log.txt
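To illustrate the suspected mechanism: if the destroyer locates resources by the cluster's infra-ID name prefix (a hypothetical simplification of the installer's actual name/label filtering), then firewalls created by the in-cluster cloud provider with `k8s-fw-*` names would never be matched, yet they still reference the cluster network and keep it "in use". A minimal sketch of that mismatch:

```python
# Hypothetical simplification: a destroyer that only matches resources whose
# names start with the cluster infra ID will miss k8s-fw-* firewalls created
# by the in-cluster cloud provider, even though they reference the network.

def firewalls_blocking_network(firewalls, network, infra_id):
    """Return names of firewalls that reference `network` but would NOT be
    found by an infra-ID-prefix match."""
    return [
        fw["name"]
        for fw in firewalls
        if fw["network"].endswith(network) and not fw["name"].startswith(infra_id)
    ]

# Data mirroring the deprovision failure: the k8s-fw-* rule is the leftover.
firewalls = [
    {"name": "ci-op-6t11kq1y-5f4c2-hjrkp-api",
     "network": "global/networks/ci-op-6t11kq1y-5f4c2-hjrkp-network"},
    {"name": "k8s-fw-a976e64f33c9b4a1299d1e565af87c0f",
     "network": "global/networks/ci-op-6t11kq1y-5f4c2-hjrkp-network"},
]
leftovers = firewalls_blocking_network(
    firewalls, "ci-op-6t11kq1y-5f4c2-hjrkp-network", "ci-op-6t11kq1y-5f4c2-hjrkp")
print(leftovers)
# -> ['k8s-fw-a976e64f33c9b4a1299d1e565af87c0f']
```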

Comment 4 Matthew Staebler 2020-12-10 16:52:17 UTC
The install error from the previous comment is the following.

level=error msg=Error: Request "Create IAM Members roles/storage.admin serviceAccount:ci-op-8j7scnqg-28b57-ztn4z-w.gserviceaccount.com for \"project \\\"openshift-gce-devel-ci\\\"\"" returned error: Error applying IAM policy for project "openshift-gce-devel-ci": Error setting IAM policy for project "openshift-gce-devel-ci": googleapi: Error 400: Service account ci-op-st8y14-openshift-g-pcl7k.gserviceaccount.com does not exist., badRequest
level=error
level=error msg=  on ../tmp/openshift-install-220613468/iam/main.tf line 11, in resource "google_project_iam_member" "worker-storage-admin":
level=error msg=  11: resource "google_project_iam_member" "worker-storage-admin" {
level=error
level=error
level=fatal msg=failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply Terraform: failed to complete the change


The destroy error is the following.

time="2020-12-04T07:05:30Z" level=debug msg="failed to delete network ci-op-6t11kq1y-5f4c2-hjrkp-network with error: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE: The network resource 'projects/openshift-gce-devel-ci/global/networks/ci-op-6t11kq1y-5f4c2-hjrkp-network' is already being used by 'projects/openshift-gce-devel-ci/global/firewalls/k8s-fw-a976e64f33c9b4a1299d1e565af87c0f'"
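One possible mitigation (a sketch, not something the destroyer currently does as far as this bug shows): the RESOURCE_IN_USE_BY_ANOTHER_RESOURCE message itself names the blocking resource, so a cleaner could parse it out and delete that resource before retrying the network deletion:

```python
import re

def blocking_resource(error_message):
    """Extract the resource path quoted after "is already being used by"
    in a GCP RESOURCE_IN_USE_BY_ANOTHER_RESOURCE error, or None."""
    m = re.search(r"is already being used by '([^']+)'", error_message)
    return m.group(1) if m else None

# The error string from the deprovision log above.
err = ("RESOURCE_IN_USE_BY_ANOTHER_RESOURCE: The network resource "
      "'projects/openshift-gce-devel-ci/global/networks/ci-op-6t11kq1y-5f4c2-hjrkp-network' "
      "is already being used by "
      "'projects/openshift-gce-devel-ci/global/firewalls/k8s-fw-a976e64f33c9b4a1299d1e565af87c0f'")
print(blocking_resource(err))
# -> projects/openshift-gce-devel-ci/global/firewalls/k8s-fw-a976e64f33c9b4a1299d1e565af87c0f
```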

Comment 6 Matthew Staebler 2020-12-11 17:46:07 UTC

*** This bug has been marked as a duplicate of bug 1875511 ***