Bug 1746459
| Summary: | destroy-cluster sometimes fails to delete AWS snapshot | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Matthew Staebler <mstaeble> |
| Component: | Installer | Assignee: | Matthew Staebler <mstaeble> |
| Installer sub component: | openshift-installer | QA Contact: | Mike Fiedler <mifiedle> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | unspecified | CC: | dgoodwin, jialiu, mifiedle, nmalik, sdodson |
| Version: | 4.1.0 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-16 06:38:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Hitting this as well today; the snapshot in question does not appear to exist in the AWS UI, and it is unclear how the filtering can keep finding it indefinitely. Same problem here: there are no snapshots in us-east-1, yet destroy finds something and has been looping for 2+ hours at this point trying to delete it:
time="2019-08-28T18:18:17Z" level=debug msg="search for and delete matching resources by tag in us-east-1 matching aws.Filter{\"kubernetes.io/cluster/nmalik2-sm54x\":\"owned\"}"
time="2019-08-28T18:18:17Z" level=debug msg="InvalidSnapshot.NotFound: \n\tstatus code: 400, request id: 552fc3fe-1369-40f2-9cc1-2e2de111cc7d" arn="arn:aws:ec2:us-east-1:278307472902:snapshot/snap-020a90089e802252c"
Verified on 4.2.0-0.nightly-2019-09-10-074025. Tested three scenarios:

- Installed, deleted the master AMI, deleted the master AMI snapshot, then destroyed.
- Installed, deleted the master AMI, then destroyed while the AMI snapshot was still being deleted.
- Installed, then destroyed without touching the AMI or snapshot.

All cluster destroys were successful.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922
When destroying a cluster, the installer sometimes gets stuck trying to delete a snapshot. I have seen this happen on two separate occasions. In both cases, the installer had completed the terraform apply during create-cluster. This is a snippet of what gets repeated in the installer logs:

```
time="2019-08-28T13:48:36Z" level=debug msg="search for IAM roles" installID=mfs5wxtd
time="2019-08-28T13:48:36Z" level=debug msg="search for IAM users" installID=mfs5wxtd
time="2019-08-28T13:48:46Z" level=debug msg="search for and delete matching resources by tag in us-east-1 matching aws.Filter{\"kubernetes.io/cluster/ci-cluster-v4-1-vdgsk\":\"owned\"}" installID=mfs5wxtd
time="2019-08-28T13:48:46Z" level=debug msg="InvalidSnapshot.NotFound: \n\tstatus code: 400, request id: 9c2e706d-faec-4272-8f56-e683c882397a" arn="arn:aws:ec2:us-east-1:462175581547:snapshot/snap-0e711a4f51bdce4be" installID=mfs5wxtd
```

Version-Release number of selected component (if applicable): quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b59c158d03551e7f3f34fb3ce751576d892a1dabebb4510ae40666b203683a1
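The repeated InvalidSnapshot.NotFound line shows the delete step failing on the same ARN forever. One plausible way to break that loop, sketched below under the assumption of the aws-sdk-go v1 EC2 client, is to treat InvalidSnapshot.NotFound as a successful deletion, since a snapshot that is already gone needs no further work; `deleteSnapshot` here is a hypothetical helper, not the installer's actual implementation.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteSnapshot deletes an EBS snapshot by ID. A snapshot that is
// already gone ("InvalidSnapshot.NotFound") is treated as deleted, so
// a stale ARN from the tag search cannot keep the destroy loop alive.
func deleteSnapshot(client *ec2.EC2, snapshotID string) error {
	_, err := client.DeleteSnapshot(&ec2.DeleteSnapshotInput{
		SnapshotId: aws.String(snapshotID),
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "InvalidSnapshot.NotFound" {
		return nil // already deleted; nothing left to retry
	}
	return err
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	// Snapshot ID taken from the ARN in the log snippet above.
	if err := deleteSnapshot(ec2.New(sess), "snap-0e711a4f51bdce4be"); err != nil {
		fmt.Println("delete failed:", err)
	}
}
```

The verification scenarios above, which delete the AMI snapshot out of band before destroying, exercise exactly this already-gone path.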