Description of problem:
openshift-install deprovision fails to stop and remove oVirt VMs.

Version-Release number of the following components:
openshift-install-linux-4.4.0-0.nightly-2020-03-28-140753
oVirt 4.3.9

How reproducible:
Only tried once.

Steps to Reproduce:
1. Provision a cluster (it failed to provision because I ran out of memory).
2. sudo ./openshift-install destroy cluster --dir=gshereme-rhv-ocp-1

Actual results:
INFO searching VMs by tag=gshereme-rhv-ocp-1-sfdd2
INFO Found %!s(int=6) VMs
INFO Stopping VM gshereme-rhv-ocp-1-sfdd2-worker-0-7gvxf : errors: %s%!(EXTRA <nil>)
INFO Stopping VM gshereme-rhv-ocp-1-sfdd2-worker-0-cp6jx : errors: %s%!(EXTRA <nil>)
INFO Stopping VM gshereme-rhv-ocp-1-sfdd2-master-2 : errors: %s%!(EXTRA <nil>)
INFO Stopping VM gshereme-rhv-ocp-1-sfdd2-master-0 : errors: %s%!(EXTRA <nil>)
INFO Stopping VM gshereme-rhv-ocp-1-sfdd2-worker-0-nr7gz : errors: %s%!(EXTRA <nil>)
INFO Stopping VM gshereme-rhv-ocp-1-sfdd2-master-1 : errors: %s%!(EXTRA <nil>)
INFO Removing VM gshereme-rhv-ocp-1-sfdd2-worker-0-7gvxf : errors: %s%!(EXTRA <nil>)
INFO Removing VM gshereme-rhv-ocp-1-sfdd2-worker-0-cp6jx : errors: %s%!(EXTRA <nil>)
INFO Removing VM gshereme-rhv-ocp-1-sfdd2-master-2 : errors: %s%!(EXTRA <nil>)
INFO Removing VM gshereme-rhv-ocp-1-sfdd2-master-0 : errors: %s%!(EXTRA <nil>)
INFO Removing VM gshereme-rhv-ocp-1-sfdd2-worker-0-nr7gz : errors: %s%!(EXTRA <nil>)
INFO Removing VM gshereme-rhv-ocp-1-sfdd2-master-1 : errors: %s%!(EXTRA <nil>)
ERROR Removing VMs - error: %!s(<nil>)
INFO Removing tag gshereme-rhv-ocp-1-sfdd2 : errors: %s%!(EXTRA <nil>)
ERROR Removing Tag - error: %!s(<nil>)
ERROR Removing Template - error: %!s(<nil>)

Expected results:
Successful deprovision.
Correction: it did remove the 3 masters and 3 workers, but it left the bootstrap VM running.
Verified with: openshift-install-linux-4.5.0-0.nightly-2020-05-11-080639

Verification:
[installer@vm-15-107 ~]$ ./openshift-install destroy cluster --dir=resources --log-level=debug
DEBUG OpenShift Installer 4.5.0-0.nightly-2020-05-11-080639
DEBUG Built from commit 94f6539c438c876cf43f87c576692e7213d62a91
DEBUG Searching VMs by tag=primary-wr59h
DEBUG Found 6 VMs
INFO Stopping VM primary-wr59h-master-1
INFO Stopping VM primary-wr59h-master-0
INFO Stopping VM primary-wr59h-worker-0-plxbj
INFO Stopping VM primary-wr59h-master-2
INFO Stopping VM primary-wr59h-worker-0-8sxqh
INFO Stopping VM primary-wr59h-worker-0-9tt7c
INFO VM primary-wr59h-master-0 powered off
INFO Removing VM primary-wr59h-master-0
INFO VM primary-wr59h-master-1 powered off
INFO Removing VM primary-wr59h-master-1
INFO VM primary-wr59h-worker-0-plxbj powered off
INFO VM primary-wr59h-master-2 powered off
INFO Removing VM primary-wr59h-worker-0-plxbj
INFO Removing VM primary-wr59h-master-2
INFO VM primary-wr59h-worker-0-8sxqh powered off
INFO Removing VM primary-wr59h-worker-0-8sxqh
INFO VM primary-wr59h-worker-0-9tt7c powered off
INFO Removing VM primary-wr59h-worker-0-9tt7c
INFO Removing tag primary-wr59h
DEBUG Purging asset "Metadata" from disk
DEBUG Purging asset "Terraform Variables" from disk
DEBUG Purging asset "Kubeconfig Admin Client" from disk
DEBUG Purging asset "Kubeadmin Password" from disk
DEBUG Purging asset "Certificate (journal-gatewayd)" from disk
DEBUG Purging asset "Cluster" from disk
INFO Time elapsed: 28s
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409