Description of problem:
The "openshift-install destroy cluster" command returns the following error:

$ openshift-install destroy cluster
FATAL Failed while preparing to destroy cluster: open metadata.json: no such file or directory

When run, the openshift-install command places a number of files and directories in the current working directory. The "openshift-install destroy cluster" command requires the metadata.json file created by the "openshift-install create cluster" command. This error should suggest that the user take note of the working directory where the installer was run and run "openshift-install destroy cluster" from the same directory where "openshift-install create cluster" was run.

Version-Release number of the following components:
$ openshift-install version
openshift-install v0.11.0

How reproducible:
Every time.

Steps to Reproduce:
1. openshift-install create cluster
2. cd ~/another_dir
3. openshift-install destroy cluster

Actual results:
FATAL Failed while preparing to destroy cluster: open metadata.json: no such file or directory

Expected results:
An error message that suggests the user take note of the working directory where the installer was run and run the "openshift-install destroy cluster" command from the same directory where "openshift-install create cluster" was run.

Additional info:
Knowledgebase article created.
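The failure mode can be illustrated with a hypothetical pre-flight check. This helper is not part of openshift-install; it is only a sketch showing that "destroy cluster" can locate the cluster only when metadata.json is present in the asset directory:

```shell
#!/bin/sh
# Hypothetical helper -- NOT part of openshift-install. It checks
# whether a directory holds the metadata.json that "destroy cluster"
# would need before you attempt the destroy from that directory.
check_assets() {
    if [ -f "$1/metadata.json" ]; then
        echo "OK: $1 looks like an installer asset directory"
        return 0
    fi
    echo "ERROR: no metadata.json in $1; use the directory where" >&2
    echo "'openshift-install create cluster' was run" >&2
    return 1
}

# Demonstrate both outcomes against a throwaway directory.
dir="$(mktemp -d)"
check_assets "$dir" || echo "destroy would fail here"
touch "$dir/metadata.json"
check_assets "$dir" && echo "destroy can proceed"
```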
> This error can suggest the user should take note of the current working directory where the installer was run and be sure to run the "openshift-install destroy cluster" command from the same directory where you ran the "openshift-install create cluster" command.

This is true of all openshift-install calls that are intended to affect a given cluster; it's not specific to 'destroy cluster' or this error. I'm not sure how we could reliably distinguish "ran in the wrong directory" in general for error messages, but I have no problem with adding an entry to the troubleshooting docs [1] with a section along the lines of "Are you sure you're using the right asset directory for your cluster?". Would that be sufficient to close this issue?

[1]: https://github.com/openshift/installer/blob/master/docs/user/troubleshooting.md
Hello,

I agree that an entry should be added to troubleshooting.md, as this is straightforward.

Interestingly, today I moved to the 0.12.0 version of the installer and the installation failed (there was a resource issue in my AWS account and another error related to profiles that I have not seen before). But no metadata.json file was created at all by the create command. So in this case, I have a bunch of resources created in AWS and a tfstate file, but no metadata.json, which means I can't destroy the AWS items through the installer. Is that a separate BZ from this in your opinion?
> But not metadata.json file was created at all by the create command. That's bug 1673185.
You can also create the metadata.json file yourself by following: https://access.redhat.com/solutions/3826921
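As an illustration only: a hand-written metadata.json might look like the sketch below. Every value is a placeholder, the field names follow the shape the AWS installer typically writes, and the exact set of fields varies by installer version, so confirm against the linked KB article before using this approach:

```shell
#!/bin/sh
# Illustrative only: write a placeholder metadata.json into a scratch
# directory. All values are fake; do not use them against a real cluster.
ASSET_DIR="$(mktemp -d)"
cat > "$ASSET_DIR/metadata.json" <<'EOF'
{
  "clusterName": "mycluster",
  "clusterID": "00000000-0000-0000-0000-000000000000",
  "aws": {
    "region": "us-east-1",
    "identifier": [
      {"kubernetes.io/cluster/mycluster": "owned"},
      {"openshiftClusterID": "00000000-0000-0000-0000-000000000000"}
    ]
  }
}
EOF
echo "wrote $ASSET_DIR/metadata.json"
```

With a correctly populated file in place, "openshift-install destroy cluster --dir" pointed at that directory should be able to find the cluster's resources again.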
So with bug 1673185 fixed in v0.13.0, what's left here? Just the "don't forget about the asset directory" docs floated in comment 1 for this issue.
Sorry, I wrote the previous comment in two stages, and forgot to read it over completely before posting :p. I'm convinced that the docs are the only outstanding issue here, so no needinfo. But feel free to weigh in if you think I'm missing something :).
We just need to make sure that the documentation makes it clear that files are created in $(pwd) or --dir and that those files should be preserved for future operations including destroying the cluster. Moving to docs. I'm not sure whether I'd call this a 4.1 blocker or not.
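The "$(pwd) or --dir" point can be sketched as a workflow: keep all installer assets in one dedicated directory and always pass it via --dir, so the current working directory no longer matters. The path below is a placeholder, and the cluster commands themselves are shown as comments since they require real cloud credentials:

```shell
#!/bin/sh
# Keep installer state in one dedicated asset directory.
ASSET_DIR="$(mktemp -d)"   # placeholder; in practice something like ~/ocp-assets

# Create the cluster, writing metadata.json and other state into $ASSET_DIR:
#   openshift-install create cluster --dir "$ASSET_DIR"

# Later, from ANY working directory, destroy using the same assets:
#   cd /tmp
#   openshift-install destroy cluster --dir "$ASSET_DIR"

echo "asset directory: $ASSET_DIR"
```

Preserving this directory (and never deleting it while the cluster exists) is what makes the later destroy possible.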
The docs already say to specify the installation directory that contains your files in the destroy command: https://docs.openshift.com/container-platform/4.0/installing/installing_aws/uninstalling-cluster-aws.html

The installation instructions already tell you to use --dir= to specify a directory (https://docs.openshift.com/container-platform/4.0/installing/installing_aws/installing-aws-default.html#launching-installer-installing-aws-default), and I'm adding a note to remind users not to delete the installer or the folder of data when the bug about regenerating the metadata.json file merges (https://bugzilla.redhat.com/show_bug.cgi?id=1683019).

@Vikram, is this a dupe or NOTABUG?
The way the docs currently read, they don't deal with this situation. We should document what the installer does when it uninstalls: it reads the metadata.json file and uses it to remove the cluster. Without that file (pointing to the right cluster), the delete process will fail, which is the source of this bug.
OK! I'm adding a note here: https://github.com/openshift/openshift-docs/pull/14554 I'll make sure that this intention is preserved whenever https://bugzilla.redhat.com/show_bug.cgi?id=1683019 is resolved. Johnray, will you PTAL?
Looks good to me. The new version of the doc clearly states the facts:
1. The user needs to preserve the files that the installation program creates.
2. The installer uses metadata.json to destroy the cluster.

If there are no more suggestions, I will move this bug to status 'verified'.
Thank you Sheng Lao! Please move the bug to verified whenever you're ready.
I've merged the PR and am waiting for it to go live.
This change is live: https://docs.openshift.com/container-platform/4.1/installing/installing_aws/installing-aws-default.html#installation-launching-installer_installing-aws-default
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days