Bug 1673185

Summary: with OpenShift install 0.12.0, metadata.json is not generated
Product: OpenShift Container Platform
Component: Installer
Installer sub component: openshift-installer
Version: 4.1.0
Target Milestone: ---
Target Release: 4.1.0
Hardware: Unspecified
OS: Unspecified
Severity: unspecified
Priority: unspecified
Status: CLOSED ERRATA
Reporter: jooho lee <jlee>
Assignee: W. Trevor King <wking>
QA Contact: Johnny Liu <jialiu>
CC: wking
Doc Type: No Doc Update
Type: Bug
Regression: ---
Story Points: ---
Last Closed: 2019-06-04 10:42:43 UTC
Bug Blocks: 1664187

Description jooho lee 2019-02-06 21:22:50 UTC
Description of problem:

With openshift-install 0.12.0, metadata.json is not generated, so I have to create it by hand in order to destroy the cluster.
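
For reference, here is a sketch of the minimal file I hand-write to unblock "destroy cluster". The field names are assumptions based on the 0.12-era installer's metadata format, the exact tag keys under "identifier" vary by installer version, and every value below is a placeholder for the real cluster name, region, and UUID:

# Hand-write a minimal aws/metadata.json so "destroy cluster" has the tags
# it filters AWS resources on. Field names assumed from the 0.12-era
# installer; tag keys vary by version and all values are placeholders.
cat > aws/metadata.json <<'EOF'
{
  "clusterName": "mycluster",
  "aws": {
    "region": "us-east-1",
    "identifier": [
      {"openshiftClusterID": "00000000-0000-0000-0000-000000000000"},
      {"kubernetes.io/cluster/mycluster": "owned"}
    ]
  }
}
EOF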


Version-Release number of the following components:

How reproducible:

Steps to Reproduce:
1. openshift-install-0.12.0 create cluster --log-level=debug --dir=aws
2. openshift-install-0.12.0 destroy cluster --log-level=debug --dir=aws

Actual results:
FATAL Failed while preparing to destroy cluster: open aws/metadata.json: no such file or directory 


Expected results:
The metadata.json file should exist, and "destroy cluster" should succeed.


Additional info:

Comment 1 jooho lee 2019-02-06 22:44:41 UTC
I tested it again and found something more: if the cluster comes up properly, metadata.json is generated, but if the deployment fails with an error such as hitting the VPC limit, no metadata.json file is written.
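
A quick check after such a failed create shows this (a sketch, assuming the same --dir=aws layout as in the steps above):

# After a create that failed early, e.g. on VpcLimitExceeded:
ls aws/metadata.json
# With 0.12.0 this reports "No such file or directory", so the
# subsequent "destroy cluster" has nothing to read.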

Comment 2 W. Trevor King 2019-02-06 23:34:56 UTC
Fixed by installer#1199, which has landed and will go out whenever we cut the next release.

Comment 4 Johnny Liu 2019-02-19 02:49:04 UTC
Verified this bug with the v4.0.0-0.176.0.0-dirty installer, and it passes.

[root@preserve-jialiu-ansible 20190219]# ./openshift-install create cluster --dir ./demo1
? Platform aws
? Region us-east-1
? Base Domain qe.devcluster.openshift.com
? Cluster Name qe-jialiu
? Pull Secret [? for help] ******************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
WARNING Found override for OS Image. Please be warned, this is not advised 
WARNING Found override for ReleaseImage. Please be warned, this is not advised 
INFO Creating cluster...                          
ERROR                                              
ERROR Error: Error applying plan:                  
ERROR                                              
ERROR 1 error occurred:                            
ERROR 	* module.vpc.aws_vpc.new_vpc: 1 error occurred: 
ERROR 	* aws_vpc.new_vpc: Error creating VPC: VpcLimitExceeded: The maximum number of VPCs has been reached. 
ERROR 	status code: 400, request id: da8068e7-c0b9-4d41-bcbb-ac649300d535 
ERROR                                              
ERROR                                              
ERROR                                              
ERROR                                              
ERROR                                              
ERROR Terraform does not automatically rollback in the face of errors. 
ERROR Instead, your Terraform state file has been partially updated with 
ERROR any resources that successfully completed. Please address the error 
ERROR above and apply again to incrementally change your infrastructure. 
ERROR                                              
ERROR                                              
FATAL failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply using Terraform 
[root@preserve-jialiu-ansible 20190219]# ./openshift-install destroy cluster --dir ./demo1
INFO Deleted                                       arn="arn:aws:s3:::terraform-20190219024612056100000001"
INFO Deleted                                       arn="arn:aws:ec2:us-east-1:301721915996:dhcp-options/dopt-0fc7a85bbeb114229" id=dopt-0fc7a85bbeb114229
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-role policy=qe-jialiu-bootstrap-policy
INFO Disassociated                                 IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-bootstrap-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-profile role=qe-jialiu-bootstrap-role
INFO Deleted                                       IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-bootstrap-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-profile
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-role
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-role policy=qe-jialiu_master_policy
INFO Disassociated                                 IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-master-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-profile role=qe-jialiu-master-role
INFO Deleted                                       IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-master-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-profile
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-role
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-role policy=qe-jialiu_worker_policy
INFO Disassociated                                 IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-worker-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-profile role=qe-jialiu-worker-role
INFO Deleted                                       IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-worker-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-profile
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-role
[root@preserve-jialiu-ansible 20190219]# ./openshift-install version
./openshift-install v4.0.0-0.176.0.0-dirty

Comment 5 W. Trevor King 2019-02-27 05:28:55 UTC
And 0.13.0 is out with the fix [1].

[1]: https://github.com/openshift/installer/releases/tag/v0.13.0

Comment 8 errata-xmlrpc 2019-06-04 10:42:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758