Bug 1673185 - with OpenShift install 0.12.0, metadata.json is not generated
Summary: with OpenShift install 0.12.0, metadata.json is not generated
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.1.0
Assignee: W. Trevor King
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks: 1664187
 
Reported: 2019-02-06 21:22 UTC by jooho lee
Modified: 2019-06-04 10:43 UTC
CC List: 1 user

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:42:43 UTC
Target Upstream Version:


Attachments: None


Links:
  GitHub: openshift/installer pull 1199 (closed) - pkg/asset/targets: Render the metadata asset before cluster (last updated 2020-04-29 16:16:11 UTC)
  Red Hat Product Errata: RHBA-2019:0758 (last updated 2019-06-04 10:43:51 UTC)

Description jooho lee 2019-02-06 21:22:50 UTC
Description of problem:

With openshift-install 0.12.0, metadata.json is not generated, so I have to create it by hand in order to destroy the cluster.
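
The destroy command reads the cluster name, cluster ID, region, and the AWS tags it should sweep from metadata.json. A hand-written file is roughly shaped like the sketch below; every value is a placeholder, and the exact keys (in particular the cluster-ID tag name) vary between installer versions, so a metadata.json from a successful run of the same installer version is the safest template.

cat > aws/metadata.json <<'EOF'
{
  "clusterName": "mycluster",
  "clusterID": "00000000-0000-0000-0000-000000000000",
  "aws": {
    "region": "us-east-1",
    "identifier": [
      { "kubernetes.io/cluster/mycluster": "owned" }
    ]
  }
}
EOF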


Version-Release number of the following components:

How reproducible:

Steps to Reproduce:
1. openshift-install-0.12.0 create cluster --log-level=debug --dir=aws
2. openshift-install-0.12.0 destroy cluster --log-level=debug --dir=aws

Actual results:
FATAL Failed while preparing to destroy cluster: open aws/metadata.json: no such file or directory 


Expected results:
The metadata.json file should be generated so that the destroy command can tear down the cluster.


Additional info:
Please attach logs from ansible-playbook with the -vvv flag

Comment 1 jooho lee 2019-02-06 22:44:41 UTC
I tested it again. 

I found something more.

If the cluster comes up properly, metadata.json is generated, but if the deployment fails partway (for example, because of a VPC limit), no metadata.json file is written.
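
Until a fixed installer is available, a pre-destroy guard along these lines (reusing the --dir=aws layout and binary name from the reproduction steps) makes the failure mode explicit instead of hitting the FATAL "no such file or directory" error:

# Only attempt destroy when the installer actually wrote metadata.json;
# otherwise the failed run's AWS resources have to be cleaned up by hand
# (or with a hand-written metadata.json, as sketched in the description).
if [ -f aws/metadata.json ]; then
    openshift-install-0.12.0 destroy cluster --log-level=debug --dir=aws
else
    echo "aws/metadata.json missing: create cluster failed before metadata was rendered" >&2
fi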

Comment 2 W. Trevor King 2019-02-06 23:34:56 UTC
Fixed by installer#1199, which has landed and will go out whenever we cut the next release.

Comment 4 Johnny Liu 2019-02-19 02:49:04 UTC
Verified this bug with the v4.0.0-0.176.0.0-dirty installer, and it passes.

[root@preserve-jialiu-ansible 20190219]# ./openshift-install create cluster --dir ./demo1
? Platform aws
? Region us-east-1
? Base Domain qe.devcluster.openshift.com
? Cluster Name qe-jialiu
? Pull Secret [? for help] ********
WARNING Found override for OS Image. Please be warned, this is not advised 
WARNING Found override for ReleaseImage. Please be warned, this is not advised 
INFO Creating cluster...                          
ERROR                                              
ERROR Error: Error applying plan:                  
ERROR                                              
ERROR 1 error occurred:                            
ERROR 	* module.vpc.aws_vpc.new_vpc: 1 error occurred: 
ERROR 	* aws_vpc.new_vpc: Error creating VPC: VpcLimitExceeded: The maximum number of VPCs has been reached. 
ERROR 	status code: 400, request id: da8068e7-c0b9-4d41-bcbb-ac649300d535 
ERROR                                              
ERROR Terraform does not automatically rollback in the face of errors. 
ERROR Instead, your Terraform state file has been partially updated with 
ERROR any resources that successfully completed. Please address the error 
ERROR above and apply again to incrementally change your infrastructure. 
ERROR                                              
FATAL failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply using Terraform 
[root@preserve-jialiu-ansible 20190219]# ./openshift-install destroy cluster --dir ./demo1
INFO Deleted                                       arn="arn:aws:s3:::terraform-20190219024612056100000001"
INFO Deleted                                       arn="arn:aws:ec2:us-east-1:301721915996:dhcp-options/dopt-0fc7a85bbeb114229" id=dopt-0fc7a85bbeb114229
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-role policy=qe-jialiu-bootstrap-policy
INFO Disassociated                                 IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-bootstrap-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-profile role=qe-jialiu-bootstrap-role
INFO Deleted                                       IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-bootstrap-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-profile
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-bootstrap-role" id=qe-jialiu-bootstrap-role name=qe-jialiu-bootstrap-role
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-role policy=qe-jialiu_master_policy
INFO Disassociated                                 IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-master-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-profile role=qe-jialiu-master-role
INFO Deleted                                       IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-master-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-profile
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-master-role" id=qe-jialiu-master-role name=qe-jialiu-master-role
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-role policy=qe-jialiu_worker_policy
INFO Disassociated                                 IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-worker-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-profile role=qe-jialiu-worker-role
INFO Deleted                                       IAM instance profile="arn:aws:iam::301721915996:instance-profile/qe-jialiu-worker-profile" arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-profile
INFO Deleted                                       arn="arn:aws:iam::301721915996:role/qe-jialiu-worker-role" id=qe-jialiu-worker-role name=qe-jialiu-worker-role
[root@preserve-jialiu-ansible 20190219]# ./openshift-install version
./openshift-install v4.0.0-0.176.0.0-dirty

Comment 5 W. Trevor King 2019-02-27 05:28:55 UTC
And 0.13.0 is out with the fix [1].

[1]: https://github.com/openshift/installer/releases/tag/v0.13.0

Comment 8 errata-xmlrpc 2019-06-04 10:42:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

