Bug 1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands
Summary: InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Jeremiah Stuever
QA Contact: Yang Yang
URL:
Whiteboard:
Depends On:
Blocks: 1777803 1806822
 
Reported: 2019-11-27 08:15 UTC by Gaoyun Pei
Modified: 2021-02-24 15:11 UTC
CC: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1777803 1806822
Environment:
Last Closed: 2021-02-24 15:10:48 UTC
Target Upstream Version:
Embargoed:




Links
GitHub openshift/installer pull 4223 (closed): "Bug 1777224: pkg/asset: metadata to depend on ignition" (last updated 2021-02-09 08:32:31 UTC)
Red Hat Product Errata RHSA-2020:5633 (last updated 2021-02-24 15:11:52 UTC)

Description Gaoyun Pei 2019-11-27 08:15:49 UTC
Description of problem:
After running `create manifests` and `create ignition-configs` twice against the same directory, the "InfraID" in metadata.json and the one in the .openshift_install_state.json file end up inconsistent.
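
To illustrate the root cause: the installer builds targets from a dependency graph of assets, and an asset is discarded and regenerated when any of its dependencies is dirty (that is what the "Discarding ... because its dependencies are dirty" warnings in step 3 refer to). The fix later merged via the PR linked above, "pkg/asset: metadata to depend on ignition", adds a dependency edge so that metadata.json is regenerated together with the ignition configs. The following is a minimal, hypothetical Go sketch of that behavior, not the actual pkg/asset code:

package main

import "fmt"

// Asset models one generated artifact (hypothetical; the real installer
// uses an Asset interface with Dependencies() and Generate()).
type Asset struct {
	Name  string
	Deps  []*Asset
	Dirty bool
}

// regenerate rebuilds an asset when it, or any asset it depends on,
// is dirty, and marks it dirty so its own consumers rebuild too.
func regenerate(a *Asset) bool {
	rebuilt := a.Dirty
	for _, d := range a.Deps {
		if regenerate(d) {
			rebuilt = true
		}
	}
	if rebuilt {
		fmt.Println("regenerating", a.Name)
		a.Dirty = true
	}
	return rebuilt
}

func main() {
	installConfig := &Asset{Name: "install-config", Dirty: true} // re-supplied by the user
	ignition := &Asset{Name: "ignition-configs", Deps: []*Asset{installConfig}}
	// Before the fix there was no metadata -> ignition edge, so the old
	// InfraID survived in metadata.json; the fix adds this dependency:
	metadata := &Asset{Name: "metadata.json", Deps: []*Asset{ignition}}
	regenerate(metadata)
}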


1. Generate the manifests and ignition-configs using a sample install-config.yaml

$ cp install-config.yaml gpei-6/

$ ls gpei-6/
install-config.yaml

$ ./openshift-install create manifests --dir gpei-6
INFO Consuming Install Config from target directory

$ ./openshift-install create ignition-configs --dir gpei-6
INFO Consuming Worker Machines from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Openshift Manifests from target directory


2. Check the infraID under the dir gpei-6/; it is "gpei-0-qxmhz" in both metadata.json and .openshift_install_state.json

$ cat metadata.json
{"clusterName":"gpei-0","clusterID":"ce50c780-6d6a-4318-8fdc-72b2a6ec3e32","infraID":"gpei-0-qxmhz","aws":{"region":"us-east-2","identifier":[{"kubernetes.io/cluster/gpei-0-qxmhz":"owned"},{"openshiftClusterID":"ce50c780-6d6a-4318-8fdc-72b2a6ec3e32"}]}}

$ grep gpei-0-qxmhz .openshift_install*
.openshift_install_state.json:        "InfraID": "gpei-0-qxmhz"
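
As a side note, this consistency check can be scripted. Below is a small standalone Go sketch (not installer code) that compares the infraID in metadata.json with every InfraID recorded in .openshift_install_state.json; it scans all top-level state entries instead of assuming a particular asset key name:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "." // assumed: the installer asset directory, e.g. gpei-6
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}

	// metadata.json stores the infra ID under the lowercase "infraID" key.
	var meta struct {
		InfraID string `json:"infraID"`
	}
	mustDecode(filepath.Join(dir, "metadata.json"), &meta)

	// The state file is a map of asset names to arbitrary JSON values.
	var state map[string]json.RawMessage
	mustDecode(filepath.Join(dir, ".openshift_install_state.json"), &state)

	for name, raw := range state {
		var entry struct {
			InfraID string `json:"InfraID"`
		}
		// Skip state entries that are not objects or carry no InfraID.
		if json.Unmarshal(raw, &entry) != nil || entry.InfraID == "" {
			continue
		}
		fmt.Printf("%s: state InfraID=%q metadata infraID=%q match=%v\n",
			name, entry.InfraID, meta.InfraID, entry.InfraID == meta.InfraID)
	}
}

func mustDecode(path string, v interface{}) {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	if err := json.Unmarshal(b, v); err != nil {
		panic(err)
	}
}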


3. Repeat the create manifests and ignition-configs steps

$ cp install-config.yaml gpei-6/

$ ./openshift-install create manifests --dir gpei-6
WARNING   Discarding the Master Ignition Config that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
INFO Consuming Install Config from target directory
INFO Consuming Master Ignition Config from target directory
WARNING   Discarding the Worker Ignition Config that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
INFO Consuming Worker Ignition Config from target directory

$ ./openshift-install create ignition-configs --dir gpei-6
WARNING Discarding the Bootstrap Ignition Config that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
INFO Consuming Common Manifests from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Openshift Manifests from target directory


4. Check the infraID under the dir gpei-6/ again: the InfraID in .openshift_install_state.json changed to "gpei-0-vknpr", but the infraID in metadata.json did not change.

$ cat metadata.json
{"clusterName":"gpei-0","clusterID":"ce50c780-6d6a-4318-8fdc-72b2a6ec3e32","infraID":"gpei-0-qxmhz","aws":{"region":"us-east-2","identifier":[{"kubernetes.io/cluster/gpei-0-qxmhz":"owned"},{"openshiftClusterID":"ce50c780-6d6a-4318-8fdc-72b2a6ec3e32"}]}}

$ grep -i infraID .openshift_install_state.json
        "InfraID": "gpei-0-vknpr"


This won't break the installation, but it causes trouble when destroying the cluster, since the InfraID in metadata.json is not the one actually used by the cluster.
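
To make that destroy-time failure concrete: `destroy cluster` reads the InfraID back out of metadata.json and, on AWS, matches resources by the kubernetes.io/cluster/<infraID> tag visible in the "identifier" list above. A minimal illustration of the lookup key it would derive (illustrative only, not the destroyer's actual code; it models just the metadata.json fields shown in this report):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the metadata.json fields visible in this report are modeled.
type metadata struct {
	ClusterName string `json:"clusterName"`
	InfraID     string `json:"infraID"`
}

func main() {
	b, err := os.ReadFile("metadata.json")
	if err != nil {
		panic(err)
	}
	var md metadata
	if err := json.Unmarshal(b, &md); err != nil {
		panic(err)
	}
	// With the stale metadata.json above this prints the old InfraID
	// (gpei-0-qxmhz), so a tag-based destroy would look for resources
	// the cluster never created under that name.
	fmt.Printf("destroy would filter on tag: kubernetes.io/cluster/%s=owned\n", md.InfraID)
}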


Version-Release number of the following components:
4.3.0-0.nightly-2019-11-26-034010

How reproducible:
Reproducible with the steps above.

Steps to Reproduce:
1. Run `create manifests` and `create ignition-configs` against a directory containing an install-config.yaml.
2. Copy install-config.yaml back into the directory and repeat both `create` commands.
3. Compare the InfraID in metadata.json with the one in .openshift_install_state.json.

Actual results:
The InfraID in .openshift_install_state.json changes (here to "gpei-0-vknpr") while metadata.json keeps the original "gpei-0-qxmhz".

Expected results:
The InfraID is consistent between metadata.json and .openshift_install_state.json.

Comment 9 Scott Dodson 2020-09-09 20:34:03 UTC
Not a 4.6.0 blocker.

Comment 13 Yang Yang 2020-10-26 09:42:07 UTC
Verified with openshift-install 4.7.0-0.nightly-2020-10-24-155529

Steps for verification:
1. Create install-config file
# openshift-install create install-config --dir=bz

2. Backup install-config.yaml
# cp bz/install-config.yaml bz/install-config.yaml.bak

3. Create manifests and ignition configs
# openshift-install create manifests --dir bz
INFO Credentials loaded from file "/root/.gcp/osServiceAccount.json" 
INFO Consuming Install Config from target directory 
INFO Manifests created in: bz/manifests and bz/openshift 

# openshift-install create ignition-configs --dir bz
INFO Consuming Master Machines from target directory 
INFO Consuming Openshift Manifests from target directory 
INFO Consuming Common Manifests from target directory 
INFO Consuming Worker Machines from target directory 
INFO Consuming OpenShift Install (Manifests) from target directory 
INFO Ignition-Configs created in: bz and bz/auth  

4. Check infra id
# grep infraID bz/metadata.json 
{"clusterName":"yang","clusterID":"d0b6c5db-1467-49f3-a07c-ba6054c536c6","infraID":"yang-bbn2g","gcp":{"region":"us-central1","projectID":"openshift-qe"}}

# grep InfraID bz/.openshift_install*
bz/.openshift_install_state.json:        "InfraID": "yang-bbn2g"

5. Restore install-config.yaml
# cp bz/install-config.yaml.bak bz/install-config.yaml

6. Create manifests and ignition configs once again
# openshift-install create manifests --dir bz
INFO Credentials loaded from file "/root/.gcp/osServiceAccount.json" 
WARNING   Discarding the Master Ignition Config that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 
INFO Consuming Master Ignition Config from target directory 
INFO Consuming Install Config from target directory 
WARNING   Discarding the Worker Ignition Config that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 
INFO Consuming Worker Ignition Config from target directory 
INFO Manifests created in: bz/manifests and bz/openshift 
# openshift-install create ignition-configs --dir bz
WARNING Discarding the Bootstrap Ignition Config that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 
INFO Consuming Worker Machines from target directory 
INFO Consuming OpenShift Install (Manifests) from target directory 
INFO Consuming Openshift Manifests from target directory 
INFO Consuming Master Machines from target directory 
INFO Consuming Common Manifests from target directory 
INFO Ignition-Configs created in: bz and bz/auth  

7. Check infra id
# grep infraID bz/metadata.json 
{"clusterName":"yang","clusterID":"75aa9e7d-c510-4d02-a54e-1ea33894b3ee","infraID":"yang-6cmr4","gcp":{"region":"us-central1","projectID":"openshift-qe"}}
# grep InfraID bz/.openshift_install*
bz/.openshift_install_state.json:        "InfraID": "yang-6cmr4"

The infra ID used by the cluster is identical to the one in metadata.json, hence moving this to the verified state.

Comment 16 errata-xmlrpc 2021-02-24 15:10:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

