Create an IPI cluster on AWS with the default configuration. Only one master IP is present in cluster.tfvars.json, so only that master's log is collected when gathering the bootstrap logs.
$ ls log-bundle-20211122143620/control-plane/
$ cat cluster.tfvars.json | jq -r
In the code, only the first item of the list is emitted as the output. Normally, all master IPs should be exposed as Terraform outputs so that they can be consumed by "openshift-install gather bootstrap" later.
What did you expect to happen?
All master IPs are stored in cluster.tfvars.json, and the related logs are collected when launching "openshift-install gather bootstrap".
How to reproduce it (as minimally and precisely as possible)?
Always reproducible on 4.9/4.10.
https://github.com/openshift/installer/blob/7fd358462f14d43f41d64a5d591c85adc2c122f4/data/data/aws/cluster/master/outputs.tf#L2 is only grabbing the first IP address in the list of IP addresses for all the masters, rather than the first IP address for each master.
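As a hedged illustration of that mistake (this is a sketch, not the actual installer source; the output name and resource address are assumptions), indexing a splat expression with `[0]` selects only the first element of the whole list, whereas the fix is to return the full splat list:

```hcl
# Buggy pattern (hypothetical): the [0] index applies to the entire
# splat list, so only the first master's private IP is output.
#
#   output "ip_addresses" {
#     value = [aws_instance.master.*.private_ip[0]]
#   }

# Fixed pattern: output the private IP of every master instance.
output "ip_addresses" {
  value = aws_instance.master[*].private_ip
}
```

With three masters, the first form yields a one-element list, while the second yields all three IPs, which is what "openshift-install gather bootstrap" needs.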
Verified on 4.10.0-0.nightly-2021-10-20-193037: logs on all control-plane nodes are collected, so moving the bug to VERIFIED.
$ ls -ltr log-bundle-20211213020240/control-plane/
drwxrwxr-x. 7 jima jima 4096 Dec 13 2021 10.0.141.58
drwxrwxr-x. 7 jima jima 4096 Dec 13 2021 10.0.176.42
drwxrwxr-x. 7 jima jima 4096 Dec 13 2021 10.0.200.85
The issue also exists on 4.9 and needs a backport; how should we handle this? @mstaeble
Correcting comment #4: the nightly build used to verify the bug was 4.10.0-0.nightly-2021-12-12-184227, not 4.10.0-0.nightly-2021-10-20-193037.
(In reply to jima from comment #4)
> Issue also exist on 4.9, need to backport, how to handle this?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.