Version: 4.6
Platform: OSP

What happened?

1) The installer correctly pulls the RHCOS image and uploads it to the correct Glance location.
2) The installer correctly creates the ignition file and uploads it to the correct Glance location.
3) When the bootstrap instance attempts to configure itself, it fails to find the ignition file via Glance because the URL is incorrect. The install then fails after timing out.

This is a result of 'regional' awareness not being included when determining the Glance URL for the ignition file.

- We have observed a scenario with two regions that share a common API, so when the service catalog is checked for image endpoints, multiple endpoints are presented.
- The [getGlancePublicURL](https://github.com/openshift/installer/blob/06525c4ac264a612bf2c806a87413e2b03bc7a00/pkg/tfvars/openstack/openstack.go#L174-L197) function appears to use the first public endpoint it finds. This allows only one region (e.g. us-east-1) to succeed for the customer, because it is listed first. The secondary region (e.g. us-west-1) fails: the ignition file has been uploaded to the us-west-1 Glance API, but the generated URL is based on the first endpoint (us-east-1). A sketch of region-aware endpoint selection follows this description.

What did you expect to happen?

* Provide an option to check the ignition file in the respective region.
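For reference, here is a minimal sketch of what region-aware endpoint selection could look like with gophercloud, the library the installer uses for OpenStack calls. The region name "us-west-1" and the standalone main function are illustrative assumptions, not the installer's actual code:

```go
// Region-aware lookup of the public Glance endpoint (illustrative sketch).
package main

import (
	"fmt"
	"log"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

func main() {
	// Read credentials from the usual OS_* environment variables.
	opts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		log.Fatal(err)
	}

	// Passing Region in EndpointOpts makes gophercloud filter the
	// service catalog by region instead of taking the first public
	// image endpoint it encounters.
	glance, err := openstack.NewImageServiceV2(provider, gophercloud.EndpointOpts{
		Region:       "us-west-1", // hypothetical; would come from the install config
		Availability: gophercloud.AvailabilityPublic,
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Glance public URL:", glance.Endpoint)
}
```

With a catalog like the one in the verification comment below, this approach would resolve the endpoint registered for the requested region (e.g. https://1.1.1.1:13292 for regionTwo) rather than defaulting to whichever entry the catalog lists first.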
We have never worked with multiple regions, so we can't help with this, sorry.
Verified on 4.8.0-0.nightly-2021-04-15-074503 on RHOS-16.1-RHEL-8-20210311.n.1.

Given multiple image endpoints on OSP:

$ openstack catalog show glance
+-----------+--------------------------------------+
| Field     | Value                                |
+-----------+--------------------------------------+
| endpoints | regionTwo                            |
|           |   public: https://1.1.1.1:13292      |
|           | regionOne                            |
|           |   public: https://10.0.0.101:13292   |
|           | regionOne                            |
|           |   admin: http://172.17.1.135:9292    |
|           | regionOne                            |
|           |   internal: http://172.17.1.135:9292 |
|           |                                      |
| id        | 06a4ecfd422a462da97e147eff1be9c7     |
| name      | glance                               |
| type      | image                                |
+-----------+--------------------------------------+

IPI OCP installation works fine:

DEBUG Time elapsed per stage:
DEBUG     Infrastructure: 2m31s
DEBUG Bootstrap Complete: 16m38s
DEBUG                API: 2m56s
DEBUG  Bootstrap Destroy: 49s
DEBUG  Cluster Operators: 27m3s
INFO Time elapsed: 47m53s

and the cluster is up and running:

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-04-15-074503   True        False         44s     Cluster version is 4.8.0-0.nightly-2021-04-15-074503

$ oc get clusteroperators
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      2m54s
baremetal                                  4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
cloud-credential                           4.8.0-0.nightly-2021-04-15-074503   True        False         False      42m
cluster-autoscaler                         4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
config-operator                            4.8.0-0.nightly-2021-04-15-074503   True        False         False      34m
console                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      10m
csi-snapshot-controller                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
dns                                        4.8.0-0.nightly-2021-04-15-074503   True        False         False      27m
etcd                                       4.8.0-0.nightly-2021-04-15-074503   True        False         False      34m
image-registry                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      14m
ingress                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      29m
insights                                   4.8.0-0.nightly-2021-04-15-074503   True        False         False      27m
kube-apiserver                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      31m
kube-controller-manager                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      31m
kube-scheduler                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      31m
kube-storage-version-migrator              4.8.0-0.nightly-2021-04-15-074503   True        False         False      34m
machine-api                                4.8.0-0.nightly-2021-04-15-074503   True        False         False      28m
machine-approver                           4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
machine-config                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      32m
marketplace                                4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
monitoring                                 4.8.0-0.nightly-2021-04-15-074503   True        False         False      12m
network                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      36m
node-tuning                                4.8.0-0.nightly-2021-04-15-074503   True        False         False      32m
openshift-apiserver                        4.8.0-0.nightly-2021-04-15-074503   True        False         False      25m
openshift-controller-manager               4.8.0-0.nightly-2021-04-15-074503   True        False         False      26m
openshift-samples                          4.8.0-0.nightly-2021-04-15-074503   True        False         False      24m
operator-lifecycle-manager                 4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
operator-lifecycle-manager-catalog         4.8.0-0.nightly-2021-04-15-074503   True        False         False      32m
operator-lifecycle-manager-packageserver   4.8.0-0.nightly-2021-04-15-074503   True        False         False      26m
service-ca                                 4.8.0-0.nightly-2021-04-15-074503   True        False         False      35m
storage                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      28m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438