Bug 1934123
Summary: | [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Apoorva Jagtap <apjagtap>
Component: | Installer | Assignee: | Mike Fedosin <mfedosin>
Installer sub component: | OpenShift on OpenStack | QA Contact: | rlobillo
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | medium | |
Priority: | medium | CC: | gcheresh, juriarte, pprinett
Version: | 4.6.z | Keywords: | Triaged
Target Milestone: | --- | |
Target Release: | 4.8.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
Cause: When generating the URL for the bootstrap ignition config, the installer did not take into account the region where the config was stored. Although the config data was placed in the correct region, the installer always took the first public endpoint from the catalog, which could belong to another region.
Consequence: The bootstrap machine could not fetch the config, because the generated URL was incorrect.
Fix: Take the user's region into account when generating the URL and pick the matching public endpoint (see the sketch after the table below).
Result: The installer always generates correct bootstrap ignition config URLs.
|
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2021-07-27 22:49:00 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1939014 | |
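The fix described in the Doc Text boils down to filtering the service catalog by both interface and region instead of taking the first public endpoint. Below is a minimal Go sketch of that selection logic, using a simplified catalog structure; the installer itself works with gophercloud catalog types, so the `Endpoint` struct and `publicEndpointForRegion` helper here are illustrative assumptions, not the actual installer code.

```go
package main

import "fmt"

// Endpoint is a simplified view of one service catalog entry.
// (Hypothetical type for illustration; the real installer uses
// gophercloud's catalog structures.)
type Endpoint struct {
	Region    string
	Interface string // "public", "internal", or "admin"
	URL       string
}

// publicEndpointForRegion returns the public endpoint matching the
// given region, instead of blindly taking the first public entry.
func publicEndpointForRegion(endpoints []Endpoint, region string) (string, error) {
	for _, ep := range endpoints {
		if ep.Interface == "public" && ep.Region == region {
			return ep.URL, nil
		}
	}
	return "", fmt.Errorf("no public image endpoint found in region %q", region)
}

func main() {
	// Catalog entries mirroring the verification environment below.
	catalog := []Endpoint{
		{Region: "regionTwo", Interface: "public", URL: "https://1.1.1.1:13292"},
		{Region: "regionOne", Interface: "public", URL: "https://10.0.0.101:13292"},
		{Region: "regionOne", Interface: "admin", URL: "http://172.17.1.135:9292"},
		{Region: "regionOne", Interface: "internal", URL: "http://172.17.1.135:9292"},
	}

	url, err := publicEndpointForRegion(catalog, "regionOne")
	if err != nil {
		panic(err)
	}
	fmt.Println(url) // prints https://10.0.0.101:13292
}
```

Run against the regionOne/regionTwo catalog from the verification below, this selects https://10.0.0.101:13292 rather than the first public entry, https://1.1.1.1:13292, which is the behavior the bug describes.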
Description
Apoorva Jagtap
2021-03-02 14:49:49 UTC
We never worked with multiple regions, so we can't help with it, sorry.

Verified on 4.8.0-0.nightly-2021-04-15-074503 on RHOS-16.1-RHEL-8-20210311.n.1.

Given multiple image endpoints on OSP:

```
$ openstack catalog show glance
+-----------+--------------------------------------+
| Field     | Value                                |
+-----------+--------------------------------------+
| endpoints | regionTwo                            |
|           |   public: https://1.1.1.1:13292      |
|           | regionOne                            |
|           |   public: https://10.0.0.101:13292   |
|           | regionOne                            |
|           |   admin: http://172.17.1.135:9292    |
|           | regionOne                            |
|           |   internal: http://172.17.1.135:9292 |
|           |                                      |
| id        | 06a4ecfd422a462da97e147eff1be9c7     |
| name      | glance                               |
| type      | image                                |
+-----------+--------------------------------------+
```

the IPI OCP installation works fine:

```
DEBUG Time elapsed per stage:
DEBUG     Infrastructure: 2m31s
DEBUG Bootstrap Complete: 16m38s
DEBUG                API: 2m56s
DEBUG  Bootstrap Destroy: 49s
DEBUG  Cluster Operators: 27m3s
INFO Time elapsed: 47m53s
```

and the cluster is up and running:

```
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-04-15-074503   True        False         44s     Cluster version is 4.8.0-0.nightly-2021-04-15-074503

$ oc get clusteroperators
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      2m54s
baremetal                                  4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
cloud-credential                           4.8.0-0.nightly-2021-04-15-074503   True        False         False      42m
cluster-autoscaler                         4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
config-operator                            4.8.0-0.nightly-2021-04-15-074503   True        False         False      34m
console                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      10m
csi-snapshot-controller                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
dns                                        4.8.0-0.nightly-2021-04-15-074503   True        False         False      27m
etcd                                       4.8.0-0.nightly-2021-04-15-074503   True        False         False      34m
image-registry                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      14m
ingress                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      29m
insights                                   4.8.0-0.nightly-2021-04-15-074503   True        False         False      27m
kube-apiserver                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      31m
kube-controller-manager                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      31m
kube-scheduler                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      31m
kube-storage-version-migrator              4.8.0-0.nightly-2021-04-15-074503   True        False         False      34m
machine-api                                4.8.0-0.nightly-2021-04-15-074503   True        False         False      28m
machine-approver                           4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
machine-config                             4.8.0-0.nightly-2021-04-15-074503   True        False         False      32m
marketplace                                4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
monitoring                                 4.8.0-0.nightly-2021-04-15-074503   True        False         False      12m
network                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      36m
node-tuning                                4.8.0-0.nightly-2021-04-15-074503   True        False         False      32m
openshift-apiserver                        4.8.0-0.nightly-2021-04-15-074503   True        False         False      25m
openshift-controller-manager               4.8.0-0.nightly-2021-04-15-074503   True        False         False      26m
openshift-samples                          4.8.0-0.nightly-2021-04-15-074503   True        False         False      24m
operator-lifecycle-manager                 4.8.0-0.nightly-2021-04-15-074503   True        False         False      33m
operator-lifecycle-manager-catalog         4.8.0-0.nightly-2021-04-15-074503   True        False         False      32m
operator-lifecycle-manager-packageserver   4.8.0-0.nightly-2021-04-15-074503   True        False         False      26m
service-ca                                 4.8.0-0.nightly-2021-04-15-074503   True        False         False      35m
storage                                    4.8.0-0.nightly-2021-04-15-074503   True        False         False      28m
```
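As a quick cross-check that the generated URL matches the intended region, the Keystone catalog can also be filtered directly: `openstack endpoint list` accepts `--service`, `--interface`, and `--region` filters. Output is omitted here, since the exact columns vary by client version.

```
$ openstack endpoint list --service image --interface public --region regionOne
```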
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438