Bug 2065552
| Summary: | [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Yunfei Jiang <yunjiang> |
| Component: | Image Registry | Assignee: | Oleg Bulatov <obulatov> |
| Status: | CLOSED ERRATA | QA Contact: | Keenon Lee <jitli> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.10 | CC: | aos-bugs, obulatov, otrifirg, xiuwang, yingzhan |
| Target Milestone: | --- | | |
| Target Release: | 4.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Cause: the image registry and its operator use an old AWS SDK. Consequence: they do not know about the ap-southeast-3 region. Fix: bump the AWS SDK. Result: the registry can be configured to use ap-southeast-3. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-10 10:54:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2110963 | | |
Description
Yunfei Jiang
2022-03-18 07:59:17 UTC
Hi, I have got past the image registry panic error by specifying the serviceEndpoints for S3 as well. This is probably because openshift-installer (via its vendored aws-sdk) has no visibility of the ap-southeast-3 region, since that region was not yet GA during the development of openshift-install 4.10. I have put the details of my install-config.yaml in [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2065510

The workaround of adding serviceEndpoints for the ap-southeast-3 region in install-config.yaml works for image-registry. If `regionEndpoint: https://s3.ap-southeast-3.amazonaws.com` is not added to the image registry configuration, the panic described in comment #0 occurs.

Works well in ap-southeast-3:

    redhat@jitli:~/work/src/test/2074050/test$ oc get node -l node-role.kubernetes.io/worker -o=jsonpath='{.items[*].metadata.labels.topology\.kubernetes\.io\/zone}'
    ap-southeast-3a ap-southeast-3b ap-southeast-3c
    redhat@jitli:~/work/src/test/2074050/test$ oc get pods -n openshift-image-registry
    NAME                                              READY   STATUS    RESTARTS   AGE
    cluster-image-registry-operator-975868bd5-mzftg   1/1     Running   0          34m
    image-registry-5d7bc4499c-6srvt                   1/1     Running   0          21m
    image-registry-5d7bc4499c-r9l62                   1/1     Running   0          21m
    node-ca-5tbsp                                     1/1     Running   0          21m
    node-ca-69xw9                                     1/1     Running   0          21m
    node-ca-b24tn                                     1/1     Running   0          17m
    node-ca-bnvmt                                     1/1     Running   0          21m
    node-ca-n4v7z                                     1/1     Running   0          17m
    node-ca-sfkzr                                     1/1     Running   0          17m

https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/ginkgo-test-vm/19619/testReport/

lgtm

Done

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069
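For reference, a minimal sketch of the serviceEndpoints workaround discussed in the comments above. This is an illustrative excerpt rather than the actual install-config.yaml from [1]; it assumes the installer's `platform.aws.serviceEndpoints` field, and the values should be adapted to the real cluster:

```yaml
# install-config.yaml (excerpt) -- illustrative sketch of the workaround:
# point the S3 service at the ap-southeast-3 regional endpoint explicitly,
# so components shipping an older vendored AWS SDK do not have to resolve
# the (then unknown) region from their built-in endpoint table.
platform:
  aws:
    region: ap-southeast-3
    serviceEndpoints:
    - name: s3
      url: https://s3.ap-southeast-3.amazonaws.com
```

The equivalent post-install workaround on the registry side, assuming the `spec.storage.s3.regionEndpoint` field of the image registry operator config (configs.imageregistry.operator.openshift.io/cluster), would look roughly like:

```yaml
# configs.imageregistry.operator.openshift.io/cluster (excerpt) -- hedged sketch:
# set the S3 region endpoint directly so the registry does not need to
# look up ap-southeast-3 in the endpoint table of its vendored AWS SDK.
spec:
  storage:
    s3:
      region: ap-southeast-3
      regionEndpoint: https://s3.ap-southeast-3.amazonaws.com
```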