Bug 1787604
| Summary: | Image Registry Needs Support for AWS Bahrain Region me-south-1 |
|---|---|
| Product: | OpenShift Container Platform |
| Component: | Image Registry |
| Status: | CLOSED ERRATA |
| Severity: | medium |
| Priority: | medium |
| Version: | 4.4 |
| Target Milestone: | --- |
| Target Release: | 4.4.0 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Whiteboard: | |
| Fixed In Version: | |
| Reporter: | Patrick Dillon <padillon> |
| Assignee: | Oleg Bulatov <obulatov> |
| QA Contact: | Mike Fiedler <mifiedle> |
| Docs Contact: | |
| CC: | adam.kaplan, aos-bugs, dofinn, jjerezro, mifiedle, nmalik, palshure, sdodson, wewang |
| Doc Type: | Bug Fix |
| Doc Text: | Cause: the registry used aws-sdk-go v1.21.10. Consequence: it didn't know about me-south-1. Fix: bump aws-sdk-go to v1.28.2. Result: me-south-1 can be used in configuration. |
| Story Points: | --- |
| Clone Of: | |
| : | 1796584 (view as bug list) |
| Environment: | |
| Last Closed: | 2020-05-04 11:22:02 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Embargoed: | |
| Bug Depends On: | |
| Bug Blocks: | 1796584, 1796651 |
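Per the Doc Text above, the fix was a dependency bump rather than a registry code change. In Go module terms the change amounts to a one-line requirement update (a sketch; the registry's actual go.mod is not shown in this bug, and only the two versions come from the Doc Text):

```
require github.com/aws/aws-sdk-go v1.28.2 // bumped from v1.21.10
```

With the newer SDK, the built-in endpoint table includes me-south-1, so no explicit `regionEndpoint` override is needed.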
Description
Patrick Dillon
2020-01-03 15:32:59 UTC
Typo in step 1: upsream -> upstream.

I suspect that the AWS SDK the registry uses doesn't know about the new region. As a work-around, can you add the Bahrain endpoint in the operator config [1]?
```
$ oc patch config.imageregistry.operator.openshift.io cluster -p '{"spec":{"storage":{"s3":{"regionEndpoint":"https://s3.me-south-1.amazonaws.com"}}}}' --type=merge
```
[1] https://docs.openshift.com/container-platform/4.2/registry/configuring-registry-storage/configuring-registry-storage-aws-user-infrastructure.html#registry-configuring-storage-aws-user-infra_configuring-registry-storage-aws-user-infrastructure
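The endpoint used in the patch above follows the standard S3 regional URL pattern, so the same workaround should apply to any region the vendored SDK does not yet know about. A minimal sketch (the pattern is assumed from the endpoint in this bug; only me-south-1 is confirmed by the report):

```shell
# Build the S3 regional endpoint URL for a given region, then use it
# as the regionEndpoint value in the oc patch shown above.
region="me-south-1"
endpoint="https://s3.${region}.amazonaws.com"
echo "${endpoint}"
```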
Verified on 4.4.0-0.nightly-2020-01-20-230953:

```
root@ip-172-31-64-58: ~ # oc get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-0-129-153.me-south-1.compute.internal   Ready    worker   29m   v1.17.0
ip-10-0-143-81.me-south-1.compute.internal    Ready    master   39m   v1.17.0
ip-10-0-156-129.me-south-1.compute.internal   Ready    master   39m   v1.17.0
ip-10-0-156-241.me-south-1.compute.internal   Ready    worker   29m   v1.17.0
ip-10-0-161-192.me-south-1.compute.internal   Ready    worker   30m   v1.17.0
ip-10-0-167-45.me-south-1.compute.internal    Ready    master   39m   v1.17.0

root@ip-172-31-64-58: ~ # oc get pods -n openshift-image-registry
NAME                                              READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-6598f5c8b-xkhlw   2/2     Running   0          34m
image-registry-64f447bf68-slgv8                   1/1     Running   0          34m
node-ca-2r5nq                                     1/1     Running   0          34m
node-ca-4wdq8                                     1/1     Running   0          30m
node-ca-8lwrp                                     1/1     Running   0          34m
node-ca-fxn2s                                     1/1     Running   0          30m
node-ca-gbf88                                     1/1     Running   0          29m
node-ca-jwzsk                                     1/1     Running   0          34m

version   4.4.0-0.nightly-2020-01-20-230953   True   False   21m
Cluster version is 4.4.0-0.nightly-2020-01-20-230953
```

*** Bug 1798879 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

Install already covers it.