Bug 1787604 - Image Registry Needs Support for AWS Bahrain Region me-south-1
Summary: Image Registry Needs Support for AWS Bahrain Region me-south-1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.4.0
Assignee: Oleg Bulatov
QA Contact: Mike Fiedler
URL:
Whiteboard:
Duplicates: 1798879 (view as bug list)
Depends On:
Blocks: 1796584 1796651
 
Reported: 2020-01-03 15:32 UTC by Patrick Dillon
Modified: 2023-09-07 21:22 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: the registry used aws-sdk-go v1.21.10.
Consequence: it did not know about me-south-1.
Fix: bump aws-sdk-go to v1.28.2.
Result: me-south-1 can be used in configuration.
Clone Of:
Clones: 1796584 (view as bug list)
Environment:
Last Closed: 2020-05-04 11:22:02 UTC
Target Upstream Version:
Embargoed:
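
Per the Doc Text above, the fix is a bump of the vendored AWS SDK. A quick way to check whether a given aws-sdk-go checkout knows a region is to grep its embedded endpoints table; this is only a sketch, and the vendor path is an assumption that may differ per repository:

```
# Sketch: count occurrences of the region in the SDK's endpoint table.
# The vendor path is an assumption; adjust to the repository's layout.
grep -c 'me-south-1' vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
# Per the Doc Text: 0 on v1.21.10 (region unknown), non-zero on v1.28.2.
```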




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-image-registry-operator pull 437 0 None closed Bug 1787604: aws-sdk-go v1.28.2 2020-07-28 08:41:09 UTC
Github openshift image-registry pull 216 0 None closed Bug 1787604: aws-sdk-go v1.28.2 2020-07-28 08:41:08 UTC
Red Hat Product Errata RHBA-2020:0581 0 None None None 2020-05-04 11:22:40 UTC

Description Patrick Dillon 2020-01-03 15:32:59 UTC
Description of problem: We are adding support for the AWS Bahrain region me-south-1 in pull request https://github.com/openshift/installer/pull/2826. The image registry panics with an unknown-region error ("panic: Invalid region provided: me-south-1").



Version-Release number of selected component (if applicable): 4.4


How reproducible: Always


Steps to Reproduce:
1. In a cloned installer repo whose remote is called "upstream", fetch the PR: git fetch upstream pull/2826/head:pr-2826
2. git checkout pr-2826
3. ./hack/build.sh
4. Set the release image to an OCP dev-preview build, e.g.: export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=`curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/release.txt | sed -n 's/Pull From: //p'`
5. Create a cluster as usual, making sure to select me-south-1 region and using try.openshift.com pull secret
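
A minimal sketch of the step-5 invocation, assuming hack/build.sh placed the binary at bin/openshift-install (the prompt answers are illustrative):

```
$ ./bin/openshift-install create cluster --dir ./assets
# When prompted, choose platform "aws" and region "me-south-1", and paste
# the pull secret from try.openshift.com.
```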

Actual results:

$ oc logs image-registry-79c5b59b8-jqqct -n openshift-image-registry
time="2020-01-03T14:59:40.84698399Z" level=info msg="start registry" distribution_version=v2.6.0+unknown go.version=go1.12.12 openshift_version=v4.4.0-201912200058+2c3bfc0-dirty
time="2020-01-03T14:59:40.847679595Z" level=info msg="caching project quota objects with TTL 1m0s" go.version=go1.12.12
panic: Invalid region provided: me-south-1

goroutine 1 [running]:
github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers.NewApp(0x1e30260, 0xc000048090, 0xc000291c00, 0xc00009d9e0)
	/go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers/app.go:127 +0x31ac
github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware.NewApp(0x1e30260, 0xc000048090, 0xc000291c00, 0x1e37de0, 0xc0002d9950, 0x1e42c00)
	/go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware/app.go:96 +0x85
github.com/openshift/image-registry/pkg/dockerregistry/server.NewApp(0x1e30260, 0xc000048090, 0x1e03aa0, 0xc00010c2e0, 0xc000291c00, 0xc000206280, 0x0, 0x0, 0x0, 0xc00004e000)
	/go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/app.go:138 +0x2d4
github.com/openshift/image-registry/pkg/cmd/dockerregistry.NewServer(0x1e30260, 0xc000048090, 0xc000291c00, 0xc000206280, 0x0, 0x0, 0x1e6fd00)
	/go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:210 +0x1c2
github.com/openshift/image-registry/pkg/cmd/dockerregistry.Execute(0x1deda00, 0xc00010c020)
	/go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:164 +0xa42
main.main()
	/go/src/github.com/openshift/image-registry/cmd/dockerregistry/main.go:93 +0x49c

Expected results: The cluster installs successfully.


Additional info:

Comment 3 Adam Kaplan 2020-01-14 19:30:25 UTC
I suspect that the AWS SDK the registry uses doesn't know about the new region.

As a workaround, can you set the Bahrain region endpoint in the operator config [1]?

```
$ oc patch config.imageregistry.operator.openshift.io cluster -p '{"spec":{"storage":{"s3":{"regionEndpoint":"https://s3.me-south-1.amazonaws.com"}}}}' --type=merge 
```

[1] https://docs.openshift.com/container-platform/4.2/registry/configuring-registry-storage/configuring-registry-storage-aws-user-infrastructure.html#registry-configuring-storage-aws-user-infra_configuring-registry-storage-aws-user-infrastructure
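
If the workaround above is applied, a read-back confirms the override took effect before rechecking the registry pod; a minimal sketch, assuming the default config object name `cluster`:

```
$ oc get config.imageregistry.operator.openshift.io cluster \
    -o jsonpath='{.spec.storage.s3.regionEndpoint}'
https://s3.me-south-1.amazonaws.com
$ oc get pods -n openshift-image-registry
```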

Comment 5 Mike Fiedler 2020-01-27 14:23:31 UTC
Verified on 4.4.0-0.nightly-2020-01-20-230953

root@ip-172-31-64-58: ~ # oc get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-0-129-153.me-south-1.compute.internal   Ready    worker   29m   v1.17.0
ip-10-0-143-81.me-south-1.compute.internal    Ready    master   39m   v1.17.0
ip-10-0-156-129.me-south-1.compute.internal   Ready    master   39m   v1.17.0
ip-10-0-156-241.me-south-1.compute.internal   Ready    worker   29m   v1.17.0
ip-10-0-161-192.me-south-1.compute.internal   Ready    worker   30m   v1.17.0
ip-10-0-167-45.me-south-1.compute.internal    Ready    master   39m   v1.17.0
root@ip-172-31-64-58: ~ # oc get pods -n openshift-image-registry
NAME                                              READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-6598f5c8b-xkhlw   2/2     Running   0          34m
image-registry-64f447bf68-slgv8                   1/1     Running   0          34m
node-ca-2r5nq                                     1/1     Running   0          34m
node-ca-4wdq8                                     1/1     Running   0          30m
node-ca-8lwrp                                     1/1     Running   0          34m
node-ca-fxn2s                                     1/1     Running   0          30m
node-ca-gbf88                                     1/1     Running   0          29m
node-ca-jwzsk                                     1/1     Running   0          34m

NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-01-20-230953   True        False         21m     Cluster version is 4.4.0-0.nightly-2020-01-20-230953

Comment 6 Adam Kaplan 2020-02-06 18:06:07 UTC
*** Bug 1798879 has been marked as a duplicate of this bug. ***

Comment 9 errata-xmlrpc 2020-05-04 11:22:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

Comment 10 wewang 2022-10-12 02:50:08 UTC
The installer already covers it.

