Description of problem:
We are adding support for the AWS Bahrain region me-south-1 in pull request https://github.com/openshift/installer/pull/2826. The image-registry panics with an unknown region.

Version-Release number of selected component (if applicable):
4.4

How reproducible:
Always

Steps to Reproduce:
1. In a cloned installer repo where the remote is called "upstream", fetch the PR:
   git fetch upstream pull/2826/head:pr-2826
2. git checkout pr-2826
3. ./hack/build.sh
4. Set the release image to OCP, e.g.:
   export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=`curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/latest/release.txt | sed -n 's/Pull From: //p'`
5. Create a cluster as usual, making sure to select the me-south-1 region and use the try.openshift.com pull secret.

Actual results:
$ oc logs image-registry-79c5b59b8-jqqct -n openshift-image-registry
time="2020-01-03T14:59:40.84698399Z" level=info msg="start registry" distribution_version=v2.6.0+unknown go.version=go1.12.12 openshift_version=v4.4.0-201912200058+2c3bfc0-dirty
time="2020-01-03T14:59:40.847679595Z" level=info msg="caching project quota objects with TTL 1m0s" go.version=go1.12.12
panic: Invalid region provided: me-south-1

goroutine 1 [running]:
github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers.NewApp(0x1e30260, 0xc000048090, 0xc000291c00, 0xc00009d9e0)
	/go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers/app.go:127 +0x31ac
github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware.NewApp(0x1e30260, 0xc000048090, 0xc000291c00, 0x1e37de0, 0xc0002d9950, 0x1e42c00)
	/go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware/app.go:96 +0x85
github.com/openshift/image-registry/pkg/dockerregistry/server.NewApp(0x1e30260, 0xc000048090, 0x1e03aa0, 0xc00010c2e0, 0xc000291c00, 0xc000206280, 0x0, 0x0, 0x0, 0xc00004e000)
	/go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/app.go:138 +0x2d4
github.com/openshift/image-registry/pkg/cmd/dockerregistry.NewServer(0x1e30260, 0xc000048090, 0xc000291c00, 0xc000206280, 0x0, 0x0, 0x1e6fd00)
	/go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:210 +0x1c2
github.com/openshift/image-registry/pkg/cmd/dockerregistry.Execute(0x1deda00, 0xc00010c020)
	/go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:164 +0xa42
main.main()
	/go/src/github.com/openshift/image-registry/cmd/dockerregistry/main.go:93 +0x49c

Expected results:
Cluster install succeeds.

Additional info:
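The panic pattern above is consistent with region validation against a static, compiled-in region table: any region AWS launches after the vendored SDK snapshot is rejected at startup. A minimal illustrative sketch of that failure mode (the region set and function name here are hypothetical, not the registry's actual code):

```python
# Illustrative only: mimics a storage driver vendored with an outdated
# region table rejecting regions introduced after the SDK snapshot.
KNOWN_REGIONS = {
    # snapshot taken before me-south-1 (Bahrain) launched
    "us-east-1", "us-west-2", "eu-west-1", "ap-south-1",
}

def validate_region(region: str) -> None:
    """Raise (panic, in the Go registry) for regions missing from the table."""
    if region not in KNOWN_REGIONS:
        raise ValueError(f"Invalid region provided: {region}")

validate_region("us-east-1")       # accepted: present in the table
try:
    validate_region("me-south-1")  # rejected by the stale table
except ValueError as e:
    print(e)                       # Invalid region provided: me-south-1
```

Fixing this for real requires bumping the vendored AWS SDK (or otherwise supplying the endpoint), which is why the workaround below overrides the endpoint in configuration.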
Typo in step 1: upsream -> upstream
I suspect that the AWS SDK the registry uses doesn't know about the new region. As a workaround, can you add the Bahrain endpoint in the operator config [1]?

```
$ oc patch config.imageregistry.operator.openshift.io cluster -p '{"spec":{"storage":{"s3":{"regionEndpoint":"https://s3.me-south-1.amazonaws.com"}}}}' --type=merge
```

[1] https://docs.openshift.com/container-platform/4.2/registry/configuring-registry-storage/configuring-registry-storage-aws-user-infrastructure.html#registry-configuring-storage-aws-user-infra_configuring-registry-storage-aws-user-infrastructure
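If you need to script the workaround (e.g. to template the endpoint per region), the merge-patch payload passed to `-p` above can be generated rather than hand-written. A small sketch, assuming only the documented `spec.storage.s3.regionEndpoint` field:

```python
import json

# Build the same JSON merge patch used in the `oc patch` command above.
patch = {
    "spec": {
        "storage": {
            "s3": {"regionEndpoint": "https://s3.me-south-1.amazonaws.com"}
        }
    }
}

# Compact separators reproduce the exact string passed as the -p argument.
payload = json.dumps(patch, separators=(",", ":"))
print(payload)
# -> {"spec":{"storage":{"s3":{"regionEndpoint":"https://s3.me-south-1.amazonaws.com"}}}}
```

With `--type=merge`, `oc` applies this as an RFC 7386 JSON merge patch, so only the `regionEndpoint` field changes and the rest of the spec is left intact.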
Verified on 4.4.0-0.nightly-2020-01-20-230953

root@ip-172-31-64-58: ~ # oc get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-0-129-153.me-south-1.compute.internal   Ready    worker   29m   v1.17.0
ip-10-0-143-81.me-south-1.compute.internal    Ready    master   39m   v1.17.0
ip-10-0-156-129.me-south-1.compute.internal   Ready    master   39m   v1.17.0
ip-10-0-156-241.me-south-1.compute.internal   Ready    worker   29m   v1.17.0
ip-10-0-161-192.me-south-1.compute.internal   Ready    worker   30m   v1.17.0
ip-10-0-167-45.me-south-1.compute.internal    Ready    master   39m   v1.17.0

root@ip-172-31-64-58: ~ # oc get pods -n openshift-image-registry
NAME                                              READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-6598f5c8b-xkhlw   2/2     Running   0          34m
image-registry-64f447bf68-slgv8                   1/1     Running   0          34m
node-ca-2r5nq                                     1/1     Running   0          34m
node-ca-4wdq8                                     1/1     Running   0          30m
node-ca-8lwrp                                     1/1     Running   0          34m
node-ca-fxn2s                                     1/1     Running   0          30m
node-ca-gbf88                                     1/1     Running   0          29m
node-ca-jwzsk                                     1/1     Running   0          34m

version   4.4.0-0.nightly-2020-01-20-230953   True   False   21m   Cluster version is 4.4.0-0.nightly-2020-01-20-230953
*** Bug 1798879 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0581
The installer already covers it.