Bug 1791457 - Add support for AWS me-south-1 Bahrain region.
Summary: Add support for AWS me-south-1 Bahrain region.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.3.z
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.3.z
Assignee: Patrick Dillon
QA Contact: gaoshang
URL:
Whiteboard:
Depends On: 1796651
Blocks:
 
Reported: 2020-01-15 21:11 UTC by Patrick Dillon
Modified: 2020-02-25 06:18 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-25 06:17:59 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Github openshift installer pull 2877 None closed Bug 1791457: [release-4.3] Add support for AWS Bahrain region me-south-1 2020-03-12 05:08:25 UTC
Red Hat Product Errata RHBA-2020:0528 None None None 2020-02-25 06:18:15 UTC

Description Patrick Dillon 2020-01-15 21:11:20 UTC
Description of problem: The installer does not support the AWS Bahrain region me-south-1. 

Expected results: Installer should be able to install a cluster in the me-south-1 region.
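For reference, a minimal install-config.yaml targeting the new region might look like the sketch below. Only the region field is the point here; the base domain and cluster name are taken from the QE logs in this bug, and the remaining values are placeholders:

```yaml
apiVersion: v1
baseDomain: qe.devcluster.openshift.com   # from the QE environment in this bug
metadata:
  name: sgao-0                            # cluster name used during verification
platform:
  aws:
    region: me-south-1                    # the newly supported Bahrain region
pullSecret: '...'                         # placeholder; supply a real pull secret
sshKey: |
  ssh-rsa ...                             # placeholder public key
```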

Comment 2 gaoshang 2020-02-14 04:00:34 UTC
This bug has been verified with OCP 4.3.0-0.nightly-2020-02-13-105503 on AWS: the me-south-1 region is now listed by the installer. Installation still fails with an image-registry error, which is already tracked in Bug 1796584. Moving this bug to VERIFIED, thanks.

Version-Release number of selected component (if applicable):
# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          51m     Unable to apply 4.3.0-0.nightly-2020-02-13-105503: the cluster operator image-registry has not yet successfully rolled out

Steps to Reproduce:
1. Install OCP 4.3 on AWS in me-south-1 region
# ./openshift-install create cluster --dir $DIR_NAME --log-level debug
...
? SSH Public Key /root/.ssh/openshift-qe.pub
DEBUG       Fetching Base Domain...                
DEBUG         Fetching Platform...                 
DEBUG         Generating Platform...               
? Platform aws
? Region me-south-1
DEBUG       Generating Base Domain...              
DEBUG listing AWS hosted zones                     
? Base Domain qe.devcluster.openshift.com
DEBUG       Fetching Cluster Name...               
DEBUG         Fetching Base Domain...              
DEBUG         Reusing previously-fetched Base Domain 
DEBUG         Fetching Platform...                 
DEBUG         Reusing previously-fetched Platform  
DEBUG       Generating Cluster Name...             
? Cluster Name sgao-0
DEBUG       Fetching Pull Secret...                
DEBUG       Generating Pull Secret...              

...
INFO Cluster operator image-registry Available is False with NoReplicasAvailable: The deployment does not have available replicas 
INFO Cluster operator image-registry Progressing is True with DeploymentNotCompleted: The deployment has not completed 
INFO Cluster operator insights Disabled is False with :  
FATAL failed to initialize the cluster: Working towards 4.3.0-0.nightly-2020-02-13-105503: 100% complete, waiting on image-registry

2. Installation fails with an image-registry error; the image-registry pod logs show that the region "me-south-1" is not recognized:

# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          51m     Unable to apply 4.3.0-0.nightly-2020-02-13-105503: the cluster operator image-registry has not yet successfully rolled out

# oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.0-0.nightly-2020-02-13-105503   True        False         False      25m
cloud-credential                           4.3.0-0.nightly-2020-02-13-105503   True        False         False      50m
cluster-autoscaler                         4.3.0-0.nightly-2020-02-13-105503   True        False         False      43m
console                                    4.3.0-0.nightly-2020-02-13-105503   True        False         False      35m
dns                                        4.3.0-0.nightly-2020-02-13-105503   True        False         False      46m
image-registry                                                                 False       True          False      44m
ingress                                    4.3.0-0.nightly-2020-02-13-105503   True        False         False      38m
insights                                   4.3.0-0.nightly-2020-02-13-105503   True        False         False      48m
kube-apiserver                             4.3.0-0.nightly-2020-02-13-105503   True        False         False      46m
kube-controller-manager                    4.3.0-0.nightly-2020-02-13-105503   True        False         False      45m
kube-scheduler                             4.3.0-0.nightly-2020-02-13-105503   True        False         False      45m
machine-api                                4.3.0-0.nightly-2020-02-13-105503   True        False         False      47m
machine-config                             4.3.0-0.nightly-2020-02-13-105503   True        False         False      46m
marketplace                                4.3.0-0.nightly-2020-02-13-105503   True        False         False      43m
monitoring                                 4.3.0-0.nightly-2020-02-13-105503   True        False         False      35m
network                                    4.3.0-0.nightly-2020-02-13-105503   True        False         False      48m
node-tuning                                4.3.0-0.nightly-2020-02-13-105503   True        False         False      43m
openshift-apiserver                        4.3.0-0.nightly-2020-02-13-105503   True        False         False      44m
openshift-controller-manager               4.3.0-0.nightly-2020-02-13-105503   True        False         False      46m
openshift-samples                          4.3.0-0.nightly-2020-02-13-105503   True        False         False      42m
operator-lifecycle-manager                 4.3.0-0.nightly-2020-02-13-105503   True        False         False      47m
operator-lifecycle-manager-catalog         4.3.0-0.nightly-2020-02-13-105503   True        False         False      47m
operator-lifecycle-manager-packageserver   4.3.0-0.nightly-2020-02-13-105503   True        False         False      45m
service-ca                                 4.3.0-0.nightly-2020-02-13-105503   True        False         False      48m
service-catalog-apiserver                  4.3.0-0.nightly-2020-02-13-105503   True        False         False      44m
service-catalog-controller-manager         4.3.0-0.nightly-2020-02-13-105503   True        False         False      44m
storage                                    4.3.0-0.nightly-2020-02-13-105503   True        False         False      44m

# oc get pod -n openshift-image-registry
NAME                                              READY   STATUS             RESTARTS   AGE
cluster-image-registry-operator-c56959457-5sz5w   2/2     Running            0          82m
image-registry-5dd75448f7-vd9km                   0/1     CrashLoopBackOff   19         81m
image-registry-7548bc96dc-hjvh6                   0/1     CrashLoopBackOff   19         81m
node-ca-2sxvt                                     1/1     Running            0          76m
node-ca-848xd                                     1/1     Running            0          76m
node-ca-bl6gr                                     1/1     Running            0          76m
node-ca-g559k                                     1/1     Running            0          81m
node-ca-qx2rw                                     1/1     Running            0          81m
node-ca-wp57z                                     1/1     Running            0          81m

# oc logs image-registry-5dd75448f7-vd9km -n openshift-image-registry
time="2020-02-13T16:47:08.813338982Z" level=info msg="start registry" distribution_version=v2.6.0+unknown go.version=go1.12.12 openshift_version=v4.3.2-202002122007+cf6a638-dirty
time="2020-02-13T16:47:08.813820907Z" level=info msg="caching project quota objects with TTL 1m0s" go.version=go1.12.12
panic: Invalid region provided: me-south-1

goroutine 1 [running]:
github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers.NewApp(0x1c03b20, 0xc000048090, 0xc000065c00, 0xc0006115c0)
        /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers/app.go:127 +0x31ac
github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware.NewApp(0x1c03b20, 0xc000048090, 0xc000065c00, 0x1c0aea0, 0xc00047d680, 0x1c14d00)
        /go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware/app.go:96 +0x85
github.com/openshift/image-registry/pkg/dockerregistry/server.NewApp(0x1c03b20, 0xc000048090, 0x1bdbda0, 0xc000010578, 0xc000065c00, 0xc000336320, 0x0, 0x0, 0x0, 0xc00004e000)
        /go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/app.go:138 +0x2d4
github.com/openshift/image-registry/pkg/cmd/dockerregistry.NewServer(0x1c03b20, 0xc000048090, 0xc000065c00, 0xc000336320, 0x0, 0x0, 0x1c3ef20)
        /go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:210 +0x1c2
github.com/openshift/image-registry/pkg/cmd/dockerregistry.Execute(0x1bc63e0, 0xc000010050)
        /go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:164 +0xa42
main.main()
        /go/src/github.com/openshift/image-registry/cmd/dockerregistry/main.go:93 +0x49c
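The panic above originates in the vendored docker/distribution S3 storage driver, which in this vintage appears to validate the configured region against a list compiled into the binary rather than against the AWS SDK's endpoint data, so regions launched later (me-south-1 opened in July 2019) fail validation even though they are valid on AWS. A simplified Go sketch of that failure pattern (names are illustrative, not the actual registry code):

```go
package main

import "fmt"

// validRegions mimics a hard-coded allowlist baked into an old vendored
// S3 driver. me-south-1 is deliberately absent, as it would be in any
// list compiled before the region launched.
var validRegions = map[string]struct{}{
	"us-east-1": {},
	"us-west-2": {},
	"eu-west-1": {},
}

// checkRegion rejects any region missing from the allowlist, producing
// the same kind of message seen in the registry panic.
func checkRegion(region string) error {
	if _, ok := validRegions[region]; !ok {
		return fmt.Errorf("Invalid region provided: %v", region)
	}
	return nil
}

func main() {
	fmt.Println(checkRegion("us-east-1"))  // <nil>
	fmt.Println(checkRegion("me-south-1")) // Invalid region provided: me-south-1
}
```

The fix pattern for this class of bug is to stop relying on a frozen allowlist and instead resolve regions through the SDK's endpoint metadata (or allow an explicit region endpoint override), so new AWS regions work without a rebuild.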

Comment 4 errata-xmlrpc 2020-02-25 06:17:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0528

