Bug 1758632 - Web UI is not accessible when migration operator is installed using OLM
Summary: Web UI is not accessible when migration operator is installed using OLM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Migration Tooling
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.2.0
Assignee: Jason Montleon
QA Contact: Zhang Cheng
URL: https://github.com/fusor/mig-operator...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-04 16:21 UTC by Roshni
Modified: 2019-10-16 06:42 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:41:54 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:2922 (last updated 2019-10-16 06:42:08 UTC)

Description Roshni 2019-10-04 16:21:54 UTC
Description of problem:
Web UI is not accessible when migration operator is installed using OLM

Version-Release number of selected component (if applicable):
# oc describe pod/migration-operator-57dd87c7fc-c7frh | grep Image
                    containerImage: image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator:v1.0
    Image:         image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator:v1.0
    Image ID:      image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator@sha256:5fb4683e9693c24bbdd95322887bdffec4feba36aec8de93abc3e37a2b36034e
    Image:          image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator:v1.0
    Image ID:       image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator@sha256:5fb4683e9693c24bbdd95322887bdffec4feba36aec8de93abc3e37a2b36034e
# oc describe pod/controller-manager-56d558c5f-8vg9r | grep Image
    Image:         image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-controller:v1.0
    Image ID:      image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-controller@sha256:46c622e0fbe64165b09930738a7d111c875976b54c8236ddce328cb5470d60ab
# oc describe pod/migration-ui-6f7df75875-4vbmc | grep Image
    Image:          image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-ui:v1.0
    Image ID:       image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-ui@sha256:59e60d7036ebdc5b7d29895104e7b459a53a1c004e876f50b3e79cdc2b78941c
# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-10-04-015220   True        False         86m     Cluster version is 4.2.0-0.nightly-2019-10-04-015220

How reproducible:
always

Steps to Reproduce:
1. Follow the instructions in https://github.com/fusor/mig-operator/tree/master/deploy/test-helpers to install the migration operator through OLM and create the controller CR (a minimal Subscription sketch is included after these steps for reference)

# oc get all
NAME                                      READY   STATUS      RESTARTS   AGE
pod/controller-manager-56d558c5f-8vg9r    1/1     Running     0          70m
pod/migration-operator-57dd87c7fc-c7frh   2/2     Running     0          71m
pod/migration-ui-6f7df75875-4vbmc         1/1     Running     0          70m
pod/registry-plan-4-v7hfr-1-deploy        0/1     Completed   0          49m
pod/registry-plan-4-v7hfr-1-tv66r         1/1     Running     0          49m
pod/restic-5vhjr                          1/1     Running     0          70m
pod/restic-rxb5t                          1/1     Running     0          70m
pod/restic-wzkr8                          1/1     Running     0          70m
pod/velero-fcdd7cdbc-8k9hw                1/1     Running     0          70m

NAME                                            DESIRED   CURRENT   READY   AGE
replicationcontroller/registry-plan-4-v7hfr-1   1         1         1       49m

NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/controller-manager-service   ClusterIP   172.30.225.146   <none>        443/TCP    70m
service/migration-operator-metrics   ClusterIP   172.30.166.67    <none>        8383/TCP   70m
service/migration-ui                 ClusterIP   172.30.10.197    <none>        9000/TCP   70m
service/registry-plan-4-v7hfr        ClusterIP   172.30.222.43    <none>        5000/TCP   49m

NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/restic   3         3         3       3            3           <none>          70m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller-manager   1/1     1            1           70m
deployment.apps/migration-operator   1/1     1            1           71m
deployment.apps/migration-ui         1/1     1            1           70m
deployment.apps/velero               1/1     1            1           70m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-manager-56d558c5f    1         1         1       70m
replicaset.apps/migration-operator-57dd87c7fc   1         1         1       71m
replicaset.apps/migration-ui-6f7df75875         1         1         1       70m
replicaset.apps/velero-fcdd7cdbc                1         1         1       70m

NAME                                                       REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/registry-plan-4-v7hfr   1          1         1         config

NAME                                                   IMAGE REPOSITORY                                                                                                                              TAGS   UPDATED
imagestream.image.openshift.io/registry-plan-4-v7hfr   default-route-openshift-image-registry.apps.rpattath-4-perf.perf-testing.devcluster.openshift.com/openshift-migration/registry-plan-4-v7hfr   2      About an hour ago

NAME                                 HOST/PORT                                                                                  PATH   SERVICES       PORT        TERMINATION     WILDCARD
route.route.openshift.io/migration   migration-openshift-migration.apps.rpattath-4-perf.perf-testing.devcluster.openshift.com          migration-ui   port-9000   edge/Redirect   None

2. Go to https://migration-openshift-migration.apps.rpattath-4-perf.perf-testing.devcluster.openshift.com

3.
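For reference, installing through OLM typically comes down to an OperatorGroup plus a Subscription similar to the sketch below. The package name (cam-operator), channel, and catalog source name are assumptions inferred from the operator source and quay.io links elsewhere in this bug and may differ from what the test-helpers scripts create.

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration-operator-group      # assumed name
  namespace: openshift-migration
spec:
  targetNamespaces:
  - openshift-migration
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cam-operator                   # assumed package name (see the quay.io link in comment 7)
  namespace: openshift-migration
spec:
  channel: release-v1.0                # assumed channel
  name: cam-operator
  source: rh-osbs-applications         # assumed CatalogSource created from the OperatorSource shown under Additional info
  sourceNamespace: openshift-marketplace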

Actual results:
Takes me to https://migration-openshift-migration.apps.rpattath-4-perf.perf-testing.devcluster.openshift.com/cert_error despite having trusted the CA by going to the API page.

Expected results:
Migration UI should be accessible

Additional info:
# for i in $(oc get po -n openshift-kube-apiserver -l app=openshift-kube-apiserver --no-headers -o custom-columns=name:.metadata.name); do oc exec -n openshift-kube-apiserver $i -- cat /etc/kubernetes/static-pod-resources/configmaps/config/config.yaml | python -m json.tool | grep cors -A5; done
Defaulting container name to kube-apiserver-7.
Use 'oc describe pod/kube-apiserver-ip-10-0-137-40.us-west-2.compute.internal -n openshift-kube-apiserver' to see all of the containers in this pod.
    "corsAllowedOrigins": [
        "//127\\.0\\.0\\.1(:|$)",
        "//localhost(:|$)"
    ],
    "imagePolicyConfig": {
        "externalRegistryHostnames": [
Defaulting container name to kube-apiserver-7.
Use 'oc describe pod/kube-apiserver-ip-10-0-149-209.us-west-2.compute.internal -n openshift-kube-apiserver' to see all of the containers in this pod.
    "corsAllowedOrigins": [
        "//127\\.0\\.0\\.1(:|$)",
        "//localhost(:|$)"
    ],
    "imagePolicyConfig": {
        "externalRegistryHostnames": [
Defaulting container name to kube-apiserver-7.
Use 'oc describe pod/kube-apiserver-ip-10-0-162-202.us-west-2.compute.internal -n openshift-kube-apiserver' to see all of the containers in this pod.
    "corsAllowedOrigins": [
        "//127\\.0\\.0\\.1(:|$)",
        "//localhost(:|$)"
    ],
    "imagePolicyConfig": {
        "externalRegistryHostnames": [

# oc describe apiserver cluster
Name:         cluster
Namespace:    
Labels:       <none>
Annotations:  release.openshift.io/create-only: true
API Version:  config.openshift.io/v1
Kind:         APIServer
Metadata:
  Creation Timestamp:  2019-10-04T14:27:12Z
  Generation:          1
  Resource Version:    1006
  Self Link:           /apis/config.openshift.io/v1/apiservers/cluster
  UID:                 0e7203f3-e6b3-11e9-9d11-06c9a2eef256
Spec:
Events:  <none>

# oc get operatorsource -n openshift-marketplace -o wide
NAME                   TYPE          ENDPOINT              REGISTRY            DISPLAYNAME   PUBLISHER   STATUS      MESSAGE                                       AGE
rh-osbs-applications   appregistry   https://quay.io/cnr   rh-osbs-operators                             Succeeded   The object has been successfully reconciled   83m

Comment 1 Jason Montleon 2019-10-04 17:10:27 UTC
To me this looks like the operator image in use is not the latest. If the cluster is still up, it would be helpful to get access to it so I can confirm whether it is using the latest image.

The apiserver cluster config is missing the additional CORS configuration, which should be present with the latest image, v1.0-0.9.

oc describe apiserver cluster
Name:         cluster
Namespace:    
Labels:       <none>
Annotations:  release.openshift.io/create-only: true
API Version:  config.openshift.io/v1
Kind:         APIServer
Metadata:
  Creation Timestamp:  2019-10-04T14:27:12Z
  Generation:          1
  Resource Version:    1006
  Self Link:           /apis/config.openshift.io/v1/apiservers/cluster
  UID:                 0e7203f3-e6b3-11e9-9d11-06c9a2eef256
Spec:
Events:  <none>
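For comparison, with the fixed operator image the APIServer spec would be expected to carry the migration UI route as an additional CORS origin, roughly like the sketch below; the exact pattern is illustrative only and simply mirrors the format of the existing corsAllowedOrigins entries:

Spec:
  Additional CORS Allowed Origins:
    //migration-openshift-migration\.apps\.rpattath-4-perf\.perf-testing\.devcluster\.openshift\.com(:|$)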

This was fixed upstream here:
https://github.com/fusor/mig-operator/commit/3bc9bd6ffccda6b592ee272d9d8af392c74268f0#diff-edf82c5f6707ec7fc50472287dccd251L215-L242

Comment 3 Jason Montleon 2019-10-04 17:58:02 UTC
This is a permissions issue. The operator does not have permission to retrieve or edit the config.openshift.io resources.

The fix is to add the following rule to the ClusterRole:

- apiGroups:
  - config.openshift.io
  resources:
  - apiservers
  verbs:
  - '*'

Comment 4 Jason Montleon 2019-10-04 18:07:28 UTC
https://github.com/fusor/mig-operator/pull/108

Comment 7 Roshni 2019-10-07 18:30:05 UTC
Cannot reproduce the issue in the bug description using the latest from https://quay.io/application/rh-osbs-operators/cam-operator?tab=releases

# oc describe pod/migration-operator-8cf576859-pkgrw | grep Image
                    containerImage: image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator:v1.0
    Image:         image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator:v1.0
    Image ID:      image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator@sha256:2834e3a9c50aac685c22cfc5aeee8bf51146074cd9b1e51edf5c8129d0766093
    Image:          image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator:v1.0
    Image ID:       image-registry.openshift-image-registry.svc:5000/rhcam/openshift-migration-operator@sha256:2834e3a9c50aac685c22cfc5aeee8bf51146074cd9b1e51edf5c8129d0766093

Comment 8 errata-xmlrpc 2019-10-16 06:41:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

