Description of problem:
Installation of the operator and of the migration controller fails on an OCP 3.11 source cluster.

Version-Release number of selected component (if applicable):
$ oc version
oc v3.11.126
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://
openshift v3.11.104
kubernetes v1.11.0+d4cacc0

https://github.com/fusor/mig-operator
commit-id: 75321b8757d3d997315b93bfb284131d8acb738c

Quay operator image manifest:
https://quay.io/repository/ocpmigrate/mig-operator/manifest/sha256:6cd4fb75ce7e79f668de3281c7ebe1069f1d32f8f17902ead4bacc3cd0a74e5f

$ oc describe pod migration-operator-6d47cc9948-qrrdf | grep image
  Normal  Pulling  39m  kubelet, node1.sregidor-ocp3.internal  pulling image "quay.io/ocpmigrate/mig-operator:latest"

How reproducible:

Steps to Reproduce:
1. Clone https://github.com/fusor/mig-operator
2. oc create -f operator.yml
3. Since this is the OCP 3 source cluster, edit controller.yml so that:
   migration_controller: false
   migration_ui: false
4. oc create -f controller.yml

Actual results:
1. The service account named "mig" is not created.

$ oc get pods
NAME                                  READY   STATUS    RESTARTS   AGE
migration-operator-6d47cc9948-pw2p7   2/2     Running   0          1m
restic-6c2f9                          1/1     Running   0          23s
restic-8jlmh                          1/1     Running   0          23s
restic-b79b4                          1/1     Running   0          23s
restic-ctbgt                          1/1     Running   0          23s
restic-wskh2                          1/1     Running   0          23s
velero-7559946c5c-hqvh8               1/1     Running   0          23s

$ oc get sa
NAME                 SECRETS   AGE
builder              2         1m
default              2         1m
deployer             2         1m
migration-operator   2         1m
velero               2         29s

2. There is an error in the operator logs:

$ oc logs migration-operator-6d47cc9948-pw2p7 -c operator
. . .
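For reference, a minimal sketch of what the edited controller.yml from step 3 might look like on the source cluster. Only the two flags (migration_controller, migration_ui) come from this report; the kind, apiVersion, metadata, and field placement under spec are assumptions based on the typical mig-operator custom resource layout and may differ in the actual file.

```yaml
# Hypothetical controller.yml for the OCP 3 *source* cluster.
# Only velero/restic should run here, so the controller and UI
# components are switched off. Kind/apiVersion/metadata are assumed.
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: mig
spec:
  migration_controller: false   # do not deploy the mig controller here
  migration_ui: false           # do not deploy the migration UI here
```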
TASK [migrationcontroller : Set up migration CRDs] *****************************
task path: /opt/ansible/roles/migrationcontroller/tasks/main.yml:86
ok: [localhost] => (item=cluster-registry-crd.yaml) => {"changed": false, "item": "cluster-registry-crd.yaml", "method": "delete", "result": {}}
ok: [localhost] => (item=migration_v1alpha1_migcluster.yaml) => {"changed": false, "item": "migration_v1alpha1_migcluster.yaml", "method": "delete", "result": {}}
ok: [localhost] => (item=migration_v1alpha1_migmigration.yaml) => {"changed": false, "item": "migration_v1alpha1_migmigration.yaml", "method": "delete", "result": {}}
ok: [localhost] => (item=migration_v1alpha1_migplan.yaml) => {"changed": false, "item": "migration_v1alpha1_migplan.yaml", "method": "delete", "result": {}}
ok: [localhost] => (item=migration_v1alpha1_migstorage.yaml) => {"changed": false, "item": "migration_v1alpha1_migstorage.yaml", "method": "delete", "result": {}}

TASK [migrationcontroller : Set up mig controller] *****************************
task path: /opt/ansible/roles/migrationcontroller/tasks/main.yml:97
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to find exact match for migration.openshift.io/v1alpha1.MigCluster by [kind, name, singularName, shortNames]"}

PLAY RECAP *********************************************************************
localhost : ok=8 changed=0 unreachable=0 failed=1

"job":"1837425794803595142","name":"migration-controller","namespace":"mig","error":"exit status 2","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error
	pkg/mod/github.com/go-logr/zapr.1/zapr.go:128
github.com/operator-framework/operator-sdk/pkg/ansible/runner.(*runner).Run.func1
	src/github.com/operator-framework/operator-sdk/pkg/ansible/runner/runner.go:190"

Expected results:
1. No failures in the operator's logs.
2. The "mig" service account is created.

Additional info:
@Sergio, I've just merged a PR aimed at fixing this: https://github.com/fusor/mig-operator/pull/31

Can you test again with the latest mig-operator image? It should be available under latest/master:
https://quay.io/repository/ocpmigrate/mig-operator?tab=tags

Corresponding Quay autobuild:
https://quay.io/repository/ocpmigrate/mig-operator/build/cba457a3-3d79-4d9c-9540-a3ce424e7c7a
@Derek Whatley, I have checked it. The operator installed properly, the "mig" service account was created, and I could add the source cluster without any problem.

$ oc get sa
NAME                 SECRETS   AGE
builder              2         1m
default              2         1m
deployer             2         1m
mig                  2         12s
migration-operator   2         1m
velero               2         20s

Thank you very much.
Verified with https://github.com/fusor/mig-operator/pull/31
commit-id: 30e2e9bc2c38fb04407d10d1c651441a55841e0a
image: https://quay.io/repository/ocpmigrate/mig-operator/manifest/sha256:1df62f5ce345f56520a8d0b9795fa9bc55fcac9c04a029f6ddf4da638b055a32
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922