+++ This bug was initially created as a clone of Bug #2004347 +++

Description of problem:

If direct image migration (DIM) is not enabled on the migplan (i.e. indirectImageMigration: true), the State migration fails at the StageBackup phase and reports the error:

    This migration has following error conditions: migration registry service not found

Version-Release number of selected component (if applicable):
MTC 1.6.0
source: OCP 3.11 on AWS + MTC 1.5.1
target: OCP 4.6 on AWS + MTC 1.6.0 (controller)

How reproducible:
Always

Steps to Reproduce:
1. Deploy an application:

$ ansible-playbook deploy-app.yml -e use_role=ocp-django -e namespace=newocp-django
$ oc -n newocp-django get pod
NAME                             READY   STATUS      RESTARTS   AGE
django-psql-persistent-1-42zs5   1/1     Running     0          43s
django-psql-persistent-1-build   0/1     Completed   0          2m
postgresql-1-jzpxd               1/1     Running     0          2m

2. Create a migplan with indirectImageMigration: true and indirectVolumeMigration: true.
3. Execute a State migration.

Actual results:
The State migration fails at the StageBackup phase.

Expected results:
The State migration should complete successfully.

Additional info:

1. Migplan

$ oc get migplan newocp-django -o yaml
.....
spec:
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  indirectImageMigration: true
  indirectVolumeMigration: true
  migStorageRef:
    name: camautomation
    namespace: openshift-migration
  namespaces:
  - newocp-django
  persistentVolumes:
  - capacity: 1Gi
    name: pvc-42c3a07b-15da-11ec-aa3f-0eeadc05e8eb
    proposedCapacity: 1Gi
    pvc:
      accessModes:
      - ReadWriteOnce
      hasReference: true
      name: postgresql:postgresql
      namespace: newocp-django
    selection:
      action: copy
      copyMethod: filesystem
      storageClass: gp2
    storageClass: gp2
    supported:
      actions:
      - skip
      - copy
      - move
      copyMethods:
      - filesystem
      - snapshot
  srcMigClusterRef:
    name: source-cluster
    namespace: openshift-migration

2. Migmigration

$ oc get migmigration state-migration-cac05 -o yaml
.....
status:
  conditions:
  - category: Advisory
    durable: true
    lastTransitionTime: "2021-09-15T04:10:08Z"
    message: '[1] Stage pods created.'
    status: "True"
    type: StagePodsCreated
  - category: Advisory
    durable: true
    lastTransitionTime: "2021-09-15T04:10:11Z"
    message: 'The migration has failed. See: Errors.'
    reason: EnsureStageBackup
    status: "True"
    type: Failed
  errors:
  - migration registry service not found
  itinerary: Failed
  observedDigest: c8d71234644888ce1d8861fdea5016b76e0f62241e01e0f00224ddd0a5606116

3. Controller logs:

{"level":"info","ts":1631681185.4735537,"logger":"migration","msg":"Building Stage Velero Backup resource definition","migMigration":"state-migration-dc055","phase":"EnsureStageBackup"}

{"level":"info","ts":1631681185.5071464,"logger":"migration","msg":"Phase execution failed.","migMigration":"state-migration-dc055","phase":"EnsureStageBackup","phaseDescription":"Creating a stage backup.","error":"migration registry service not found"}

{"level":"info","ts":1631681185.507176,"logger":"migration","msg":"","migMigration":"state-migration-dc055","error":"migration registry service not found","stacktrace":"\ngithub.com/konveyor/mig-controller/pkg/controller/migmigration.(*Task).getAnnotations()\n\t/remote-source/app/pkg/controller/migmigration/registry.go:38\ngithub.com/konveyor/mig-controller/pkg/controller/migmigration.(*Task).buildBackup()\n\t/remote-source/app/pkg/controller/migmigration/backup.go:470\ngithub.com/konveyor/mig-controller/pkg/controller/migmigration.(*Task).ensureStageBackup()\n\t/remote-source/app/pkg/controller/migmigration/backup.go:131\ngithub.com/konveyor/mig-controller/pkg/controller/migmigration.(*Task).Run()\n\t/remote-source/app/pkg/controller/migmigration/task.go:750\ngithub.com/konveyor/mig-controller/pkg/controller/migmigration.(*ReconcileMigMigration).migrate()\n\t/remote-source/app/pkg/controller/migmigration/migrate.go:70\ngithub.com/konveyor/mig-controller/pkg/controller/migmigration.(*ReconcileMigMigration).Reconcile()\n\t/remote-source/app/pkg/controller/migmigration/migmigration_controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler()\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem()\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1()\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil()\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil()\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext()\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext()\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99\nruntime.goexit()\n\t/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/runtime/asm_amd64.s:1371"}

{"level":"info","ts":1631681185.5072126,"logger":"migration","msg":"Marking migration as FAILED. See Status.Errors","migMigration":"state-migration-dc055","phase":"EnsureStageBackup","migrationErrors":["migration registry service not found"]}

{"level":"info","ts":1631681185.5560036,"logger":"migration","msg":"[RUN] (Step 1/4) Migration failed.","migMigration":"state-migration-dc055","phase":"EnsureStageBackup"}
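To help confirm whether the migration registry Service the controller is looking for actually exists, one can dump the Services in the openshift-migration namespace with `oc get svc -o json` and filter them by label. The sketch below is a minimal, self-contained illustration of that filtering step; the label key/value ("app": "migration-registry") and the sample Service names are assumptions for illustration, not values confirmed by this bug report — check the labels your MTC version actually sets.

```python
import json

# Assumed label on the migration registry Service (hypothetical;
# verify against the Services your MTC controller creates).
REGISTRY_LABEL = ("app", "migration-registry")

def find_registry_services(svc_list_json: str) -> list[str]:
    """Return the names of Services in an `oc get svc -o json` dump
    that carry the assumed migration-registry label."""
    items = json.loads(svc_list_json).get("items", [])
    key, value = REGISTRY_LABEL
    return [
        svc["metadata"]["name"]
        for svc in items
        if svc["metadata"].get("labels", {}).get(key) == value
    ]

# Example dump with one matching Service and one unrelated one.
sample = json.dumps({
    "items": [
        {"metadata": {"name": "migration-registry-abc",
                      "labels": {"app": "migration-registry"}}},
        {"metadata": {"name": "postgresql",
                      "labels": {"app": "postgresql"}}},
    ]
})

print(find_registry_services(sample))  # → ['migration-registry-abc']
```

An empty result from this kind of check on the cluster would be consistent with the "migration registry service not found" error above.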
This is fixed by https://github.com/konveyor/mig-controller/pull/1213
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Migration Toolkit for Containers (MTC) 1.7.0 release advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1043