+++ This bug was initially created as a clone of Bug #1845090 +++

Description of problem:
When we try to migrate more than one PVC, to check the PVC limit, the migration fails because it cannot find the volumes.

Version-Release number of selected component (if applicable):
CAM 1.2.2 stage
SOURCE: OCP 3.11
TARGET: OCP 4.4

How reproducible:
Always

Steps to Reproduce:
1. In the cluster that contains the controller, configure a maximum of 5 PVs per migration plan in the MigrationController resource:

$ oc patch -n openshift-migration MigrationController migration-controller --type=json -p='[{"op":"add", "path": "/spec/mig_pv_limit", "value": "5"}]'

2. Create 3 namespaces, each running an nginx with 2 PVs. This creates 6 PVs in total, 2 in each namespace:

$ oc process -p NAMESPACE=max-pvs-1 -f https://gitlab.cee.redhat.com/app-mig/cam-helper/raw/master/ocp-26160/nginx_with_pv_defaultsc_template.yml | oc create -f -
$ oc process -p NAMESPACE=max-pvs-2 -f https://gitlab.cee.redhat.com/app-mig/cam-helper/raw/master/ocp-26160/nginx_with_pv_defaultsc_template.yml | oc create -f -
$ oc process -p NAMESPACE=max-pvs-3 -f https://gitlab.cee.redhat.com/app-mig/cam-helper/raw/master/ocp-26160/nginx_with_pv_defaultsc_template.yml | oc create -f -

3. Once all pods are running, write data to the volumes:

$ oc -n max-pvs-1 rsh $(oc get pods -n max-pvs-1 -o jsonpath='{.items[0].metadata.name}') sh -c 'echo "<h1>HELLO WORLD</h1>" > /usr/share/nginx/html/index.html'
$ oc -n max-pvs-2 rsh $(oc get pods -n max-pvs-2 -o jsonpath='{.items[0].metadata.name}') sh -c 'echo "<h1>HELLO WORLD</h1>" > /usr/share/nginx/html/index.html'
$ oc -n max-pvs-3 rsh $(oc get pods -n max-pvs-3 -o jsonpath='{.items[0].metadata.name}') sh -c 'echo "<h1>HELLO WORLD</h1>" > /usr/share/nginx/html/index.html'

4. Create a migration plan and select all three namespaces, so that the three namespaces are in the same migration plan. It will try to migrate a total of 6 persistent volumes.

5.
Execute the migration plan.

Actual results:
The migration fails in the StageBackupCreated state. We can find the following error in the Velero backup logs:

[...] logSource="pkg/backup/resource_backupper.go:283" name=nginx-html namespace=max-pvs-3 resource=persistentvolumeclaims
time="2020-06-08T09:37:13Z" level=error msg="Error backing up item" backup=openshift-migration/max-pvs-mig-1591604420-hdt9c error="error getting volume info: rpc error: code = Unknown desc = InvalidVolume.NotFound: The volume 'vol-0dc329baa28ba7b8e' does not exist.\n\tstatus code: 400, request id: 67599552-c075-4cd3-98cc-d6f58d510a81" group=v1 logSource="pkg/backup/resource_backupper.go:287" name=nginx-html namespace=max-pvs-3 resource=persistentvolumeclaims

Expected results:
The migration should finish without errors, and all namespaces and PVs should be migrated properly.

Additional info:
All logs attached.

--- Additional comment from Sergio on 2020-06-08 13:00:30 UTC ---

--- Additional comment from Sergio on 2020-06-08 13:00:49 UTC ---
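The reproducer above deliberately exceeds the configured limit: 3 namespaces with 2 PVs each against a mig_pv_limit of 5. A minimal sketch of that arithmetic (the variable names are illustrative only, not part of any MTC tooling):

```shell
# Illustrative only: 3 namespaces x 2 PVs each, against mig_pv_limit=5
NAMESPACES=3
PVS_PER_NS=2
LIMIT=5
TOTAL=$((NAMESPACES * PVS_PER_NS))
if [ "$TOTAL" -gt "$LIMIT" ]; then
  echo "plan exceeds mig_pv_limit: $TOTAL PVs > $LIMIT"
fi
```

With these values the plan carries 6 PVs, one more than the configured limit, which is exactly the boundary the test case exercises.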
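To spot this failure without reading the whole backup log, one can grep a saved copy of it for the AWS error code and the volume IDs it reports. A minimal sketch; the file name velero-backup.log is hypothetical, and the excerpt is the error line quoted above:

```shell
# Save the failing Velero log line locally (excerpt from this report)
cat > velero-backup.log <<'EOF'
time="2020-06-08T09:37:13Z" level=error msg="Error backing up item" error="error getting volume info: rpc error: code = Unknown desc = InvalidVolume.NotFound: The volume 'vol-0dc329baa28ba7b8e' does not exist." namespace=max-pvs-3 resource=persistentvolumeclaims
EOF

# Count occurrences of the volume-lookup failure
grep -c 'InvalidVolume.NotFound' velero-backup.log

# Extract the missing volume IDs
grep -o 'vol-[0-9a-f]*' velero-backup.log | sort -u
```

The same grep patterns can be pointed at the full attached logs to list every volume the backup failed to find.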
This was fixed in the last release via: https://github.com/konveyor/mig-controller/pull/564
Verified using MTC 1.4.0

openshift-migration-rhel7-operator@sha256:60a0bdc7fca0d3d597efae0c242f7dc25da45c072b33198c3eb7fa425a604472
- name: MIG_CONTROLLER_REPO
  value: openshift-migration-controller-rhel8@sha256
- name: MIG_CONTROLLER_TAG
  value: 6f53fa6c8ea2648736ced2d38ebb2ead46d3975f71d7efe4bd24e6fec223aaee

Verified by executing test case ocp-26160-max-pvs.

Moved to VERIFIED status.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Migration Toolkit for Containers (MTC) tool image release advisory 1.4.0), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5329