Description of problem:
1. Create a migration plan with a single VM, using a source storage id that is not the storage id of the VM disk (the id belongs to some other storage in VMware).
2. Run this migration plan.

Result: Migration plan succeeds. VM migrated successfully.

Commands:
$ cat << EOF | oc apply -f -
---
apiVersion: virt.konveyor.io/v1alpha1
kind: Plan
metadata:
  name: plan1
  namespace: openshift-migration
spec:
  provider:
    source:
      name: vmware1
      namespace: openshift-migration
    destination:
      name: host
      namespace: openshift-migration
  map:
    networks:
      - source:
          id: network-14
        destination:
          type: pod
          name: pod
          namespace: openshift-migration
    datastores:
      - source:
          id: datastore-11  # a storage in VMware, but not where the VM disk resides
        destination:
          storageClass: nfs
  vms:
    - id: vm-647
EOF

$ cat << EOF | oc apply -f -
---
apiVersion: virt.konveyor.io/v1alpha1
kind: Migration
metadata:
  name: plan1-run
  namespace: openshift-migration
spec:
  plan:
    name: plan1
    namespace: openshift-migration
EOF

Version-Release number of selected component (if applicable):
MTV-2.0

Expected results:
The migration plan should fail, since the source storage id is not the storage id of the migrated VM's disk.
It's normal behavior in VMIO, as it uses the default storage class if the source is not mapped. @jortel this should probably be part of the Plan validation.
(In reply to Fabien Dupont from comment #1)
> It's normal behavior in VMIO, as it uses the default storage class if the
> source is not mapped.
> @jortel this should probably be part of the Plan validation.

Agreed.
https://github.com/konveyor/forklift-controller/issues/103
*** Bug 1902490 has been marked as a duplicate of this bug. ***
The fix should be part of build mtv-operator-bundle-container-2.0.0-4 / iib:72115.
Both an unmapped network and unmapped storage cause a critical plan failure, as expected. Additional info would be helpful, such as the name of the VM with the missing mapping and/or which network/storage is missing in the map. This is what we have now:

Events:
  Type     Reason               Age                    From  Message
  ----     ------               ----                   ----  -------
  Warning  VMStorageNotMapped   2m32s (x2 over 5m33s)  plan  VM has unmapped storage.
  Warning  VMNetworksNotMapped  25s (x2 over 5m33s)    plan  VM has unmapped networks.

As a user, I do not know where to start looking for the missing mappings.

build 2.0.0-8 / iib:72981
fduarte
You'll need to look at the Conditions on the Plan rather than the events. The VM*NotMapped conditions include a list of VMs with unmapped resources, which should make it easy to narrow down the issue.
Missed it:

Status:
  Conditions:
    Category:              Critical
    Items:                 id:vm-1882 name:'rh8amos-2'
                           id:vm-1966 name:'nachandr-rhel8'
    Last Transition Time:  2021-05-06T14:52:27Z
    Message:               VM has unmapped networks.
    Reason:                NotValid
    Status:                True
    Type:                  VMNetworksNotMapped
    Category:              Critical
    Items:                 id:vm-1882 name:'rh8amos-2'
    Last Transition Time:  2021-05-06T14:52:27Z
    Message:               VM has unmapped storage.
    Reason:                NotValid
    Status:                True
    Type:                  VMStorageNotMapped

verified build 2.0.0-8 / iib:72981
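For anyone triaging this by script rather than by eye: the condition items above can be pulled out of the Plan's status programmatically. A minimal Python sketch follows, assuming the condition shape shown in this comment (a list of conditions, each with `type`, `status`, and an `items` list); the inline sample dict is hand-built to mirror this thread, whereas in a real cluster the status would come from something like `oc get plan plan1 -n openshift-migration -o json`.

```python
# Sample Plan status, hand-built to mirror the conditions shown above.
# In practice this dict would be parsed from `oc get plan ... -o json`.
status = {
    "conditions": [
        {
            "type": "VMNetworksNotMapped",
            "category": "Critical",
            "status": "True",
            "items": ["id:vm-1882 name:'rh8amos-2'",
                      "id:vm-1966 name:'nachandr-rhel8'"],
        },
        {
            "type": "VMStorageNotMapped",
            "category": "Critical",
            "status": "True",
            "items": ["id:vm-1882 name:'rh8amos-2'"],
        },
    ]
}

def unmapped_vms(status, condition_type):
    """Return the item list for the given *NotMapped condition, if it is active."""
    for cond in status.get("conditions", []):
        if cond.get("type") == condition_type and cond.get("status") == "True":
            return cond.get("items", [])
    return []

# Print every VM that still needs a network or storage mapping.
for ctype in ("VMNetworksNotMapped", "VMStorageNotMapped"):
    for item in unmapped_vms(status, ctype):
        print(f"{ctype}: {item}")
```

This is only a triage aid; the authoritative data is the Conditions block on the Plan itself.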
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (MTV 2.0.0 images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2021:2381