Description of problem:
When a migration plan is run, a ServiceTemplateTransformationPlanTask and an InfraConversionJob are created per VM in the plan. The InfraConversionJob is responsible for monitoring the process via a state machine. With the current state machine, the first thing that happens is that a conversion host is assigned to the task. Only then is the task started, which runs the ServiceTemplateTransformationPlanTask#preflight_check method. This has two effects:

1. A conversion host is needed to run the checks. In a large plan, this may happen very late because the number of conversion slots is limited.
2. The UI doesn't report the correct total storage to migrate, because the virtv2v_disks array is only populated during preflight_check. This means that migrations waiting for a conversion host are not reported.

Version-Release number of selected component (if applicable):
5.10.6

How reproducible:
Always

Steps to Reproduce:
1. Create a migration plan with 20 VMs
2. Limit the total concurrent migrations to 10
3. Start the migration plan

Actual results:
The total storage reported on the migration plan card corresponds to the storage of the 10 VMs currently migrating. As the other VMs start to migrate, the total storage grows.

Expected results:
The total storage reported on the migration plan card corresponds to the storage of all 20 VMs.
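The effect of the ordering described above can be sketched with a small simulation. This is a hypothetical illustration, not the actual ManageIQ classes: TaskSim and reported_storage are stand-ins for the task and the plan-card total, assuming the card sums sizes from each task's virtv2v_disks array.

```ruby
# Hypothetical sketch: why running preflight_check before acquiring a
# conversion host lets the UI report the full storage total up front.
TaskSim = Struct.new(:disk_size, :virtv2v_disks, :conversion_host) do
  def preflight_check
    # In ManageIQ, preflight_check populates virtv2v_disks; we simulate
    # that here with a single disk entry.
    self.virtv2v_disks = [disk_size]
  end
end

# The plan card sums sizes from virtv2v_disks; tasks whose array is
# still unpopulated contribute nothing to the total.
def reported_storage(tasks)
  tasks.sum { |t| (t.virtv2v_disks || []).sum }
end

tasks = Array.new(20) { TaskSim.new(10, nil, nil) }
slots = 10

# Current ordering: acquire a conversion host first, then preflight.
# Only the 10 VMs holding a conversion slot are counted.
tasks.first(slots).each do |t|
  t.conversion_host = :host1
  t.preflight_check
end
puts reported_storage(tasks)  # => 100

# Proposed ordering: preflight every task when the plan starts, before
# any conversion host is assigned. All 20 VMs are counted immediately.
tasks.each(&:preflight_check)
puts reported_storage(tasks)  # => 200
```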
This is going to be part of a broader rewrite of the state machine to integrate warm migration steps. Moving to 5.11.z.
https://github.com/ManageIQ/manageiq/pull/19146
*** Bug 1740661 has been marked as a duplicate of this bug. ***
To test this BZ, create a migration plan with a VM that won't pass the preflight check. One example is a VM that is supposed to migrate to OpenStack but is powered off when the migration plan starts. The VM migration should fail right away, without waiting for a conversion host. This can be verified in the Rails console:

irb> vm = Vm.find_by(:name => 'my_vm', :vendor => 'vmware')
irb> task = ServiceTemplateTransformationPlanTask.where(:source => vm).last
irb> task.status
=> "Error"
irb> task.conversion_host
=> nil
Verified this issue with CFME version: 5.11.1.1

While migrating 20 VMs with one conversion host configured for a maximum of 10 VMs, the CFME UI first showed the data size for 10 VMs, and after some seconds the data size was updated to the total size (see attached video). Is this the expected behaviour?

Tried to reproduce the issue with one VM, as described in comment #5, without success. Our tries were:
1. VM without network -> passed (by design).
2. VM with storage that doesn't match the storage mapping (tested via the REST API) -> didn't get the described error:

irb> vm = Vm.find_by(:name => 'v2v_migration_vm_0', :vendor => 'vmware')
irb> task = ServiceTemplateTransformationPlanTask.where(:source => vm).last
irb> task.status
=> "Ok"

How can this be verified with a single migration to RHV?
Created attachment 1643009 [details] Screen Record - bug verification
One possibility is to verify that with RHV:
1. Create an infrastructure mapping with a RHV provider
2. Create a migration plan with the previously created infrastructure mapping
3. Modify the infrastructure mapping to remove the cluster mapping
4. Run the migration plan

To remove the cluster mapping from the infrastructure mapping via the Rails console:

irb> mapping = TransformationMapping.find_by(:name => "My Mapping")
irb> mapping.transformation_mapping_items.select { |i| i.source_type == 'EmsCluster' }.each { |i| i.destroy }