New commit detected on ManageIQ/manageiq/hammer:
https://github.com/ManageIQ/manageiq/commit/59e5de4f03c21eaaa584ef5a2ab933bd4ab4a8c5

commit 59e5de4f03c21eaaa584ef5a2ab933bd4ab4a8c5
Author:     Keenan Brock <keenan>
AuthorDate: Fri Jun 14 18:03:37 2019 -0400
Commit:     Keenan Brock <keenan>
CommitDate: Fri Jun 14 18:03:37 2019 -0400

    Merge pull request #18860 from djberg96/conversion_host_throttling

    [V2V] Modify active_tasks so that it always reloads
    (cherry picked from commit 660387c242a7581405ad85280370cd77317aac36)

    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1721117
    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1721118

 app/models/conversion_host.rb     |  5 +-
 lib/infra_conversion_throttler.rb | 13 +-
 2 files changed, 15 insertions(+), 3 deletions(-)
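For context, the gist of the PR title is that active_tasks should hit the database on every call rather than returning a cached list, so the throttler never counts stale tasks. A minimal sketch of that idea, assuming a miq_tasks association (illustrative only, not the verbatim diff):

    class ConversionHost < ApplicationRecord
      # Always query the database so the throttler sees the current task
      # count; the `miq_tasks` association name is assumed for illustration.
      def active_tasks
        miq_tasks.reload.where(:state => "active")
      end
    end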
Created attachment 1584982 [details]
Maximum concurrent migrations per conversion host

As shown in the attached screenshot, Maximum concurrent migrations per conversion host is set to 3.
Created attachment 1584983 [details]
seven migrations running at a time

I started 8 migrations at the same time, of which 7 started.
Created attachment 1584984 [details]
four migrations using same conversion host

As shown in the attached screenshot, more than three migrations (Maximum concurrent migrations per conversion host was set to three) are using the same conversion host.
Started 8 migrations with Maximum concurrent migrations per conversion host set to 3. Two conversion hosts were configured, so each should allow three migrations at a time, whereas 4 migrations were allowed. @Ilanit, please check the appliance https://10.16.5.95/ and confirm whether this is the test case you tested.
@Ilanit, can you please run a migration plan with logs in debug mode? This should print extra logs that will help us understand why the hosts are selected.
Ilanit and I have been running some migrations today and we could see that the throttling was working fine. However, we had made some changes on Ilanit's appliance, applying the PRs mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1726186 and https://bugzilla.redhat.com/show_bug.cgi?id=1719689, so they may have an effect. Both PRs should be merged soon and be part of CFME 5.10.7, so I am moving this BZ to ON_QA to revalidate once 5.10.7 is available to QE.

Here is the scenario we tested with 2 conversion hosts (a console sketch of the max_concurrent_tasks steps follows the list):

- Create a migration plan with 20 VMs
- Set the number of concurrent conversions per host to 5 using the 'Migration Settings' page
- Start the migration plan
- Check that each conversion host has 5 migrations and that 10 migrations are in PreMigration state
- Set the max_concurrent_tasks attribute of the first conversion host to 7
- Check that conversion host 1 has 7 migrations, conversion host 2 has 5 migrations, and 8 migrations are in PreMigration state
- Set the max_concurrent_tasks attribute of the second conversion host to 7
- Check that conversion host 1 has 7 migrations, conversion host 2 has 7 migrations, and 6 migrations are in PreMigration state
- Set the number of concurrent conversions per host to 5 using the 'Migration Settings' page
- Set the max_concurrent_tasks attribute of both conversion hosts to nil
- Check that each conversion host has 10 migrations and that all migrations are started
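For reference, the max_concurrent_tasks steps above were done from a Rails console on the appliance. A hedged example of what that looks like (the lookup by name is hypothetical):

    # From a Rails console on the appliance
    ch = ConversionHost.find_by(:name => "conversion-host-1")  # hypothetical host name

    # Override the global 'Migration Settings' value for this host only
    ch.update!(:max_concurrent_tasks => 7)

    # Setting it back to nil falls back to the global setting
    ch.update!(:max_concurrent_tasks => nil)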
@Ytale, can you enable debug mode on your appliance, run the migration again, and attach $evm.log?
Created attachment 1589235 [details]
evm log

evm.log in debug mode.
@Shveta, from the logs, I can see that the conversion host has 4 migrations and 2 are pending. Isn't that the expected behavior?
I am not sure why the logs show that, but in reality, if you look at the UI, this is not the case. Let me prepare some sample plans for you; give me 30 minutes.
@Fabien, the plans are ready. I created the 'fabien-th1..th6' plans; the limit is set to 3 in the UI and the log level is also debug. You can check it by migrating on the same appliance.
Created attachment 1589407 [details]
6 migration plans - only 3 running
Created attachment 1589409 [details]
4 migration plans - only 3 running
When I connected to the appliance, there was a migration plan in progress. However, its migration task was finished, so that probably explains what you experienced in the UI.

You didn't follow the scenario that Ilanit and I proposed. Any reason for that? We proposed using 1 migration plan with X VMs rather than X migration plans with 1 VM, because the throttling is done at the task level, not the migration plan level. It also lets you see all the migration tasks in the migration plan details.

I've attached screenshots of the appliance after removing the 'ghost' migration plan. You can see that only 3 migration plans are running at the same time, so for me it validates the fix. All migration plans are now finished, so you can test it yourself again. Moving back to ON_QA.
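To illustrate why plan boundaries don't matter here, a rough sketch of task-level throttling (class and helper names are assumptions, not the actual InfraConversionThrottler code):

    # Limits are enforced per conversion host, regardless of which
    # migration plan a task belongs to.
    ConversionHost.all.each do |host|
      limit      = host.max_concurrent_tasks || default_limit  # default_limit: 'Migration Settings' value (assumed helper)
      free_slots = limit - host.active_tasks.count
      next if free_slots <= 0

      # pending_tasks_for and start_conversion are illustrative helpers
      pending_tasks_for(host).first(free_slots).each { |task| start_conversion(task) }
    end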
Created one plan with 8 VMs and the throttling limit set to 3. Only three VMs started migrating at a time. Working as expected. Verified in 5.10.7.0.20190709151852_68f0bf9.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:1833