Bug 1083926 - The host's max_scheduling_memory should be updated when a live migration starts.
Summary: The host's max_scheduling_memory should be updated when a live migration starts.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Martin Sivák
QA Contact: Artyom
URL:
Whiteboard: sla
Depends On:
Blocks: rhev3.5beta 1156165
 
Reported: 2014-04-03 09:14 UTC by Roman Hodain
Modified: 2019-04-28 09:29 UTC
CC List: 13 users

Fixed In Version: vt1.3
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-11 17:58:45 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:




Links
System ID | Status | Summary | Last Updated
Red Hat Product Errata RHSA-2015:0158 | SHIPPED_LIVE | Important: Red Hat Enterprise Virtualization Manager 3.5.0 | 2015-02-11 22:38:50 UTC
oVirt gerrit 26707 (master) | MERGED | engine: Substract pendingVmSize from maxSchedulingMemory | Never

Description Roman Hodain 2014-04-03 09:14:23 UTC
Description of problem:
When a live migration is started, the max_scheduling_memory of a host
(https://rhev-m/api/hosts/<UUID>) is not updated immediately, but only after
the migration has finished. This is not correct, as the memory is already
consumed when the migration starts.

If the REST API is used to migrate a VM to a specific host,
max_scheduling_memory can be checked beforehand to make sure that the
migration will not fail because of insufficient memory. Unfortunately, if
more than one VM is migrated, the value is not updated in time and subsequent
migrations can fail.
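
For illustration, a minimal sketch of that check-then-migrate pattern,
assuming the Python requests library and the RHEV 3.x XML API (the engine
URL, credentials, host UUID and VM ID below are placeholders):

    import xml.etree.ElementTree as ET
    import requests

    API = "https://rhev-m/api"                           # placeholder engine URL
    AUTH = ("admin@internal", "password")                # placeholder credentials
    DEST_HOST = "11111111-1111-1111-1111-111111111111"   # placeholder host UUID
    VM_ID = "22222222-2222-2222-2222-222222222222"       # placeholder VM ID

    def max_scheduling_memory(host_id):
        # Read the host resource and extract max_scheduling_memory (bytes).
        resp = requests.get("%s/hosts/%s" % (API, host_id),
                            auth=AUTH, verify=False)
        resp.raise_for_status()
        return int(ET.fromstring(resp.content).findtext("max_scheduling_memory"))

    def migrate(vm_id, host_id):
        # Ask the engine to migrate the VM to a specific destination host.
        body = '<action><host id="%s"/></action>' % host_id
        resp = requests.post("%s/vms/%s/migrate" % (API, vm_id), data=body,
                             auth=AUTH, verify=False,
                             headers={"Content-Type": "application/xml"})
        resp.raise_for_status()

    VM_MEMORY = 2 * 1024 ** 3    # bytes the VM needs on the destination
    if max_scheduling_memory(DEST_HOST) >= VM_MEMORY:
        migrate(VM_ID, DEST_HOST)

With the behaviour described above, repeating this check for a second VM can
pass even though the first migration has already claimed the memory.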

Version-Release number of selected component (if applicable):
	rhevm-restapi-3.3.1-0.48.el6ev.noarch

How reproducible:
	100%

Steps to Reproduce:
	1. Create a VM	
	2. Start migration of the VM to another host
	3. Monitor the max_scheduling_memory value (see the polling sketch below)
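
For step 3, a small polling loop can make the timing visible; a sketch,
reusing the max_scheduling_memory helper and placeholders from the sketch in
the description:

    import time

    # Poll max_scheduling_memory once per second while the migration runs
    # and print every change, so the moment the value moves is visible.
    last = None
    for _ in range(120):
        value = max_scheduling_memory(DEST_HOST)
        if value != last:
            print(time.strftime("%H:%M:%S"), value)
            last = value
        time.sleep(1)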

Actual results:
	The value is updated only after the migration is finished.

Expected results:
	The value is updated as soon as the migration starts, and in case of a
failure the value is restored.

Additional info:

Comment 1 Juan Hernández 2014-04-03 09:45:09 UTC
The RESTAPI gets the max_scheduling_memory value directly from the backend with each invocation, it doesn't cache it in any way, so this probably needs to be changed in the backend.

Comment 2 Michal Skrivanek 2014-04-11 05:52:46 UTC
thoughts?

Comment 3 Martin Sivák 2014-04-11 12:38:01 UTC
The scheduling algorithm updates the pending CPU and memory values for the destination host once it selects it. This happens before the migration is started.

The bad value comes from VDS.calculateFreeVirtualMemory, which does not take the pending fields into account. The result is not used anywhere except REST.
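
In other words, the computation needs to subtract the pending reservations as
well. A rough Python sketch of that arithmetic (the field names here are
illustrative, not the actual engine code; the real change is the gerrit patch
listed above):

    def max_scheduling_memory(physical_mem_mb, mem_committed_mb,
                              pending_vm_mem_mb, overcommit_percent):
        # Memory the scheduler may still hand out under the cluster
        # overcommit ratio...
        free_virtual = physical_mem_mb * overcommit_percent / 100.0 - mem_committed_mb
        # ...minus memory already promised to in-flight VMs (the pending
        # counters), which the old computation ignored.
        return max(free_virtual - pending_vm_mem_mb, 0)

    # Example: 16 GB host, 100% overcommit, 4 GB committed, 1 GB pending.
    print(max_scheduling_memory(16384, 4096, 1024, 100))   # 11264 MB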

Comment 5 Artyom 2014-08-07 12:59:43 UTC
Checked on ovirt-engine-3.5.0-0.0.master.20140804172041.git23b558e.el6.noarch.
Everything looks fine except one thing:
I have a host with <max_scheduling_memory>16268656640</max_scheduling_memory>
and a VM with memory: 2048 MB and guaranteed_memory: 1024 MB.
When I start the migration I see
<max_scheduling_memory>15194914816</max_scheduling_memory>, so the host holds a reservation of 1024 MB,
but just before the migration finishes I can see that, for a few seconds, the old value is returned:
<max_scheduling_memory>16268656640</max_scheduling_memory>
and after the migration has finished
<max_scheduling_memory>14053015552</max_scheduling_memory>, so the VM reserved the full 2048 MB.

The question now is why the value jumps back to the old one for a second; when a large number of VMs migrate from host to host, this can become a source of problems.
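
For reference, the deltas in those readings are self-consistent, as a quick
computation shows (the extra 65 MiB in the final delta is presumably per-VM
overhead, which is an assumption):

    MiB = 1024 ** 2
    start, during, after = 16268656640, 15194914816, 14053015552

    print((start - during) // MiB)   # 1024 -> matches guaranteed_memory
    print((start - after) // MiB)    # 2113 -> the full 2048 MiB plus overhead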

Comment 6 Martin Sivák 2014-11-25 15:41:41 UTC
Please open a new bug for that. It is probably related to synchronization of the database writes for memory pending counters and host info coming from vdsm.

Comment 7 Artyom 2014-11-26 08:03:02 UTC
OK, I will recheck this one, and if it is OK, I will verify it.

Comment 8 Artyom 2014-11-27 13:13:20 UTC
Verified on rhevm-3.5.0-0.21.el6ev.noarch

Comment 10 errata-xmlrpc 2015-02-11 17:58:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0158.html

