Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1142081

Summary: If a VM is stopped while in the 'Waiting for Launch' state, the engine does not update 'Max free Memory for scheduling new VMs' in the webadmin
Product: Red Hat Enterprise Virtualization Manager Reporter: Artyom <alukiano>
Component: ovirt-engine Assignee: Martin Sivák <msivak>
Status: CLOSED CURRENTRELEASE QA Contact: Artyom <alukiano>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 3.5.0 CC: ahadas, alukiano, dfediuck, gklein, lsurette, mavital, mgoldboi, msivak, rbalakri, Rhev-m-bugs, srevivo, ykaul
Target Milestone: ovirt-3.6.0-rc3 Keywords: TestOnly, Triaged
Target Release: 3.6.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-04-20 01:32:15 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: SLA RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
screenshot none

Description Artyom 2014-09-16 06:46:57 UTC
Created attachment 937861 [details]
screenshot

Description of problem:
If a VM is stopped while in the 'Waiting for Launch' state, the engine does not update the host's 'Max free Memory for scheduling new VMs' value in the webadmin.

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.12.beta.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a VM and run it
2. Stop the VM while it is in the 'Waiting for Launch' state

Actual results:
The engine does not update the host's 'Max free Memory for scheduling new VMs' value in the webadmin.

Expected results:
The engine updates the host's 'Max free Memory for scheduling new VMs' value in the webadmin.

Additional info:
I also attach a screenshot and the relevant database row:
engine=# select mem_available from vds where host_name='master-vds10.qa.lab.tlv.redhat.com';
 mem_available 
---------------
         22984
(1 row)

The REST API is also not updated and still returns <max_scheduling_memory>3807379456</max_scheduling_memory>
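Note that the REST value above is in bytes, while the mem_available column in the database is in MB. A minimal sketch of the unit conversion, using the XML fragment from this report (the helper function name is hypothetical, not part of any oVirt SDK):

```python
# Parse the <max_scheduling_memory> element from a host XML fragment
# like the one the REST API returned in this report (value in bytes).
import xml.etree.ElementTree as ET

HOST_XML = """<host>
  <max_scheduling_memory>3807379456</max_scheduling_memory>
</host>"""

def max_scheduling_mib(xml_text):
    # Convert the byte value to MiB for comparison with the DB column.
    root = ET.fromstring(xml_text)
    return int(root.findtext("max_scheduling_memory")) // (1024 * 1024)

print(max_scheduling_mib(HOST_XML))  # 3631 MiB, vs. 22984 MB in the DB
```

The stale REST value (about 3.6 GiB) is clearly out of sync with the 22984 MB the vds table reports, which is the inconsistency this bug describes.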

Comment 1 Arik 2014-09-28 06:25:58 UTC
Was it the first VM to run on that host?

Comment 2 Artyom 2014-09-28 09:31:23 UTC
Yes. If you need, I can try to check other cases where it is not the first VM.

Comment 3 Arik 2014-09-28 11:21:53 UTC
(In reply to Artyom from comment #2)
> yes, if you need I can try to check other cases, when it not first vm.

If I understand the problem correctly, in this particular case we decrease the committed memory instead of the pending memory.
So if no other VM was running on the host, 'max_scheduling_memory' would not be decreased (because there was no committed memory before).
If another VM was running on the host, its committed memory would be decreased. The bug would still exist, but you would see max_scheduling_memory being decreased (I expect the pending memory would stay the same while the committed memory is reduced).
It would be great if you could verify this.
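Arik's explanation above can be sketched as follows. This is a hypothetical model for illustration, not the actual ovirt-engine code; the class, its field names, and the clamping to zero are all assumptions:

```python
# Hypothetical sketch (NOT the ovirt-engine implementation) of the host
# memory accounting described in comment 3: memory of a launching VM is
# held as "pending" and only becomes "committed" once the VM is running.

class HostMemory:
    def __init__(self, physical_mb):
        self.physical_mb = physical_mb
        self.committed_mb = 0  # memory of VMs already running on the host
        self.pending_mb = 0    # memory reserved for VMs still launching

    def max_scheduling_memory(self):
        # Simplified: ignores overcommit and reserved-memory terms.
        return self.physical_mb - self.committed_mb - self.pending_mb

    def start_vm(self, mem_mb):
        # While the VM is in 'Waiting for Launch', its memory is pending.
        self.pending_mb += mem_mb

    def stop_vm_buggy(self, mem_mb):
        # The reported behavior: the engine decreases committed memory even
        # though the stopped VM never left the pending state. With no other
        # VM running, committed is already 0, so nothing changes.
        self.committed_mb = max(0, self.committed_mb - mem_mb)

    def stop_vm_fixed(self, mem_mb):
        # Expected behavior: release the pending reservation instead.
        self.pending_mb = max(0, self.pending_mb - mem_mb)
```

With a single VM, start_vm followed by stop_vm_buggy leaves max_scheduling_memory stale at its post-start value, matching what Artyom observed; stop_vm_fixed restores the original value.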

Comment 4 Doron Fediuck 2014-10-26 14:55:00 UTC
*** Bug 1156011 has been marked as a duplicate of this bug. ***

Comment 5 Doron Fediuck 2015-06-07 08:56:08 UTC
Martin,
does the updated pending resource mechanism handle this case?

Comment 6 Martin Sivák 2015-10-06 11:22:01 UTC
Yes, I believe so.

Comment 7 Doron Fediuck 2015-10-07 06:01:27 UTC
Based on comment 6 this should be resolved by now.
Please try to verify.

Comment 8 Artyom 2015-10-12 08:31:52 UTC
Verified on rhevm-3.6.0-0.18.el6.noarch