Bug 1290465 - Max free Memory for scheduling new VMs is not updated correctly
Status: CLOSED NEXTRELEASE
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.7
Hardware: All
OS: All
Priority: high
Severity: urgent
: ---
: ---
Assigned To: nobody nobody
sla
:
Depends On: 1182007
Blocks:
Reported: 2015-12-10 10:25 EST by Roy Golan
Modified: 2016-02-10 15:14 EST (History)
17 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1182007
Environment:
Last Closed: 2015-12-17 08:59:47 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 36970 None None None Never
oVirt gerrit 40611 None None None Never

Description Roy Golan 2015-12-10 10:25:08 EST
+++ This bug was initially created as a clone of Bug #1182007 +++

Description of problem:
When running VMs on a host, in the webadmin portal the admin can see 'Max free Memory for scheduling new VMs' set to the maximum host memory.
After refreshing the tab a few times, 'Max free Memory for scheduling new VMs' shows the correct value, but only about 1 out of 8 times.

Version-Release number of selected component (if applicable):
vt13.5
vdsm-4.16.8.1-4.el6ev
libvirt-0.10.2-46.el6_6.2

How reproducible:
90%

Steps to Reproduce:
1. Have 1 host in cluster
2. Set overcommitment of cluster to 100%
3. Run VM on Host
4. Check the 'Max free Memory for scheduling new VMs' value in the Host General tab
5. If the value is correct, refresh the General tab

Actual results:
'Max free Memory for scheduling new VMs' is not updated correctly: most of the time it shows the full host memory; after a few refreshes it shows the correct value and then goes back to full host memory.

Expected results:
'Max free Memory for scheduling new VMs' is updated correctly.

Additional info:
Memory of host used: 32047 MB total, 2243 MB used, 29804 MB free
Max free memory: changes on refresh between 11572 MB (correct) and 31661 MB (full host memory)

--- Additional comment from Eyal Edri on 2015-02-25 03:43:45 EST ---

3.5.1 is already full with bugs (over 80), and since none of these bugs were added as urgent for 3.5.1 release in the tracker bug, moving to 3.5.2

--- Additional comment from Roy Golan on 2015-03-04 04:20:13 EST ---

moving back to assigned as the upstream patch doesn't solve the issue

--- Additional comment from Pavel Zhukov on 2015-05-21 09:54:20 EDT ---

Reproduced with rhevm-backend-3.5.0-0.32.el6ev.noarch

--- Additional comment from Pavel Zhukov on 2015-05-21 10:02:14 EDT ---

And with rhevm-backend-3.5.1.1-0.1.el6ev.noarch

--- Additional comment from Dan Yasny on 2015-06-05 10:08:01 EDT ---

I've got an issue where I see the opposite - the reported memory value is very low, so the hosts are invalidated as migration destinations, and no VM can migrate in the setup. All hosts with misreported available memory get filtered out at scheduling, and migration fails.

restarting ovirt-engine helps, but eventually the values in the DB go down again, even though vdsClient reports correctly.

Not sure this is relevant to this bug, or another BZ is due.


PS: besides the filtering out of hosts because of memory, there is nothing helpful in the logs to show where the available memory values come from or how often they are polled. To me this seems like another potential bug, related to this one.

--- Additional comment from Roy Golan on 2015-06-10 06:04:28 EDT ---

This was fixed in 3.6. The whole mechanism went through a series of changes, and none of them fit in a z-stream release.

Dan, I suggest you open a separate bug with the details.

--- Additional comment from Dan Yasny on 2015-06-10 11:50:12 EDT ---

Created https://bugzilla.redhat.com/show_bug.cgi?id=1230314

--- Additional comment from Eyal Edri on 2015-08-13 06:37:27 EDT ---

Moving old bug, fixed before the oVirt alpha release, to fixed in current beta 2, 3.6.0-9.

--- Additional comment from Shira Maximov on 2015-09-09 09:37:46 EDT ---

Failed to verify this bug on the following version:
Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6


Steps to Reproduce:
1. Have 1 host in cluster
2. Set overcommitment of cluster to 100%
3. Run memory on the host
4. See that Host General tab 'Max free Memory for scheduling new VMs'
5. the value isn't correct

--- Additional comment from Doron Fediuck on 2015-11-22 02:52:39 EST ---

(In reply to Shira Maximov from comment #9)
> failed to verify this bug on the following version: 
> Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6
> 
> 
> Steps to Reproduce:
> 1. Have 1 host in cluster
> 2. Set overcomitment of cluster to 100%
> 3. Run memory on the host

Please explain what you did in step 3;
Did you run additional VMs? Did you run something else that consumes memory?

--- Additional comment from Shira Maximov on 2015-11-26 15:10:31 EST ---

I have checked the memory consumption in two ways:
1. Just allocating the memory when creating a new VM — that worked fine.
2. Creating a script that runs on the host and allocates the memory.

something like this:
import sys
import time

if __name__ == "__main__":
    # Hold roughly sys.argv[1] bytes of host memory for the duration of the test
    foo = " " * int(sys.argv[1])
    time.sleep(1800)  # max test time

With this type of allocation, the value of 'Max free Memory for scheduling new VMs'
didn't reflect the real max free memory for scheduling new VMs.

I am aware of the FreeMemoryCalculation, but I think that this calculation will not reflect the real free memory on the host, because a service running on the host can consume memory.

Moran, what do you think?
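For context, the limitation described above can be sketched as follows. This is a minimal illustration, not the engine's actual FreeMemoryCalculation code: the function name, the formula (host total scaled by the overcommit ratio minus memory committed to guests), and the ~20475 MB committed-VM figure (chosen so the numbers line up with the values in the bug report) are all assumptions. The point is that a commitment-based estimate only sees guest memory, so RAM eaten by an ordinary host process is invisible to it:

```python
def max_free_for_scheduling(mem_total_mb, committed_vm_mb, overcommit_pct=100):
    """Sketch of a commitment-based free-memory estimate (illustrative only).

    It subtracts only the memory committed to guests from the overcommit
    budget, so memory consumed by non-VM processes on the host (e.g. the
    allocation script above) does not reduce the result.
    """
    budget = mem_total_mb * overcommit_pct / 100.0
    return budget - sum(committed_vm_mb)

# Host from the report: 32047 MB total; assume one VM committing ~20475 MB
print(max_free_for_scheduling(32047, [20475]))  # 11572.0
```

A host-side script that allocates, say, 4 GB changes nothing in this result, which matches the observation that the reported value did not reflect the real free memory.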

--- Additional comment from Shira Maximov on 2015-11-30 10:10:48 EST ---

Roy, can you please move it back to ON_QA so I can verify the bug?

--- Additional comment from Shira Maximov on 2015-12-01 03:15:36 EST ---

Verified on: Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6


Steps to Reproduce:
1. Have 1 host in cluster
2. Set overcommitment of cluster to 100%
3. Run a VM that allocates some of the host's memory
4. Check the 'Max free Memory for scheduling new VMs' value in the Host General tab
5. The value is correct
