Summary: Max free Memory for scheduling new VMs is not updated correctly
Product: Red Hat Enterprise Virtualization Manager
Reporter: Lukas Svaty <lsvaty>
Component: ovirt-engine
Assignee: Roy Golan <rgolan>
Status: CLOSED ERRATA
QA Contact: Shira Maximov <mshira>
Version: 3.5.0
CC: dfediuck, dyasny, gklein, istein, lpeer, lsurette, mgoldboi, mshira, obockows, pzhukov, rbalakri, rgolan, Rhev-m-bugs, sherold, yeylon, ykaul
Fixed In Version: 3.6.0-9
Doc Type: Bug Fix
Previously, the internal monitoring for virtual machines and hosts didn't take into account the latest running virtual machines when calculating the free memory available for running additional virtual machines. As a result, a host would not be able to run virtual machines, or would run virtual machines without having enough memory for them. With this release, the calculation of the host's maximum free memory includes the running virtual machines.
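The corrected calculation described above can be illustrated with a minimal sketch. All names and the reserved-memory constant are hypothetical (the actual ovirt-engine code is Java and considerably more involved); the VM size is chosen so the example reproduces the 11572 MB "correct" value quoted later in this report. The pre-fix bug behaved as if the list of running VMs were empty on some refreshes, which made the value bounce back to the full host memory.

```python
def max_free_for_scheduling(physical_mem_mb, overcommit_pct,
                            running_vm_mem_mb, reserved_mb=256):
    """Sketch of 'Max free Memory for scheduling new VMs' for one host.

    The memory budget is the physical memory scaled by the cluster
    overcommitment percentage, minus the memory committed to every
    currently running VM (including ones that just started), minus a
    hypothetical fixed reservation for the host itself.
    """
    budget = physical_mem_mb * overcommit_pct / 100.0
    committed = sum(running_vm_mem_mb)  # must include recently started VMs
    return max(budget - committed - reserved_mb, 0)

# 32047 MB host, 100% overcommitment, one (hypothetical) 20219 MB VM:
print(max_free_for_scheduling(32047, 100, [20219]))  # → 11572.0
```

Dropping the `running_vm_mem_mb` term reproduces the reported symptom: the function would return roughly the full host memory instead.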
: 1290465 (view as bug list)
Environment:
Last Closed: 2016-03-09 20:54:32 UTC
Type: Bug
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Description Lukas Svaty 2015-01-14 09:45:25 UTC
Description of problem:
When running VMs on a host, in the WA portal the admin can see 'Max free Memory for scheduling new VMs' set to the maximum host memory. After refreshing the tab a few times, 'Max free Memory for scheduling new VMs' shows the correct value, but only about 1 out of 8 times.

Version-Release number of selected component (if applicable):
vt13.5
vdsm-184.108.40.206-4.el6ev
libvirt-0.10.2-46.el6_6.2

How reproducible:
90%

Steps to Reproduce:
1. Have 1 host in the cluster
2. Set overcommitment of the cluster to 100%
3. Run a VM on the host
4. Check the Host General tab 'Max free Memory for scheduling new VMs'
5. If the value is correct, refresh the General tab

Actual results:
'Max free Memory for scheduling new VMs' is not updated correctly: most of the time it shows the full host memory; after a few refreshes it shows the correct value, then goes back to the full host memory.

Expected results:
'Max free Memory for scheduling new VMs' is updated correctly.

Additional info:
Memory of host: 32047 MB total, 2243 MB used, 29804 MB free
Max free memory: changing on refresh between 11572 MB (correct) and 31661 MB (full host memory)
Comment 1 Eyal Edri 2015-02-25 08:43:45 UTC
3.5.1 is already full of bugs (over 80), and since none of these bugs was marked urgent for the 3.5.1 release in the tracker bug, moving to 3.5.2.
Comment 2 Roy Golan 2015-03-04 09:20:13 UTC
Moving back to ASSIGNED, as the upstream patch doesn't solve the issue.
Comment 3 Pavel Zhukov 2015-05-21 13:54:20 UTC
Reproduced with rhevm-backend-3.5.0-0.32.el6ev.noarch
Comment 4 Pavel Zhukov 2015-05-21 14:02:14 UTC
And with rhevm-backend-220.127.116.11-0.1.el6ev.noarch
Comment 5 Dan Yasny 2015-06-05 14:08:01 UTC
I've got an issue where I see the opposite: the reported memory value is very low, so the hosts are invalidated as migration destinations and no VM can migrate in the setup. All hosts with misreported available memory get filtered out at scheduling, and migration fails. Restarting ovirt-engine helps, but eventually the values in the DB go down again, even though vdsClient reports correctly. I'm not sure whether this is relevant to this bug or another BZ is due.

PS: Besides the filtering out of hosts because of memory, there is nothing helpful in the logs to show where the available memory values come from or how often they are polled. To me this seems like another potential bug, related to this one.
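For context, a scheduling memory filter of the kind described in this comment works roughly as follows. This is a simplified sketch with hypothetical names, not the actual ovirt-engine filter code; it only shows why a stale, under-reported free-memory value makes every migration fail.

```python
def memory_filter(hosts, vm_mem_mb):
    """Drop candidate hosts whose reported free memory cannot fit the VM.

    If a host's reported value is stale and far too low (the symptom in
    this comment), otherwise-valid destinations are filtered out here and
    the scheduler is left with no host to migrate to.
    """
    return [h for h in hosts if h["max_free_mb"] >= vm_mem_mb]

hosts = [
    {"name": "host1", "max_free_mb": 512},    # stale, under-reported value
    {"name": "host2", "max_free_mb": 16384},  # correctly reported value
]
print([h["name"] for h in memory_filter(hosts, 4096)])  # → ['host2']
```

If every host in the cluster reports a stale low value, the filter returns an empty list and no migration destination exists, matching the behavior described above.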
Comment 6 Roy Golan 2015-06-10 10:04:28 UTC
This was fixed in 3.6. The whole mechanism went through a series of changes, and none of them fits in a z-stream release. Dan, I suggest you open a separate bug with the details.
Comment 7 Dan Yasny 2015-06-10 15:50:12 UTC
Comment 8 Eyal Edri 2015-08-13 10:37:27 UTC
Moving this old bug, fixed before the oVirt alpha release, to fixed in the current beta 2, 3.6.0-9.
Comment 9 Shira Maximov 2015-09-09 13:37:46 UTC
Failed to verify this bug on the following version:
Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6

Steps to Reproduce:
1. Have 1 host in the cluster
2. Set overcommitment of the cluster to 100%
3. Run memory on the host
4. Check the Host General tab 'Max free Memory for scheduling new VMs'
5. The value isn't correct
Comment 10 Doron Fediuck 2015-11-22 07:52:39 UTC
(In reply to Shira Maximov from comment #9)
> failed to verify this bug on the following version:
> Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6
>
> Steps to Reproduce:
> 1. Have 1 host in cluster
> 2. Set overcomitment of cluster to 100%
> 3. Run memory on the host

Please explain what you did in step 3. Did you run additional VMs? Did you run something else that consumes memory?
Comment 11 Shira Maximov 2015-11-26 20:10:31 UTC
I have checked the memory consumption in two ways:

1. Just allocating the memory when creating a new VM - that worked fine.
2. Creating a script that runs on the host and allocates memory, something like this:

    import sys
    import time

    if __name__ == "__main__":
        foo = " " * int(sys.argv[1])  # allocate the requested number of bytes
        time.sleep(1800)  # max test time

With this second type of allocation, the value of 'Max free Memory for scheduling new VMs' didn't reflect the real free memory available for scheduling new VMs. I am aware of the FreeMemoryCalculation, but I think this calculation will not reflect the real free memory on the host, because a service that runs on the host can consume the memory. Moran, what do you think?
Comment 12 Shira Maximov 2015-11-30 15:10:48 UTC
Roy, can you please move it back to ON_QA so I can verify the bug?
Comment 13 Shira Maximov 2015-12-01 08:15:36 UTC
Verified on:
Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6

Steps to Reproduce:
1. Have 1 host in the cluster
2. Set overcommitment of the cluster to 100%
3. Run a VM that allocates some memory of the host
4. Check the Host General tab 'Max free Memory for scheduling new VMs'
5. The value is correct
Comment 19 errata-xmlrpc 2016-03-09 20:54:32 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2016-0376.html