Description of problem:
When running VMs on a host, in the webadmin portal the admin can see 'Max free Memory for scheduling new VMs' set to the maximum host memory. After refreshing the tab a few times, 'Max free Memory for scheduling new VMs' shows the correct value, but only about 1 out of 8 times.

Version-Release number of selected component (if applicable):
vt13.5
vdsm-4.16.8.1-4.el6ev
libvirt-0.10.2-46.el6_6.2

How reproducible:
90%

Steps to Reproduce:
1. Have 1 host in the cluster
2. Set overcommitment of the cluster to 100%
3. Run a VM on the host
4. Check the Host General tab 'Max free Memory for scheduling new VMs'
5. If the value is correct, refresh the General tab

Actual results:
'Max free Memory for scheduling new VMs' is not updated correctly: most of the time it shows the full host memory; after a few refreshes it shows the correct value and then goes back to the full host memory.

Expected results:
'Max free Memory for scheduling new VMs' is updated correctly.

Additional info:
Memory of host used: 32047 MB total, 2243 MB used, 29804 MB free
Max free memory: changing on refresh between 11572 MB (correct) and 31661 MB (full host memory)
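For anyone trying to reproduce this, a small host-side watcher like the one below can track what VDSM reports while the webadmin tab is refreshed. This is only a sketch, not part of the bug report: it assumes vdsClient is available on the host, that getVdsStats prints 'key = value' lines including memAvailable/memFree/memCommitted fields (as this VDSM version does), and a Python with subprocess.check_output (2.7+):

    import subprocess
    import time

    # Memory fields the engine consumes; names may differ between
    # VDSM versions.
    FIELDS = ("memAvailable", "memFree", "memCommitted")

    while True:
        out = subprocess.check_output(
            ["vdsClient", "-s", "0", "getVdsStats"]).decode()
        for line in out.splitlines():
            stripped = line.strip()
            if stripped.startswith(FIELDS):
                print(stripped)
        print("---")
        time.sleep(5)

If VDSM keeps reporting a stable, correct memAvailable while the webadmin value flips between the correct figure and the full host memory, that points at the engine side rather than the host.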
3.5.1 is already full (over 80 bugs), and since none of these bugs were marked as urgent for the 3.5.1 release in the tracker bug, moving to 3.5.2.
moving back to assigned as the upstream patch doesn't solve the issue
Reproduced with rhevm-backend-3.5.0-0.32.el6ev.noarch
And with rhevm-backend-3.5.1.1-0.1.el6ev.noarch
I've got an issue where I see the opposite: the reported memory value is very low, so the hosts are invalidated as migration destinations and no VM can migrate in the setup. All hosts with misreported available memory get filtered out at scheduling, and migration fails. Restarting ovirt-engine helps, but eventually the values in the DB go down again, even though vdsClient reports correctly. I'm not sure whether this is relevant to this bug or another BZ is due.

PS: besides the filtering out of hosts because of memory, there is nothing helpful in the logs to show where the available memory values come from or how often they are polled. To me this seems like another potential bug, related to this one.
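To compare the two views described here (what the engine scheduler sees vs. what VDSM reports), something like the following could help. A sketch only: the engine URL, credentials and host id are placeholders, max_scheduling_memory is assumed to be the REST API v3 host attribute (in bytes), and the vdsClient half has to run on the host itself:

    import subprocess
    import xml.etree.ElementTree as ET

    import requests

    # Placeholders for the local setup.
    ENGINE = "https://engine.example.com/api"
    AUTH = ("admin@internal", "password")
    HOST_ID = "00000000-0000-0000-0000-000000000000"

    # What the engine believes (the value the scheduler filters on).
    resp = requests.get("%s/hosts/%s" % (ENGINE, HOST_ID),
                        auth=AUTH, verify=False)
    root = ET.fromstring(resp.content)
    engine_value = int(root.findtext("max_scheduling_memory"))

    # What VDSM actually reports (run on the host; assumes
    # 'memAvailable = <MB>' lines in getVdsStats output).
    stats = subprocess.check_output(
        ["vdsClient", "-s", "0", "getVdsStats"]).decode()
    vdsm_value = [l.split("=")[1].strip() for l in stats.splitlines()
                  if l.strip().startswith("memAvailable")][0]

    print("engine max_scheduling_memory (bytes): %d" % engine_value)
    print("vdsm memAvailable (MB): %s" % vdsm_value)

If the engine figure keeps drifting down while the VDSM figure stays sane, that would narrow this down to the engine-side bookkeeping.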
This was fixed in 3.6. The whole mechanism went through a series of changes, and none of them fits in a z-stream release. Dan, I suggest you open a separate bug with the details.
Created https://bugzilla.redhat.com/show_bug.cgi?id=1230314
Moving this old bug, fixed before the oVirt alpha release, to fixed in the current beta 2, 3.6.0-9.
Failed to verify this bug on the following version:
Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6

Steps to Reproduce:
1. Have 1 host in cluster
2. Set overcomitment of cluster to 100%
3. Run memory on the host
4. See that Host General tab 'Max free Memory for scheduling new VMs'
5. the value isn't correct
(In reply to Shira Maximov from comment #9)
> failed to verify this bug on the following version:
> Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6
>
> Steps to Reproduce:
> 1. Have 1 host in cluster
> 2. Set overcomitment of cluster to 100%
> 3. Run memory on the host

Please explain what you did in step 3: did you run additional VMs, or did you run something else that consumes memory?
I have checked the memory consumption in two ways:

1. Just allocating the memory when creating a new VM - that worked fine.

2. Running a script on the host that allocates the memory, something like this:

    import sys
    import time

    if __name__ == "__main__":
        # Allocate the requested number of bytes and hold them for the
        # duration of the test.
        foo = " " * int(sys.argv[1])
        time.sleep(1800)  # max test time

With this type of allocation, the value of 'Max free Memory for scheduling new VMs' didn't reflect the real max free memory for scheduling new VMs.

I am aware of the FreeMemoryCalculation, but I think that this calculation will not reflect the real free memory on the host, because a service that runs on the host can consume the memory. Moran, what do you think?
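For context, my rough understanding of how the engine derives the value. This is a sketch of the idea only, not the actual FreeMemoryCalculation code, and the exact terms (committed memory, reserved memory) are assumptions:

    def max_free_memory_for_scheduling(physical_mem_mb, mem_committed_mb,
                                       reserved_mem_mb, overcommit_percent):
        # The budget is physical memory scaled by the cluster overcommit
        # ratio, minus what is already committed to running VMs and the
        # host's reserved memory.  It is based on commitments, not on
        # actual host usage, which is why memory eaten by a plain process
        # on the host does not show up in this value.
        budget = physical_mem_mb * overcommit_percent / 100.0
        return budget - mem_committed_mb - reserved_mem_mb

    # Example with the 32047 MB host and 100% overcommit from the bug
    # description; the committed/reserved figures are made up, chosen so
    # the result lands on the reported "correct" value of 11572 MB.
    print(max_free_memory_for_scheduling(32047, 20150, 325, 100))

If that understanding is right, a host-local allocation script is simply invisible to the calculation, so only VM-driven allocation (as in case 1 above) can be expected to move the number.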
Roi, can you please move it back to ON_QA so I can verify the bug?
Verified on:
Red Hat Enterprise Virtualization Manager Version: 3.6.0-0.13.master.el6

Steps to Reproduce:
1. Have 1 host in cluster
2. Set overcommitment of cluster to 100%
3. Run a VM that allocates some of the host's memory
4. See the Host General tab 'Max free Memory for scheduling new VMs'
5. The value is correct
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-0376.html