Description of problem:
During the scale test ramp-up, I found the hosts in our setup at 100% memory utilization.
All of the VMs were shut down, yet the engine still reported 100% memory utilization for the hosts while the running VM count was 0.
Not sure if this problem is related to scale.
Logs are not available.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Configure the cluster to 200% memory over-commit.
2. Start VMs on a host until their total memory exceeds the physical memory size
(e.g. with 64 GB of physical memory and 200% over-commit, start 128 VMs with 1 GB RAM each).
When shutting down the VMs, the engine still shows the same memory utilization (100%).
The engine should not report 100% memory utilization while no VMs are running.
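The over-commit arithmetic behind the steps above can be sketched as follows. This is an illustrative calculation only (not oVirt's scheduler code); the 64 GB host size, 200% ratio, and 1 GB VM size come from the report.

```python
# Illustrative sketch of the memory over-commit arithmetic in the
# reproduction steps; not taken from the oVirt engine source.

def max_vm_memory_gb(physical_gb, overcommit_pct):
    """Total VM memory the scheduler will accept on one host."""
    return physical_gb * overcommit_pct / 100

def vm_count(physical_gb, overcommit_pct, vm_size_gb):
    """How many fixed-size VMs fit under the over-commit limit."""
    return int(max_vm_memory_gb(physical_gb, overcommit_pct) // vm_size_gb)

# 64 GB host at 200% over-commit accepts 128 GB of VM memory,
# i.e. 128 VMs of 1 GB each, as in the report.
print(vm_count(64, 200, 1))  # -> 128
```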
Have you waited long enough? It takes time for the stats counters to bubble through all the layers, especially when the host is under heavy load.
Yes, I think more than one night.
Were there any migrations before you started shutting down the VMs?
Can you try again with DEBUG enabled and give us the logs?
What did the webadmin say in the "Max free Memory for scheduling new VMs" fields (Host tab)?
Pending due to a storage issue.
I'll update and reproduce when able.
Please re-open once the data is available.
Not sure if it is the same bug: in a setup of 10 hosts, 2 of them were reporting the wrong memory usage (95%) with no VMs running on them. The real used memory was about 2 GB out of 32 GB; I believe cached memory was being wrongly reported as real used memory. Freeing the cache with echo 3 > /proc/sys/vm/drop_caches made it report the right real usage. Could it be related to this one? Should I open a new bug?
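The misreporting described above can be illustrated with how "used" memory is derived from /proc/meminfo: a naive MemTotal - MemFree counts the page cache as used, while subtracting Buffers and Cached yields the real usage. This is a hedged sketch (not the code oVirt actually runs), with sample values modeled on the 32 GB host from the comment.

```python
# Illustrative sketch only, not oVirt's implementation. Sample
# /proc/meminfo values (in kB) for a 32 GB host with ~2 GB genuinely
# used and most of the rest sitting in the page cache.
meminfo_kb = {
    "MemTotal": 32 * 1024 * 1024,  # 32 GB
    "MemFree":   1 * 1024 * 1024,  # almost nothing "free"...
    "Buffers":   1 * 1024 * 1024,
    "Cached":   28 * 1024 * 1024,  # ...because the page cache grew
}

def used_pct_naive(m):
    """Counts page cache as used: reports ~97% on this host."""
    return 100 * (m["MemTotal"] - m["MemFree"]) / m["MemTotal"]

def used_pct_real(m):
    """Excludes buffers/cache: reports the ~6% actually in use."""
    used = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
    return 100 * used / m["MemTotal"]

print(round(used_pct_naive(meminfo_kb)))  # -> 97
print(round(used_pct_real(meminfo_kb)))   # -> 6
```

Dropping the caches shrinks Cached, which is why the naive figure fell back in line after the echo 3 > /proc/sys/vm/drop_caches workaround.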
System: CentOS 7 x64
oVirt version: 18.104.22.168-1.el7.centos