Description of problem:
When a VM is started in paused mode (whether via Run Once or the "start in paused mode" flag in the Edit dialog) and then stopped without ever being resumed, the host's pending_vmem_size keeps the VM's guaranteed memory. Starting the VM in paused mode again adds the guaranteed memory a second time (1024 + 1024), so a host can end up with no running VMs yet be unable to start new ones because the memory scheduling filter rejects it.

Version-Release number of selected component (if applicable):
av10

How reproducible:
always

Steps to Reproduce:
1. Create a new VM with some guaranteed memory (e.g. 1024 MB).
2. Run it in paused mode, wait until the VM status is Paused, then stop the VM.
3. Check the host's pending_vmem_size in the engine database:
   select pending_vmem_size from vds_dynamic where vds_id='your_host_id';

Actual results:
1024

Expected results:
0

Additional info:
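The expected accounting can be illustrated with a minimal sketch (hypothetical names; this is not the real engine code, which lives in the scheduler's memory filter): pending_vmem_size should grow by the VM's guaranteed memory when a start is requested and shrink again when the VM goes down, even if the VM was never resumed from Paused.

```python
# Hypothetical sketch of the pending-memory accounting the engine is
# expected to perform; class and method names are illustrative only.
class HostPendingMem:
    def __init__(self):
        # Mirrors vds_dynamic.pending_vmem_size (MB).
        self.pending_vmem_size = 0

    def vm_start_requested(self, guaranteed_mb):
        # Memory is reserved while the VM starts, including "start paused".
        self.pending_vmem_size += guaranteed_mb

    def vm_down(self, guaranteed_mb):
        # Must be released on stop even if the VM was never resumed
        # from Paused -- the step that is missing in this bug.
        self.pending_vmem_size = max(0, self.pending_vmem_size - guaranteed_mb)

host = HostPendingMem()
host.vm_start_requested(1024)  # start VM in paused mode
host.vm_down(1024)             # stop it without resuming
print(host.pending_vmem_size)  # expected result: 0
```

With the buggy behavior described above, the release step never happens, so each paused start leaves another 1024 MB stuck in pending_vmem_size.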
The same thing happens when you migrate a paused VM, so migrating the VM from host to host just increases the pending_vmem_size of each host involved.
Looks like a duplicate of bug 1049318? (or a 3.4 version of it)
*** This bug has been marked as a duplicate of bug 1049318 ***
This should be resolved by bug 1049318. Please verify.
Verified on vt3.1.
RHEV 3.5.0 was released. Closing.