[see the original bug for more details]

Cloning to oVirt: we may need to use the two backported enlightenments, probably adding a separate enum to the existing "os.windows_xp.devices.hyperv.enabled.value = true" key. We need a different set (those 2 new flags) for Windows 10, or we can probably send it to older Windows versions too (needs to be tested) on new enough hosts - so possibly only in the 4.3 cluster level. If it is per cluster level and works for all Windows versions, we may not need to touch the osinfo variable at all...

+++ This bug was initially created as a clone of Bug #1610461 +++

Description of problem:
High CPU load for Windows 10 (Update 1803) guests when idle.

Version-Release number of selected component (if applicable):
All 'qemu-kvm' and 'qemu-kvm-ev' (qemu-kvm-ev-2.10.0-21.el7_5.4.1)

How reproducible:
Install the latest Windows 10 on a KVM virtual machine with one CPU.

Steps to Reproduce:
1. Install the latest Windows 10 on a KVM virtual machine with one CPU.
2. Wait until Windows is idle (CPU load 0% inside the guest).
3. Observe the load the virtual machine puts on the Linux host: 20-30%.

Actual results:
The virtual machine causes 20-30% load on the Linux host while the guest is idle.

Expected results:
0-5% load on the Linux host.

Additional info:
The following enlightenments need to be available:

<hyperv>
  <synic state='on'/>
  <stimer state='on'/>
</hyperv>

They were disabled here: https://bugzilla.redhat.com/show_bug.cgi?id=1336517

This is the original post: https://forum.proxmox.com/threads/high-cpu-load-for-windows-10-guests-when-idle.44531/
(In reply to Michal Skrivanek from comment #0)
> [see the original bug for more details]
>
> cloning to oVirt, we may need to use the two backported enlightenments,
> probably add a separate enum to existing
> "os.windows_xp.devices.hyperv.enabled.value = true" key. We need a different
> set (those 2 new flags) for Windows 10 or we can probably send it to older
> windows too (needs to be tested)

This is my understanding too: all Windows versions should benefit from these enlightenments, and we are not aware of any possible regressions at the moment.
This requires qemu-kvm-rhev-2.12.0-18.el7_6.2 and a kernel newer than 3.10.0-957.1.3.el7; neither has been released yet.
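Whether a given host meets the kernel minimum can be checked with a version comparison. A minimal sketch using GNU `sort -V` (the `cur` value below is a hypothetical example; on a real host substitute the output of `uname -r`):

```shell
# Compare an example running kernel against the minimum noted above.
# "cur" is a placeholder value; on a real host use: cur=$(uname -r)
min="3.10.0-957.1.3.el7"
cur="3.10.0-970.el7"

# Under version sort (-V), the newer of the two strings sorts last.
newest=$(printf '%s\n%s\n' "$min" "$cur" | sort -V | tail -n1)

if [ "$newest" = "$cur" ] && [ "$cur" != "$min" ]; then
  echo "kernel OK (newer than $min)"
else
  echo "kernel too old (need newer than $min)"
fi
```

The same comparison works for the qemu-kvm-rhev package version string.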
*** Bug 1655039 has been marked as a duplicate of this bug. ***
*** Bug 1655340 has been marked as a duplicate of this bug. ***
The workaround for now is to use a non-Windows OS type, or the 4.2 cluster level.
Verified on:
ovirt-engine-4.3.0-0.8.master.20190120162615.git5926f20.el7.noarch
vdsm-4.30.6-17.gitcfd81b7.el7.x86_64
qemu-kvm-rhev-2.12.0-18.el7_6.2.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.3.x86_64
qemu-kvm-tools-rhev-2.12.0-18.el7_6.2.x86_64
qemu-kvm-common-rhev-2.12.0-18.el7_6.2.x86_64
qemu-kvm-rhev-debuginfo-2.12.0-18.el7_6.2.x86_64
kernel-3.10.0-970.el7.x86_64

Steps:
1. Create a Windows 10 VM, with update 1803, OS Type set to: Other OS.
2. Run the VM.
3. Check that the VM started without hyperv (virsh -r dumpxml <vm>). Its XML should NOT contain:

<features>
  <acpi/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <synic state='on'/>
    <stimer state='on'/>
  </hyperv>
</features>

4. Run on the host:
# top -p <vm_pid> -n 1800 -d 2 -b > unfixed-top-pc-result
5. Stop the VM.
6. Change the OS Type to Windows 10 64b.
7. Run the VM.
8. Check that the VM started with hyperv (virsh -r dumpxml <vm>). Its XML should contain:

<features>
  <acpi/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <synic state='on'/>
    <stimer state='on'/>
  </hyperv>
</features>

9. Run on the host:
# top -p <vm_pid> -n 1800 -d 2 -b > fixed-top-pc-result
10. Average the %CPU column (field 9) of both outputs (unfixed-top-pc-result, fixed-top-pc-result):
# grep qemu-kvm <output> | awk '{sum += $9} END {print "Average =", sum/NR}'
11. Compare the two averages from step 10.

Results:
From unfixed-top-pc-result, the output of the command in step 10:
Average = 46.3238
From fixed-top-pc-result, the output of the command in step 10:
Average = 11.3415
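The averaging in step 10 can be exercised on sample data. A self-contained sketch (the two top lines below are hypothetical stand-ins for a real `top -b` capture, not measurements from the runs above), using a single awk pass over the %CPU column (field 9):

```shell
# Average the %CPU column (field 9 of top's batch output) for qemu-kvm lines.
# The two sample lines are fabricated example values, not real measurements.
printf '%s\n' \
  "12345 qemu 20 0 4.1g 1.2g 10m S 46.3 12.0 1:00.00 qemu-kvm" \
  "12345 qemu 20 0 4.1g 1.2g 10m S 11.3 12.0 1:01.00 qemu-kvm" |
awk '/qemu-kvm/ {sum += $9; n++} END {print "Average =", sum / n}'
```

With these sample values the pipeline prints `Average = 28.8`. Using a match count (`n++`) rather than `NR` keeps the average correct even when non-matching lines (top's header block) are present in the input.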
*** Bug 1669102 has been marked as a duplicate of this bug. ***
Hi Steve, Would you review the updated content in the Doc Text field? Thanks, Rolfe
Dear Rolfe, The doc text you added fits the description of this issue. With Best Regards, Steven Rosenberg.
This bugzilla is included in the oVirt 4.3.0 release, published on February 4th, 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.0, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.