+++ This bug was initially created as a clone of Bug #1083525 +++
Description of problem:
As of RHEL 7.0, QEMU and libvirt support several Hyper-V enlightenment features that need to be explicitly turned on for Windows guests. It is forbidden to turn them on for RHEL 5 guests.
The QEMU flags are: "hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time"
hv_relaxed turns off an internal Windows watchdog, and by doing so avoids some high-load BSODs.
hv_relaxed is also supported in RHEL 6 as of RHEL 6.4.
All the other flags are performance optimizations that can improve performance by 10% to much more (in extreme cases of resource overcommit).
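For context, these flags are appended to qemu's -cpu option. A minimal sketch of an invocation (the CPU model, machine type, memory size, and disk path here are placeholders, not recommendations):

```shell
/usr/libexec/qemu-kvm \
    -machine pc,accel=kvm \
    -cpu SandyBridge,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -m 2048 \
    -drive file=win7.qcow2,if=virtio
```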
Libvirt related bugs:
Bug 1056205 - new cpu's flags, to control hyper-v related features - hv-time
Bug 784836 - new cpu's flags, to control hyper-v related features
Bug 864606 - [RFE] Enable Hyper-V Enlightenment for Windows guests
Note: You might want to duplicate this BZ to other RHEL7 based versions of RHOS.
let's investigate possible solutions first
Initial patch posted
*** Bug 1096769 has been marked as a duplicate of this bug. ***
*** Bug 829845 has been marked as a duplicate of this bug. ***
I see there's a vdsm patch posted with the proper XML, but just some clarifying details after an internal discussion:
The recommended qemu configuration is the flag set quoted in the description: "hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time"
Which maps to the libvirt XML:
<spinlocks state='on' retries='8191'/>
<timer name='hypervclock' present='yes'/>
Though there are some version caveats here:
- relaxed state='on' libvirt 1.0.0+, qemu 1.1+
- vapic, spinlocks requires libvirt 1.1.0+, qemu 1.1+
- hypervclock requires libvirt 1.2.2+, qemu 2.0.0+
Those are upstream version numbers though; all those bits should work on RHEL 7.0+ regardless of version number.
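Putting the mapping above together, a complete libvirt domain fragment enabling all four enlightenments might look like the sketch below; the <relaxed/> and <vapic/> elements are inferred from the caveat list, so treat this as an illustration rather than an exact recommended snippet:

```xml
<features>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
<clock offset='utc'>
  <timer name='hypervclock' present='yes'/>
</clock>
```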
AFAIK it should be safe to mix the hypervclock setting with the other recommended
timer settings (mentioned at
The hv_relaxed flag is available already on RHEL 6.5.
Probably too late to have it for 3.5.
But 3.6 must include it. And if not too hard, 3.5 would be great as well.
This flag prevents having BSOD on some of Windows guests.
We are on track for 3.5.
Engine patch being discussed, seems quite ready for merging, and easy to backport: http://gerrit.ovirt.org/#/c/29238/
VDSM patches merged:
patch including hv_relaxed merged both in master and in 3.5.0:
the missing bits (as per comment https://bugzilla.redhat.com/show_bug.cgi?id=1083529#c10) are trickier, opened https://bugzilla.redhat.com/show_bug.cgi?id=1125297 to track their status.
http://gerrit.ovirt.org/#/c/29234/ - has been ready and tested for a long time; it just requires sorting out the libvirt dependencies
Since support for the hv_relaxed flag was merged (http://gerrit.ovirt.org/#/c/30254/2/vdsm/virt/vm.py,cm) and since https://bugzilla.redhat.com/show_bug.cgi?id=1125297 will track the missing hv_* flags,
moving to MODIFIED.
consider bug 1091818
note later in 3.5 we removed the hyperv support for Win2012/8
*** Bug 1125297 has been marked as a duplicate of this bug. ***
Actually not completed; we need/want to add new flags in RHEL 7.2 in addition to the existing partial hv support in 3.5.
All merged; we now support all known Hyper-V optimizations.
Moving back to MODIFIED.
Francesco, I am wondering if this bug should stay under RFE component or should be moved to another component?
(In reply to Marina from comment #21)
> Francesco, I am wondering if this bug should stay under RFE component or
> should be moved to another component?
Not sure it should be moved. I think this bug has the qualifications to be considered an RFE:
we extended an existing functionality of the system.
Francesco, I was thinking it is the PM's job to change this and remove all kinds of keywords after the RFE has been triaged.
Scott, isn't it?
(In reply to Marina from comment #23)
> Francesco, I was thinking it is the PM's job to change this and remove all
> kinds of keywords after the RFE has been triaged.
> Scott, isn't it?
Of course, I just gave my two cents :) (sorry if that was unclear before). No problems or objections on my side.
hv_time is broken on current master. We need https://gerrit.ovirt.org/#/c/44119/ to solve that. The other options are supposed to work, please check them.
back to POST because of (lack of) patch 44119
Adding my run results:
1. Test multi OS - check that the flags are set correctly on various Windows OSes. - FAILED
Windows 7: I can't find the flag hv_time
[root@lilach-vdsb ~]# ps -aux | grep qemu
qemu 26384 100 0.9 1661288 76124 ? Sl 10:51 0:43 /usr/libexec/qemu-kvm -name windows_7 -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Conroe,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 1024 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-7,mem=1024 -uuid 4a9ff646-70d7-4491-b13c-b3db7d90643d -smbios type=1,manu
Windows 2003: I can't find the flag hv_time
[root@RHEL7 ~]# ps -aux | grep qemu
qemu 1011 3.8 0.2 1652220 34800 ? Sl 10:57 0:04 /usr/libexec/qemu-kvm -name a -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Conroe,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 1024 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid eef99fd5-36e4-4905-b823-94de4c3cee93 -smbios type=1,manufacturer=o
Windows XP: I can't find the flag hv_time
[root@RHEL7 ~]# ps -aux | grep qemu
qemu 32042 1.4 0.2 1651872 31832 ? Rl 11:11 0:02 /usr/libexec/qemu-kvm -name xp -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Conroe,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 1024 -realtime mlock=off -smp 1,maxcpus
2. Test check watchdog flag - FAILED:
1. Create a Windows VM:
os type: Windows 7 x64,
highly available enabled (in the High Availability sub-tab),
watchdog model: i6300esb;
install the Windows guest agent.
2. Verify that the watchdog device exists.
3. Disable automatic reboot of the Windows VM.
4. Create a blue screen: in taskmgr, find csrss.exe and click End task.
5. The watchdog should be triggered, but it wasn't.
3. Negative test - run a Linux VM and verify that the flags don't exist - PASSED
4. Test guest Windows without guest agent - run a Windows VM and verify that the flags exist - PASSED
Note - not all the flags existed, as mentioned above.
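As an aside, the manual ps | grep checks from item 1 can be folded into a small helper; this is only a sketch (the function name and the expected flag list are mine, taken from the description, not part of the test plan):

```shell
# Hypothetical helper: report which of the expected Hyper-V flags are
# missing from a qemu -cpu argument or full command line.
check_hv_flags() {
    cmdline=$1
    missing=""
    for flag in hv_relaxed hv_vapic hv_spinlocks hv_time; do
        case "$cmdline" in
            *"$flag"*) ;;                     # flag present in the command line
            *) missing="$missing $flag" ;;    # flag absent, record it
        esac
    done
    echo "missing:$missing"
}

# The -cpu argument observed on the Windows 7 guest above:
check_hv_flags "Conroe,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff"
# prints "missing: hv_time"
```

In practice one would feed it the live command line, e.g. check_hv_flags "$(ps -o args= -C qemu-kvm)".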
I don't know if item 2 above should have done anything or not. The hv flag disables Windows watchdog...so maybe expected behavior?
Francesco, please confirm
(In reply to Michal Skrivanek from comment #30)
> I don't know if item 2 above should have done anything or not. The hv flag
> disables Windows watchdog...so maybe expected behavior?
> Francesco, please confirm
I'm having a hard time tracking down a reliable, comprehensive source for the enlightenments.
These comments seem to confirm that the watchdog is indeed expected to be disabled:
So it seems correct that the watchdog didn't work.
Cole,
can you please confirm that it is correct that the hv_relaxed flag, once enabled, prevents the watchdog from working, as described in
https://bugzilla.redhat.com/show_bug.cgi?id=1083529#c29
Thanks!
(In reply to Francesco Romani from comment #32)
> can you please confirm that it is correct that the hv_relaxed flag, once
> enabled, prevents the watchdog from working, as described in
Hmm, I don't know if that's expected or not. Vadim likely knows, setting NEEDINFO
(In reply to Cole Robinson from comment #33)
> (In reply to Francesco Romani from comment #32)
> > Cole,
> > can you please confirm that it is correct that the hv_relaxed flag, once
> > enabled, prevents the watchdog from working, as described in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1083529#c29
> > Thanks!
> Hmm, I don't know if that's expected or not. Vadim likely knows, setting
hv_relaxed deals with things entirely different from the i6300esb watchdog device.
hv_relaxed disables the DPC watchdog mechanism (https://msdn.microsoft.com/en-us/library/windows/hardware/ff544084%28v=vs.85%29.aspx), which can trigger a 0x101 BSOD (https://msdn.microsoft.com/en-us/library/windows/hardware/ff557211%28v=vs.85%29.aspx) inside a Windows VM running on an overcommitted host.
verified on 3.6-0.5
link to test: https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/testrun?id=3%5F6%5FVIRT%5FWindows%5Fguest%5Fhv%5Fflags%5F3605
This specific bug was about making sure that RHEV was up to date with the latest recommendations about hypervisor configuration for Windows guests, so it should be transparent to users; furthermore, there is not a single improvement we can highlight here. For these reasons I don't think this BZ deserves a mention in the documentation.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.