Bug 1385264

Summary: Nested virtualization in KVM - Virtualization support is disabled in the firmware.
Product: [Community] Virtualization Tools
Reporter: hlmasterchief93
Component: libvirt
Assignee: Libvirt Maintainers <libvirt-maint>
Status: CLOSED NOTABUG
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: unspecified
CC: hlmasterchief93, libvirt-maint, phrdina, rbalakri, vorpal
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-10-17 10:28:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments:
virt-host-validate in Fedora 24 live guest (Flags: none)

Description hlmasterchief93 2016-10-15 19:48:45 UTC
Created attachment 1210827 [details]
virt-host-validate in Fedora 24 live guest

Description of problem:

Trying to enable Hyper-V in a Windows 10 / Server 2016 guest resulted in the error: "Virtualization support is disabled in the firmware."

systeminfo report:
Hyper-V Requirements:      A hypervisor has been detected. Features required for Hyper-V will not be displayed.

systeminfo report with <feature policy='disable' name='hypervisor'/>:
Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: No
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes

C:\>wmic cpu get VirtualizationFirmwareEnabled
VirtualizationFirmwareEnabled
FALSE

PS C:\> (Get-CimInstance Win32_Processor).VirtualizationFirmwareEnabled
False
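For reference, the <feature policy='disable' name='hypervisor'/> tag used above goes inside the guest's <cpu> element in the libvirt domain XML. A sketch of the placement (assuming host-passthrough mode, as used later in this report; the rest of the domain XML is omitted):

```xml
<cpu mode='host-passthrough'>
  <feature policy='disable' name='hypervisor'/>
</cpu>
```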

virt-host-validate in Fedora 24 live guest:
All checks PASS except IOMMU; details in the attachment.
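For completeness, nested VMX must also be enabled on the L0 host's kvm_intel module. A quick check (a minimal sketch, assuming an Intel host; AMD hosts use kvm_amd/nested instead):

```shell
# Prints whether the host's kvm_intel module has nested virtualization on.
# The sysfs file is absent if the module isn't loaded, hence the fallback.
nested=$(cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo "N")
case "$nested" in
  Y|y|1) echo "nested: enabled" ;;
  *)     echo "nested: disabled (or kvm_intel not loaded)" ;;
esac
```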


Version-Release number of selected component (if applicable):
Fedora 24
kernel-4.7.6-200.fc24.x86_64
qemu-system-x86-2:2.6.2-1.fc24.x86_64
qemu-kvm-2:2.6.2-1.fc24.x86_64
libvirt-1.3.3.2-1.fc24.x86_64

How reproducible:
Every time

Steps to Reproduce:
1. Install Windows 10 / Server 2016 guest
2. Try to enable Hyper-V
3. Hyper-V cannot be installed; the error message above is shown

Actual results:
KVM-in-KVM seems to work, but Hyper-V cannot be enabled in the guest

Expected results:
Nested virtualization works, including Hyper-V in the Windows guest

Additional info:
Hyper-V in Hyper-V guide
https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/user_guide/nesting

Comment 1 hlmasterchief93 2016-10-15 19:56:56 UTC
PowerShell check
PS C:\> (Get-CimInstance Win32_Processor).SecondLevelAddressTranslationExtensions
True
PS C:\> (Get-CimInstance Win32_Processor).VirtualizationFirmwareEnabled
False
PS C:\> (Get-CimInstance Win32_Processor).VMMonitorModeExtensions
True
PS C:\> (Get-CimInstance Win32_OperatingSystem).DataExecutionPrevention_Available
True


WMIC check
C:\>wmic cpu get SecondLevelAddressTranslationExtensions
SecondLevelAddressTranslationExtensions
TRUE
C:\>wmic cpu get VirtualizationFirmwareEnabled
VirtualizationFirmwareEnabled
FALSE
C:\>wmic cpu get VMMonitorModeExtensions
VMMonitorModeExtensions
TRUE
C:\>wmic os get DataExecutionPrevention_Available
DataExecutionPrevention_Available
TRUE

Comment 2 Pavel Hrdina 2016-10-17 09:06:11 UTC
Hi, please provide the domain XML of the Windows guest. I've tested it and it works, so I guess your XML doesn't enable the correct CPU feature for the Windows guest.

Comment 3 hlmasterchief93 2016-10-17 09:14:14 UTC
I have tried the fedora-virt-preview and kernel-vanilla-stable repos and now it works with these packages (just add the repos and upgrade; no need to change the guest XML).

kernel-4.8.1-1.vanilla.knurd.1.fc24.x86_64
libvirt-2.2.0-1.fc24.x86_64
qemu-kvm-2:2.7.0-3.fc24.x86_64
qemu-system-x86-2:2.7.0-3.fc24.x86_64

Comment 4 hlmasterchief93 2016-10-23 03:05:15 UTC
It seems too soon to tell.
So the qemu 2.7 packages let Windows 10 / Server 2016 install Hyper-V; a newer kernel or libvirt is not needed.

But Hyper-V is still not working correctly. When adding the tag <feature policy='disable' name='hypervisor'/>, Hyper-V cannot start a VM, with the error: "Virtual Machine could not be started because the hypervisor is not running".

Without this tag, there is the error "Failed to start the virtual machine because one of the Hyper-V components is not running.", and in Device Manager the component "Microsoft Hyper-V Virtual Machine Bus Provider" reports "Windows cannot initialise the device driver for this hardware. (Code 37)".

Comment 5 BugMasta 2017-03-09 21:05:50 UTC
hlmasterchief93 is right. This is a bug and it's not resolved.

PS C:\Users\Administrator> (Get-CimInstance Win32_Processor).VirtualizationFirmwareEnabled
True
True

I checked and these features are all enabled as well:
SecondLevelAddressTranslationExtensions - True
VMMonitorModeExtensions - True
DataExecutionPrevention_Available - True

I have also verified with cpuid on the Server 2016 at L1 that the VT-x extensions are visible.

But when I try to start a VM on the L1 win2k16 I'm still getting:
"Failed to start the virtual machine blah because one of the Hyper-V components is not running."
When I click on details, it says the virtual machine management service failed to start the VM for the same reason.

I'm running f25, but I read elsewhere that seabios 1.10 was required for this to work, so I installed seabios-bin.noarch 1.10.1-2.fc26 from rawhide, and still no luck.

Qemu is:
qemu-2.7.1-2.fc25.x86_64


My first response would be to say it's Microsoft's fault, but it is reported to work at:
https://ladipro.wordpress.com/2017/02/24/running-hyperv-in-kvm-guest/

I have followed all the steps that guy did, with two exceptions:
1) I'm running a CentOS 7 guest, not Win XP
2) I edited my server2016 VM XML config to have

  <cpu mode='host-passthrough'>
  <feature policy='require' name='vmx'/>
  </cpu>

But when started from virt-manager, my qemu command line still does not have +vmx on it, which he suggested was necessary.

But at this point it looks like vmx is enabled in the server2k16 guest - host-passthrough should guarantee it is there - and with my current configuration I've had no problems with nested ESXi hosts creating their own L2 VMs, nor with nested RHEV/oVirt hosts creating their own VMs.

So starting the VM manually with +vmx is my final resort, and I don't expect it will make any difference.

There's definitely something wrong here, and it's probably Microsoft's fault, as ESXi and RHEV/oVirt are fine. But if we could defeat Redmond in this battle and get this working, that would be awesome...

Comment 6 BugMasta 2017-03-09 21:13:24 UTC
Even with all the nested virtualisation features I can find enabled, the error I'm seeing in Event Viewer is:

Hypervisor launch failed; Processor does not support the minimum features required to run the hypervisor (MSR index 0x48B, allowed bits 0x2600000000, required bits 0x1000FB00000000).
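MSR index 0x48B is IA32_VMX_PROCBASED_CTLS2, whose high bits advertise which secondary VM-execution controls the CPU allows to be set; the error means the virtual CPU isn't advertising controls Hyper-V needs. The missing bits can be computed straight from the two masks in the message (a sketch; mapping the bits back to specific VMX features would come from the Intel SDM, not from this report):

```shell
# Bits Hyper-V requires but the guest's virtual CPU doesn't allow,
# using the masks quoted in the event-log message above.
allowed=$((0x2600000000))       # "allowed bits" from the message
required=$((0x1000FB00000000))  # "required bits" from the message
missing=$(( required & ~allowed ))
printf 'missing bits: %#x\n' "$missing"
# -> missing bits: 0x1000d900000000
```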

Comment 7 hlmasterchief93 2017-03-09 21:20:01 UTC
It seems we need qemu 2.7 and kernel 4.10 to make Hyper-V on QEMU/KVM work. You may want to check this; I have not tested it yet:
https://ladipro.wordpress.com/2017/02/24/running-hyperv-in-kvm-guest/

Comment 8 BugMasta 2017-03-09 22:05:38 UTC
Aah balls. I'm running Fedora 25; I assumed that if someone else had this working, my kernel would be new enough, but:

uname -a
Linux blah 4.9.10-200.fc25.x86_64 #1 SMP Wed Feb 15 23:28:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Unbelievable. Why is it that everything you try to do on Linux seems to be on the bleeding edge of impossibility?

Comment 9 BugMasta 2017-03-09 22:07:46 UTC
Anyway, thanks for the tip hlmasterchief93. I'm installing kernel 4.11 from Fedora rawhide now, fingers crossed that will fix this!

Comment 10 BugMasta 2017-03-09 22:44:53 UTC
Bah! The rawhide kernel-4.11.0-0.rc1.git0.1.fc27.x86_64 is no good! The server 2016 VM pegs its CPUs and doesn't stop!

The only 4.10 kernel available for Fedora that I can find is in the jforbes kernel stabilisation repo:

https://copr.fedorainfracloud.org/coprs/jforbes/kernel-stabilization/repo/fedora-25/jforbes-kernel-stabilization-fedora-25.repo

So, fingers Xed for 4.10.1-1.fc25...

Comment 11 BugMasta 2017-03-09 23:00:03 UTC
Hmm... OK, well, a little progress. No error saying Hyper-V components are not running this time.

But now it fails with:
"vm blah failed to change state", so there's still a problem.

Doh! I just tried to launch the VM a second time and now it is back to saying one of the Hyper-V components is not running.

But there is still nothing in the event log about the processor not supporting Hyper-V. So that is at least some improvement.

Looks like this is a bit closer to working in 4.10.1-1, but still no donut.

Comment 12 hlmasterchief93 2017-03-10 07:16:28 UTC
I just had some time to test with kernel 4.10 and it seems to be working. I have not tried to install an OS yet, just to boot a VM; it works with both generations. I am using a kernel from the Kernel vanilla repositories.

I only need <cpu mode='host-passthrough'>, not <feature policy='require' name='vmx'/>. I think host-passthrough already passes the vmx flag.
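A quick way to confirm from inside an L1 Linux guest (e.g. a live image booted in the same VM) that host-passthrough actually exposed the flag (a minimal sketch):

```shell
# Reports whether the vmx CPU flag is visible to this (guest) kernel.
# The AMD equivalent flag is svm.
if grep -qw vmx /proc/cpuinfo 2>/dev/null; then
  echo "vmx: visible"
else
  echo "vmx: not visible"
fi
```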

fc24
kernel-4.10.1-1.vanilla.knurd.1.fc24.x86_64    @kernel-vanilla-stable
qemu-kvm-2:2.7.0-8.fc24.x86_64                 @fedora-virt-preview
libvirt-2.2.0-2.fc24.x86_64                    @fedora-virt-preview

Comment 13 BugMasta 2017-03-11 14:30:13 UTC
I'll have to have another look. I got my wires crossed a bit earlier when trying the 4.10 kernel from the jforbes repo - it turns out I was still getting events from Hyper-V about components not running; I just didn't see them at the top level in Event Viewer. I had to drill down into events specifically from Hyper-V, or look on the Server Manager page for Hyper-V (which is where I saw them originally, but forgot to look there later).

Another problem I have is that the 4.10 kernel for Fedora 25 which I got from jforbes is not stable at all for qemu: my VMs don't boot most of the time, just like the 4.11 that I tried. I get pegged CPUs, bootup hangs, and reboots with thread exceptions and kernel module traces on my host console.

Not sure if the 4.10 kernel you're using from kernel-vanilla-stable is available for f25, but if it is I'll give it a go.