Bug 1782882 - qemu-kvm: kvm_init_vcpu failed: Function not implemented
Summary: qemu-kvm: kvm_init_vcpu failed: Function not implemented
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ovirt-4.4.0
Assignee: Steven Rosenberg
QA Contact: meital avital
URL:
Whiteboard:
Duplicates: 1786464, 1810558
Depends On: 1794868
Blocks:
 
Reported: 2019-12-12 14:51 UTC by Radek Duda
Modified: 2020-08-04 13:21 UTC
CC: 19 users

Fixed In Version: rhv-4.4.0-25
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-04 13:21:21 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:


Attachments
qemu.log (5.04 KB, text/plain) - 2019-12-12 14:51 UTC, Radek Duda
vdsm.log (36.94 KB, text/plain) - 2019-12-12 14:57 UTC, Radek Duda
var/log logs from Host (3.26 MB, application/gzip) - 2019-12-19 08:10 UTC, Steven Rosenberg
Logs with debugging set (3.85 MB, application/gzip) - 2019-12-19 13:12 UTC, Steven Rosenberg
Shows that Windows 10 loads fine. (99.00 KB, image/png) - 2020-03-10 14:24 UTC, Steven Rosenberg
qemu log (4.21 KB, text/plain) - 2020-03-10 14:50 UTC, Steven Rosenberg
qemu log with q35 machine type (20.36 KB, text/plain) - 2020-03-10 15:22 UTC, Steven Rosenberg
Downstream Windows 10 test with i440 and q35 (10.15 KB, text/plain) - 2020-03-11 16:16 UTC, Steven Rosenberg


Links
Red Hat Bugzilla 1794868 (high, CLOSED): Missing Hyper-V enlightenments - last updated 2023-02-18 04:02:28 UTC
Red Hat Product Errata RHSA-2020:3247 - last updated 2020-08-04 13:21:44 UTC

Description Radek Duda 2019-12-12 14:51:34 UTC
Created attachment 1644451 [details]
qemu.log

Description of problem:
Windows VM cannot be run in rhv4.4 with message:
qemu-kvm: kvm_init_vcpu failed: Function not implemented

Version-Release number of selected component (if applicable):
host:
vdsm-4.40.0-154.git4e13ea9.el8ev.x86_64
qemu-kvm-3.1.0-20.module+el8+2888+cdc893a8.x86_64
libvirt-5.0.0-7.module+el8+2887+effa3c42.x86_64

engine:
ovirt-engine-4.4.0-0.6.master.el7.noarch

guest: Win10:

LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
QEMU_AUDIO_DRV=none \
/usr/libexec/qemu-kvm \
-name guest=Win10,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-15-Win10/master-key.aes \
-machine pc-q35-rhel8.0.0,accel=kvm,usb=off,dump-guest-core=off \
-cpu SandyBridge,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_synic,hv_stimer \
-m size=2048000k,slots=16,maxmem=8192000k \
-realtime mlock=off \
-smp 4,maxcpus=16,sockets=16,cores=1,threads=1 \
-object iothread,id=iothread1 \
-numa node,nodeid=0,cpus=0-15,mem=2000 \
-uuid d801ba63-8e01-4c16-8f90-8a3fb0b81fac \
-smbios 'type=1,manufacturer=Red Hat,product=RHEL,version=8.2-0.5.el8,serial=34353736-3132-5a43-3135-333430314b32,uuid=d801ba63-8e01-4c16-8f90-8a3fb0b81fac' \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=37,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=2019-12-12T13:35:21,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
-device qemu-xhci,id=usb,bus=pci.3,addr=0x0 \
-device virtio-scsi-pci,iothread=iothread1,id=ua-2e34d5f3-fb33-4a11-955e-b461455a07d3,bus=pci.2,addr=0x0 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.4,addr=0x0 \
-drive if=none,id=drive-ua-e19fa1a5-0cb8-4641-b5de-0a50bda15829,media=cdrom,readonly=on \
-device ide-cd,bus=ide.2,drive=drive-ua-e19fa1a5-0cb8-4641-b5de-0a50bda15829,id=ua-e19fa1a5-0cb8-4641-b5de-0a50bda15829,werror=report,rerror=report \
-drive file=/rhev/data-center/mnt/blockSD/a8f75b3a-19f1-4146-8a66-a588104bcd23/images/37d75c12-8819-441d-bd13-e7f8f3426078/8afd02d9-a85c-46e7-bf21-95883a39b582,format=qcow2,if=none,id=drive-ua-37d75c12-8819-441d-bd13-e7f8f3426078,cache=none,aio=native \
-device scsi-hd,bus=ua-2e34d5f3-fb33-4a11-955e-b461455a07d3.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-37d75c12-8819-441d-bd13-e7f8f3426078,id=ua-37d75c12-8819-441d-bd13-e7f8f3426078,bootindex=1,write-cache=on,serial=37d75c12-8819-441d-bd13-e7f8f3426078,werror=stop,rerror=stop \
-netdev tap,fds=39:40:41:42,id=hostua-c23841c2-ba83-43c2-854c-0ddb0c94fe32,vhost=on,vhostfds=43:44:45:46 \
-device virtio-net-pci,mq=on,vectors=10,host_mtu=9000,netdev=hostua-c23841c2-ba83-43c2-854c-0ddb0c94fe32,id=ua-c23841c2-ba83-43c2-854c-0ddb0c94fe32,mac=56:6f:bf:f4:00:01,bus=pci.1,addr=0x0 \
-chardev socket,id=charchannel0,fd=47,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 \
-chardev socket,id=charchannel1,fd=48,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 \
-chardev spicevmc,id=charchannel2,name=vdagent \
-device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-spice port=5905,tls-port=5906,addr=10.37.175.140,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on \
-object tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no \
-vnc 10.37.175.140:7,password,tls-creds=vnc-tls-creds0 \
-k en-us \
-device qxl-vga,id=ua-7236c983-9b95-4c1e-8fa2-8657f10eb3b0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
-device virtio-balloon-pci,id=ua-53218ac5-6233-4735-a145-d74de0fc631b,bus=pci.5,addr=0x0 \
-object rng-random,id=objua-62c5075a-6e7c-48c3-be4d-d9fc24147cf2,filename=/dev/urandom \
-device virtio-rng-pci,rng=objua-62c5075a-6e7c-48c3-be4d-d9fc24147cf2,id=ua-62c5075a-6e7c-48c3-be4d-d9fc24147cf2,bus=pci.6,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on

How reproducible:
always

Steps to Reproduce:
1. Launch Win10 VM

Actual results:
VM is immediately shut down

Expected results:
VM is running


Additional info:

Comment 1 Radek Duda 2019-12-12 14:57:17 UTC
Created attachment 1644453 [details]
vdsm.log

Comment 3 FuXiangChun 2019-12-13 10:55:34 UTC
I can reproduce this bug with libvirt + Hyper-V flags. The Hyper-V flag 'hv-synic' requires the Hyper-V hv-vpindex flag, so you need to enable it in the layered product or add "<vpindex state='on'/>" to the domain XML.

Root cause: the hv-vpindex Hyper-V flag is missing.

Test summary:
1. With libvirt:

libvirt must add the hv-vpindex Hyper-V flag to solve this problem.

2. With the qemu-kvm command line (not sure if qemu's behavior is correct):

You don't need to add/enable the hv-vpindex Hyper-V flag; the guest can still boot successfully.

E.g., without the hv-vpindex Hyper-V flag:
/usr/libexec/qemu-kvm -cpu SandyBridge,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_stimer,hv_time,hv-synic
VNC server running on ::1:5900
  

The following is the relevant part of the Hyper-V flag dependency documentation; please refer to:

----------------------------------------
+3.8. hv-synic
+==============
+Enables Hyper-V Synthetic interrupt controller - an extension of a local APIC.
+When enabled, this enlightenment provides additional communication facilities
+to the guest: SynIC messages and Events. This is a pre-requisite for
+implementing VMBus devices (not yet in QEMU). Additionally, this enlightenment
+is needed to enable Hyper-V synthetic timers. SynIC is controlled through MSRs
+HV_X64_MSR_SCONTROL..HV_X64_MSR_EOM (0x40000080..0x40000084) and
+HV_X64_MSR_SINT0..HV_X64_MSR_SINT15 (0x40000090..0x4000009F)
+
+Requires: hv-vpindex
+
+3.9. hv-stimer
+===============
+Enables Hyper-V synthetic timers. There are four synthetic timers per virtual
+CPU controlled through HV_X64_MSR_STIMER0_CONFIG..HV_X64_MSR_STIMER3_COUNT
+(0x400000B0..0x400000B7) MSRs. These timers can work either in single-shot or
+periodic mode. It is known that certain Windows versions revert to using HPET
+(or even RTC when HPET is unavailable) extensively when this enlightenment is
+not provided; this can lead to significant CPU consumption, even when virtual
+CPU is idle.
+
+Requires: hv-vpindex, hv-synic, hv-time 
------------------------------------------
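
For reference, the suggestion above amounts to something like the following libvirt domain XML fragment. This is a minimal sketch only, not taken from the reporter's actual domain XML; the exact set of enlightenments and the retries value (0x1fff = 8191) are assumptions derived from the qemu command line in comment 0:

  <features>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
    </hyperv>
  </features>

With <vpindex state='on'/> present, the synic/stimer dependency described in the excerpt above is satisfied.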

Comment 4 FuXiangChun 2019-12-13 11:00:38 UTC
In addition, the component versions in comment 0 are inconsistent.

The kernel is from 8.2, while qemu-kvm and libvirt are from 8.0.0.

Comment 20 Steven Rosenberg 2019-12-18 11:13:17 UTC
(In reply to FuXiangChun from comment #3)
> I can reproduce this bug with libvirt + Hyper-V flags. As Hyper-v flag
> 'hv-synic' requires Hyper-V hv-vpindex flag. So you need enable it by layer
> production or add "<vpindex state='on'/>" xml. 
> 
> Root reason: miss hv-vpindex Hyper-V flag. 
> 
> Test summary:
> 1. with libvirt testing 
> 
> libvirt must add hv-vpindex Hyper-V flag to solve this problem. 
> 
> 2. with qemu-kvm command line testing(not sure if qemu's behavior is correct)
> 
> You don't need to add/enable hv-vpindex Hyper-V flag, guest also can boot
> successfully.
> 
> E.g: without hv-vpindex Hyper-V flag
> /usr/libexec/qemu-kvm -cpu
> SandyBridge,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_stimer,hv_time,hv-
> synic
> VNC server running on ::1:5900
>   
> 
> The following is part of dependence about Hyper-V flags. please refer to:
> 
> ----------------------------------------
> +3.8. hv-synic
> +==============
> +Enables Hyper-V Synthetic interrupt controller - an extension of a local
> APIC.
> +When enabled, this enlightenment provides additional communication
> facilities
> +to the guest: SynIC messages and Events. This is a pre-requisite for
> +implementing VMBus devices (not yet in QEMU). Additionally, this
> enlightenment
> +is needed to enable Hyper-V synthetic timers. SynIC is controlled through
> MSRs
> +HV_X64_MSR_SCONTROL..HV_X64_MSR_EOM (0x40000080..0x40000084) and
> +HV_X64_MSR_SINT0..HV_X64_MSR_SINT15 (0x40000090..0x4000009F)
> +
> +Requires: hv-vpindex
> +
> +3.9. hv-stimer
> +===============
> +Enables Hyper-V synthetic timers. There are four synthetic timers per
> virtual
> +CPU controlled through HV_X64_MSR_STIMER0_CONFIG..HV_X64_MSR_STIMER3_COUNT
> +(0x400000B0..0x400000B7) MSRs. These timers can work either in single-shot
> or
> +periodic mode. It is known that certain Windows versions revert to using
> HPET
> +(or even RTC when HPET is unavailable) extensively when this enlightenment
> is
> +not provided; this can lead to significant CPU consumption, even when
> virtual
> +CPU is idle.
> +
> +Requires: hv-vpindex, hv-synic, hv-time 
> ------------------------------------------

I tested this with my installation. I have the following packages installed on the Host:

vdsm.x86_64                                          4.40.0-1363.gitf6a1ba0a0.el8                      @ovirt-master-snapshot 
libvirt.x86_64                                       4.5.0-35.1.module+el8.1.0+4931+38af3e93           rhel-8-for-x86_64-appstream-rpms
qemu-kvm.x86_64                                      15:2.12.0-88.module+el8.1.0+5013+4f99814c.1 rhel-8-for-x86_64-appstream-rpms

Note: My libvirt is older. I installed from the following package: https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm

Note that enlightenments are sent from the engine to the vdsm so changes would come from the engine.

Launching a VM set to Windows 10 x86 fails due to the following error in the vdsm log:

libvirt.libvirtError: unsupported configuration: host doesn't support hyperv 'synic' feature

Adding the vpindex as per this suggestion failed due to the following error:

2019-12-18 12:02:00,911+0200 INFO  (vm/3e7bbfc6) [virt.vm] (vmId='3e7bbfc6-b802-4549-b378-b82f771b116c') Changed state to Down: unsupported configuration: host doesn't support hyperv 'vpindex' feature (code=1) (vm:1604)

When the synic and stimer enlightenments are not sent to vdsm from the engine, the VM loads successfully.

It seems the options may be:

1. To upgrade the master package to a current libvirt that supports the enlightenment(s).
2. To turn off the synic enlightenment in the engine until we support it, or at least for those OSs that do not yet support it.

Comment 21 Michal Privoznik 2019-12-18 15:57:44 UTC
(In reply to Steven Rosenberg from comment #20)
> 
> I tested this with my installation. I have the following packages installed
> on the Host:
> 
> vdsm.x86_64                                         
> 4.40.0-1363.gitf6a1ba0a0.el8                      @ovirt-master-snapshot 
> libvirt.x86_64                                      
> 4.5.0-35.1.module+el8.1.0+4931+38af3e93          
> rhel-8-for-x86_64-appstream-rpms
> qemu-kvm.x86_64                                     
> 15:2.12.0-88.module+el8.1.0+5013+4f99814c.1 rhel-8-for-x86_64-appstream-rpms
> 
> Note: My libvirt is older. I install from the following package:
> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
> 
> Note that enlightenments are sent from the engine to the vdsm so changes
> would come from the engine.
> 
> Launching a VM set to Windows 10 x86, fails due to the following error in
> the vdsm log:
> 
> libvirt.libvirtError: unsupported configuration: host doesn't support hyperv
> 'synic' feature

Libvirt uses qemu to detect this, so this is qemu telling us that it can't enable the synic feature.

> 
> Adding the vpindex as per this suggestion failed due to the following error:
> 
> 2019-12-18 12:02:00,911+0200 INFO  (vm/3e7bbfc6) [virt.vm]
> (vmId='3e7bbfc6-b802-4549-b378-b82f771b116c') Changed state to Down:
> unsupported configuration: host doesn't support hyperv 'vpindex' feature
> (code=1) (vm:1604)

And here it tells us it was unable to enable vpindex.

> 
> When the synic enlightenment and and stimer enlightenment are not sent to
> the vdsm from the engine, the VM loads successfully.
> 
> It seems the option may be:
> 
> 1. To upgrade the master package to the current libvirt that supports the
> enlightenment(s) to be supported.
> 2. To turn off the synic enlightenment from the engine until we are
> supporting them or at least for those OSs that still do not support them.

Can you share the debug log please? But before you do that, please remove cached capabilities:

rm /var/cache/libvirt/qemu/capabilities/*

to force libvirt into querying new ones (and thus have debug logs of that).

Comment 22 Steven Rosenberg 2019-12-19 08:10:38 UTC
Created attachment 1646328 [details]
var/log logs from Host

Includes all of the logs from the Host

Comment 23 Steven Rosenberg 2019-12-19 08:13:26 UTC
(In reply to Michal Privoznik from comment #21)
> (In reply to Steven Rosenberg from comment #20)
> > 
> > I tested this with my installation. I have the following packages installed
> > on the Host:
> > 
> > vdsm.x86_64                                         
> > 4.40.0-1363.gitf6a1ba0a0.el8                      @ovirt-master-snapshot 
> > libvirt.x86_64                                      
> > 4.5.0-35.1.module+el8.1.0+4931+38af3e93          
> > rhel-8-for-x86_64-appstream-rpms
> > qemu-kvm.x86_64                                     
> > 15:2.12.0-88.module+el8.1.0+5013+4f99814c.1 rhel-8-for-x86_64-appstream-rpms
> > 
> > Note: My libvirt is older. I install from the following package:
> > https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
> > 
> > Note that enlightenments are sent from the engine to the vdsm so changes
> > would come from the engine.
> > 
> > Launching a VM set to Windows 10 x86, fails due to the following error in
> > the vdsm log:
> > 
> > libvirt.libvirtError: unsupported configuration: host doesn't support hyperv
> > 'synic' feature
> 
> Libvirt uses qemu to detect this. So this is qemu saying to us that it can't
> enable synic feature.
> 
> > 
> > Adding the vpindex as per this suggestion failed due to the following error:
> > 
> > 2019-12-18 12:02:00,911+0200 INFO  (vm/3e7bbfc6) [virt.vm]
> > (vmId='3e7bbfc6-b802-4549-b378-b82f771b116c') Changed state to Down:
> > unsupported configuration: host doesn't support hyperv 'vpindex' feature
> > (code=1) (vm:1604)
> 
> And here it tells us it was unable to enable vpindex.
> 
> > 
> > When the synic enlightenment and and stimer enlightenment are not sent to
> > the vdsm from the engine, the VM loads successfully.
> > 
> > It seems the option may be:
> > 
> > 1. To upgrade the master package to the current libvirt that supports the
> > enlightenment(s) to be supported.
> > 2. To turn off the synic enlightenment from the engine until we are
> > supporting them or at least for those OSs that still do not support them.
> 
> Can you share the debug log please? But before you do that, please remove
> cached capabilities:
> 
> rm /var/cache/libvirt/qemu/capabilities/*
> 
> to force libvirt into querying new ones (and thus have debug logs of that).

Please see the attached file with all of the logs. If you want specific logs next time, please advise. I did remove the file in /var/cache/libvirt/qemu/capabilities before re-simulating the issue. The file was not recreated.

Comment 24 Michal Privoznik 2019-12-19 12:15:06 UTC
(In reply to Steven Rosenberg from comment #22)
> Created attachment 1646328 [details]
> var/log logs from Host
> 
> Includes all of the logs from the Host

I'm sorry, I should have been clearer. I need debug logs; what you provided contains only the generated qemu command line (which is irrelevant in this case). Please follow the steps here:

https://wiki.libvirt.org/page/DebugLogs

The capabilities files were probably not recreated because libvirtd still keeps them in memory. So the safest option is to (a shell sketch of these steps follows the list):
1) shut down libvirtd
2) remove files from /var/cache/libvirt/qemu/capabilities/*
3) enable debugging in the config file
4) start libvirtd
5) try to start the Win10 VM
6) pack logs and attach to this BZ
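
As a rough illustration only, a shell sketch of the steps above per https://wiki.libvirt.org/page/DebugLogs, assuming the monolithic libvirtd daemon; the log_filters/log_outputs values are the wiki's generic recommendation, not something specific to this bug:

  # 1) and 2): stop libvirtd and drop the cached QEMU capabilities
  systemctl stop libvirtd
  rm -f /var/cache/libvirt/qemu/capabilities/*

  # 3): enable debug logging in /etc/libvirt/libvirtd.conf, e.g.
  #   log_filters="1:qemu 1:libvirt"
  #   log_outputs="1:file:/var/log/libvirt/libvirtd.log"

  # 4) and 5): start libvirtd again and try to start the Win10 VM
  systemctl start libvirtd

  # 6): pack the resulting log and attach it to this BZ
  tar czf libvirtd-debug-logs.tar.gz /var/log/libvirt/libvirtd.log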

Comment 25 Steven Rosenberg 2019-12-19 13:12:41 UTC
Created attachment 1646497 [details]
Logs with debugging set

Logs with debugging set

Comment 26 Michal Privoznik 2019-12-20 06:40:45 UTC
Thank you, so based on the logs I think this works as expected. I'm able to reproduce (sort of) and here are my findings:

Because of qemu commit v4.1.0-rc0~43^2~21, qemu requires VPINDEX to be enabled whenever SYNIC is enabled. However, I don't understand how the guest from comment 3, point 2 (using qemu only) was able to boot.
Vitaly, how does qemu check for host support of various Hyper-V features? I mean, obviously their host is lacking some features and thus the guest is unable to boot. Do you have any ideas?

Comment 27 Vitaly Kuznetsov 2019-12-20 09:27:02 UTC
(In reply to Michal Privoznik from comment #26)
> Thank you, so based on the logs I think this works as expected. I'm able to
> reproduce (sort of) and here are my findings:
> 
> Because of qemu commit v4.1.0-rc0~43^2~21 qemu requires VPINDEX to be
> enabled whenever SYNIC is enabled. Although, I don't understand how the
> guest from comment 3 point 2 (using qemu only) was able to boot.
> Vitaly, how does qemu check for host support of various Hyper-V features?

It asks KVM what it supports by checking capabilities and/or, in newer versions, by issuing the KVM_GET_SUPPORTED_HV_CPUID ioctl.

But for the enlightenments in question (vpindex and synic) it is practically
impossible to find a RHEL8 kernel which doesn't support them both. These are
purely software features so they are present for any hardware.

> I mean, obviously their host is lacking some features and thus the guest is
> unable to boot. Do you have any ideas?

I may be missing something, but aren't we just trying to launch a guest with an
inconsistent config (synic but not vpindex)? If yes, then why can't we fix that
(always add both for Windows)?
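
To illustrate the check described above, here is a minimal C sketch (not part of the original report; an assumed, illustrative program) that queries /dev/kvm for the two capabilities the same way QEMU checks host support:

  /* hvcaps.c: ask KVM whether the Hyper-V vpindex/synic capabilities exist.
   * KVM_CHECK_EXTENSION returns 1 when the capability is supported. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
      int kvm = open("/dev/kvm", O_RDWR);
      if (kvm < 0) {
          perror("open /dev/kvm");
          return 1;
      }
      printf("KVM_CAP_HYPERV_VP_INDEX: %d\n",
             ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_HYPERV_VP_INDEX));
      printf("KVM_CAP_HYPERV_SYNIC:    %d\n",
             ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_HYPERV_SYNIC));
      return 0;
  }

Per the comment above, on any RHEL8 kernel both checks are expected to report 1, since these are purely software features.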

Comment 28 Michal Privoznik 2019-12-20 19:03:42 UTC
(In reply to Vitaly Kuznetsov from comment #27)
> (In reply to Michal Privoznik from comment #26)
> > Thank you, so based on the logs I think this works as expected. I'm able to
> > reproduce (sort of) and here are my findings:
> > 
> > Because of qemu commit v4.1.0-rc0~43^2~21 qemu requires VPINDEX to be
> > enabled whenever SYNIC is enabled. Although, I don't understand how the
> > guest from comment 3 point 2 (using qemu only) was able to boot.
> > Vitaly, how does qemu check for host support of various Hyper-V features?
> 
> It asks KVM what it supports by checking capabilities and, or, in newer
> versions,
> by issuing KVM_GET_SUPPORTED_HV_CPUID ioctl.
> 
> But for the enlightenments in question (vpindex and synic) it is practically
> impossible to find a RHEL8 kernel which doesn't support them both. These are
> purely software features so they are present for any hardware.
> 
> > I mean, obviously their host is lacking some features and thus the guest is
> > unable to boot. Do you have any ideas?
> 
> I may be missing something but aren't we just trying to launch a guest with
> an
> inconsistent config (synic but not vpindex)? If yes than why can't we fix
> that
> (always add both for Windows)?

I believe the confusion arose because when attempted through libvirt, qemu is unable to initialize vCPUs; but when started by hand the inconsistent config no longer matters (I suspect this is because the hand-written command line doesn't have -nodefaults or -no-user-config and/or similar).

But I agree that if vdsm generates inconsistent XML there's not much left for libvirt to do. Steven?

Comment 29 Steven Rosenberg 2019-12-22 09:53:49 UTC
(In reply to Michal Privoznik from comment #28)
> (In reply to Vitaly Kuznetsov from comment #27)
> > (In reply to Michal Privoznik from comment #26)
> > > Thank you, so based on the logs I think this works as expected. I'm able to
> > > reproduce (sort of) and here are my findings:
> > > 
> > > Because of qemu commit v4.1.0-rc0~43^2~21 qemu requires VPINDEX to be
> > > enabled whenever SYNIC is enabled. Although, I don't understand how the
> > > guest from comment 3 point 2 (using qemu only) was able to boot.
> > > Vitaly, how does qemu check for host support of various Hyper-V features?
> > 
> > It asks KVM what it supports by checking capabilities and, or, in newer
> > versions,
> > by issuing KVM_GET_SUPPORTED_HV_CPUID ioctl.
> > 
> > But for the enlightenments in question (vpindex and synic) it is practically
> > impossible to find a RHEL8 kernel which doesn't support them both. These are
> > purely software features so they are present for any hardware.
> > 
> > > I mean, obviously their host is lacking some features and thus the guest is
> > > unable to boot. Do you have any ideas?
> > 
> > I may be missing something but aren't we just trying to launch a guest with
> > an
> > inconsistent config (synic but not vpindex)? If yes than why can't we fix
> > that
> > (always add both for Windows)?
> 
> I believe the confusion arose because when attempted through libvirt, qemu
> is unable to initialize vCPUs; but when started by hand then the
> inconsistent config does not longer matter (but I suspect this is because by
> hand cmd line doesn't have -nodefaults nor -no-user-config and/or similar).
> 
> But I agree that if vdsm generates inconsistent XML there's not much left
> for libvirt to do. Steven?

Adding the vpindex as an enlightenment did not fix the problem because it also failed as per my previous comment:

2019-12-18 12:02:00,911+0200 INFO  (vm/3e7bbfc6) [virt.vm] (vmId='3e7bbfc6-b802-4549-b378-b82f771b116c') Changed state to Down: unsupported configuration: host doesn't support hyperv 'vpindex' feature (code=1) (vm:1604)

Also, this did work previously without the vpindex so it seems something changed to create the dependency which now causes the failure.

The previous run with debugging did not include vpindex, so to clarify, I will also attach the debug logs with the vpindex and the error for that enlightenment.

Comment 30 Steven Rosenberg 2019-12-22 10:00:36 UTC
(In reply to Steven Rosenberg from comment #29)
> (In reply to Michal Privoznik from comment #28)
> > (In reply to Vitaly Kuznetsov from comment #27)
> > > (In reply to Michal Privoznik from comment #26)
> > > > Thank you, so based on the logs I think this works as expected. I'm able to
> > > > reproduce (sort of) and here are my findings:
> > > > 
> > > > Because of qemu commit v4.1.0-rc0~43^2~21 qemu requires VPINDEX to be
> > > > enabled whenever SYNIC is enabled. Although, I don't understand how the
> > > > guest from comment 3 point 2 (using qemu only) was able to boot.
> > > > Vitaly, how does qemu check for host support of various Hyper-V features?
> > > 
> > > It asks KVM what it supports by checking capabilities and, or, in newer
> > > versions,
> > > by issuing KVM_GET_SUPPORTED_HV_CPUID ioctl.
> > > 
> > > But for the enlightenments in question (vpindex and synic) it is practically
> > > impossible to find a RHEL8 kernel which doesn't support them both. These are
> > > purely software features so they are present for any hardware.
> > > 
> > > > I mean, obviously their host is lacking some features and thus the guest is
> > > > unable to boot. Do you have any ideas?
> > > 
> > > I may be missing something but aren't we just trying to launch a guest with
> > > an
> > > inconsistent config (synic but not vpindex)? If yes than why can't we fix
> > > that
> > > (always add both for Windows)?
> > 
> > I believe the confusion arose because when attempted through libvirt, qemu
> > is unable to initialize vCPUs; but when started by hand then the
> > inconsistent config does not longer matter (but I suspect this is because by
> > hand cmd line doesn't have -nodefaults nor -no-user-config and/or similar).
> > 
> > But I agree that if vdsm generates inconsistent XML there's not much left
> > for libvirt to do. Steven?
> 
> Adding the vpindex as an enlightenment did not fix the problem because it
> also failed as per my previous comment:
> 
> 2019-12-18 12:02:00,911+0200 INFO  (vm/3e7bbfc6) [virt.vm]
> (vmId='3e7bbfc6-b802-4549-b378-b82f771b116c') Changed state to Down:
> unsupported configuration: host doesn't support hyperv 'vpindex' feature
> (code=1) (vm:1604)
> 
> Also, this did work previously without the vpindex so it seems something
> changed to create the dependency which now causes the failure.
> 
> The previous run with debugging did not include vpindex, so to clarify I
> will also attach the logs with debug with the vpindex and the error for that
> enlightenment.

The new logs can be found here with vpindex:

https://drive.google.com/open?id=1yNRkecyzfa2jJk2NZmSRDs5qceSPiubU

Comment 31 Michal Privoznik 2019-12-23 11:47:09 UTC
(In reply to Steven Rosenberg from comment #29)
> (In reply to Michal Privoznik from comment #28)
> > (In reply to Vitaly Kuznetsov from comment #27)
> > > (In reply to Michal Privoznik from comment #26)
> > > > Thank you, so based on the logs I think this works as expected. I'm able to
> > > > reproduce (sort of) and here are my findings:
> > > > 
> > > > Because of qemu commit v4.1.0-rc0~43^2~21 qemu requires VPINDEX to be
> > > > enabled whenever SYNIC is enabled. Although, I don't understand how the
> > > > guest from comment 3 point 2 (using qemu only) was able to boot.
> > > > Vitaly, how does qemu check for host support of various Hyper-V features?
> > > 
> > > It asks KVM what it supports by checking capabilities and, or, in newer
> > > versions,
> > > by issuing KVM_GET_SUPPORTED_HV_CPUID ioctl.
> > > 
> > > But for the enlightenments in question (vpindex and synic) it is practically
> > > impossible to find a RHEL8 kernel which doesn't support them both. These are
> > > purely software features so they are present for any hardware.
> > > 
> > > > I mean, obviously their host is lacking some features and thus the guest is
> > > > unable to boot. Do you have any ideas?
> > > 
> > > I may be missing something but aren't we just trying to launch a guest with
> > > an
> > > inconsistent config (synic but not vpindex)? If yes than why can't we fix
> > > that
> > > (always add both for Windows)?
> > 
> > I believe the confusion arose because when attempted through libvirt, qemu
> > is unable to initialize vCPUs; but when started by hand then the
> > inconsistent config does not longer matter (but I suspect this is because by
> > hand cmd line doesn't have -nodefaults nor -no-user-config and/or similar).
> > 
> > But I agree that if vdsm generates inconsistent XML there's not much left
> > for libvirt to do. Steven?
> 
> Adding the vpindex as an enlightenment did not fix the problem because it
> also failed as per my previous comment:
> 
> 2019-12-18 12:02:00,911+0200 INFO  (vm/3e7bbfc6) [virt.vm]
> (vmId='3e7bbfc6-b802-4549-b378-b82f771b116c') Changed state to Down:
> unsupported configuration: host doesn't support hyperv 'vpindex' feature
> (code=1) (vm:1604)


This suggests KVM doesn't support it. Maybe you need to update the kernel? Vitaly, any ideas?
I've looked into the kernel code and it doesn't look like vpindex is something that needs to be explicitly enabled. I mean, the kvm_vm_ioctl_check_extension() function does nothing for KVM_CAP_HYPERV_VP_INDEX except return 1 (meaning 'supported').

> 
> Also, this did work previously without the vpindex so it seems something
> changed to create the dependency which now causes the failure.

This was introduced in qemu's upstream commit v3.1.0-rc0~44^2~9, which makes sense since you say you started seeing this only recently (and you're using qemu-kvm-3.1.0).

So the question boils down to enabling VPINDEX.

> 
> The previous run with debugging did not include vpindex, so to clarify I
> will also attach the logs with debug with the vpindex and the error for that
> enlightenment.

Comment 32 Steven Rosenberg 2019-12-25 09:37:01 UTC
(In reply to Michal Privoznik from comment #28)
> (In reply to Vitaly Kuznetsov from comment #27)
> > (In reply to Michal Privoznik from comment #26)
> > > Thank you, so based on the logs I think this works as expected. I'm able to
> > > reproduce (sort of) and here are my findings:
> > > 
> > > Because of qemu commit v4.1.0-rc0~43^2~21 qemu requires VPINDEX to be
> > > enabled whenever SYNIC is enabled. Although, I don't understand how the
> > > guest from comment 3 point 2 (using qemu only) was able to boot.
> > > Vitaly, how does qemu check for host support of various Hyper-V features?
> > 
> > It asks KVM what it supports by checking capabilities and, or, in newer
> > versions,
> > by issuing KVM_GET_SUPPORTED_HV_CPUID ioctl.
> > 
> > But for the enlightenments in question (vpindex and synic) it is practically
> > impossible to find a RHEL8 kernel which doesn't support them both. These are
> > purely software features so they are present for any hardware.
> > 
> > > I mean, obviously their host is lacking some features and thus the guest is
> > > unable to boot. Do you have any ideas?
> > 
> > I may be missing something but aren't we just trying to launch a guest with
> > an
> > inconsistent config (synic but not vpindex)? If yes than why can't we fix
> > that
> > (always add both for Windows)?
> 
> I believe the confusion arose because when attempted through libvirt, qemu
> is unable to initialize vCPUs; but when started by hand then the
> inconsistent config does not longer matter (but I suspect this is because by
> hand cmd line doesn't have -nodefaults nor -no-user-config and/or similar).
> 
> But I agree that if vdsm generates inconsistent XML there's not much left
> for libvirt to do. Steven?

Comment 33 Steven Rosenberg 2019-12-26 09:00:21 UTC
*** Bug 1786464 has been marked as a duplicate of this bug. ***

Comment 34 Vitaly Kuznetsov 2020-01-02 10:47:14 UTC
(In reply to Michal Privoznik from comment #31)
> (In reply to Steven Rosenberg from comment #29)
> > 
> > Adding the vpindex as an enlightenment did not fix the problem because it
> > also failed as per my previous comment:
> > 
> > 2019-12-18 12:02:00,911+0200 INFO  (vm/3e7bbfc6) [virt.vm]
> > (vmId='3e7bbfc6-b802-4549-b378-b82f771b116c') Changed state to Down:
> > unsupported configuration: host doesn't support hyperv 'vpindex' feature
> > (code=1) (vm:1604)
> This suggests KVM doesn't support it. Maybe you need to update the kernel?
> Vitaly, any ideas?

This indeed seems to be the core of the issue. 'vpindex' can't be unsupported by the kernel
and/or QEMU; it is a pure software feature which we have had for a long time. We need to
figure out who reports it as unsupported.

> I've looked into the kernel code and it doesn't look like vpindex is
> something that is not enabled by default. I mean,
> kvm_vm_ioctl_check_extension() function does nothing for
> KVM_CAP_HYPERV_VP_INDEX but returns 1 (meaning 'supported').
> 

Indeed.

> > 
> > Also, this did work previously without the vpindex so it seems something
> > changed to create the dependency which now causes the failure.
> 
> This was introduced in qemu's upstream commit of: v3.1.0-rc0~44^2~9 which
> makes sense since you claim you started seeing this only recently (and
> you're using qemu-kvm-3.1.0).
> 
> So the question boils down to enabling VPINDEX.
>

Yes. Previously, it was possible to enable synic without vpindex; however, this
configuration makes little sense as Windows won't be able to use it. Proper
dependencies were added to QEMU to forbid such configurations.

Comment 35 Vitaly Kuznetsov 2020-01-02 13:15:26 UTC
Looking at Steven's logs I can see that libvirt doesn't see any of the Hyper-V features:

2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 : arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_relaxed
2019-12-22 09:45:04.152+0000: 22481: warning : qemuProcessVerifyHypervFeatures:3943 : host doesn't support hyperv 'relaxed' feature
2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 : arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_vapic
2019-12-22 09:45:04.152+0000: 22481: warning : qemuProcessVerifyHypervFeatures:3943 : host doesn't support hyperv 'vapic' feature
2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 : arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_spinlocks
2019-12-22 09:45:04.152+0000: 22481: warning : qemuProcessVerifyHypervFeatures:3943 : host doesn't support hyperv 'spinlocks' feature
2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 : arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_vpindex
2019-12-22 09:45:04.152+0000: 22481: error : qemuProcessVerifyHypervFeatures:3956 : unsupported configuration: host doesn't support hyperv 'vpindex' feature

We probably need to check what qemuProcessVerifyHypervFeatures() does.

Comment 36 Vitaly Kuznetsov 2020-01-02 13:24:32 UTC
I also see that libvirt-5.0.0 is being used, it probably lacks the following commit:

commit 0ccdd476bb329f1486438b896255e5c44a91ff4a
Author: Jiri Denemark <jdenemar>
Date:   Thu Jul 25 13:50:57 2019 +0200

    qemu: Fix hyperv features with QEMU 4.1

Michal, could you please check my (wild) guess?

Comment 37 Steven Rosenberg 2020-01-02 13:35:01 UTC
(In reply to Vitaly Kuznetsov from comment #35)
> Looking at Steven's logs I can see that libvirt doesn't see any of the
> Hyper-V features:
> 
> 2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 :
> arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_relaxed
> 2019-12-22 09:45:04.152+0000: 22481: warning :
> qemuProcessVerifyHypervFeatures:3943 : host doesn't support hyperv 'relaxed'
> feature
> 2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 :
> arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_vapic
> 2019-12-22 09:45:04.152+0000: 22481: warning :
> qemuProcessVerifyHypervFeatures:3943 : host doesn't support hyperv 'vapic'
> feature
> 2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 :
> arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_spinlocks
> 2019-12-22 09:45:04.152+0000: 22481: warning :
> qemuProcessVerifyHypervFeatures:3943 : host doesn't support hyperv
> 'spinlocks' feature
> 2019-12-22 09:45:04.152+0000: 22481: debug : virCPUDataCheckFeature:729 :
> arch=x86_64, data=0x7f72f8071270, feature=__kvm_hv_vpindex
> 2019-12-22 09:45:04.152+0000: 22481: error :
> qemuProcessVerifyHypervFeatures:3956 : unsupported configuration: host
> doesn't support hyperv 'vpindex' feature
> 
> we, probably, need to check what qemuProcessVerifyHypervFeatures() does.

Yes, it looks like this function is returning the 'not supported' message:

        case VIR_DOMAIN_HYPERV_VPINDEX:
        case VIR_DOMAIN_HYPERV_RUNTIME:
        case VIR_DOMAIN_HYPERV_SYNIC:
        case VIR_DOMAIN_HYPERV_STIMER:
        case VIR_DOMAIN_HYPERV_RESET:
        case VIR_DOMAIN_HYPERV_FREQUENCIES:
        case VIR_DOMAIN_HYPERV_REENLIGHTENMENT:
        case VIR_DOMAIN_HYPERV_TLBFLUSH:
        case VIR_DOMAIN_HYPERV_IPI:
        case VIR_DOMAIN_HYPERV_EVMCS:
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("host doesn't support hyperv '%s' feature"),
                           virDomainHypervTypeToString(i));

Comment 38 Michal Privoznik 2020-01-06 09:33:13 UTC
(In reply to Vitaly Kuznetsov from comment #36)
> I also see that libvirt-5.0.0 is being used, it probably lacks the following
> commit:
> 
> commit 0ccdd476bb329f1486438b896255e5c44a91ff4a
> Author: Jiri Denemark <jdenemar>
> Date:   Thu Jul 25 13:50:57 2019 +0200
> 
>     qemu: Fix hyperv features with QEMU 4.1
> 
> Michal, could you please check my (wild) guess?

I thought about this commit, but it fixes how libvirt handles qemu-4.1 (where the reporting of certain hyperv features changed). And Steven is running qemu-kvm-3.1.0, so I discarded that.
Steven, can you shed more light on what distro you are actually running? From the versions it looks like RHEL-AV-8.0.0, which is now unsupported (if I'm not mistaken). And the fix Vitaly is referring to is contained in RHEL-AV-8.1.0.

Comment 39 Steven Rosenberg 2020-01-06 11:15:07 UTC
(In reply to Michal Privoznik from comment #38)
> (In reply to Vitaly Kuznetsov from comment #36)
> > I also see that libvirt-5.0.0 is being used, it probably lacks the following
> > commit:
> > 
> > commit 0ccdd476bb329f1486438b896255e5c44a91ff4a
> > Author: Jiri Denemark <jdenemar>
> > Date:   Thu Jul 25 13:50:57 2019 +0200
> > 
> >     qemu: Fix hyperv features with QEMU 4.1
> > 
> > Michal, could you please check my (wild) guess?
> 
> I thought about this commit, but it fixes how libvirt handles qemu-4.1
> (where certain hyperv features reporting changed). And Steven is running
> qemu-kvm-3.1.0 so I discarded that.
> Steven, can you shed more light on what distro are you actually running?
> Because from the versions it looks like RHEL-AV-8.0.0 which is now
> unsupported (if I'm not mistaken). And the fix Vitaly is referring to is
> contained in RHEL-AV-8.1.0.

Red Hat Enterprise Linux release 8.1 (Ootpa)

Comment 40 Steven Rosenberg 2020-01-30 12:34:09 UTC
(In reply to Michal Privoznik from comment #38)
> (In reply to Vitaly Kuznetsov from comment #36)
> > I also see that libvirt-5.0.0 is being used, it probably lacks the following
> > commit:
> > 
> > commit 0ccdd476bb329f1486438b896255e5c44a91ff4a
> > Author: Jiri Denemark <jdenemar>
> > Date:   Thu Jul 25 13:50:57 2019 +0200
> > 
> >     qemu: Fix hyperv features with QEMU 4.1
> > 
> > Michal, could you please check my (wild) guess?
> 
> I thought about this commit, but it fixes how libvirt handles qemu-4.1
> (where certain hyperv features reporting changed). And Steven is running
> qemu-kvm-3.1.0 so I discarded that.
> Steven, can you shed more light on what distro are you actually running?
> Because from the versions it looks like RHEL-AV-8.0.0 which is now
> unsupported (if I'm not mistaken). And the fix Vitaly is referring to is
> contained in RHEL-AV-8.1.0.

There has not been any movement on this issue. It may be advisable to remove the HYPERV_SYNIC flag from the engine until we can support it with VPINDEX.

Please advise.

Comment 42 Jiri Denemark 2020-01-31 11:06:11 UTC
The issue with unsupported hyperv features will be fixed for RHEL-8.1.0 (see
bug 1794868).

However, I'm not sure what versions we're dealing with in this bug.

The bug description mentions qemu-kvm-3.1.0-20 and libvirt-5.0.0-7 which would
be RHEL-AV-8.0.0, but support for MSR features was not backported to this
version of libvirt, which means it should not be affected even though
unavailable-features QOM property was backported to QEMU.

Later, comment #20 talks about libvirt-4.5.0-35.1 and qemu-kvm-2.12.0-88.*.1,
which are both from RHEL-8.1.0.z. This is broken (tracked by bug 1794868
mentioned above) and I will fix it.

Additionally, RHV is supposed to use RHEL-AV, which does not have this bug at
all.

Comment 43 Michal Skrivanek 2020-03-06 08:48:00 UTC
Why such chaos with this bug?

The Hyper-V flags were changed recently by:
https://gerrit.ovirt.org/#/c/104462/
with the minimum AV version required by:
Requires: qemu-kvm >= 15:4.2.0-10.module+el8.2.0+5740+c3dff59e

So it's supposed to work currently for 4.4 VMs.

However, AFAICT it's not going to work for <4.4 compatibility VMs on the 4.4 product version because of comment #34. The patch above didn't handle this case correctly and keeps sending synic without vpindex for such VMs, but that's not going to work because of the 4.4 host limitation.

Anyway, this is pretty serious; retargeting.

Comment 44 Ryan Barry 2020-03-10 12:25:10 UTC
This is now showing up in other testing (with Windows VMs) and needs a resolution

Comment 45 Steven Rosenberg 2020-03-10 13:46:28 UTC
(In reply to Ryan Barry from comment #44)
> This is now showing up in other testing (with Windows VMs) and needs a
> resolution

As per the blocking issue [1], this needs to be retested with libvirt-4.5.0-39.module+el8.2.0+5690+f1eb5920.x86_64


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1794868#c5

Comment 46 Ryan Barry 2020-03-10 14:02:42 UTC
Nisim's last test was with:

2020-03-05 13:39:25.197+0000: starting up libvirt version: 6.0.0, package: 7.module+el8.2.0+5869+c23fe68b (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2020-02-25-16:32:10, ), qemu version: 4.2.0qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae, kernel: 4.18.0-184.el8.x86_64, hostname: titan100.lab.eng.tlv2.redhat.com

Which is above 8.2.0+5690 in version, and we saw the same error. Can you test it, please?

Comment 47 Steven Rosenberg 2020-03-10 14:24:18 UTC
Created attachment 1668967 [details]
Shows that Windows 10 loads fine.

Using the following libvirt [1], Windows 10 runs fine in my environment, with the VM's OS type set to Windows 10 x64.

[1] libvirt.x86_64                                              5.6.0-10.el8                                                  ovirt-master-advanced-virtualization-candidate

Comment 48 Ryan Barry 2020-03-10 14:32:18 UTC
Logs would help verify relative versions

Comment 49 Steven Rosenberg 2020-03-10 14:35:24 UTC
(In reply to Ryan Barry from comment #48)
> Logs would help verify relative versions

Which logs, the libvirt logs?

Comment 50 Steven Rosenberg 2020-03-10 14:50:39 UTC
Created attachment 1668970 [details]
qemu log

Comment 51 Ryan Barry 2020-03-10 15:05:46 UTC
-machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off

This starts because of the machine type. The reported problems are with 8.2.0 machine types. Can you change and re-test? 8.2.0 is the default downstream

Comment 52 Ryan Barry 2020-03-10 15:07:09 UTC
*** Bug 1810558 has been marked as a duplicate of this bug. ***

Comment 53 Steven Rosenberg 2020-03-10 15:22:51 UTC
Created attachment 1668975 [details]
qemu log with q35 machine type

Also tested with the q35 machine type, which is 8.1 rather than 8.2, but the host is RHEL 8.2 and this is reflected in the log. There is no q35-rhel8.2 machine type choice in the VM create/edit dialog in webadmin.
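
As a quick, host-side way to see which q35 machine types the installed qemu-kvm actually exposes (a sketch only, independent of what webadmin offers in its dropdown):

  /usr/libexec/qemu-kvm -machine help | grep rhel8

If pc-q35-rhel8.2.0 is missing from that list, the host cannot run that machine type regardless of what the engine sends.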

Comment 54 Ryan Barry 2020-03-10 18:51:19 UTC
Ok, one last thing --

8.2 machine types are available downstream. Can you work with Nisim to re-test this, or provision an EL8 host (virtual with nested virt is ok)? It works for me with updated packages, but confirmation that it works with the existing compose would be ideal

Comment 55 Steven Rosenberg 2020-03-11 16:16:47 UTC
Created attachment 1669401 [details]
Downstream Windows 10 test with i440 and q35

Retested with the downstream version [1]. The i440 machine type with legacy BIOS succeeded to launch. Then tested the q35-rhel8.2 machine type with the UEFI BIOS type, which also succeeds.



[1] libvirt.x86_64                                             6.0.0-9.module+el8.2.0+5957+7ae8988e                       advanced-virt-el8-rhv-4.4

Comment 56 Ryan Barry 2020-03-11 18:44:04 UTC
Thanks!

Nisim, relevant qemu logs are:

2020-03-11 16:09:59.128+0000: starting up libvirt version: 6.0.0, package: 9.module+el8.2.0+5957+7ae8988e (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2020-03-06-17:16:39, ), qemu version: 4.2.0qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae, kernel: 4.18.0-187.el8.x86_64, hostname: sla-leonard.tlv.redhat.com
...
-smbios type=1,manufacturer=oVirt,product=RHEL,version=8.2-0.9.el8,serial=4c4c4544-0043-5610-8054-c3c04f44354a,uuid=04b5ec0d-ebec-4723-ae1e-8a0758be65f3,family=oVirt \
-no-user-config \

Can you please retest?

Comment 57 Nisim Simsolo 2020-03-12 09:01:30 UTC
(In reply to Ryan Barry from comment #56)
.
. 
> Can you please retest?

Using the latest builds, RHEL 8 and Windows VMs succeed in launching with Q35 (legacy/UEFI BIOS) and also with pc-i440fx:
ovirt-engine-4.4.0-0.25.master.el8ev.noarch
vdsm-4.40.5-1.el8ev.x86_64
libvirt-daemon-6.0.0-9.module+el8.2.0+5957+7ae8988e.x86_64
qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64

Comment 58 Michal Skrivanek 2020-03-12 11:39:25 UTC
verified in comment #56 already

Comment 61 errata-xmlrpc 2020-08-04 13:21:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

