Bug 1961761
Summary: | Enabling Device Guard in Windows on RHV 4.4 gives BSOD on reboot | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 9 | Reporter: | Robert McSwain <rmcswain> |
Component: | qemu-kvm | Assignee: | Marek Kedzierski <mkedzier> |
qemu-kvm sub component: | Devices | QA Contact: | menli <menli> |
Status: | CLOSED NOTABUG | Docs Contact: | Jiri Herrmann <jherrman> |
Severity: | high | ||
Priority: | high | CC: | ailan, coli, jinzhao, juzhang, lijin, mdean, menli, michal.skrivanek, mkalinin, mkedzier, qizhu, virt-maint, vkuznets, xiagao, xuwei, zhguo, zixchen |
Version: | unspecified | Keywords: | Triaged |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Windows | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2022-01-25 02:46:32 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Robert McSwain
2021-05-18 16:18:59 UTC
It may be a CPU-flags-related issue. A similar bug: Bug 1871670 - windows guest can not boot after reboot with Device Guard enabled - CLOSED WONTFIX. Could you please help recheck the following two items? Many thanks.

1. Boot the guest with all currently supported Hyper-V enlightenments, then check whether this bug is hit.
2. Disable the Hyper-V enlightenments and check whether this bug is hit.

CPU flags like:

-cpu 'Opteron_G5',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \

or

-cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \

According to https://bugzilla.redhat.com/show_bug.cgi?id=1871670#c25, I tested with all currently supported Hyper-V enlightenments and did not hit this issue. I tried it on RHEL 8.3 and RHEL 8.4.

Versions:

Host RHEL 8.3:
kernel-4.18.0-240.el8.x86_64
qemu-kvm-4.2.0-34.module+el8.3.0+10437+1ca0c2ba.5
edk2-ovmf-20200602gitca407c7246bf-3.el8.noarch
Guest: win2016 with virtio-win-1.9.16-2

Host RHEL 8.4:
kernel-4.18.0-305.el8.x86_64
qemu-kvm-5.2.0-16.module+el8.4.0+10806+b7d97207
edk2-ovmf-20200602gitca407c7246bf-4.el8_4.1.noarch
Guest: win2016 with virtio-win-1.9.16-2

Test steps:
1. Start a win2016 guest:

/usr/libexec/qemu-kvm \
    -S \
    -name 'avocado-vt-vm1' \
    -sandbox on \
    -blockdev node-name=file_ovmf_code,driver=file,filename=/usr/share/OVMF/OVMF_CODE.secboot.fd,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_ovmf_code,driver=raw,read-only=on,file=file_ovmf_code \
    -blockdev node-name=file_ovmf_vars,driver=file,filename=/home/kvm_autotest_root/images/avocado-vt-vm1_win2016-64-virtio-scsi.qcow2_VARS.fd,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_ovmf_vars,driver=raw,read-only=off,file=file_ovmf_vars \
    -machine q35,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0 \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -smp 12,maxcpus=12,cores=6,threads=1,dies=1,sockets=2 \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -chardev socket,id=qmp_id_qmpmonitor1,wait=off,server=on,path=/tmp/avocado_or3cdzmf/monitor-qmpmonitor1-20210520-112138-lIGOyFgO \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,id=qmp_id_catch_monitor,wait=off,server=on,path=/tmp/avocado_or3cdzmf/monitor-catch_monitor-20210520-112138-lIGOyFgO \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id0XVSLL \
    -chardev socket,id=chardev_serial0,wait=off,server=on,path=/tmp/avocado_or3cdzmf/serial-serial0-20210520-112138-lIGOyFgO \
    -device isa-serial,id=serial0,chardev=chardev_serial0 \
    -chardev socket,id=seabioslog_id_20210520-112138-lIGOyFgO,path=/tmp/avocado_or3cdzmf/seabios-20210520-112138-lIGOyFgO,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210520-112138-lIGOyFgO,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/win2016-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:bb:02:2a:aa:77,id=idM0i9R2,netdev=idwDIh6t,bus=pcie-root-port-3,addr=0x0 \
    -netdev tap,id=idwDIh6t,vhost=on \
    -blockdev node-name=file_cd1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device scsi-cd,id=cd1,drive=drive_cd1,write-cache=on \
    -vnc :0 \
    -rtc base=localtime,clock=host,driftfix=slew \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -monitor stdio

2. Enable secure boot.
3. Download Device Guard and enable it.
4. Reboot the guest.

Doc: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_and_managing_virtualization/index#enabling-hyper-v-enlightenments_optimizing-windows-virtual-machines-on-rhel-8

Suggest enabling all Hyper-V enlightenment flags.
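As a side note (not part of the original comments), a quick way to confirm which Hyper-V enlightenments a guest was actually started with; the VM name below is hypothetical:

# A minimal sketch, assuming a libvirt/RHV-managed guest named "win2016":
virsh -r dumpxml win2016 | grep -A 15 '<hyperv'
# For a guest launched directly with qemu-kvm, list the hv_* flags on its command line:
ps -ef | grep '[q]emu-kvm' | grep -o 'hv_[a-z_]*' | sort -u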
By mistake I removed needinfo from Robert and Michal - sorry!

I tried it on win2016 on RHEL 8.3 with the following steps and did not hit this issue.

Versions:

Host RHEL 8.3:
kernel-4.18.0-240.el8.x86_64
qemu-kvm-4.2.0-34.module+el8.3.0+10437+1ca0c2ba.5
edk2-ovmf-20200602gitca407c7246bf-3.el8.noarch
virtio-win-1.9.14-4

Test steps:
1. Start a win2016 guest (with secure boot enabled):

/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm3' \
    -machine q35 \
    -nodefaults \
    -vga std \
    -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3 \
    -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1 \
    -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2 \
    -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x3.0x3 \
    -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x3.0x4 \
    -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x3.0x5 \
    -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x3.0x6 \
    -device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x3.0x7 \
    -device virtio-scsi-pci,id=virtio_scsi_pci1,bus=pci.4 \
    -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/kvm_autotest_root/images/win2016-64-virtio-scsi.qcow2,node-name=data_node \
    -blockdev driver=qcow2,node-name=data_disk,file=data_node \
    -device scsi-hd,drive=data_disk,id=disk1,bus=virtio_scsi_pci1.0,serial=kk \
    -device virtio-net-pci,mac=9a:36:83:b6:3d:05,id=idJVpmsF,netdev=id23ZUK6,bus=pci.3 \
    -netdev tap,id=id23ZUK6,vhost=on \
    -m 4G \
    -smp 2,maxcpus=4 \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/ISO/Win2016/en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso \
    -device ide-cd,id=cd2,drive=drive_cd1,bus=ide.0,unit=0 \
    -cdrom /home/kvm_autotest_root/iso/windows/virtio-win-1.9.14-4.el8.iso \
    -device piix3-usb-uhci,id=usb -device usb-tablet,id=input0 \
    -vnc :10 \
    -rtc base=localtime,clock=host,driftfix=slew \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -qmp tcp:0:1231,server,nowait \
    -monitor stdio \
    -device virtio-serial-pci,id=virtio-serial1,max_ports=31,bus=pci.5 \
    -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,bus=virtio-serial1.0,chardev=channel2,name=org.qemu.guest_agent.0,id=port2 \
    -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.6 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.7 \
    -device pvpanic,id=pvpanic0,ioport=0x0505 \
    -device virtio-keyboard-pci,id=kbd0,serial=virtio-keyboard,bus=pci.8 -device virtio-mouse-pci,id=mouse0,serial=virtio-mouse -device virtio-tablet-pci,id=tablet0,serial=virtio-tablet \
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs \
    -blockdev node-name=file_stg2,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/test/data1.qcow2,aio=threads \
    -blockdev node-name=drive_stg2,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_stg2 \
    -device virtio-blk-pci,id=stg2,drive=drive_stg2 \
    -drive file=/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
    -drive file=/home/OVMF_VARS.secboot.fd,if=pflash,format=raw,unit=1

2. Download Device Guard and enable it.
3. Reboot the guest.

I also booted the guest with all hyperv flags removed and repeated steps 2 and 3; the issue was not hit either.

Thanks
Menghuan
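For anyone reproducing the secure-boot setup above, here is a minimal sketch (not from the original comments) of preparing the per-guest OVMF variable store referenced on the pflash drive lines; the destination path is the one used in the command above, and the template name assumes the RHEL 8 edk2-ovmf package ships a pre-enrolled secure-boot vars file:

# Copy the vars template so the guest gets its own writable NVRAM store
# (adjust the paths if your edk2-ovmf build names the template differently).
cp /usr/share/edk2/ovmf/OVMF_VARS.secboot.fd /home/OVMF_VARS.secboot.fd
# The guest is then started with the code image read-only and the copy writable:
#   -drive file=/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
#   -drive file=/home/OVMF_VARS.secboot.fd,if=pflash,format=raw,unit=1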
Bulk update: Move RHEL-AV bugs to RHEL 9. If it is necessary to resolve this in RHEL 8, then clone it to the current RHEL 8 release.

@zhguo, could you help check this issue? Do you have any ideas?

Xiaoling

I am closing this since the customer issue has already been closed as a configuration issue, and QE has verified that without vmx there is no such issue. Please feel free to reopen it if you still hit the issue.

Hi Marek,

I am not sure whether this should be documented as a known issue, or whether we should move the resolution to INVALID if we don't need the documentation.

(In reply to Qianqian Zhu from comment #50)
> Hi Marek,
>
> I am not sure whether this should be documented as a known issue, or whether
> we should move the resolution to INVALID if we don't need the documentation.

Hi Qianqian,

I think this configuration is invalid, so it should not be documented.

Thanks,
Marek

Thanks Marek, I am moving it to NOTABUG and removing the doc type per comment 51.
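For background on the vmx point in the closing comment, below is a minimal sketch (not from the original report) of how one might check on an Intel host whether nested virtualization is enabled and whether a given guest's CPU definition exposes vmx; the VM name is hypothetical:

# Is nested virtualization enabled in the kvm_intel module? ("Y" or "1" means yes)
cat /sys/module/kvm_intel/parameters/nested
# Does the guest's libvirt CPU definition request or pass through the vmx feature?
virsh -r dumpxml win2016 | grep -E "vmx|<cpu "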