win8-32 hit the same issue with qemu-kvm-3.1.0-23.module+el8+3081+58d4aeb5.x86_64

qemu cli:
-smp 32 \
-device virtio-net-pci,mac=9a:f6:f7:f8:f9:fa,id=id4BOtfH,netdev=idMGpOHX,bus=pci.0,addr=0x5,mq=on,vectors=66 \
-netdev tap,id=idMGpOHX,vhost=on,script=/etc/qemu-ifup,queues=32 \

As Yuri said, this issue can easily be reproduced when the number of queues is larger than the number of guest CPUs:
-smp 32          ---> guest only has 2 CPUs ---> issue reproduced
-smp 32,cores=32 ---> guest has 32 CPUs     ---> issue NOT reproduced
(With plain -smp 32, qemu defaults to one core per socket, and Windows client editions such as win8-32 use at most two sockets, so the guest only brings up 2 CPUs; -smp 32,cores=32 packs all 32 CPUs into a single socket.)
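A side note on the vector counts used in these command lines: the value passed as vectors= follows the usual rule of two MSI-X vectors per queue pair (one RX, one TX) plus one for the configuration-change interrupt and one for the control queue, which is why queues=32 is paired with vectors=66 here. A quick sketch of that arithmetic:

```shell
# vectors = 2*queues + 2 (one RX + one TX vector per queue pair,
# plus one config-change vector and one control-queue vector)
queues=32
vectors=$((2 * queues + 2))
echo "$vectors"   # prints 66, matching vectors=66 with queues=32 above
```

The same rule matches the other command lines in this bug: queues=16 gives vectors=34, and queues=128 gives vectors=258.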
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks
Hi,

I hit the same issue on the rhel8.2.0 slow train with a win2019 guest.

qemu cli:
-cpu Skylake-Server,hv_stimer,hv_synic,hv_time,hv_relaxed,hv_vpindex,hv_spinlocks=0xfff,hv_vapic,hv_reset -enable-kvm \
-m 5800G -smp 128 \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2 \
-netdev tap,script=/etc/qemu-ifup1,id=hostnet0,vhost=on,queues=128 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:52:52:22:1d:a8,mq=on,vectors=258,bus=pci.3 \

Used versions:
qemu-kvm-2.12.0-99.module+el8.2.0+5827+8c39933c.x86_64
kernel-4.18.0-193.el8.x86_64

This bug is reported against the RHEL8 fast train only; should I clone one for the RHEL8 slow train?

Thanks~
Peixiu
(In reply to Peixiu Hou from comment #3)
> Hi,
>
> I Hit the same issue on rhel8.2.0 slow train on win2019 guest.
>
> qemu cli:
> -cpu Skylake-Server,hv_stimer,hv_synic,hv_time,hv_relaxed,hv_vpindex,hv_spinlocks=0xfff,hv_vapic,hv_reset -enable-kvm \
> -m 5800G -smp 128 \
> -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3 \
> -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1 \
> -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2 \
> -netdev tap,script=/etc/qemu-ifup1,id=hostnet0,vhost=on,queues=128 \
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:52:52:22:1d:a8,mq=on,vectors=258,bus=pci.3 \
>
> Used versions:
> qemu-kvm-2.12.0-99.module+el8.2.0+5827+8c39933c.x86_64
> kernel-4.18.0-193.el8.x86_64
>
> This bug is reported on RHEL8 Fast train only, if I need clone one on RHEL8
> Slow train?
>
> Thanks~
> Peixiu

Thanks Peixiu, please clone the bug on the RHEL8 slow train.
(In reply to Peixiu Hou from comment #3)
> Hi,
>
> I Hit the same issue on rhel8.2.0 slow train on win2019 guest.

Which virtio-net-pci driver version is installed in the guest?

> qemu cli:
> -cpu Skylake-Server,hv_stimer,hv_synic,hv_time,hv_relaxed,hv_vpindex,hv_spinlocks=0xfff,hv_vapic,hv_reset -enable-kvm \
> -m 5800G -smp 128 \
> -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3 \
> -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1 \
> -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2 \
> -netdev tap,script=/etc/qemu-ifup1,id=hostnet0,vhost=on,queues=128 \
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:52:52:22:1d:a8,mq=on,vectors=258,bus=pci.3 \
>
> Used versions:
> qemu-kvm-2.12.0-99.module+el8.2.0+5827+8c39933c.x86_64
> kernel-4.18.0-193.el8.x86_64
>
> This bug is reported on RHEL8 Fast train only, if I need clone one on RHEL8
> Slow train?
>
> Thanks~
> Peixiu
This is a regression in the latest qemu.

Buggy commit:
https://github.com/qemu/qemu/commit/f19bcdfedd53ee93412d535a842a89fa27cae7f2

Fix posted upstream:
https://lists.nongnu.org/archive/html/qemu-devel/2020-07/msg07508.html
Merged upstream:
https://github.com/qemu/qemu/commit/a48aaf882b100b30111b5c7c75e1d9e83fe76cfd
==Steps

Test versions:
qemu-kvm-5.0.0-2.module+el8.3.0+7379+0505d6ca.x86_64
kernel-4.18.0-232.el8.x86_64
virtio-win-prewhql-0.1-189.iso

1. Boot a win2019 guest with a queues number larger than the CPU number:
/usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1' \
-sandbox on \
-machine q35 \
-device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
-device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0 \
-nodefaults \
-device VGA,bus=pcie.0,addr=0x2 \
-m 4G \
-smp 8 \
-cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0xfff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
-device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
-blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
-device scsi-cd,id=cd1,drive=drive_cd1,write-cache=on \
-vnc :0 \
-enable-kvm \
-device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
-monitor stdio \
-device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
-device virtio-net-pci,mac=9a:6b:f3:69:f9:88,id=idIFqP07,netdev=id2GONAV,mq=on,vectors=34,bus=pcie-root-port-3,addr=0x0 \
-netdev tap,id=id2GONAV,vhost=on,queues=16 \

2.
qemu outputs the error info:
(qemu) qemu-kvm: unable to start vhost net: 14: falling back on userspace virtio
qemu-kvm: unable to start vhost net: 14: falling back on userspace virtio

==Reproduced with qemu-kvm-5.0.0-2.module+el8.3.0+7379+0505d6ca.x86_64

==Verified with qemu-kvm-5.1.0-2.module+el8.3.0+7652+b30e6901.x86_64

1. Boot a win2019 guest with a queues number larger than the CPU number:
/usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1' \
-sandbox on \
-machine q35 \
-device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
-device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0 \
-nodefaults \
-device VGA,bus=pcie.0,addr=0x2 \
-m 4G \
-smp 8 \
-cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0xfff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
-device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
-blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
-device scsi-cd,id=cd1,drive=drive_cd1,write-cache=on \
-vnc :0 \
-enable-kvm \
-device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
-monitor stdio \
-device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
-device virtio-net-pci,mac=9a:6b:f3:69:f9:88,id=idIFqP07,netdev=id2GONAV,mq=on,vectors=34,bus=pcie-root-port-3,addr=0x0 \
-netdev tap,id=id2GONAV,vhost=on,queues=16 \

2. The guest works well, so this bug has been fixed. Moving to 'VERIFIED'.
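The pass/fail check in the steps above can be automated by capturing the qemu stdio output to a log file and grepping for the vhost fallback message. The log path below is an assumption for illustration; adjust it to wherever the monitor/stdio output is redirected:

```shell
# Hypothetical check: the bug reproduces iff qemu printed the
# "falling back on userspace virtio" message at startup.
LOG=${LOG:-/tmp/qemu-vm1.log}   # assumed log location, not from the bug report
if grep -qs "falling back on userspace virtio" "$LOG"; then
    echo "FAIL: vhost net did not start; bug reproduced"
else
    echo "PASS: no vhost fallback message found"
fi
```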
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:5137