Bug 1935031
Summary: | qemu guest fails to boot when attaching vhost-user-blk-pci with num-queues not matching qsd | |
---|---|---|---
Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | qing.wang <qinwang>
Component: | qemu-kvm | Assignee: | Kevin Wolf <kwolf>
qemu-kvm sub component: | virtio-blk,scsi | QA Contact: | qing.wang <qinwang>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | medium | |
Priority: | medium | CC: | coli, ddepaula, jinzhao, juzhang, kwolf, lijin, qzhang, virt-maint, xuwei
Version: | 8.4 | Keywords: | Triaged
Target Milestone: | rc | |
Target Release: | 8.4 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | qemu-kvm-6.0.0-24.module+el8.5.0+11844+1e3017bd | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2021-11-16 07:51:47 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1957194 | |
Description
qing.wang
2021-03-04 09:43:30 UTC
Fixed upstream as of commit c90bd505. QA_ACK, please?

QE bot (pre-verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Tested on Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
4.18.0-315.el8.x86_64
qemu-kvm-common-6.0.0-24.module+el8.5.0+11844+1e3017bd.x86_64

The issue is fixed; no crash happened.

1. Boot qsd:

```
qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server=on,wait=off,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,iothread=iothread0 \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=on,iothread=iothread0
```

2. Boot the VM with a vhost-user-blk-pci device:

```
/usr/libexec/qemu-kvm -enable-kvm \
  -m 4G -M accel=kvm,memory-backend=mem \
  -nodefaults \
  -vga qxl \
  -smp 4 \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pci.0,addr=0x3,chassis=1 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pci.0,chassis=2 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pci.0,chassis=3 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pci.0,chassis=4 \
  -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pci.0,chassis=5 \
  -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pci.0,chassis=6 \
  -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pci.0,chassis=7 \
  -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pci.0,chassis=8 \
  -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-5 \
  -device virtio-scsi-pci,id=scsi1,bus=pcie-root-port-6 \
  -blockdev driver=qcow2,file.driver=file,file.filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,node-name=os_image1 \
  -device virtio-blk-pci,id=blk0,drive=os_image1,bootindex=0 \
  -chardev socket,path=/tmp/vhost-user-blk1.sock,id=vhost1 \
  -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bootindex=1,num-queues=1 \
  -chardev socket,path=/tmp/vhost-user-blk2.sock,id=vhost2 \
  -device vhost-user-blk-pci,chardev=vhost2,id=blk2,bootindex=2 \
  -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server,nowait
```

3. Log in to the guest, check that the disks exist, and do I/O:

```
dd if=/dev/zero of=/dev/vdb count=100 bs=1M oflag=direct
```

If we set num-queues=4, boot fails:

```
vhost initialization failed: Invalid argument
```

but the help message says:

```
/usr/libexec/qemu-kvm -device vhost-user-blk-pci,?
vhost-user-blk-pci options:
  num-queues=<uint16> - (default: 65535)
```

What do you think about filing a new bug to request hiding "num-queues=<uint16> - (default: 65535)", since other values are not supported?

(In reply to qing.wang from comment #10)
> if we set num-queues=4 boot failed :
> vhost initialization failed: Invalid argument

You should actually see two error message lines, like this:

```
qemu-system-x86_64: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bootindex=1,num-queues=4: The maximum number of queues supported by the backend is 1
qemu-system-x86_64: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bootindex=1,num-queues=4: vhost initialization failed: Invalid argument
```

> What do you think fire a new bug to request hide "num-queues=<uint16> -
> (default: 65535))" since other value is not supported.

Other values are supported, but they depend on the backend. If you add num-queues=4 to the --export option for qemu-storage-daemon, QEMU can use anything between 1 and 4 queues.
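The backend-dependent behavior described above can be sketched as a small check: the frontend's num-queues must not exceed what the backend export advertises. This is a simplified model, not QEMU's actual code; the negotiate_num_queues function name is hypothetical.

```shell
# Simplified model of the vhost-user-blk queue-count check.
# Hypothetical helper; QEMU's real logic lives in its vhost-user-blk code.
negotiate_num_queues() {
    frontend=$1
    backend_max=$2
    if [ "$frontend" -gt "$backend_max" ]; then
        # Mirrors the first error line seen in this bug
        echo "The maximum number of queues supported by the backend is $backend_max" >&2
        return 1
    fi
    echo "$frontend"
}

# Backend exports num-queues=4: any frontend value 1..4 is accepted
negotiate_num_queues 4 4
# Backend default of 1 queue: num-queues=4 is rejected, as in this bug
negotiate_num_queues 4 1 || echo "vhost initialization failed"
```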
Test with num-queues set on the qsd side:

```
qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server=on,wait=off,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,iothread=iothread0 \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=on,num-queues=4,iothread=iothread0
```

QEMU side:

```
  -chardev socket,path=/tmp/vhost-user-blk2.sock,id=vhost2 \
  -device vhost-user-blk-pci,chardev=vhost2,id=blk2,bootindex=2,num-queues=4 \
```

The guest boots successfully, but the real number of queues is 2:

```
ls /sys/block/vdc/mq
0  1
```

So num-queues is limited by both qsd and QEMU. The question is: if num-queues is greater than the qsd or QEMU setting, can it boot, or should it report an error and refuse to boot?

Also passed on Red Hat Enterprise Linux release 9.0 Beta (Plow)
5.14.0-0.rc6.46.el9.x86_64
qemu-kvm-common-6.0.0-12.el9.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684
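The ls /sys/block/vdc/mq check used during verification can be turned into a count of the hardware queue contexts a block device actually received. The snippet below mocks the sysfs directory layout so it is self-contained; on a real guest you would point SYSFS at /sys/block/vdc.

```shell
# Count the multiqueue hardware contexts under <dev>/mq.
# SYSFS is a mock of /sys/block/vdc for illustration only.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/mq/0" "$SYSFS/mq/1"   # pretend the device got 2 queues
nqueues=$(ls -d "$SYSFS"/mq/*/ | wc -l)
echo "effective queues: $nqueues"
rm -rf "$SYSFS"
```

This makes it easy to compare the effective queue count against the num-queues values passed to qsd and QEMU.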