Bug 1935031 - QEMU guest fails to boot when vhost-user-blk-pci is attached with a num-queues value that does not match qsd
Summary: QEMU guest fails to boot when vhost-user-blk-pci is attached with a num-queues value that does not match qsd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.4
Assignee: Kevin Wolf
QA Contact: qing.wang
URL:
Whiteboard:
Depends On:
Blocks: 1957194
 
Reported: 2021-03-04 09:43 UTC by qing.wang
Modified: 2023-03-14 14:48 UTC (History)
9 users

Fixed In Version: qemu-kvm-6.0.0-24.module+el8.5.0+11844+1e3017bd
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-16 07:51:47 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:4684 0 None None None 2021-11-16 07:52:21 UTC

Description qing.wang 2021-03-04 09:43:30 UTC
Description of problem:
Usually num-queues is left at its default in qemu: under machine type pc the default is 1, and under q35 it is the same as the smp value. If the value does not match what qsd exports, the guest/qemu fails to boot with an unfriendly error message:

#(qemu) qemu-storage-daemon: vu_panic: Invalid queue index: 1
#qemu-kvm: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,num-queues=1,bootindex=1,num-queues=4: Failed to read msg header. Read -1 instead of 12. Original request 24.
#qemu-kvm: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,num-queues=1,bootindex=1,num-queues=4: vhost-user-blk: get block config failed
#qemu-kvm: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,num-queues=1,bootindex=1,num-queues=4: Failed to write msg. Wrote -1 instead of 84.
#qemu-kvm: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,num-queues=1,bootindex=1,num-queues=4: vhost-user-blk: get block config failed
#qemu-kvm: Failed to read from slave.
#qemu-storage-daemon: vu_panic: Invalid queue index: 1
#qemu-kvm: Failed to set msg fds.
#qemu-kvm: vhost VQ 0 ring restore failed: -1: Input/output error (5)

vhost-user-blk-pci options:
  addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
  any_layout=<bool>      - on/off (default: true)
  ats=<bool>             - on/off (default: false)
  bootindex=<int32>
  chardev=<str>          - ID of a chardev to use as a backend
  class=<uint32>         -  (default: 0)
  config-wce=<bool>      - on/off (default: true)
  disable-legacy=<OnOffAuto> - on/off/auto (default: "auto")
  disable-modern=<bool>  -  (default: false)
  event_idx=<bool>       - on/off (default: true)
  failover_pair_id=<str>
  indirect_desc=<bool>   - on/off (default: true)
  iommu_platform=<bool>  - on/off (default: false)
  migrate-extra=<bool>   - on/off (default: true)
  modern-pio-notify=<bool> - on/off (default: false)
  multifunction=<bool>   - on/off (default: false)
  notify_on_empty=<bool> - on/off (default: true)
  num-queues=<uint16>    -  (default: 65535)
  packed=<bool>          - on/off (default: false)
  page-per-vq=<bool>     - on/off (default: false)
  queue-size=<uint32>    -  (default: 128)


Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux release 8.4 Beta (Ootpa)
4.18.0-291.el8.x86_64
qemu-kvm-common-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create the images if they do not exist:
qemu-img create -f qcow2 /home/kvm_autotest_root/images/disk1.qcow2 1G
qemu-img create -f qcow2 /home/kvm_autotest_root/images/disk2.qcow2 1G


2. Export them with qsd:

qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server,nowait,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,iothread=iothread0 \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=on,iothread=iothread0 

3. Boot QEMU with a vhost-user-blk-pci device:
/usr/libexec/qemu-kvm -enable-kvm \
  -m 4G -M q35,accel=kvm,memory-backend=mem,kernel-irqchip=split \
  -nodefaults \
  -vga qxl \
  -cpu host,+kvm_pv_unhalt \
  -smp 4 \
  -device intel-iommu,device-iotlb=on,intremap \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x3,chassis=1 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pcie.0,chassis=2 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pcie.0,chassis=3 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pcie.0,chassis=4 \
  -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pcie.0,chassis=5 \
  -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pcie.0,chassis=6 \
  -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pcie.0,chassis=7 \
  -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pcie.0,chassis=8 \
  -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-5 \
  -device virtio-scsi-pci,id=scsi1,bus=pcie-root-port-6 \
  -blockdev driver=qcow2,file.driver=file,file.filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,node-name=os_image1 \
  -device virtio-blk-pci,id=blk0,drive=os_image1,bootindex=0 \
  \
  -chardev socket,path=/tmp/vhost-user-blk1.sock,id=vhost1 \
  -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,num-queues=4,bootindex=1 \
  \
  -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server,nowait

Actual results:
The guest fails to boot.

Expected results:
Document this behavior, print a friendly error message, or set the default to 1 for vhost-user-blk-pci.


Additional info:
Booting without an explicit num-queues still fails, because under q35 the default is the same as the smp value:
  -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,bootindex=1 \
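
As a workaround before the fix, the mismatch can be avoided by explicitly pinning the device's queue count to what qsd exports (a single queue by default). A minimal sketch based on the reproducer above; this value is verified to boot on the fixed build in comment 10:

  -chardev socket,path=/tmp/vhost-user-blk1.sock,id=vhost1 \
  -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,num-queues=1,bootindex=1 \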

Comment 1 Kevin Wolf 2021-06-18 17:38:45 UTC
Fixed upstream as of commit c90bd505.

Comment 3 Danilo de Paula 2021-07-02 21:18:22 UTC
QA_ACK, please?

Comment 9 Yanan Fu 2021-07-19 07:09:45 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 10 qing.wang 2021-07-19 11:04:18 UTC
Tested on
Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
4.18.0-315.el8.x86_64
qemu-kvm-common-6.0.0-24.module+el8.5.0+11844+1e3017bd.x86_64

The issue is fixed; no crash happened.

1. Boot qsd:
qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server=on,wait=off,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,iothread=iothread0  \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=on,iothread=iothread0



2. Boot the VM with vhost-user-blk-pci devices:

/usr/libexec/qemu-kvm -enable-kvm \
  -m 4G -M accel=kvm,memory-backend=mem \
  -nodefaults \
  -vga qxl \
  -smp 4 \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pci.0,addr=0x3,chassis=1 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pci.0,chassis=2 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pci.0,chassis=3 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pci.0,chassis=4 \
  -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pci.0,chassis=5 \
  -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pci.0,chassis=6 \
  -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pci.0,chassis=7 \
  -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pci.0,chassis=8 \
  -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-5 \
  -device virtio-scsi-pci,id=scsi1,bus=pcie-root-port-6 \
  -blockdev driver=qcow2,file.driver=file,file.filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,node-name=os_image1 \
  -device virtio-blk-pci,id=blk0,drive=os_image1,bootindex=0 \
  \
  -chardev socket,path=/tmp/vhost-user-blk1.sock,id=vhost1 \
  -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bootindex=1,num-queues=1 \
  \
  -chardev socket,path=/tmp/vhost-user-blk2.sock,id=vhost2 \
  -device vhost-user-blk-pci,chardev=vhost2,id=blk2,bootindex=2 \
  -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server,nowait

3. Log in to the guest, check that the disks exist, and test I/O:
dd if=/dev/zero of=/dev/vdb count=100 bs=1M oflag=direct
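
To also confirm how many queues the guest driver actually set up, the multiqueue entries can be listed in sysfs (a sketch; the device name may differ on your system):

ls /sys/block/vdb/mq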

If we set num-queues=4, boot fails with:
vhost initialization failed: Invalid argument

But the help message still shows:
/usr/libexec/qemu-kvm -device vhost-user-blk-pci,?
vhost-user-blk-pci options:
  num-queues=<uint16>    -  (default: 65535)

What do you think about filing a new bug to request hiding "num-queues=<uint16> - (default: 65535)", since other values are not supported?

Comment 11 Kevin Wolf 2021-07-20 11:00:31 UTC
(In reply to qing.wang from comment #10)
> If we set num-queues=4, boot fails with:
> vhost initialization failed: Invalid argument

You should actually see two error message lines, like this:

qemu-system-x86_64: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bootindex=1,num-queues=4: The maximum number of queues supported by the backend is 1
qemu-system-x86_64: -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bootindex=1,num-queues=4: vhost initialization failed: Invalid argument

> What do you think about filing a new bug to request hiding "num-queues=<uint16> -
> (default: 65535)", since other values are not supported?

Other values are supported, but they depend on the backend. If you add num-queues=4 to the --export option for qemu-storage-daemon, QEMU can use anything between 1 and 4 queues.
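
For example, a sketch of the reproducer's first export raised to four queues (comment 12 below uses the same syntax on the second export):

  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,num-queues=4,iothread=iothread0 \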

Comment 12 qing.wang 2021-07-21 09:12:28 UTC
Tested on the qsd side:
qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server=on,wait=off,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,iothread=iothread0  \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=on,num-queues=4,iothread=iothread0



QEMU side:
-chardev socket,path=/tmp/vhost-user-blk2.sock,id=vhost2 \
  -device vhost-user-blk-pci,chardev=vhost2,id=blk2,bootindex=2,num-queues=4 \

The guest boots successfully, but the actual multiqueue count is 2:
ls /sys/block/vdc/mq
0 1

So num-queues is limited by both the qsd and the QEMU settings.
The question is: if num-queues is greater than the qsd or QEMU setting, should the guest still boot, or should an error message forbid booting?

Comment 13 qing.wang 2021-08-31 08:08:34 UTC
Also passed on
Red Hat Enterprise Linux release 9.0 Beta (Plow)
5.14.0-0.rc6.46.el9.x86_64
qemu-kvm-common-6.0.0-12.el9.x86_64

Comment 15 errata-xmlrpc 2021-11-16 07:51:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684

