Bug 1935014 - qemu crashes when attaching vhost-user-blk-pci with option queue-size=4096
Summary: qemu crashes when attaching vhost-user-blk-pci with option queue-size=4096
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.4
Assignee: Kevin Wolf
QA Contact: qing.wang
URL:
Whiteboard:
Depends On:
Blocks: 1957194
Reported: 2021-03-04 09:10 UTC by qing.wang
Modified: 2023-02-28 05:38 UTC
CC: 9 users

Fixed In Version: qemu-kvm-6.0.0-24.module+el8.5.0+11844+1e3017bd
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-16 07:51:47 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:4684 0 None None None 2021-11-16 07:52:21 UTC

Description qing.wang 2021-03-04 09:10:12 UTC
Description of problem:
QEMU crashes when attaching a vhost-user-blk-pci device with the option queue-size=4096.
vhost-user-blk-pci options:
  addr=<int32>           - Slot and optional function number, example: 06.0 or 06 (default: -1)
  any_layout=<bool>      - on/off (default: true)
  ats=<bool>             - on/off (default: false)
  bootindex=<int32>
  chardev=<str>          - ID of a chardev to use as a backend
  class=<uint32>         -  (default: 0)
  config-wce=<bool>      - on/off (default: true)
  disable-legacy=<OnOffAuto> - on/off/auto (default: "auto")
  disable-modern=<bool>  -  (default: false)
  event_idx=<bool>       - on/off (default: true)
  failover_pair_id=<str>
  indirect_desc=<bool>   - on/off (default: true)
  iommu_platform=<bool>  - on/off (default: false)
  migrate-extra=<bool>   - on/off (default: true)
  modern-pio-notify=<bool> - on/off (default: false)
  multifunction=<bool>   - on/off (default: false)
  notify_on_empty=<bool> - on/off (default: true)
  num-queues=<uint16>    -  (default: 65535)
  packed=<bool>          - on/off (default: false)
  page-per-vq=<bool>     - on/off (default: false)
  queue-size=<uint32>    -  (default: 128)


Backtrace:

#0  0x00007f6402ddb37f in raise () at /lib64/libc.so.6
#1  0x00007f6402dc5db5 in abort () at /lib64/libc.so.6
#2  0x000055daea7a2bbc in virtio_add_queue
    (vdev=vdev@entry=0x7f62ec46b1a0, queue_size=<optimized out>, handle_output=handle_output@entry=0x55daea7e97a0 <vhost_user_blk_handle_output>) at ../hw/virtio/virtio.c:2408
#3  0x000055daea7e9334 in vhost_user_blk_device_realize (dev=0x7f62ec46b1a0, errp=<optimized out>)
    at ../hw/block/vhost-user-blk.c:464
#4  0x000055daea7a083c in virtio_device_realize (dev=0x7f62ec46b1a0, errp=0x7ffda3b69bf0) at ../hw/virtio/virtio.c:3657
#5  0x000055daea7f783f in device_set_realized (obj=<optimized out>, value=true, errp=0x7ffda3b69c70)
    at ../hw/core/qdev.c:886
#6  0x000055daea7edd6a in property_set_bool
    (obj=0x7f62ec46b1a0, v=<optimized out>, name=<optimized out>, opaque=0x55daecfc3980, errp=0x7ffda3b69c70)
    at ../qom/object.c:2251
#7  0x000055daea7efd8b in object_property_set
    (obj=obj@entry=0x7f62ec46b1a0, name=name@entry=0x55daeaa082f2 "realized", v=v@entry=
    0x55daeee31be0, errp=errp@entry=0x7ffda3b69db0) at ../qom/object.c:1398
#8  0x000055daea7ed593 in object_property_set_qobject
    (obj=obj@entry=0x7f62ec46b1a0, name=name@entry=0x55daeaa082f2 "realized", value=value@entry=0x55daeee31b20, errp=errp@entry=0x7ffda3b69db0) at ../qom/qom-qobject.c:28
#9  0x000055daea7effc8 in object_property_set_bool
    (obj=0x7f62ec46b1a0, name=0x55daeaa082f2 "realized", value=<optimized out>, errp=0x7ffda3b69db0)
    at ../qom/object.c:1465
#10 0x000055daea692f36 in virtio_pci_realize (pci_dev=0x7f62ec463010, errp=0x7ffda3b69db0)
    at ../hw/virtio/virtio-pci.c:1853
#11 0x000055daea5a7dd8 in pci_qdev_realize (qdev=0x7f62ec463010, errp=<optimized out>) at ../hw/pci/pci.c:2130
#12 0x000055daea7f783f in device_set_realized (obj=<optimized out>, value=true, errp=0x7ffda3b69ec0)

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux release 8.4 Beta (Ootpa)
4.18.0-291.el8.x86_64
qemu-kvm-common-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create the images if they do not exist:
qemu-img create -f qcow2 /home/kvm_autotest_root/images/disk1.qcow2 1G
qemu-img create -f qcow2 /home/kvm_autotest_root/images/disk2.qcow2 1G


2. Export them with qemu-storage-daemon (QSD):

qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server,nowait,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,iothread=iothread0 \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=on,iothread=iothread0 

3. Boot QEMU with a vhost-user-blk-pci device:
/usr/libexec/qemu-kvm -enable-kvm \
  -m 4G -M q35,accel=kvm,memory-backend=mem,kernel-irqchip=split \
  -nodefaults \
  -vga qxl \
  -cpu host,+kvm_pv_unhalt \
  -smp 4 \
  -device intel-iommu,device-iotlb=on,intremap \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x3,chassis=1 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pcie.0,chassis=2 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pcie.0,chassis=3 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pcie.0,chassis=4 \
  -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pcie.0,chassis=5 \
  -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pcie.0,chassis=6 \
  -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pcie.0,chassis=7 \
  -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pcie.0,chassis=8 \
  -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-5 \
  -device virtio-scsi-pci,id=scsi1,bus=pcie-root-port-6 \
  -blockdev driver=qcow2,file.driver=file,file.filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,node-name=os_image1 \
  -device virtio-blk-pci,id=blk0,drive=os_image1,bootindex=0 \
  \
  -chardev socket,path=/tmp/vhost-user-blk1.sock,id=vhost1 \
  -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bus=pcie-root-port-3,addr=0x0,num-queues=1,bootindex=1,queue-size=4096 \
  \
  -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server,nowait

Actual results:
QEMU crashes.

Expected results:
QEMU should not crash; it should report an error if queue-size is invalid.

Additional info:
The default value of 128 works fine.

Comment 1 Kevin Wolf 2021-04-13 17:09:12 UTC
Sent a patch upstream to fix the crash:

https://lists.gnu.org/archive/html/qemu-block/2021-04/msg00346.html
("[PATCH v2] vhost-user-blk: Fail gracefully on too large queue size")

In case it's useful information, the allowed maximum is currently 1024.

Comment 2 Klaus Heinrich Kiwi 2021-06-15 15:25:14 UTC
(In reply to Kevin Wolf from comment #1)
> Sent a patch upstream to fix the crash:
> 
> https://lists.gnu.org/archive/html/qemu-block/2021-04/msg00346.html
> ("[PATCH v2] vhost-user-blk: Fail gracefully on too large queue size")
> 
> In case it's useful information, the allowed maximum is currently 1024.

Kevin, can you update us on the upstream status for this patch, and how soon should we expect it to be picked up by downstream?

Thanks,

 -Klaus

Comment 3 Kevin Wolf 2021-06-18 09:56:03 UTC
This is fixed upstream as of commit 68bf7336.

To be backported downstream together with the other vhost-user-blk error handling fixes.

Comment 12 Yanan Fu 2021-07-19 07:09:31 UTC
QE bot (pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 13 qing.wang 2021-07-19 09:43:10 UTC
Tested on
Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
4.18.0-315.el8.x86_64
qemu-kvm-common-6.0.0-24.module+el8.5.0+11844+1e3017bd.x86_64


1. Boot qemu-storage-daemon:
qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server=on,wait=off,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,iothread=iothread0  \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=on,iothread=iothread0



2. Boot the VM with a vhost-user-blk-pci device:

/usr/libexec/qemu-kvm -enable-kvm \
  -m 4G -M accel=kvm,memory-backend=mem \
  -nodefaults \
  -vga qxl \
  -smp 4 \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pci.0,addr=0x3,chassis=1 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pci.0,chassis=2 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pci.0,chassis=3 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pci.0,chassis=4 \
  -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pci.0,chassis=5 \
  -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pci.0,chassis=6 \
  -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pci.0,chassis=7 \
  -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pci.0,chassis=8 \
  -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-5 \
  -device virtio-scsi-pci,id=scsi1,bus=pcie-root-port-6 \
  -blockdev driver=qcow2,file.driver=file,file.filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,node-name=os_image1 \
  -device virtio-blk-pci,id=blk0,drive=os_image1,bootindex=0 \
  \
  -chardev socket,path=/tmp/vhost-user-blk1.sock,id=vhost1 \
  -device vhost-user-blk-pci,chardev=vhost1,id=blk1,bootindex=1,num-queues=1,queue-size=1024 \
  \
  -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server,nowait

3. Log in to the guest, check that the disk exists, and verify I/O:
dd if=/dev/zero of=/dev/vdb count=100 bs=1M oflag=direct

Comment 14 qing.wang 2021-08-31 07:51:34 UTC
Also passed on
Red Hat Enterprise Linux release 9.0 Beta (Plow)
5.14.0-0.rc6.46.el9.x86_64
qemu-kvm-common-6.0.0-12.el9.x86_64

Comment 16 errata-xmlrpc 2021-11-16 07:51:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684

