Description of problem:
The QMP monitor receives a BLOCK_IO_ERROR event while the VM boots; the error event is reported on a scsi-hd disk backed by the
virtio-blk-vhost-vdpa driver.
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 0, "major": 8}, "package": "qemu-kvm-8.0.0-8.el9"}, "capabilities": ["oob"]}}
{ 'execute': 'qmp_capabilities'}
{"return": {}}
...
{"timestamp": {"seconds": 1690191730, "microseconds": 149726}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "fmt_stg1", "reason": "Invalid argument", "operation": "read", "action": "report"}}
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Prepare vhost-vdpa disks on the host
modprobe vhost-vdpa
modprobe vdpa-sim-blk
vdpa dev add mgmtdev vdpasim_blk name blk0
vdpa dev add mgmtdev vdpasim_blk name blk1
vdpa dev list -jp
ls /dev/vhost-vdpa*
[ $? -ne 0 ] && echo "failed to create vdpa devices"
2. Boot the VM with the virtio-blk-vhost-vdpa driver
/usr/libexec/qemu-kvm \
-name testvm \
-machine q35,memory-backend=mem \
-object memory-backend-memfd,id=mem,size=6G,share=on \
-m 6G \
-smp 2 \
-cpu host,+kvm_pv_unhalt \
-device ich9-usb-ehci1,id=usb1 \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
\
\
-device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x3,chassis=1 \
-device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pcie.0,chassis=2 \
-device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pcie.0,chassis=3 \
-device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pcie.0,chassis=4 \
-device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pcie.0,chassis=5 \
-device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pcie.0,chassis=6 \
-device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pcie.0,chassis=7 \
-device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pcie.0,chassis=8 \
-device pcie-root-port,id=pcie_extra_root_port_0,bus=pcie.0,addr=0x4 \
-object iothread,id=iothread0 \
-device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-0,iothread=iothread0 \
-blockdev driver=qcow2,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/kvm_autotest_root/images/rhel930-64-virtio-scsi.qcow2,node-name=drive_image1,file.aio=threads \
-device scsi-hd,id=os,drive=drive_image1,bus=scsi0.0,bootindex=0,serial=OS_DISK \
\
-blockdev node-name=prot_stg0,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0,cache.direct=on \
-blockdev node-name=fmt_stg0,driver=raw,file=prot_stg0 \
-device virtio-blk-pci,iothread=iothread0,bus=pcie-root-port-4,addr=0,id=stg0,drive=fmt_stg0,bootindex=1 \
\
-blockdev node-name=prot_stg1,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-1,cache.direct=on \
-blockdev node-name=fmt_stg1,driver=raw,file=prot_stg1 \
-device scsi-hd,id=stg1,drive=fmt_stg1,bootindex=2 \
-vnc :5 \
-monitor stdio \
-qmp tcp:0:5955,server=on,wait=off \
-device virtio-net-pci,mac=9a:b5:b6:b1:b2:b7,id=nic1,netdev=nicpci,bus=pcie-root-port-7 \
-netdev tap,id=nicpci \
-boot menu=on,reboot-timeout=1000,strict=off \
\
-chardev socket,id=socket-serial,path=/var/tmp/socket-serial,logfile=/var/tmp/file-serial.log,mux=on,server=on,wait=off \
-serial chardev:socket-serial \
-chardev file,path=/var/tmp/file-bios.log,id=file-bios \
-device isa-debugcon,chardev=file-bios,iobase=0x402 \
\
-chardev socket,id=socket-qmp,path=/var/tmp/socket-qmp,logfile=/var/tmp/file-qmp.log,mux=on,server=on,wait=off \
-mon chardev=socket-qmp,mode=control \
-chardev socket,id=socket-hmp,path=/var/tmp/socket-hmp,logfile=/var/tmp/file-hmp.log,mux=on,server=on,wait=off \
-mon chardev=socket-hmp,mode=readline \
3. Log in to the QMP monitor and check for messages while the VM boots
telnet 127.0.0.1 5955
{ 'execute': 'qmp_capabilities'}
{"return": {}}
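Instead of eyeballing the telnet session, the QMP stream can be scanned programmatically. The following is a minimal sketch (names like find_block_io_errors are hypothetical, not part of any QEMU tooling) that filters BLOCK_IO_ERROR events out of a captured JSON-lines QMP log, such as the /var/tmp/file-qmp.log written by the command line above:

```python
import json

def find_block_io_errors(lines):
    """Return the data payloads of BLOCK_IO_ERROR events found in a
    QMP JSON-lines stream. Non-JSON lines (prompts, echoes) are skipped."""
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            msg = json.loads(line)
        except ValueError:
            continue
        if msg.get("event") == "BLOCK_IO_ERROR":
            events.append(msg["data"])
    return events

# Sample input using the event captured in this report:
sample = [
    '{"return": {}}',
    '{"timestamp": {"seconds": 1690191730, "microseconds": 149726}, '
    '"event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, '
    '"node-name": "fmt_stg1", "reason": "Invalid argument", '
    '"operation": "read", "action": "report"}}',
]
for data in find_block_io_errors(sample):
    print(data["node-name"], data["operation"], data["reason"])
```

On the sample above this prints the fmt_stg1 read error; run against a full boot log it reports every backend I/O error the monitor emitted.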
Actual results:
The QMP monitor catches a BLOCK_IO_ERROR event during VM boot.
Expected results:
No BLOCK_IO_ERROR event should be emitted.
Additional info:
Comment 1, Stefano Garzarella, 2023-07-25 07:52:39 UTC
I'm reducing the severity since vdpa-blk is tech preview and for now we only have the simulator.
Could it be an expected event during boot when Linux tries to read beyond the end of the device?
(In reply to Stefano Garzarella from comment #1)
> Could it be an expected event during boot when Linux tries to read beyond
> the end of the device?
In theory it shouldn't be: BLOCK_IO_ERROR is for errors coming from the backend. Invalid requests from the guest, such as reading beyond the end of the device, should always return failure to the guest without consulting the rerror/werror settings and without sending a QAPI event.
In virtio-blk, virtio_blk_sect_range_ok() performs this check; in scsi-disk, it is check_lba_range().
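To illustrate the distinction, here is a simplified Python model of the kind of guest-request validation that virtio_blk_sect_range_ok() performs (this is a sketch of the concept, not a transcription of QEMU's C code): a request that is misaligned or runs past the device's end is rejected in the device model, so it never reaches the backend and never triggers a BLOCK_IO_ERROR event.

```python
BDRV_SECTOR_SIZE = 512  # QEMU's base sector size

def sect_range_ok(sector, nb_bytes, total_sectors, block_size=512):
    """Model of a virtio-blk-style guest request check.

    Returns True only if the request is whole-logical-block aligned
    and lies entirely within the device. Requests failing this check
    should be completed to the guest with an error directly, without
    touching the backend or emitting a QAPI event.
    """
    if nb_bytes % block_size != 0:
        return False                      # not a whole number of blocks
    if sector % (block_size // BDRV_SECTOR_SIZE) != 0:
        return False                      # start not block-aligned
    nb_sectors = nb_bytes // BDRV_SECTOR_SIZE
    if sector + nb_sectors > total_sectors:
        return False                      # runs past end of device
    return True

# An 8-sector device: an in-bounds read passes, a read past the
# end is rejected before it can reach the backend.
print(sect_range_ok(0, 512, 8))   # in bounds
print(sect_range_ok(7, 1024, 8))  # sectors 7..8, past the end
```

Under this model, a BLOCK_IO_ERROR with reason "Invalid argument" points at the backend (here the vdpa-blk path) rejecting a request that had already passed the device model's range check, which is why the event is unexpected.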