
Bug 2225125

Summary: [vdpa-blk] The VM QMP monitor gets BLOCK_IO_ERROR on scsi-hd disk with driver virtio-blk-vhost-vdpa
Product: Red Hat Enterprise Linux 9
Reporter: qing.wang <qinwang>
Component: qemu-kvm
Assignee: Stefano Garzarella <sgarzare>
qemu-kvm sub component: virtio-blk, scsi
QA Contact: qing.wang <qinwang>
Status: CLOSED MIGRATED
Docs Contact:
Severity: medium
Priority: medium
CC: aliang, chayang, coli, jinzhao, juzhang, kwolf, lijin, qizhu, sgarzare, stefanha, vgoyal, virt-maint, xuwei, zhenyzha
Version: 9.3
Keywords: MigratedToJIRA, Triaged
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-22 16:55:57 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description qing.wang 2023-07-24 11:03:02 UTC
Description of problem:
The QMP monitor receives a BLOCK_IO_ERROR event while the VM boots. The event is reported against the scsi-hd disk backed by the virtio-blk-vhost-vdpa driver:

{"QMP": {"version": {"qemu": {"micro": 0, "minor": 0, "major": 8}, "package": "qemu-kvm-8.0.0-8.el9"}, "capabilities": ["oob"]}}
{ 'execute': 'qmp_capabilities'}
{"return": {}}
...
{"timestamp": {"seconds": 1690191730, "microseconds": 149726}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "fmt_stg1", "reason": "Invalid argument", "operation": "read", "action": "report"}}


Version-Release number of selected component (if applicable):
qemu-kvm-8.0.0-8.el9 (per the QMP greeting above)

How reproducible:
100%

Steps to Reproduce:

1. Prepare vhost-vdpa disks on the host

  modprobe vhost-vdpa
  modprobe vdpa-sim-blk
  vdpa dev add mgmtdev vdpasim_blk name blk0
  vdpa dev add mgmtdev vdpasim_blk name blk1
  vdpa dev list -jp
  ls /dev/vhost-vdpa*
  [ $? -ne 0 ] && echo "failed to create vdpa devices"
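
To reset the host between attempts (a cleanup sketch, not part of the original steps), the simulator devices can be removed again:

  vdpa dev del blk0
  vdpa dev del blk1
  modprobe -r vdpa-sim-blk vhost-vdpa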

2. boot VM with virtio-blk-vhost-vdpa driver
/usr/libexec/qemu-kvm \
  -name testvm \
  -machine q35,memory-backend=mem \
  -object memory-backend-memfd,id=mem,size=6G,share=on \
  -m  6G \
  -smp 2 \
  -cpu host,+kvm_pv_unhalt \
  -device ich9-usb-ehci1,id=usb1 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
   \
   \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x3,chassis=1 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pcie.0,chassis=2 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pcie.0,chassis=3 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pcie.0,chassis=4 \
  -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pcie.0,chassis=5 \
  -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pcie.0,chassis=6 \
  -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pcie.0,chassis=7 \
  -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pcie.0,chassis=8 \
  -device pcie-root-port,id=pcie_extra_root_port_0,bus=pcie.0,addr=0x4  \
  -object iothread,id=iothread0 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-0,iothread=iothread0 \
  -blockdev driver=qcow2,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/kvm_autotest_root/images/rhel930-64-virtio-scsi.qcow2,node-name=drive_image1,file.aio=threads   \
  -device scsi-hd,id=os,drive=drive_image1,bus=scsi0.0,bootindex=0,serial=OS_DISK   \
  \
  -blockdev node-name=prot_stg0,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0,cache.direct=on \
  -blockdev node-name=fmt_stg0,driver=raw,file=prot_stg0 \
  -device virtio-blk-pci,iothread=iothread0,bus=pcie-root-port-4,addr=0,id=stg0,drive=fmt_stg0,bootindex=1 \
  \
  -blockdev node-name=prot_stg1,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-1,cache.direct=on \
  -blockdev node-name=fmt_stg1,driver=raw,file=prot_stg1 \
  -device scsi-hd,id=stg1,drive=fmt_stg1,bootindex=2 \
  -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server=on,wait=off \
  -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b7,id=nic1,netdev=nicpci,bus=pcie-root-port-7 \
  -netdev tap,id=nicpci \
  -boot menu=on,reboot-timeout=1000,strict=off \
  \
  -chardev socket,id=socket-serial,path=/var/tmp/socket-serial,logfile=/var/tmp/file-serial.log,mux=on,server=on,wait=off \
  -serial chardev:socket-serial \
  -chardev file,path=/var/tmp/file-bios.log,id=file-bios \
  -device isa-debugcon,chardev=file-bios,iobase=0x402 \
  \
  -chardev socket,id=socket-qmp,path=/var/tmp/socket-qmp,logfile=/var/tmp/file-qmp.log,mux=on,server=on,wait=off \
  -mon chardev=socket-qmp,mode=control \
  -chardev socket,id=socket-hmp,path=/var/tmp/socket-hmp,logfile=/var/tmp/file-hmp.log,mux=on,server=on,wait=off \
  -mon chardev=socket-hmp,mode=readline
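
Once the guest is up, both vdpa simulator disks should be visible inside it; an illustrative guest-side check (assuming standard virtio/SCSI disk naming):

  lsblk -o NAME,SIZE,TYPE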

3. Log in to the QMP monitor and watch for the event while the VM boots

telnet 127.0.0.1 5955
{ 'execute': 'qmp_capabilities'}
{"return": {}}


Actual results:
The QMP monitor receives the BLOCK_IO_ERROR event shown above during boot

Expected results:
No BLOCK_IO_ERROR event

Additional info:

Comment 1 Stefano Garzarella 2023-07-25 07:52:39 UTC
I'm reducing the severity since vdpa-blk is tech preview and for now we only have the simulator.

Could it be an expected event during boot when Linux tries to read beyond the end of the device?
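
One way to probe that hypothesis from the host (an illustrative sketch, assuming qemu-io can open the virtio-blk-vhost-vdpa blockdev driver and that the vdpasim_blk device has its default 128 MiB capacity):

  qemu-io -t none --image-opts driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0 \
      -c 'read 128M 4k'

If the backend itself rejects the out-of-range read with EINVAL, this should reproduce the "Invalid argument" error without booting a guest.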

Comment 2 Kevin Wolf 2023-08-08 16:48:48 UTC
(In reply to Stefano Garzarella from comment #1)
> Could it be an expected event during boot when Linux tries to read beyond
> the end of the device?

In theory it shouldn't: BLOCK_IO_ERROR is for errors coming from the backend. Invalid requests from the guest, such as reading beyond the end of the device, should always return failure to the guest without considering rerror/werror settings and sending a QAPI event.
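
For context (an illustrative option, not from the comment): the per-device error policy that BLOCK_IO_ERROR reports is configured through the rerror/werror device properties, e.g.:

  -device scsi-hd,id=stg1,drive=fmt_stg1,rerror=report,werror=report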

In virtio-blk, we have the virtio_blk_sect_range_ok() calls for this. In scsi-disk, it is check_lba_range().
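
To locate these guards in a QEMU source tree (an illustrative grep):

  git grep -nE 'virtio_blk_sect_range_ok|check_lba_range' -- hw/block hw/scsi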

Comment 5 Stefano Garzarella 2023-08-29 08:59:25 UTC
(In reply to Kevin Wolf from comment #2)
> (In reply to Stefano Garzarella from comment #1)
> > Could it be an expected event during boot when Linux tries to read beyond
> > the end of the device?
> 
> In theory it shouldn't, BLOCK_IO_ERROR is for errors coming from the
> backend. Invalid requests from the guest such as reading beyond the end of
> the device should always return failure to the guest without considering
> rerror/werror settings and sending a QAPI event.
> 
> In virtio-blk, we have the virtio_blk_sect_range_ok() calls for this. In
> scsi-disk, it is check_lba_range().

Thanks for the details!

I'll try to replicate it and figure out the cause.

Comment 6 RHEL Program Management 2023-09-22 16:55:08 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 7 RHEL Program Management 2023-09-22 16:55:57 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.