Did not hit this issue on:
Red Hat Enterprise Linux release 9.3 Beta (Plow)
5.14.0-352.el9.x86_64
qemu-kvm-8.0.0-11.el9.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20230524-2.el9.noarch
libvirt-9.5.0-5.el9.x86_64
Description of problem:
Hotplugging a virtio-blk-pci disk backed by the virtio-blk-vhost-vdpa blockdev driver gets a slow response from the device_add QMP command (about 8 seconds). The wait is much longer than for hotplugging a scsi-hd disk backed by the same virtio-blk-vhost-vdpa driver, which returns quickly.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux release 9.3 Beta (Plow)
5.14.0-340.el9.x86_64
qemu-kvm-8.0.0-8.el9.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20230524-2.el9.noarch
libvirt-9.3.0-2.el9.x86_64
libblkio-1.3.0-1.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare vhost-vdpa disks on the host:
modprobe vhost-vdpa
modprobe vdpa-sim-blk
vdpa dev add mgmtdev vdpasim_blk name blk0
vdpa dev add mgmtdev vdpasim_blk name blk1
vdpa dev list -jp
ls /dev/vhost-vdpa*
[ $? -ne 0 ] && echo "failed to create vdpa devices"

2. Boot the VM with the virtio-blk-vhost-vdpa driver:
/usr/libexec/qemu-kvm \
    -name testvm \
    -machine q35,memory-backend=mem \
    -object memory-backend-memfd,id=mem,size=6G,share=on \
    -m 6G \
    -smp 2 \
    -cpu host,+kvm_pv_unhalt \
    -device ich9-usb-ehci1,id=usb1 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x3,chassis=1 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pcie.0,chassis=2 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pcie.0,chassis=3 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pcie.0,chassis=4 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pcie.0,chassis=5 \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pcie.0,chassis=6 \
    -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pcie.0,chassis=7 \
    -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pcie.0,chassis=8 \
    -device pcie-root-port,id=pcie_extra_root_port_0,bus=pcie.0,addr=0x4 \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-0,iothread=iothread0 \
    -blockdev driver=qcow2,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/kvm_autotest_root/images/rhel930-64-virtio-scsi.qcow2,node-name=drive_image1,file.aio=threads \
    -device scsi-hd,id=os,drive=drive_image1,bus=scsi0.0,bootindex=0,serial=OS_DISK \
    -blockdev node-name=prot_stg0,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0,cache.direct=on \
    -blockdev node-name=fmt_stg0,driver=raw,file=prot_stg0 \
    -device virtio-blk-pci,iothread=iothread0,bus=pcie-root-port-4,addr=0,id=stg0,drive=fmt_stg0,bootindex=1 \
    -blockdev node-name=prot_stg1,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-1,cache.direct=on \
    -blockdev node-name=fmt_stg1,driver=raw,file=prot_stg1 \
    -device scsi-hd,id=stg1,drive=fmt_stg1,bootindex=2 \
    -vnc :5 \
    -monitor stdio \
    -qmp tcp:0:5955,server=on,wait=off \
    -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b7,id=nic1,netdev=nicpci,bus=pcie-root-port-7 \
    -netdev tap,id=nicpci \
    -boot menu=on,reboot-timeout=1000,strict=off \
    -chardev socket,id=socket-serial,path=/var/tmp/socket-serial,logfile=/var/tmp/file-serial.log,mux=on,server=on,wait=off \
    -serial chardev:socket-serial \
    -chardev file,path=/var/tmp/file-bios.log,id=file-bios \
    -device isa-debugcon,chardev=file-bios,iobase=0x402 \
    -chardev socket,id=socket-qmp,path=/var/tmp/socket-qmp,logfile=/var/tmp/file-qmp.log,mux=on,server=on,wait=off \
    -mon chardev=socket-qmp,mode=control \
    -chardev socket,id=socket-hmp,path=/var/tmp/socket-hmp,logfile=/var/tmp/file-hmp.log,mux=on,server=on,wait=off \
    -mon chardev=socket-hmp,mode=readline
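The QMP commands in steps 3-6 below are sent to the tcp:0:5955 QMP server started above. As a reminder, QMP accepts no commands until capabilities are negotiated; a minimal sketch of the handshake, with nc used purely for illustration (the elided greeting fields are whatever QEMU prints):

nc localhost 5955
{"QMP": {"version": {...}, "capabilities": [...]}}   <- greeting printed by QEMU
{"execute": "qmp_capabilities"}                      <- sent by the client
{"return": {}}                                       <- session ready; device_del/device_add can follow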
3. Unplug the scsi-hd disk:
{"execute": "device_del", "arguments": {"id": "stg1"}}

4. Check that the disk has disappeared in the guest, then hotplug it back:
{"execute": "device_add", "arguments": {"driver": "scsi-hd", "id": "stg1", "drive": "fmt_stg1"}}

5. Unplug the virtio-blk-pci disk:
{"execute": "device_del", "arguments": {"id": "stg0"}}

6. Check that the disk has disappeared in the guest, then hotplug it back:
{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "fmt_stg0", "bus": "pcie-root-port-4"}}

Actual results:
The device_add in step 6 takes about 8 seconds to return.

Expected results:
The QMP device_add command should return about as quickly for virtio-blk-pci as it does for scsi-hd:
{"return": {}}

Additional info:
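A simple way to demonstrate the timing difference is to timestamp the device_add round trip from the host. The following is a minimal sketch, assuming bash with /dev/tcp support, GNU date, and bc, against the tcp:0:5955 QMP server from the command line above; it is illustrative only, and intervening QMP events, if any, would need to be skipped when reading replies:

exec 3<>/dev/tcp/localhost/5955
read -u 3 greeting                      # QMP greeting line
echo '{"execute": "qmp_capabilities"}' >&3
read -u 3 ack                           # {"return": {}}
start=$(date +%s.%N)
echo '{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "fmt_stg0", "bus": "pcie-root-port-4"}}' >&3
read -u 3 reply                         # blocks until {"return": {}} arrives
end=$(date +%s.%N)
echo "device_add returned in $(echo "$end - $start" | bc) seconds"
exec 3<&- 3>&-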