Bug 2225389

Summary: [vdpa-blk] slow response on device_add QMP when hotplug virtio-blk-pci with driver virtio-blk-vhost-vdpa
Product: Red Hat Enterprise Linux 9
Component: qemu-kvm
Sub component: virtio-blk,scsi
Version: 9.3
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Status: CLOSED CURRENTRELEASE
Last Closed: 2023-08-11 10:32:32 UTC
Target Milestone: rc
Reporter: qing.wang <qinwang>
Assignee: Virtualization Maintenance <virt-maint>
QA Contact: qing.wang <qinwang>
CC: aliang, chayang, coli, jinzhao, juzhang, kwolf, lijin, qizhu, sgarzare, stefanha, virt-maint, xuwei, zhenyzha

Description qing.wang 2023-07-25 07:58:08 UTC
Description of problem:
Hotplug a virtio-blk-pci disk whose backend uses the virtio-blk-vhost-vdpa blockdev driver.
The device_add QMP command takes about 8 seconds to return.

The wait is much longer than when hotplugging a scsi-hd disk backed by the same virtio-blk-vhost-vdpa driver; the latter returns almost immediately.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux release 9.3 Beta (Plow)
5.14.0-340.el9.x86_64
qemu-kvm-8.0.0-8.el9.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20230524-2.el9.noarch
libvirt-9.3.0-2.el9.x86_64
libblkio-1.3.0-1.el9.x86_64

How reproducible:
100%

Steps to Reproduce:

1. Prepare vhost-vdpa disks on the host

  modprobe vhost-vdpa
  modprobe vdpa-sim-blk
  vdpa dev add mgmtdev vdpasim_blk name blk0
  vdpa dev add mgmtdev vdpasim_blk name blk1
  vdpa dev list -jp
  ls /dev/vhost-vdpa*
  [ $? -ne 0 ] && echo "failed to create vdpa devices"
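
A quick sanity check (not part of the original steps; paths assume the standard vdpa sysfs layout) that both simulated devices exist and are bound to the vhost-vdpa bus driver:

  # each device should have a "driver" symlink pointing at vhost_vdpa
  ls -l /sys/bus/vdpa/devices/blk0/driver /sys/bus/vdpa/devices/blk1/driver
  # and a matching vhost-vdpa character device should exist
  ls -l /dev/vhost-vdpa-0 /dev/vhost-vdpa-1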

2. Boot the VM with the virtio-blk-vhost-vdpa driver
/usr/libexec/qemu-kvm \
  -name testvm \
  -machine q35,memory-backend=mem \
  -object memory-backend-memfd,id=mem,size=6G,share=on \
  -m  6G \
  -smp 2 \
  -cpu host,+kvm_pv_unhalt \
  -device ich9-usb-ehci1,id=usb1 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
   \
   \
  -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x3,chassis=1 \
  -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x3.0x1,bus=pcie.0,chassis=2 \
  -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x3.0x2,bus=pcie.0,chassis=3 \
  -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x3.0x3,bus=pcie.0,chassis=4 \
  -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x3.0x4,bus=pcie.0,chassis=5 \
  -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x3.0x5,bus=pcie.0,chassis=6 \
  -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x3.0x6,bus=pcie.0,chassis=7 \
  -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x3.0x7,bus=pcie.0,chassis=8 \
  -device pcie-root-port,id=pcie_extra_root_port_0,bus=pcie.0,addr=0x4  \
  -object iothread,id=iothread0 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie-root-port-0,iothread=iothread0 \
  -blockdev driver=qcow2,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/kvm_autotest_root/images/rhel930-64-virtio-scsi.qcow2,node-name=drive_image1,file.aio=threads   \
  -device scsi-hd,id=os,drive=drive_image1,bus=scsi0.0,bootindex=0,serial=OS_DISK   \
  \
  -blockdev node-name=prot_stg0,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0,cache.direct=on \
  -blockdev node-name=fmt_stg0,driver=raw,file=prot_stg0 \
  -device virtio-blk-pci,iothread=iothread0,bus=pcie-root-port-4,addr=0,id=stg0,drive=fmt_stg0,bootindex=1 \
  \
  -blockdev node-name=prot_stg1,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-1,cache.direct=on \
  -blockdev node-name=fmt_stg1,driver=raw,file=prot_stg1 \
  -device scsi-hd,id=stg1,drive=fmt_stg1,bootindex=2 \
  -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server=on,wait=off \
  -device virtio-net-pci,mac=9a:b5:b6:b1:b2:b7,id=nic1,netdev=nicpci,bus=pcie-root-port-7 \
  -netdev tap,id=nicpci \
  -boot menu=on,reboot-timeout=1000,strict=off \
  \
  -chardev socket,id=socket-serial,path=/var/tmp/socket-serial,logfile=/var/tmp/file-serial.log,mux=on,server=on,wait=off \
  -serial chardev:socket-serial \
  -chardev file,path=/var/tmp/file-bios.log,id=file-bios \
  -device isa-debugcon,chardev=file-bios,iobase=0x402 \
  \
  -chardev socket,id=socket-qmp,path=/var/tmp/socket-qmp,logfile=/var/tmp/file-qmp.log,mux=on,server=on,wait=off \
  -mon chardev=socket-qmp,mode=control \
  -chardev socket,id=socket-hmp,path=/var/tmp/socket-hmp,logfile=/var/tmp/file-hmp.log,mux=on,server=on,wait=off \
  -mon chardev=socket-hmp,mode=readline \
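
The hotplug steps below can be driven over either QMP socket defined above. A minimal interactive session sketch (nc is nmap's ncat, the default on RHEL 9; the qmp_capabilities handshake is mandatory before any other command):

  nc -U /var/tmp/socket-qmp
  {"execute": "qmp_capabilities"}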


3. Unplug the scsi-hd disk:
{"execute": "device_del", "arguments": {"id": "stg1"}}


4. Check that the disk has disappeared in the guest (e.g. via the DEVICE_DELETED event above), then hotplug it back:

{"execute": "device_add", "arguments": {"driver": "scsi-hd", "id": "stg1", "drive": "fmt_stg1"}}

5. Unplug the virtio-blk-pci disk:

{"execute": "device_del", "arguments": {"id": "stg0"}}

6. Check that the disk has disappeared in the guest, then hotplug it back:

{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "fmt_stg0","bus":"pcie-root-port-4"}}




Actual results:
The device_add in step 6 takes about 8 seconds to return.

Expected results:
QMP device_add should return in roughly the same time for scsi-hd and virtio-blk-pci, replying promptly with:

{"return": {}}



Additional info:

Comment 1 qing.wang 2023-07-25 08:07:10 UTC
*** Bug 2225390 has been marked as a duplicate of this bug. ***

Comment 3 qing.wang 2023-08-11 10:32:32 UTC
This issue no longer reproduces on:

Red Hat Enterprise Linux release 9.3 Beta (Plow)
5.14.0-352.el9.x86_64
qemu-kvm-8.0.0-11.el9.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20230524-2.el9.noarch
libvirt-9.5.0-5.el9.x86_64