Bug 2125119

Summary: Mirror job with "copy-mode":"write-blocking" used for storage migration can't converge under heavy I/O
Product: Red Hat Enterprise Linux 8
Reporter: aihua liang <aliang>
Component: qemu-kvm
Assignee: Hanna Czenczek <hreitz>
qemu-kvm sub component: Block Jobs
QA Contact: aihua liang <aliang>
Status: CLOSED ERRATA
Docs Contact:
Severity: high
Priority: medium
CC: coli, ddepaula, hreitz, jinzhao, jmaloy, juzhang, kwolf, pm-rhel, vgoyal, virt-maint
Version: 8.7
Keywords: Triaged
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version: qemu-kvm-6.2.0-29.module+el8.8.0+17991+08d03241
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 2123297
Environment:
Last Closed: 2023-05-16 08:16:30 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2123297
Bug Blocks:

Description aihua liang 2022-09-08 04:32:46 UTC
+++ This bug was initially created as a clone of Bug #2123297 +++

Description of problem:
 Mirror job with "copy-mode":"write-blocking" used for storage migration can't converge under heavy I/O.

Version-Release number of selected component (if applicable):
 kernel version: 4.18.0-419.el8.x86_64
 qemu-kvm version: qemu-kvm-6.2.0-20.module+el8.7.0+16496+35f7e655

How reproducible:
 100%


Steps to Reproduce:
Test Env prepare:
 #qemu-img create -f raw test.img 100G
 #losetup /dev/loop0 test.img
 #pvcreate /dev/loop0
 #vgcreate test /dev/loop0
 #lvcreate -L 20G -n system test
 #lvcreate -L 20G -n mirror_system test
 #lvcreate -L 3G -n data1 test
 # .... create data2 ~ data7 with size 3G
 #lvcreate -L 3G -n mirror_data1 test
 # ....create mirror_data2 ~ mirror_data7 with size 3G
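 A minimal loop sketch covering the elided volumes above (same 3G size, assuming the pattern shown):
 #for i in 2 3 4 5 6 7; do lvcreate -L 3G -n data$i test; lvcreate -L 3G -n mirror_data$i test; done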

Test Steps:
 1. install guest on /dev/test/system

 2. Start src VM by qemu cmd:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \

 3. Start dst VM by qemu cmd:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md45  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md44  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \

 4. Start NBD server and expose all disks on dst
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.73.196.25", "port": "3333" } } } }
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export0", "node-name": "drive_image1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export1", "node-name": "drive_data1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export2", "node-name": "drive_data2", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export3", "node-name": "drive_data3", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export4", "node-name": "drive_data4", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export5", "node-name": "drive_data5", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export6", "node-name": "drive_data6", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export7", "node-name": "drive_data7", "type": "nbd", "writable": true}}
{"return": {}}

  5. Login src guest, do fio test with randrw on all data disks.
     (guest)# mkfs.ext4 /dev/sdb && mkdir /mnt/1 && mount /dev/sdb /mnt/1
            # mkfs.ext4 /dev/sdc && mkdir /mnt/2 && mount /dev/sdc /mnt/2
            # mkfs.ext4 /dev/sdd && mkdir /mnt/3 && mount /dev/sdd /mnt/3
            # mkfs.ext4 /dev/sde && mkdir /mnt/4 && mount /dev/sde /mnt/4
            # mkfs.ext4 /dev/sdf && mkdir /mnt/5 && mount /dev/sdf /mnt/5
            # mkfs.ext4 /dev/sdg && mkdir /mnt/6 && mount /dev/sdg /mnt/6
            # mkfs.ext4 /dev/sdh && mkdir /mnt/7 && mount /dev/sdh /mnt/7
          Then run fio on all seven mounts concurrently (backgrounded):
            # for i in $(seq 1 7); do fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based & done

 6. After fio test start, add target disks and do mirror from src to dst
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_image1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data1","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data2","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data2"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data3","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data3"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data4","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data4"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data5","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data5"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data6","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data6"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data7","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data7"}}

{ "execute": "blockdev-mirror", "arguments": { "device": "drive_image1","target": "mirror", "copy-mode":"write-blocking", "sync": "full","job-id":"j1" } }
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "mirror_data1", "copy-mode":"write-blocking", "sync": "full","job-id":"j2"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data2","target": "mirror_data2", "copy-mode":"write-blocking", "sync": "full","job-id":"j3"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data3","target": "mirror_data3", "copy-mode":"write-blocking", "sync": "full","job-id":"j4"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data4","target": "mirror_data4", "copy-mode":"write-blocking", "sync": "full","job-id":"j5"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data5","target": "mirror_data5", "copy-mode":"write-blocking", "sync": "full","job-id":"j6"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data6","target": "mirror_data6", "copy-mode":"write-blocking", "sync": "full","job-id":"j7"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data7","target": "mirror_data7", "copy-mode":"write-blocking", "sync": "full","job-id":"j8"}}


Actual Result:
 After step 6, j1 reaches ready status after about 3 minutes, but the other jobs only reach ready status after 20 minutes, i.e. apparently only once the 1200-second fio load has finished.

Expected results:
  Mirror jobs can converge. With "copy-mode":"write-blocking", guest writes to not-yet-copied areas block until the corresponding write to the target completes, so convergence should be guaranteed even under heavy I/O.

Will check whether it's a regression and provide more info later.

Comment 1 aihua liang 2022-09-08 09:35:19 UTC
Hit this issue on qemu versions later than qemu 4.2, e.g.:
Qemu-kvm-6.0.0-29.module+el8.6.0+12490+ec3e565c
Qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a
Qemu-kvm-5.0.0-0.module+el8.3.0+6612+6b86f0c9

Did not hit this issue on qemu 4.2, where the mirror converges within 5 minutes:
Qemu-kvm-4.2.0-49.module+el8.5.0+10804+ce89428a
Qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9

So, it's a regression.

Comment 2 Vivek Goyal 2022-09-08 18:02:05 UTC
(In reply to aihua liang from comment #1)
> Hit this issue on qemu versions later than qemu 4.2, e.g.:
> Qemu-kvm-6.0.0-29.module+el8.6.0+12490+ec3e565c
> Qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a
> Qemu-kvm-5.0.0-0.module+el8.3.0+6612+6b86f0c9
> 
> Did not hit this issue on qemu 4.2, where the mirror converges within 5 minutes:
> Qemu-kvm-4.2.0-49.module+el8.5.0+10804+ce89428a
> Qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9
> 
> So, it's a regression.

Based on the discussion in the other bug, I don't think any particular time frame for a job to converge was ever promised. So while it might be a change of behavior since qemu 4.2, I will stop short of calling it a regression, because the qemu 4.2 behavior was never a guarantee to begin with.

I guess Kevin and Hanna will have the final say on this. For now, I am dropping the priority to medium, but if they think this should be "High" priority instead, they can raise it.

Comment 3 Vivek Goyal 2022-09-08 18:03:11 UTC
I dropped the "Regression" tag as well for now. Kevin and Hanna, please correct me if that's not the case.

Comment 4 Vivek Goyal 2022-09-09 15:26:31 UTC
Hi Hanna,

As you are handling a similar bug for RHEL 9, I am assigning this bug to you as well for now. Hopefully the resolution for both issues is the same. Please let me know if somebody else should handle it instead.

Comment 5 Hanna Czenczek 2022-09-12 07:16:13 UTC
Yes, I’ll take this one, too.

Comment 11 Yanan Fu 2023-01-28 02:31:28 UTC
QE bot (pre verify): Set 'Verified:Tested,SanityOnly' as the gating/tier1 tests pass.

Comment 12 aihua liang 2023-01-28 07:50:18 UTC
Tested on qemu-kvm-6.2.0-29.module+el8.8.0+17991+08d03241: all mirror jobs reach ready status within 5 minutes, while all fio jobs are still running.

Test Env prepare:
 #qemu-img create -f raw test.img 100G
 #losetup /dev/loop0 test.img
 #pvcreate /dev/loop0
 #vgcreate test /dev/loop0
 #lvcreate -L 20G -n system test
 #lvcreate -L 20G -n mirror_system test
 #lvcreate -L 3G -n data1 test
 # .... create data2 ~ data7 with size 3G
 #lvcreate -L 3G -n mirror_data1 test
 # ....create mirror_data2 ~ mirror_data7 with size 3G

Test Steps:
 1. install guest on /dev/test/system
 2. Start guest in src
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:a8:df:fa:7f:a8,id=idaEy3Yo,netdev=idodXikU,bus=pcie-root-port-3,addr=0x0  \
-netdev tap,id=idodXikU,vhost=on \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=d,strict=off  \
    -no-shutdown \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \

 3. Start guest in dst
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md45  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md44  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:a8:df:fa:7f:a8,id=idaEy3Yo,netdev=idodXikU,bus=pcie-root-port-3,addr=0x0  \
-netdev tap,id=idodXikU,vhost=on \
    -vnc :1  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=d,strict=off  \
    -no-shutdown \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -incoming defer \

  4. Start NBD server and expose all disks on dst
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.73.196.25", "port": "3333" } } } }
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export0", "node-name": "drive_image1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export1", "node-name": "drive_data1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export2", "node-name": "drive_data2", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export3", "node-name": "drive_data3", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export4", "node-name": "drive_data4", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export5", "node-name": "drive_data5", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export6", "node-name": "drive_data6", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export7", "node-name": "drive_data7", "type": "nbd", "writable": true}}
{"return": {}}

  5. Login src guest, do fio test with randrw on all data disks.
     (guest)# mkfs.ext4 /dev/sdb && mkdir /mnt/1 && mount /dev/sdb /mnt/1
            # mkfs.ext4 /dev/sdc && mkdir /mnt/2 && mount /dev/sdc /mnt/2
            # mkfs.ext4 /dev/sdd && mkdir /mnt/3 && mount /dev/sdd /mnt/3
            # mkfs.ext4 /dev/sde && mkdir /mnt/4 && mount /dev/sde /mnt/4
            # mkfs.ext4 /dev/sdf && mkdir /mnt/5 && mount /dev/sdf /mnt/5
            # mkfs.ext4 /dev/sdg && mkdir /mnt/6 && mount /dev/sdg /mnt/6
            # mkfs.ext4 /dev/sdh && mkdir /mnt/7 && mount /dev/sdh /mnt/7
          Then run fio on all seven mounts concurrently (backgrounded):
            # for i in $(seq 1 7); do fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based & done

 6. After fio test start, add target disks and do mirror from src to dst
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_image1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data1","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data2","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data2"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data3","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data3"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data4","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data4"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data5","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data5"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data6","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data6"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data7","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data7"}}

{ "execute": "blockdev-mirror", "arguments": { "device": "drive_image1","target": "mirror", "copy-mode":"write-blocking", "sync": "full","job-id":"j1" } }
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "mirror_data1", "copy-mode":"write-blocking", "sync": "full","job-id":"j2"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data2","target": "mirror_data2", "copy-mode":"write-blocking", "sync": "full","job-id":"j3"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data3","target": "mirror_data3", "copy-mode":"write-blocking", "sync": "full","job-id":"j4"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data4","target": "mirror_data4", "copy-mode":"write-blocking", "sync": "full","job-id":"j5"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data5","target": "mirror_data5", "copy-mode":"write-blocking", "sync": "full","job-id":"j6"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data6","target": "mirror_data6", "copy-mode":"write-blocking", "sync": "full","job-id":"j7"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data7","target": "mirror_data7", "copy-mode":"write-blocking", "sync": "full","job-id":"j8"}}

Test Result:
  After step 6,
   1. All mirror jobs reach ready status within 5 minutes.
       {"execute":"query-block-jobs"}
{"return": [{"auto-finalize": true, "io-status": "ok", "device": "j8", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j7", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j6", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j5", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j4", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j3", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j2", "auto-dismiss": true, "busy": false, "len": 3811123200, "offset": 3810791424, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j1", "auto-dismiss": true, "busy": true, "len": 21474837504, "offset": 21474837504, "status": "ready", "paused": false, "speed": 0, "ready": false, "type": "mirror"}]}
   2. fio jobs in the guest are still running.
    # fio --name=stress1 --filename=/mnt/1/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
stress1: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][28.3%][eta 14m:20s]                                              
fio: terminating on signal 2

   # fio --name=stress2 --filename=/mnt/2/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress2: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.6%][r=9169KiB/s,w=9365KiB/s][r=2292,w=2341 IOPS][eta 14m:17s] 
fio: terminating on signal 2

   # fio --name=stress3 --filename=/mnt/3/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress3: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.5%][r=21.5MiB/s,w=20.9MiB/s][r=5515,w=5361 IOPS][eta 14m:18s] 
fio: terminating on signal 2

   # fio --name=stress4 --filename=/mnt/4/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress4: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.4%][r=10.1MiB/s,w=10.5MiB/s][r=2579,w=2696 IOPS][eta 14m:19s] 
fio: terminating on signal 2

   # fio --name=stress5 --filename=/mnt/5/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress5: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.3%][eta 14m:20s]                                              
fio: terminating on signal 2

   # fio --name=stress6 --filename=/mnt/6/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress6: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.3%][eta 14m:20s]                                              
fio: terminating on signal 2

   # fio --name=stress7 --filename=/mnt/7/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress7: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.4%][r=134MiB/s,w=133MiB/s][r=34.4k,w=34.1k IOPS][eta 14m:19s]  
fio: terminating on signal 2

Comment 15 aihua liang 2023-01-31 02:36:51 UTC
Per comment 11 and comment 12, set the bug's status to "VERIFIED".

Comment 17 errata-xmlrpc 2023-05-16 08:16:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2757