Bug 2125119 - Mirror job with "copy-mode":"write-blocking" used for storage migration can't converge under heavy I/O
Summary: Mirror job with "copy-mode":"write-blocking" used for storage migration can't converge under heavy I/O
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: qemu-kvm
Version: 8.7
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Hanna Czenczek
QA Contact: aihua liang
URL:
Whiteboard:
Depends On: 2123297
Blocks:
 
Reported: 2022-09-08 04:32 UTC by aihua liang
Modified: 2023-05-16 08:58 UTC
CC List: 10 users

Fixed In Version: qemu-kvm-6.2.0-29.module+el8.8.0+17991+08d03241
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2123297
Environment:
Last Closed: 2023-05-16 08:16:30 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/rhel/src/qemu-kvm qemu-kvm merge_requests 246 0 None None None 2023-01-12 13:39:09 UTC
Red Hat Issue Tracker RHELPLAN-133492 0 None None None 2022-09-08 04:33:51 UTC
Red Hat Product Errata RHSA-2023:2757 0 None None None 2023-05-16 08:18:13 UTC

Description aihua liang 2022-09-08 04:32:46 UTC
+++ This bug was initially created as a clone of Bug #2123297 +++

Description of problem:
 Mirror job with "copy-mode":"write-blocking" that used for storage migration can't converge under heavy I/O.

Version-Release number of selected component (if applicable):
 kernel version:4.18.0-419.el8.x86_64
 qemu-kvm version:qemu-kvm-6.2.0-20.module+el8.7.0+16496+35f7e655

How reproducible:
 100%


Steps to Reproduce:
Test Env prepare:
 #qemu-img create -f raw test.img 100G
 #losetup /dev/loop0 test.img
 #pvcreate /dev/loop0
 #vgcreate test /dev/loop0
 #lvcreate -L 20G -n system test
 #lvcreate -L 20G -n mirror_system test
 #lvcreate -L 3G -n data1 test
 # .... create data2 ~ data7 with size 3G
 #lvcreate -L 3G -n mirror_data1 test
 # ....create mirror_data2 ~ mirror_data7 with size 3G
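 Note: the elided lvcreate commands for data2..data7 and mirror_data2..mirror_data7 can be scripted as a loop (a minimal sketch, assuming the same "test" VG and 3G size as the explicit commands above):
 # for i in $(seq 2 7); do
 >   lvcreate -L 3G -n data$i test
 >   lvcreate -L 3G -n mirror_data$i test
 > done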

Test Steps:
 1. Install guest on /dev/test/system

 2. Start src qemu cmd:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \

 3. Start dst vm by qemu cmd:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md45  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md44  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \

 4. Start NBD server and expose all disks on dst
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.73.196.25", "port": "3333" } } } }
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export0", "node-name": "drive_image1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export1", "node-name": "drive_data1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export2", "node-name": "drive_data2", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export3", "node-name": "drive_data3", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export4", "node-name": "drive_data4", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export5", "node-name": "drive_data5", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export6", "node-name": "drive_data6", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export7", "node-name": "drive_data7", "type": "nbd", "writable": true}}
{"return": {}}

  5. Log in to the src guest and run fio with randrw on all data disks.
     (guest)# mkfs.ext4 /dev/sdb && mkdir /mnt/1 && mount /dev/sdb /mnt/1
            # mkfs.ext4 /dev/sdc && mkdir /mnt/2 && mount /dev/sdc /mnt/2
            # mkfs.ext4 /dev/sdd && mkdir /mnt/3 && mount /dev/sdd /mnt/3
            # mkfs.ext4 /dev/sde && mkdir /mnt/4 && mount /dev/sde /mnt/4
            # mkfs.ext4 /dev/sdf && mkdir /mnt/5 && mount /dev/sdf /mnt/5
            # mkfs.ext4 /dev/sdg && mkdir /mnt/6 && mount /dev/sdg /mnt/6
            # mkfs.ext4 /dev/sdh && mkdir /mnt/7 && mount /dev/sdh /mnt/7
            # for i in $(seq 1 7); do
            >   fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based &
            > done

 6. After fio test start, add target disks and do mirror from src to dst
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_image1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data1","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data2","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data2"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data3","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data3"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data4","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data4"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data5","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data5"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data6","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data6"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data7","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data7"}}

{ "execute": "blockdev-mirror", "arguments": { "device": "drive_image1","target": "mirror", "copy-mode":"write-blocking", "sync": "full","job-id":"j1" } }
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "mirror_data1", "copy-mode":"write-blocking", "sync": "full","job-id":"j2"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data2","target": "mirror_data2", "copy-mode":"write-blocking", "sync": "full","job-id":"j3"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data3","target": "mirror_data3", "copy-mode":"write-blocking", "sync": "full","job-id":"j4"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data4","target": "mirror_data4", "copy-mode":"write-blocking", "sync": "full","job-id":"j5"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data5","target": "mirror_data5", "copy-mode":"write-blocking", "sync": "full","job-id":"j6"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data6","target": "mirror_data6", "copy-mode":"write-blocking", "sync": "full","job-id":"j7"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data7","target": "mirror_data7", "copy-mode":"write-blocking", "sync": "full","job-id":"j8"}}


Actual Result:
 After step 6, j1 reaches ready status after 3 minutes, but the other jobs do not reach ready status until after 20 minutes.

Expected results:
  Mirror jobs can converge.

Will check if it's a regression and give info later.

Comment 1 aihua liang 2022-09-08 09:35:19 UTC
Hit this issue on qemu versions greater than qemu-4.2, like:
Qemu-kvm-6.0.0-29.module+el8.6.0+12490+ec3e565c
Qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a
Qemu-kvm-5.0.0-0.module+el8.3.0+6612+6b86f0c9


Did not hit this issue on qemu-4.2, where the mirror converges in 5 minutes:
Qemu-kvm-4.2.0-49.module+el8.5.0+10804+ce89428a
Qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9

So, it's a regression issue.

Comment 2 Vivek Goyal 2022-09-08 18:02:05 UTC
(In reply to aihua liang from comment #1)
> Hit this issue on qemu version greate than qemu4.2, like
> Qemu-kvm-6.0.0-29.module+el8.6.0+12490+ec3e565c
> Qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a
> Qemu-kvm-5.0.0-0.module+el8.3.0+6612+6b86f0c9
> 
> 
> Not this issue on qemu4.2, and mirror converage in 5 minutes.
> Qemu-kvm-4.2.0-49.module+el8.5.0+10804+ce89428a
> Qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9
> 
> So, it's a regression issue.

Based on the discussion in the other bug, I don't think any particular time frame for the job to converge was ever promised. So while it might be a change of behavior since qemu-4.2, I will stop short of calling it a regression, because the qemu-4.2 behavior was never a guarantee to begin with.

I guess Kevin and Hanna will have the final say on this. For now, I am dropping the priority to medium. But if they think this should be "High" priority instead, they can raise it.

Comment 3 Vivek Goyal 2022-09-08 18:03:11 UTC
I dropped "Regression" tag as well for now. Kevin and Haana, please correct me if that's not the case.

Comment 4 Vivek Goyal 2022-09-09 15:26:31 UTC
Hi Hanna,

As you are handling a similar bug for RHEL 9, I am assigning this bug to you as well for now. Hopefully the resolution for both issues is the same. Please let me know if somebody else should be handling it instead.

Comment 5 Hanna Czenczek 2022-09-12 07:16:13 UTC
Yes, I’ll take this one, too.

Comment 11 Yanan Fu 2023-01-28 02:31:28 UTC
QE bot (pre-verify): Set 'Verified:Tested,SanityOnly' as the gating/tier1 tests pass.

Comment 12 aihua liang 2023-01-28 07:50:18 UTC
Tested on qemu-kvm-6.2.0-29.module+el8.8.0+17991+08d03241: all mirror jobs reach ready status within 5 minutes, with all fio jobs still running.

Test Env prepare:
 #qemu-img create -f raw test.img 100G
 #losetup /dev/loop0 test.img
 #pvcreate /dev/loop0
 #vgcreate test /dev/loop0
 #lvcreate -L 20G -n system test
 #lvcreate -L 20G -n mirror_system test
 #lvcreate -L 3G -n data1 test
 # .... create data2 ~ data7 with size 3G
 #lvcreate -L 3G -n mirror_data1 test
 # ....create mirror_data2 ~ mirror_data7 with size 3G

Test Steps:
 1. install guest on /dev/test/system
 2. Start guest in src
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:a8:df:fa:7f:a8,id=idaEy3Yo,netdev=idodXikU,bus=pcie-root-port-3,addr=0x0  \
-netdev tap,id=idodXikU,vhost=on \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=d,strict=off  \
    -no-shutdown \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \

 3. Start guest in dst
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md45  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md44  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:a8:df:fa:7f:a8,id=idaEy3Yo,netdev=idodXikU,bus=pcie-root-port-3,addr=0x0  \
-netdev tap,id=idodXikU,vhost=on \
    -vnc :1  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=d,strict=off  \
    -no-shutdown \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -incoming defer \

  4. Start NBD server and expose all disks on dst
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.73.196.25", "port": "3333" } } } }
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export0", "node-name": "drive_image1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export1", "node-name": "drive_data1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export2", "node-name": "drive_data2", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export3", "node-name": "drive_data3", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export4", "node-name": "drive_data4", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export5", "node-name": "drive_data5", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export6", "node-name": "drive_data6", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export7", "node-name": "drive_data7", "type": "nbd", "writable": true}}
{"return": {}}

  5. Log in to the src guest and run fio with randrw on all data disks.
     (guest)# mkfs.ext4 /dev/sdb && mkdir /mnt/1 && mount /dev/sdb /mnt/1
            # mkfs.ext4 /dev/sdc && mkdir /mnt/2 && mount /dev/sdc /mnt/2
            # mkfs.ext4 /dev/sdd && mkdir /mnt/3 && mount /dev/sdd /mnt/3
            # mkfs.ext4 /dev/sde && mkdir /mnt/4 && mount /dev/sde /mnt/4
            # mkfs.ext4 /dev/sdf && mkdir /mnt/5 && mount /dev/sdf /mnt/5
            # mkfs.ext4 /dev/sdg && mkdir /mnt/6 && mount /dev/sdg /mnt/6
            # mkfs.ext4 /dev/sdh && mkdir /mnt/7 && mount /dev/sdh /mnt/7
            # for i in $(seq 1 7); do
            >   fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based &
            > done

 6. After fio test start, add target disks and do mirror from src to dst
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_image1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data1","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data2","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data2"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data3","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data3"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data4","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data4"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data5","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data5"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data6","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data6"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data7","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data7"}}

{ "execute": "blockdev-mirror", "arguments": { "device": "drive_image1","target": "mirror", "copy-mode":"write-blocking", "sync": "full","job-id":"j1" } }
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "mirror_data1", "copy-mode":"write-blocking", "sync": "full","job-id":"j2"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data2","target": "mirror_data2", "copy-mode":"write-blocking", "sync": "full","job-id":"j3"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data3","target": "mirror_data3", "copy-mode":"write-blocking", "sync": "full","job-id":"j4"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data4","target": "mirror_data4", "copy-mode":"write-blocking", "sync": "full","job-id":"j5"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data5","target": "mirror_data5", "copy-mode":"write-blocking", "sync": "full","job-id":"j6"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data6","target": "mirror_data6", "copy-mode":"write-blocking", "sync": "full","job-id":"j7"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data7","target": "mirror_data7", "copy-mode":"write-blocking", "sync": "full","job-id":"j8"}}

Test Result:
  After step6, 
   1. all mirror jobs reach ready status within 5 minutes.
       {"execute":"query-block-jobs"}
{"return": [{"auto-finalize": true, "io-status": "ok", "device": "j8", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j7", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j6", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j5", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j4", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j3", "auto-dismiss": true, "busy": false, "len": 3221225472, "offset": 3221225472, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j2", "auto-dismiss": true, "busy": false, "len": 3811123200, "offset": 3810791424, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j1", "auto-dismiss": true, "busy": true, "len": 21474837504, "offset": 21474837504, "status": "ready", "paused": false, "speed": 0, "ready": false, "type": "mirror"}]}
   2. fio jobs in the guest are still running.
    # fio --name=stress1 --filename=/mnt/1/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
stress1: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][28.3%][eta 14m:20s]                                              
fio: terminating on signal 2

   # fio --name=stress2 --filename=/mnt/2/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress2: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.6%][r=9169KiB/s,w=9365KiB/s][r=2292,w=2341 IOPS][eta 14m:17s] 
fio: terminating on signal 2

   # fio --name=stress3 --filename=/mnt/3/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress3: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.5%][r=21.5MiB/s,w=20.9MiB/s][r=5515,w=5361 IOPS][eta 14m:18s] 
fio: terminating on signal 2

   # fio --name=stress4 --filename=/mnt/4/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress4: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.4%][r=10.1MiB/s,w=10.5MiB/s][r=2579,w=2696 IOPS][eta 14m:19s] 
fio: terminating on signal 2

   # fio --name=stress5 --filename=/mnt/5/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress5: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.3%][eta 14m:20s]                                              
fio: terminating on signal 2

   # fio --name=stress6 --filename=/mnt/6/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress6: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.3%][eta 14m:20s]                                              
fio: terminating on signal 2

   # fio --name=stress7 --filename=/mnt/7/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress7: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.19
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][28.4%][r=134MiB/s,w=133MiB/s][r=34.4k,w=34.1k IOPS][eta 14m:19s]  
fio: terminating on signal 2

Comment 15 aihua liang 2023-01-31 02:36:51 UTC
As per comment 11 and comment 12, setting the bug's status to "VERIFIED".

Comment 17 errata-xmlrpc 2023-05-16 08:16:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2757

