Bug 2123297 - Mirror job with "copy-mode":"write-blocking" used for storage migration can't converge under heavy I/O
Summary: Mirror job with "copy-mode":"write-blocking" used for storage migration can't converge under heavy I/O
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.1
Hardware: x86_64
OS: Unspecified
Importance: medium high
Target Milestone: rc
Assignee: Hanna Czenczek
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks: 2125119
 
Reported: 2022-09-01 10:37 UTC by aihua liang
Modified: 2023-05-09 07:44 UTC
CC List: 9 users

Fixed In Version: qemu-kvm-7.2.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2125119 (view as bug list)
Environment:
Last Closed: 2023-05-09 07:20:04 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
Allow both background and blocking requests simultaneously (3.63 KB, patch)
2022-09-20 09:05 UTC, Hanna Czenczek
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-132974 0 None None None 2022-09-01 10:38:56 UTC
Red Hat Product Errata RHSA-2023:2162 0 None None None 2023-05-09 07:20:28 UTC

Description aihua liang 2022-09-01 10:37:13 UTC
Description of problem:
 Mirror job with "copy-mode":"write-blocking" used for storage migration can't converge under heavy I/O.

Version-Release number of selected component (if applicable):
 kernel version:5.14.0-85.el9.x86_64
 qemu-kvm version:qemu-kvm-7.0.0-12.el9

How reproducible:
 100%


Steps to Reproduce:
Test Env prepare:
 #qemu-img create -f raw test.img 100G
 #losetup /dev/loop0 test.img
 #pvcreate /dev/loop0
 #vgcreate test /dev/loop0
 #lvcreate -L 20G -n system test
 #lvcreate -L 20G -n mirror_system test
 #lvcreate -L 3G -n data1 test
 # .... create data2 ~ data7 with size 3G
 #lvcreate -L 3G -n mirror_data1 test
 # ....create mirror_data2 ~ mirror_data7 with size 3G
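 The "...." steps above amount to a simple loop; for reference, something like the following (a sketch using the same VG, LV names, and sizes as above):
 # for i in $(seq 2 7); do lvcreate -L 3G -n data$i test; lvcreate -L 3G -n mirror_data$i test; done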

Test Steps:
 1. Install the guest on /dev/test/system

 2. Start src qemu cmd:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \

 3. Start dst vm by qemu cmd:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md45  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md44  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \

 4. Start the NBD server and expose all disks on dst
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.73.196.25", "port": "3333" } } } }
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export0", "node-name": "drive_image1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export1", "node-name": "drive_data1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export2", "node-name": "drive_data2", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export3", "node-name": "drive_data3", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export4", "node-name": "drive_data4", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export5", "node-name": "drive_data5", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export6", "node-name": "drive_data6", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export7", "node-name": "drive_data7", "type": "nbd", "writable": true}}
{"return": {}}

  5. Log in to the src guest and run fio with randrw on all data disks.
     (guest)# mkfs.ext4 /dev/sdb && mkdir /mnt/1 && mount /dev/sdb /mnt/1
            # mkfs.ext4 /dev/sdc && mkdir /mnt/2 && mount /dev/sdc /mnt/2
            # mkfs.ext4 /dev/sdd && mkdir /mnt/3 && mount /dev/sdd /mnt/3
            # mkfs.ext4 /dev/sde && mkdir /mnt/4 && mount /dev/sde /mnt/4
            # mkfs.ext4 /dev/sdf && mkdir /mnt/5 && mount /dev/sdf /mnt/5
            # mkfs.ext4 /dev/sdg && mkdir /mnt/6 && mount /dev/sdg /mnt/6
            # mkfs.ext4 /dev/sdh && mkdir /mnt/7 && mount /dev/sdh /mnt/7
            # for i in $(seq 1 7); do fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based & done

 6. After the fio tests start, add target disks and do mirror from src to dst (a loop generating the repeated mirror commands is sketched after this step)
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_image1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data1","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data2","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data2"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data3","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data3"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data4","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data4"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data5","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data5"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data6","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data6"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data7","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data7"}}

{ "execute": "blockdev-mirror", "arguments": { "device": "drive_image1","target": "mirror", "copy-mode":"write-blocking", "sync": "full","job-id":"j1" } }
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "mirror_data1", "copy-mode":"write-blocking", "sync": "full","job-id":"j2"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data2","target": "mirror_data2", "copy-mode":"write-blocking", "sync": "full","job-id":"j3"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data3","target": "mirror_data3", "copy-mode":"write-blocking", "sync": "full","job-id":"j4"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data4","target": "mirror_data4", "copy-mode":"write-blocking", "sync": "full","job-id":"j5"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data5","target": "mirror_data5", "copy-mode":"write-blocking", "sync": "full","job-id":"j6"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data6","target": "mirror_data6", "copy-mode":"write-blocking", "sync": "full","job-id":"j7"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data7","target": "mirror_data7", "copy-mode":"write-blocking", "sync": "full","job-id":"j8"}}


Actual Result:
 After step 6, j1 reaches ready status after 3 minutes, but the other jobs reach ready status after 20 minutes.

Expected results:
  Mirror jobs can converge.

Comment 1 aihua liang 2022-09-01 11:41:04 UTC
When mirroring to local host devices, the mirror jobs can converge in 5 minutes.
Test Env prepare:
 #qemu-img create -f raw test.img 100G
 #losetup /dev/loop0 test.img
 #pvcreate /dev/loop0
 #vgcreate test /dev/loop0
 #lvcreate -L 20G -n system test
 #lvcreate -L 20G -n mirror_system test
 #lvcreate -L 3G -n data1 test
 # .... create data2 ~ data7 with size 3G
 #lvcreate -L 3G -n mirror_data1 test
 # ....create mirror_data2 ~ mirror_data7 with size 3G
 #uname -r
 5.14.0-85.el9.x86_64
 #qemu-img --version
 qemu-img version 7.0.0 (qemu-kvm-7.0.0-12.el9)

Test Steps:
 1. Install the guest on /dev/test/system

 2. Start src qemu cmd:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \

  3. Log in to the src guest and run fio with randrw on all data disks.
     (guest)# mkfs.ext4 /dev/sdb && mkdir /mnt/1 && mount /dev/sdb /mnt/1
            # mkfs.ext4 /dev/sdc && mkdir /mnt/2 && mount /dev/sdc /mnt/2
            # mkfs.ext4 /dev/sdd && mkdir /mnt/3 && mount /dev/sdd /mnt/3
            # mkfs.ext4 /dev/sde && mkdir /mnt/4 && mount /dev/sde /mnt/4
            # mkfs.ext4 /dev/sdf && mkdir /mnt/5 && mount /dev/sdf /mnt/5
            # mkfs.ext4 /dev/sdg && mkdir /mnt/6 && mount /dev/sdg /mnt/6
            # mkfs.ext4 /dev/sdh && mkdir /mnt/7 && mount /dev/sdh /mnt/7
            # for i in $(seq 1 7); do fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based & done

  4. After the fio tests start, add local target disks and do mirror from src to dst
    {"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_system","node-name":"file_mirror","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_data1","node-name":"file_mirror_data1","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror_data1","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror_data1"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_data2","node-name":"file_mirror_data2","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror_data2","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror_data2"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_data3","node-name":"file_mirror_data3","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror_data3","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror_data3"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_data4","node-name":"file_mirror_data4","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror_data4","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror_data4"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_data5","node-name":"file_mirror_data5","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror_data5","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror_data5"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_data6","node-name":"file_mirror_data6","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror_data6","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror_data6"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"driver":"host_device","filename":"/dev/test/mirror_data7","node-name":"file_mirror_data7","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}}
{"return": {}}
{"execute":"blockdev-add","arguments":{"node-name":"mirror_data7","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"file_mirror_data7"}}
{"return": {}}

{ "execute": "blockdev-mirror", "arguments": { "device": "drive_image1","target": "mirror", "copy-mode":"write-blocking", "sync": "full","job-id":"j1" } }
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "mirror_data1", "copy-mode":"write-blocking", "sync": "full","job-id":"j2"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data2","target": "mirror_data2", "copy-mode":"write-blocking", "sync": "full","job-id":"j3"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data3","target": "mirror_data3", "copy-mode":"write-blocking", "sync": "full","job-id":"j4"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data4","target": "mirror_data4", "copy-mode":"write-blocking", "sync": "full","job-id":"j5"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data5","target": "mirror_data5", "copy-mode":"write-blocking", "sync": "full","job-id":"j6"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data6","target": "mirror_data6", "copy-mode":"write-blocking", "sync": "full","job-id":"j7"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data7","target": "mirror_data7", "copy-mode":"write-blocking", "sync": "full","job-id":"j8"}}
 
 Actual Result:
   After step 4, all mirror jobs reach ready status in 5 minutes, and all I/O tests are still running in the guest.

Comment 2 aihua liang 2022-09-02 07:03:59 UTC
qemu-kvm-7.0.0-1.el9 also hit this issue.

Comment 3 aihua liang 2022-09-02 11:05:42 UTC
qemu-kvm-6.0.0-2.el9 also hit this issue.

Comment 4 Vivek Goyal 2022-09-07 18:28:34 UTC
Did it ever work? Trying to figure out if this is a regression.

Comment 5 aihua liang 2022-09-08 02:19:57 UTC
(In reply to Vivek Goyal from comment #4)
> Did it ever work? Trying to figure out if this is a regression.
Hi, Vivek,
 As noted in comment 3, it's not a regression on RHEL 9. I'm also checking it on RHEL 8; if the issue exists there, I'll file a bug against RHEL 8.

BR,
Aliang

Comment 6 Kevin Wolf 2022-09-08 07:59:51 UTC
(In reply to aihua liang from comment #0)
> Actual Result:
>  After step6, j1 reach ready status after 3 minutes, but other jobs reach
> ready status after 20 minutes.

If a job converges after 20 minutes, it still converges. Is there actually a part in this bug where the job doesn't converge?

This difference sounded like something to investigate, particularly because all images seem to be stored on the same filesystem. But is it actually the case that you run fio only on data1-data7, but not on system? If so, it's not surprising to me at all that mirroring the idle system disk is much faster than the data disks that get the I/O stress.

Hanna, do you see anything you would like to investigate here or does it look fully expected to you? (I guess the only potentially interesting thing is the performance comparison between a local copy and NBD; NBD is expected to be somewhat slower, but I'm not sure if a factor 4 is).

Comment 7 aihua liang 2022-09-08 10:16:29 UTC
(In reply to Kevin Wolf from comment #6)
> (In reply to aihua liang from comment #0)
> > Actual Result:
> >  After step6, j1 reach ready status after 3 minutes, but other jobs reach
> > ready status after 20 minutes.
> 
> If a job converges after 20 minutes, it still converges. Is there actually a
> part in this bug where the job doesn't converge?

Sorry, Kevin, maybe I drew a misleading conclusion.
The mirror converges only after all fio jobs are done. In this test, I ran the fio jobs for 20 minutes, so the mirror jobs converged after 20 minutes.

As noted in bz2125119, I tested it on qemu-kvm-4.2.0-49.module+el8.5.0+10804+ce89428a and found that the mirror jobs can converge in about 5 minutes. After the mirror jobs converged, all fio jobs were still running.
> 
> This difference sounded like something to investigate, particularly because
> all images seem to be stored on the same filesystem. But is it actually the
> case that you run fio only on data1-data7, but not on system? If so, it's
> not surprising to me at all that mirroring the idle system disk is much
> faster than the data disks that get the I/O stress.

Yes, it's as expected.
> 
> Hanna, do you see anything you would like to investigate here or does it
> look fully expected to you? (I guess the only potentially interesting thing
> is the performance comparison between a local copy and NBD; NBD is expected
> to be somewhat slower, but I'm not sure if a factor 4 is).

Comment 8 Hanna Czenczek 2022-09-08 14:39:39 UTC
It’s interesting that it does converge in five minutes in 4.2, but off the top of my head I can’t think of anything that might’ve caused a regression in 5.0.

It must be noted that the write-blocking mode does not absolutely guarantee convergence, or convergence in a fixed time.  It only guarantees that source and target will not diverge.  If the bandwidth to the target is so small that basically only the mirrored guest writes make it through, leaving no space for background mirroring of the rest of the disk, then it is possible that convergence is never reached or takes very long.

This can be checked with the query-block-jobs command: The difference of @len - @offset reported should never increase.
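For illustration, one rough way to watch this from the host (a sketch only, assuming socat and jq are available, nothing else is holding the QMP monitor socket, and using the src socket path from the description):

    while true; do
        printf '{"execute":"qmp_capabilities"}\n{"execute":"query-block-jobs"}\n' \
            | socat -t 1 - UNIX-CONNECT:/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A \
            | jq -c '.return[]? | {device, remaining: (.len - .offset)}'
        sleep 5
    done

The per-job "remaining" value (@len - @offset) should never grow while the guest keeps writing.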

(The disks are empty here, besides the FIO operation, which makes it questionable whether there is anything to be copied at all (beyond mirroring the FIO writes); but because they’re LVM, I think block-status doesn’t work and we will copy the whole disks even if they are “empty”.)

As for the difference between NBD and local storage, I don’t see why a factor of 4 when comparing copying over a network vs. copying on local storage would be out of the ordinary.


All that said, there seems to be a difference between 4.2 and 5.0, so that’s something to be investigated.

Comment 9 Kevin Wolf 2022-09-08 16:08:16 UTC
(In reply to Hanna Reitz from comment #8)
> As for the difference between NBD and local storage, I don’t see why a
> factor of 4 when comparing copying over a network vs. copying on local
> storage would be out of the ordinary.

I thought this was to localhost, so it would essentially only be an added bounce buffer, but I probably misunderstood.
 
> All that said, there seems to be a difference between 4.2 and 5.0, so that’s
> something to be investigated.

It's yours then! I'm not assigning an ITR yet, though, so you can decide later whether this can be done for 9.2.

Comment 10 Hanna Czenczek 2022-09-19 11:18:49 UTC
Testing verifies that write-blocking works on upstream master.  This is my testing configuration:

- Source image is a 1 GB raw image on tmpfs
- Destination is a QSD instance with a null-co block device throttled to 10 MB/s
- FIO run in the guest, started 60 s into the mirror job

I continually measure divergence by running query-jobs, subtracting current-progress from total-progress (to get the remaining work to do), and dividing the result by the disk size (1 GB).
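In terms of a single captured query-jobs reply, that measurement is roughly the following (a sketch; "reply.json" is a hypothetical file holding one query-jobs response, and 1073741824 is the 1 GB disk size):

    jq '.return[] | (.["total-progress"] - .["current-progress"]) / 1073741824' reply.json

This prints the remaining fraction of the disk still to be copied per job, e.g. 0.4219 for the 42.19 % divergence mentioned below.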

Without write-blocking, FIO runs obviously unlimited with close to 4 GB/s.  The mirror job, which reached 42.19 % divergence (i.e. ~58 % completion) when FIO was started, diverges rapidly (in three seconds) to 100 %.  It stays at 100 % divergence for the rest of FIO’s runtime.

With write-blocking, FIO runs at the throttled speed (10 MB/s), indicating that indeed its writes are blocked until they are copied to the destination.  The mirror job does not diverge from the 42.19 % divergence it had when FIO was started, but it also doesn’t converge further while FIO is running.

As I had noted in comment 8, this isn’t unexpected.  write-blocking mode does not guarantee convergence, it only guarantees no further divergence.  I assume this is also what’s happening in comment 0.

In 4.2.0, I can see that interestingly when FIO is started, the divergence still continues to dip down for around three seconds, down to 39.85 % (from 42.19 %).  Beyond that, it continues to oscillate slightly between 39.84 % and 39.89 %, until FIO is done.  I don’t think that’s a case for write-blocking mode working better in 4.2.0, though, I think it’s just a case of our reporting and monitoring having become more stable.

So I don’t know what seems to be the difference between 4.2 and later versions, but I can’t reproduce it here.  It might have something to do with NBD performance when copying to the destination (which wouldn’t have much to do with mirror’s write-blocking parameter), but with a controlled target that’s throttled to 10 MB/s, I can’t see a real difference in behavior.

Comment 11 Hanna Czenczek 2022-09-19 11:24:14 UTC
Aliang, can you take a look at the following:

1. How is FIO’s performance in either case, i.e. with and without write-blocking?  Does write-blocking seem to limit its performance to the network speed?

2. When running query-jobs over QMP in regular intervals, how does the difference of the reported values @total-progress - @current-progress develop?  Does it increase over time, does it decrease, does it remain steady?  Also, how does it develop when FIO is started a couple of seconds after the mirror job is started?

Thanks!

Comment 12 Kevin Wolf 2022-09-19 13:44:08 UTC
(In reply to Hanna Reitz from comment #10)
> With write-blocking, FIO runs at the throttled speed (10 MB/s), indicating
> that indeed its writes are blocked until they are copied to the destination.
> The mirror job does not diverge from the 42.19 % divergence it had when FIO
> was started, but it also doesn’t converge further while FIO is running.

Is this because the intercepted and mirrored I/O is already taking up the whole bandwidth and so the job doesn't do new background I/O to make progress?

Hm, indeed, we actively avoid any new background copies while there are any active writes in flight:

        /* Do not start passive operations while there are active
         * writes in progress */
        while (s->in_active_write_counter) {
            mirror_wait_for_any_operation(s, true);
        }

Why is this a good idea? Unfortunately neither the comment nor the commit message (commit d06107ade0c) tells more about the reasoning behind this.

I think we need to aim for both background copies and active writes to make progress. Starvation shouldn't happen on either side.

Comment 13 Hanna Czenczek 2022-09-20 09:05:00 UTC
Created attachment 1913094 [details]
Allow both background and blocking requests simultaneously

I think it was because it made it simpler to think about the interactions between blocking (active) writes and background copying, because there would be basically none.  But active writes do properly put themselves into the in_flight bitmap, so they should interact with background copying just fine, actually.

I think we basically don’t need to care about active writes (not even for the purpose of when the job is READY or not) and just assert that there are none once the job drains the BDS before completing.  I’ve attached a patch that just drops the waiting code and adds an assertion, and with it, the job does reach convergence in my test case.  (And also passes all iotests fwiw, though, well, 151 is the only one to actually use write-blocking.)

Comment 14 aihua liang 2022-09-20 09:29:22 UTC
(In reply to Hanna Reitz from comment #11)
> Aliang, can you take a look at the following:
> 
> 1. How is FIO’s performance in either case, i.e. with and without
> write-blocking?  Does write-blocking seem to limit its performance to the
> network speed?
> 
> 2. When running query-jobs over QMP in regular intervals, how does the
> difference of the reported values @total-progress - @current-progress
> develop?  Does it increase over time, does it decrease, does it remain
> steady?  Also, how does it develop when FIO is started a couple of seconds
> after the mirror job is started?
> 
> Thanks!

OK, Hanna. My test env is not available now (running some automation tests on it). I'll give a reply once the resource is available.


BR,
Aliang

Comment 15 Kevin Wolf 2022-09-20 14:29:00 UTC
(In reply to Hanna Reitz from comment #13)
> But active writes do properly put themselves into the in_flight bitmap,
> so they should interact with background copying just fine, actually.

I'm not sure about some details in mirror_iteration(), because mirror_wait_on_conflicts() is not called on the real range to be copied, and there is a pause point between it and updating the in_flight bitmap. Also, in your patch, I think the total progress might go backwards when active writes complete, which is not allowed. But these are really implementation details to be discussed upstream.

In general it seems to me that this approach is more than viable, and it's a very desirable improvement, so let's plan to have this for 9.2.

Comment 16 aihua liang 2022-09-21 15:46:50 UTC
(In reply to Hanna Reitz from comment #11)
I did not keep the original test env, so I ran a new test with the env below.
Test Env:
  kernel:5.14.0-162.3.1.el9.x86_64
  qemu-kvm:qemu-kvm-7.0.0-13.el9

> Aliang, can you take a look at the following:
> 
> 1. How is FIO’s performance in either case, i.e. with and without
> write-blocking?  Does write-blocking seem to limit its performance to the
> network speed?
In my test, I still start the fio jobs first, then do the block mirror with/without write-blocking.
 FIO cmd:
  #fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
 FIO's performance when mirroring with write-blocking:
   Run status group 0 (all jobs):
   READ: bw=13.2MiB/s (13.8MB/s), 433KiB/s-464KiB/s (443kB/s-475kB/s), io=15.4GiB (16.6GB), run=1200039-1200244msec
   WRITE: bw=13.2MiB/s (13.8MB/s), 434KiB/s-464KiB/s (444kB/s-476kB/s), io=15.4GiB (16.6GB), run=1200039-1200244msec
   Disk stats (read/write):
    sde: ios=4043888/4045889, merge=0/190, ticks=54909772/59389055, in_queue=115197216, util=91.07%

 FIO's performance when mirroring without write-blocking:
   Run status group 0 (all jobs):
   READ: bw=9583KiB/s (9813kB/s), 311KiB/s-330KiB/s (318kB/s-338kB/s), io=11.0GiB (11.8GB), run=1200018-1200498msec
   WRITE: bw=9579KiB/s (9809kB/s), 309KiB/s-329KiB/s (316kB/s-337kB/s), io=11.0GiB (11.8GB), run=1200018-1200498msec
   Disk stats (read/write):
    sde: ios=2876199/2874974, merge=0/29, ticks=222281403/223375305, in_queue=446796843, util=86.90%
 
The fio test results for two of the disks are listed in the attachment; you can check it for details.
> 
> 2. When running query-jobs over QMP in regular intervals, how does the
> difference of the reported values @total-progress - @current-progress
> develop?  Does it increase over time, does it decrease, does it remain
> steady?  Also, how does it develop when FIO is started a couple of seconds
> after the mirror job is started?
Still with the same scenario: start the fio jobs first, then do the block mirror.
Mirror with write-blocking:
 @total-progress - @current-progress decreases.
 (@total-progress - @current-progress) / @total-progress decreases.
Mirror without write-blocking:
 @total-progress - @current-progress varies: decrease -> increase -> decrease.
 (@total-progress - @current-progress) / @total-progress decreases.

I collected some block-job status via QMP/HMP and added it to the attachment.
> 
> Thanks!

Comment 18 Hanna Czenczek 2023-01-05 14:23:05 UTC
Fixed upstream in d69a879bdf1aed586478eaa161ee064fe1b92f1a, which is included in qemu 7.2.0, and thus fixed by rebase.
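To check whether a given qemu tree already contains that commit (a sketch, assuming a local qemu git checkout):

    git merge-base --is-ancestor d69a879bdf1aed586478eaa161ee064fe1b92f1a HEAD && echo "fix present"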

Comment 19 aihua liang 2023-01-11 07:24:31 UTC
Verified on qemu-kvm-7.2.0-2.el9: all mirror jobs reach ready status within 5 minutes, with all fio jobs still running.


Test Env prepare:
 #qemu-img create -f raw test.img 100G
 #losetup /dev/loop0 test.img
 #pvcreate /dev/loop0
 #vgcreate test /dev/loop0
 #lvcreate -L 20G -n system test
 #lvcreate -L 20G -n mirror_system test
 #lvcreate -L 3G -n data1 test
 # .... create data2 ~ data7 with size 3G
 #lvcreate -L 3G -n mirror_data1 test
 # ....create mirror_data2 ~ mirror_data7 with size 3G

Test Steps:
 1. Install the guest on /dev/test/system

 2. Start src qemu cmd:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md4A  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:86:72:db:85:f5,id=idi0GOUm,netdev=iddsg6bX,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=iddsg6bX,vhost=on \
    -vnc :0  \
    -monitor stdio \

 3. Start dst vm by qemu cmd:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20220601-053115-5q30Md45  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20220601-053115-5q30Md44  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idtsFTaW \
    -chardev socket,wait=off,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20220601-053115-5q30Md44 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220601-053115-5q30Md44,path=/tmp/seabios-20220601-053115-5q30Md44,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220601-053115-5q30Md44,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_system,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -blockdev node-name=file_data1,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data1,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data1 \
    -device scsi-hd,id=data1,drive=drive_data1,write-cache=on \
    -blockdev node-name=file_data2,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data2,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data2 \
    -device scsi-hd,id=data2,drive=drive_data2,write-cache=on \
    -blockdev node-name=file_data3,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data3,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data3,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data3 \
    -device scsi-hd,id=data3,drive=drive_data3,write-cache=on \
    -blockdev node-name=file_data4,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data4,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data4,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data4 \
    -device scsi-hd,id=data4,drive=drive_data4,write-cache=on \
    -blockdev node-name=file_data5,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data5,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data5,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data5 \
    -device scsi-hd,id=data5,drive=drive_data5,write-cache=on \
    -blockdev node-name=file_data6,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data6,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data6,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data6 \
    -device scsi-hd,id=data6,drive=drive_data6,write-cache=on \
    -blockdev node-name=file_data7,driver=host_device,auto-read-only=on,discard=unmap,aio=threads,filename=/dev/test/mirror_data7,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_data7,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_data7 \
    -device scsi-hd,id=data7,drive=drive_data7,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:86:72:db:85:f5,id=idi0GOUm,netdev=iddsg6bX,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=iddsg6bX,vhost=on \
    -vnc :1  \
    -monitor stdio \
    -incoming defer \

 4. Start the NBD server and expose all disks on dst
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.73.196.25", "port": "3333" } } } }
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export0", "node-name": "drive_image1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export1", "node-name": "drive_data1", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export2", "node-name": "drive_data2", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export3", "node-name": "drive_data3", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export4", "node-name": "drive_data4", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export5", "node-name": "drive_data5", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export6", "node-name": "drive_data6", "type": "nbd", "writable": true}}
{"return": {}}
{"execute":"block-export-add","arguments":{"id": "export7", "node-name": "drive_data7", "type": "nbd", "writable": true}}
{"return": {}}

  5. Log in to the src guest and run fio with randrw on all data disks.
     (guest)# mkfs.ext4 /dev/sdb && mkdir /mnt/1 && mount /dev/sdb /mnt/1
            # mkfs.ext4 /dev/sdc && mkdir /mnt/2 && mount /dev/sdc /mnt/2
            # mkfs.ext4 /dev/sdd && mkdir /mnt/3 && mount /dev/sdd /mnt/3
            # mkfs.ext4 /dev/sde && mkdir /mnt/4 && mount /dev/sde /mnt/4
            # mkfs.ext4 /dev/sdf && mkdir /mnt/5 && mount /dev/sdf /mnt/5
            # mkfs.ext4 /dev/sdg && mkdir /mnt/6 && mount /dev/sdg /mnt/6
            # mkfs.ext4 /dev/sdh && mkdir /mnt/7 && mount /dev/sdh /mnt/7
            # for i in $(seq 1 7); do fio --name=stress$i --filename=/mnt/$i/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based & done

 6. After the fio tests start, add target disks and do mirror from src to dst
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_image1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data1","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data1"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data2","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data2"}}
    {"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data3","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data3"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data4","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data4"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data5","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data5"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data6","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data6"}}
{"execute":"blockdev-add","arguments":{"driver":"nbd","node-name":"mirror_data7","server":{"type":"inet","host":"10.73.196.25","port":"3333"},"export":"drive_data7"}}

{ "execute": "blockdev-mirror", "arguments": { "device": "drive_image1","target": "mirror", "copy-mode":"write-blocking", "sync": "full","job-id":"j1" } }
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data1","target": "mirror_data1", "copy-mode":"write-blocking", "sync": "full","job-id":"j2"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data2","target": "mirror_data2", "copy-mode":"write-blocking", "sync": "full","job-id":"j3"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data3","target": "mirror_data3", "copy-mode":"write-blocking", "sync": "full","job-id":"j4"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data4","target": "mirror_data4", "copy-mode":"write-blocking", "sync": "full","job-id":"j5"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data5","target": "mirror_data5", "copy-mode":"write-blocking", "sync": "full","job-id":"j6"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data6","target": "mirror_data6", "copy-mode":"write-blocking", "sync": "full","job-id":"j7"}}
{ "execute": "blockdev-mirror", "arguments": { "device": "drive_data7","target": "mirror_data7", "copy-mode":"write-blocking", "sync": "full","job-id":"j8"}}

After step 6:
1. All mirror jobs reach "ready" status within 5 minutes (a quick convergence check is sketched after the query-block-jobs output below).
 {"execute":"query-block-jobs"}
{"return": [{"auto-finalize": true, "io-status": "ok", "device": "j8", "auto-dismiss": true, "busy": false, "len": 3623645184, "offset": 3623120896, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j7", "auto-dismiss": true, "busy": false, "len": 3655479296, "offset": 3654955008, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j6", "auto-dismiss": true, "busy": false, "len": 3647967232, "offset": 3647938560, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j5", "auto-dismiss": true, "busy": false, "len": 3661467648, "offset": 3660943360, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j4", "auto-dismiss": true, "busy": false, "len": 3659554816, "offset": 3659030528, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j3", "auto-dismiss": true, "busy": false, "len": 3661905920, "offset": 3661389824, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j2", "auto-dismiss": true, "busy": false, "len": 3677872128, "offset": 3677433856, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "j1", "auto-dismiss": true, "busy": false, "len": 21474976256, "offset": 21474976256, "status": "ready", "paused": false, "speed": 0, "ready": true, "type": "mirror"}]}

2. All fio jobs are still running:
  # fio --name=stress1 --filename=/mnt/1/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 30 processes
^Cbs: 30 (f=30): [m(30)][32.7%][r=7347KiB/s,w=6910KiB/s][r=1836,w=1727 IOPS][eta 13m:28s]

 # fio --name=stress2 --filename=/mnt/2/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress2: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 30 processes
stress2: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][33.0%][r=9408KiB/s,w=9332KiB/s][r=2352,w=2333 IOPS][eta 13m:24s]

 # fio --name=stress3 --filename=/mnt/3/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress3: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 30 processes
stress3: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][32.1%][r=3131KiB/s,w=3519KiB/s][r=782,w=879 IOPS][eta 13m:35s]

 # fio --name=stress4 --filename=/mnt/4/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress4: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 30 processes
stress4: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][32.4%][r=4252KiB/s,w=4364KiB/s][r=1063,w=1091 IOPS][eta 13m:31s]

 # fio --name=stress5 --filename=/mnt/5/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress5: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 30 processes
stress5: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][32.8%][r=4700KiB/s,w=4172KiB/s][r=1175,w=1043 IOPS][eta 13m:26s]

 # fio --name=stress6 --filename=/mnt/6/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress6: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 30 processes
stress6: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][33.0%][r=6294KiB/s,w=6246KiB/s][r=1573,w=1561 IOPS][eta 13m:25s]

 # fio --name=stress7 --filename=/mnt/7/atest --ioengine=libaio --rw=randrw --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=30 --runtime=1200 --time_based
stress7: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 30 processes
stress7: Laying out IO file (1 file / 2048MiB)
^Cbs: 30 (f=30): [m(30)][35.7%][r=9636KiB/s,w=9464KiB/s][r=2409,w=2366 IOPS][eta 12m:52s]

Comment 20 Yanan Fu 2023-01-12 02:31:30 UTC
Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 25 aihua liang 2023-01-17 01:31:37 UTC
As per comment 19 and comment 20, set the bug's status to "VERIFIED".

Comment 27 errata-xmlrpc 2023-05-09 07:20:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: qemu-kvm security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2162

