Bug 2186725 - QEMU hangs during block-commit while fio is running (iothread enabled)
Summary: QEMU hangs during block-commit while fio is running (iothread enabled)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Kevin Wolf
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-14 09:07 UTC by aihua liang
Modified: 2023-11-07 09:25 UTC
CC List: 9 users

Fixed In Version: qemu-kvm-8.0.0-5.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-07 08:27:12 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments: none


Links
- Gitlab redhat/centos-stream/src/qemu-kvm merge_requests/166: opened, "block/graph-lock: Disable locking for now" (last updated 2023-05-23 14:01:26 UTC)
- Red Hat Issue Tracker RHELPLAN-154704 (last updated 2023-04-14 09:09:38 UTC)
- Red Hat Product Errata RHSA-2023:6368 (last updated 2023-11-07 08:28:45 UTC)

Description aihua liang 2023-04-14 09:07:05 UTC
Description of problem:
QEMU hangs during block-commit while fio is running.

Version-Release number of selected component (if applicable):
kernel version:5.14.0-290.kpq1.el9.x86_64
qemu-kvm version:qemu-kvm-8.0.0-0.rc1.el9.candidate

How reproducible:
100%

Steps to Reproduce:
1. Start the guest with the following qemu command line:
   /usr/libexec/qemu-kvm \
     -S  \
     -name 'avocado-vt-vm1'  \
     -sandbox on  \
     -blockdev '{"node-name": "file_ovmf_code", "driver": "file", "filename": "/usr/share/OVMF/OVMF_CODE.secboot.fd", "auto-read-only": true, "discard": "unmap"}' \
     -blockdev '{"node-name": "drive_ovmf_code", "driver": "raw", "read-only": true, "file": "file_ovmf_code"}' \
     -blockdev '{"node-name": "file_ovmf_vars", "driver": "file", "filename": "/root/avocado/data/avocado-vt/avocado-vt-vm1_rhel930-64-virtio_qcow2_filesystem_VARS.fd", "auto-read-only": true, "discard": "unmap"}' \
     -blockdev '{"node-name": "drive_ovmf_vars", "driver": "raw", "read-only": false, "file": "file_ovmf_vars"}' \
     -machine q35,memory-backend=mem-machine_mem,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars \
     -device '{"id": "pcie-root-port-0", "driver": "pcie-root-port", "multifunction": true, "bus": "pcie.0", "addr": "0x1", "chassis": 1}' \
     -device '{"id": "pcie-pci-bridge-0", "driver": "pcie-pci-bridge", "addr": "0x0", "bus": "pcie-root-port-0"}'  \
     -nodefaults \
     -device '{"driver": "VGA", "bus": "pcie.0", "addr": "0x2"}' \
     -m 30720 \
     -object '{"size": 32212254720, "id": "mem-machine_mem", "qom-type": "memory-backend-ram"}'  \
     -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
     -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
     -chardev socket,server=on,path=/var/tmp/monitor-qmpmonitor1-20230413-050654-SEkZfBqX,wait=off,id=qmp_id_qmpmonitor1  \
     -mon chardev=qmp_id_qmpmonitor1,mode=control \
     -chardev socket,server=on,path=/var/tmp/monitor-catch_monitor-20230413-050654-SEkZfBqX,wait=off,id=qmp_id_catch_monitor  \
     -mon chardev=qmp_id_catch_monitor,mode=control \
     -device '{"ioport": 1285, "driver": "pvpanic", "id": "idnRLXj0"}' \
     -chardev socket,server=on,path=/var/tmp/serial-serial0-20230413-050654-SEkZfBqX,wait=off,id=chardev_serial0 \
     -device '{"id": "serial0", "driver": "isa-serial", "chardev": "chardev_serial0"}'  \
     -chardev socket,id=seabioslog_id_20230413-050654-SEkZfBqX,path=/var/tmp/seabios-20230413-050654-SEkZfBqX,server=on,wait=off \
     -device isa-debugcon,chardev=seabioslog_id_20230413-050654-SEkZfBqX,iobase=0x402 \
     -device '{"id": "pcie-root-port-1", "port": 1, "driver": "pcie-root-port", "addr": "0x1.0x1", "bus": "pcie.0", "chassis": 2}' \
     -device '{"driver": "qemu-xhci", "id": "usb1", "bus": "pcie-root-port-1", "addr": "0x0"}' \
     -device '{"driver": "usb-tablet", "id": "usb-tablet1", "bus": "usb1.0", "port": "1"}' \
     -blockdev '{"node-name": "file_image1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/images/rhel930-64-virtio.qcow2", "cache": {"direct": true, "no-flush": false}}' \
     -object '{"qom-type": "iothread", "id": "iothread0"}' \
     -blockdev '{"node-name": "drive_image1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_image1"}' \
     -device '{"id": "pcie-root-port-2", "port": 2, "driver": "pcie-root-port", "addr": "0x1.0x2", "bus": "pcie.0", "chassis": 3}' \
     -device '{"driver": "virtio-blk-pci", "id": "image1", "drive": "drive_image1", "bootindex": 0, "write-cache": "on", "bus": "pcie-root-port-2", "addr": "0x0", "iothread": "iothread0"}' \
     -blockdev '{"node-name": "file_data", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/root/avocado/data/avocado-vt/data.qcow2", "cache": {"direct": true, "no-flush": false}}' \
     -object '{"qom-type": "iothread", "id": "iothread1"}' \
     -blockdev '{"node-name": "drive_data", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_data"}' \
     -device '{"id": "pcie-root-port-3", "port": 3, "driver": "pcie-root-port", "addr": "0x1.0x3", "bus": "pcie.0", "chassis": 4}' \
     -device '{"driver": "virtio-blk-pci", "id": "data", "drive": "drive_data", "bootindex": 1, "write-cache": "on", "bus": "pcie-root-port-3", "addr": "0x0", "iothread": "iothread1"}' \
     -device '{"id": "pcie-root-port-4", "port": 4, "driver": "pcie-root-port", "addr": "0x1.0x4", "bus": "pcie.0", "chassis": 5}' \
     -device '{"driver": "virtio-net-pci", "mac": "9a:51:97:3a:13:46", "id": "idaB6X7z", "netdev": "idj45xY4", "bus": "pcie-root-port-4", "addr": "0x0"}'  \
     -netdev tap,id=idj45xY4,vhost=on  \
     -vnc :0  \
     -rtc base=utc,clock=host,driftfix=slew  \
     -boot menu=off,order=cdn,once=c,strict=off \
     -enable-kvm \
     -device '{"id": "pcie_extra_root_port_0", "driver": "pcie-root-port", "multifunction": true, "bus": "pcie.0", "addr": "0x3", "chassis": 6}' \
     -monitor stdio \

2. Resume the VM:
  {"execute": "cont", "id": "3px5OUXF"}

3. Create the target node:
  {"execute": "blockdev-create", "arguments": {"options": {"driver": "file", "filename": "/root/avocado/data/avocado-vt/sn1.qcow2", "size": 21474836480}, "job-id": "file_sn1"}, "id": "M4oQAULN"}
  {"execute": "job-dismiss", "arguments": {"id": "file_sn1"}, "id": "dQHWNEFI"}
  {"execute": "blockdev-add", "arguments": {"node-name": "file_sn1", "driver": "file", "filename": "/root/avocado/data/avocado-vt/sn1.qcow2", "aio": "threads", "auto-read-only": true, "discard": "unmap"}, "id": "RH7PJoku"}
  {"execute": "blockdev-create", "arguments": {"options": {"driver": "qcow2", "file": "file_sn1", "size": 21474836480}, "job-id": "drive_sn1"}, "id": "J5k2UVQw"}
  {"execute": "job-dismiss", "arguments": {"id": "drive_sn1"}, "id": "aPMZvq9I"}
  {"execute": "blockdev-add", "arguments": {"node-name": "drive_sn1", "driver": "qcow2", "file": "file_sn1", "read-only": false}, "id": "lOyuTJrc"}

4. Take a snapshot:
  {"execute": "blockdev-snapshot", "arguments": {"overlay": "drive_sn1", "node": "drive_image1"}, "id": "kLH8S8Cu"}

5. Create a new file in the guest:
  (guest)#dd if=/dev/urandom of=/var/tmp/sn1 bs=1M count=10 oflag=direct
         #md5sum /var/tmp/sn1 > /var/tmp/sn1.md5 && sync

6. Run a fio test in the guest:
  (guest)#/usr/bin/fio --name=stress --filename=/home/atest --ioengine=libaio --rw=write --direct=1 --bs=4K --size=2G --iodepth=256 --numjobs=256 --runtime=300 --time_based
stress: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
...
fio-3.27
Starting 256 processes
stress: Laying out IO file (1 file / 2048MiB)
Jobs: 256 (f=256): [W(256)][6.6%][eta 02h:40m:04s]

7. During the fio test, commit from the snapshot node into the base node:
  {"execute": "block-commit", "arguments": {"device": "drive_sn1", "job-id": "drive_sn1_Biue"}, "id": "VM6Y4EBI"}


Actual results:
After step 7, only one event is emitted on the QMP monitor, then QEMU hangs.
 {"timestamp": {"seconds": 1681462571, "microseconds": 97895}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_sn1_Biue"}}
 
 {"execute":"query-block-jobs"} --> no response from qmp monitor


Expected results:
The commit completes successfully and a BLOCK_JOB_COMPLETED event is received from the QMP monitor.

Additional info:
Checked the guest: it also hangs.

Comment 1 aihua liang 2023-04-14 10:01:10 UTC
Tested on qemu-kvm-7.2.0-14.el9_2; the issue is not hit there, so this is a regression.

Comment 5 Hanna Czenczek 2023-04-18 12:57:16 UTC
Hi,

I can reproduce this; it hangs in bdrv_graph_wrlock(), i.e. it seems to be a deadlock at first glance (though I don’t know who’s holding the lock):

(gdb) bt
#0  0x00007f2c71822ad6 in ppoll () at /usr/lib/libc.so.6
#1  0x00005560bdc7a7a5 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:88
#2  0x00005560bdc62119 in fdmon_poll_wait (ctx=0x5560bfea6250, ready_list=0x7ffcd181a788, timeout=-1) at ../util/fdmon-poll.c:80
#3  0x00005560bdc615bd in aio_poll (ctx=ctx@entry=0x5560bfea6250, blocking=blocking@entry=true) at ../util/aio-posix.c:669
#4  0x00005560bdb4c055 in bdrv_graph_wrlock () at ../block/graph-lock.c:130
#5  0x00005560bdb21995 in bdrv_replace_child_noperm (child=0x5560c05be360, new_bs=0x5560c0188820) at ../block.c:2887
#6  0x00005560bdb27cfd in bdrv_attach_child_common
    (child_bs=child_bs@entry=0x5560c0188820, child_name=child_name@entry=0x5560bdde9992 "root", child_class=child_class@entry=0x5560be18e180 <child_root>, child_role=child_role@entry=20, perm=perm@entry=2, shared_perm=shared_perm@entry=7, opaque=0x5560c01dea60, tran=0x5560c022d2c0, errp=0x7ffcd181ab70) at ../block.c:3063
#7  0x00005560bdb28f89 in bdrv_root_attach_child
    (child_bs=child_bs@entry=0x5560c0188820, child_name=child_name@entry=0x5560bdde9992 "root", child_class=child_class@entry=0x5560be18e180 <child_root>, child_role=child_role@entry=20, perm=2, shared_perm=7, opaque=0x5560c01dea60, errp=0x7ffcd181ab70) at ../block.c:3130
#8  0x00005560bdb4610e in blk_insert_bs (blk=0x5560c01dea60, bs=bs@entry=0x5560c0188820, errp=errp@entry=0x7ffcd181ab70) at ../block/block-backend.c:906
#9  0x00005560bdb598f9 in mirror_start_job
    (job_id=job_id@entry=0x5560c0185f50 "commit", bs=bs@entry=0x5560c0cafce0, creation_flags=creation_flags@entry=0, target=target@entry=0x5560c0188820, replaces=replaces@entry=0x0, speed=speed@entry=0, granularity=65536, buf_size=16777216, backing_mode=MIRROR_LEAVE_BACKING_CHAIN, zero_target=false, on_source_error=BLOCKDEV_ON_ERROR_REPORT, on_target_error=BLOCKDEV_ON_ERROR_REPORT, unmap=true, cb=0x0, opaque=0x0, driver=0x5560be198bc0 <commit_active_job_driver>, is_none_mode=false, base=0x5560c0188820, auto_complete=false, filter_node_name=0x0, is_mirror=false, copy_mode=MIRROR_COPY_MODE_BACKGROUND, errp=0x7ffcd181ab70) at ../block/mirror.c:1777
#10 0x00005560bdb5c94e in commit_active_start
    (job_id=job_id@entry=0x5560c0185f50 "commit", bs=bs@entry=0x5560c0cafce0, base=base@entry=0x5560c0188820, creation_flags=creation_flags@entry=0, speed=speed@entry=0, on_error=on_error@entry=BLOCKDEV_ON_ERROR_REPORT, filter_node_name=0x0, cb=0x0, opaque=0x0, auto_complete=false, errp=0x7ffcd181ab70) at ../block/mirror.c:1958
#11 0x00005560bdb16d50 in qmp_block_commit
    (job_id=0x5560c0185f50 "commit", device=<optimized out>, base_node=0x0, base=0x0, top_node=<optimized out>, top=<optimized out>, backing_file=0x0, has_speed=false, speed=0, has_on_error=false, on_error=BLOCKDEV_ON_ERROR_REPORT, filter_node_name=0x0, has_auto_finalize=false, auto_finalize=false, has_auto_dismiss=false, auto_dismiss=false, errp=0x7ffcd181ac48) at ../blockdev.c:2753
#12 0x00005560bdbee949 in qmp_marshal_block_commit (args=<optimized out>, ret=<optimized out>, errp=0x7f2c6bfefe90) at qapi/qapi-commands-block-core.c:408
#13 0x00005560bdc57af9 in do_qmp_dispatch_bh (opaque=0x7f2c6bfefea0) at ../qapi/qmp-dispatch.c:128
#14 0x00005560bdc76885 in aio_bh_call (bh=0x7f2c580044d0) at ../util/async.c:155
#15 aio_bh_poll (ctx=ctx@entry=0x5560bfea6250) at ../util/async.c:184
#16 0x00005560bdc6145e in aio_dispatch (ctx=0x5560bfea6250) at ../util/aio-posix.c:421
#17 0x00005560bdc764ee in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at ../util/async.c:326
#18 0x00007f2c72f0282b in g_main_context_dispatch () at /usr/lib/libglib-2.0.so.0
#19 0x00005560bdc77e28 in glib_pollfds_poll () at ../util/main-loop.c:290
#20 os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
#21 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:592
#22 0x00005560bd876457 in qemu_main_loop () at ../softmmu/runstate.c:731
#23 0x00005560bdade906 in qemu_default_main () at ../softmmu/main.c:37
#24 0x00007f2c7174b790 in  () at /usr/lib/libc.so.6
#25 0x00007f2c7174b84a in __libc_start_main () at /usr/lib/libc.so.6
#26 0x00005560bd685cc5 in _start ()

Comment 6 Hanna Czenczek 2023-04-18 14:03:27 UTC
I’m not familiar with the graph locking methods, but naïvely, it looks like calling bdrv_graph_wrlock() is safe only while no AioContext is locked; otherwise, there may be read lock holders in locked contexts, which won’t make progress and won’t release the lock.
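
To illustrate the suspected pattern, here is a standalone analogy in plain pthreads. It is not QEMU code, and all names (aio_context_lock, reader_count) are only stand-ins for the real concepts: one thread takes a "context" mutex and then waits for a reader count to drop to zero, while the reader can only drop that count after acquiring the same mutex. Built with gcc -pthread, it hangs in the same way:

/* Illustrative analogy only, not QEMU code: a writer that waits for readers
 * while holding the lock those readers need cannot make progress. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t aio_context_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for the AioContext lock */
static atomic_int reader_count;                                      /* stands in for graph-lock readers   */

static void *reader(void *arg)
{
    (void)arg;
    atomic_fetch_add(&reader_count, 1);      /* "rdlock": announce ourselves as a reader     */
    pthread_mutex_lock(&aio_context_lock);   /* needs the context lock to finish its work... */
    pthread_mutex_unlock(&aio_context_lock); /* ...which the main thread never releases      */
    atomic_fetch_sub(&reader_count, 1);      /* "rdunlock": never reached                    */
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_mutex_lock(&aio_context_lock);   /* like a QMP command acquiring the AioContext  */
    pthread_create(&t, NULL, reader, NULL);

    while (atomic_load(&reader_count) == 0) {
        usleep(1000);                        /* wait until the reader has taken its read lock */
    }

    printf("taking the write lock while holding the context lock...\n");
    while (atomic_load(&reader_count) > 0) { /* like bdrv_graph_wrlock() waiting for readers  */
        usleep(1000);                        /* hangs forever: the reader is blocked on the   */
    }                                        /* context lock held by this very thread         */

    printf("never reached\n");
    return 0;
}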

Now, many QMP commands lock the concerned subtree’s AioContext, and historically, many block layer functions have required you to lock some context, so this might cause a conflict.

When I have mirror_start_job() release the job’s context around blk_insert_bs(), at least it only hangs in 4/10 cases instead of 10/10.  (Same (yes, also 4/10 hangs) if I release new_bs’s context (unless it’s the main context) in bdrv_replace_child_noperm() around bdrv_graph_wrlock(), but that feels very much unsafe.)
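
For reference, the "release the job's context around blk_insert_bs()" experiment looks roughly like the sketch below. This is only a hypothetical rendering, not the actual diff; the helper name and parameter list are made up, while blk_insert_bs(), aio_context_acquire() and aio_context_release() are the real QEMU functions involved.

/* Hypothetical sketch of the experiment described above, inside QEMU's tree. */
static int blk_insert_bs_unlocked(AioContext *ctx, BlockBackend *blk,
                                  BlockDriverState *bs, Error **errp)
{
    int ret;

    /* Drop the AioContext lock so read-lock holders in this context can
     * make progress while blk_insert_bs() takes the graph write lock. */
    aio_context_release(ctx);
    ret = blk_insert_bs(blk, bs, errp);
    aio_context_acquire(ctx);
    return ret;
}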

Now, the remaining hangs all don’t hang in bdrv_graph_wrlock()’s AIO_WAIT_WHILE()’s loop, but in bdrv_drain_all_end()’s aio_context_acquire(), which might indicate that there’s just yet another AioContext that must be released (though I don’t know which)?

Comment 7 Kevin Wolf 2023-04-18 18:15:58 UTC
(In reply to Hanna Czenczek from comment #6)
> Now, many QMP commands lock the concerned subtree’s AioContext, and
> historically, many block layer functions have required you to lock some
> context, so this might cause a conflict.

That's a good point actually... Maybe we should #ifdef out all of the actual logic in the graph locking functions for 9.3. It's not very mature and doesn't really add any safety as long as we still have the AioContext lock.
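
As a rough sketch of that idea (illustrative only: the GRAPH_LOCK_DISABLED switch is made up, and this is not the actual downstream change, which the "block/graph-lock: Disable locking for now" merge request linked above appears to implement), the entry points could keep their signatures while their bodies are compiled out:

/* Hypothetical sketch, not the real patch: stub out the graph write lock
 * while the AioContext lock still serializes graph changes. */
#define GRAPH_LOCK_DISABLED 1

void bdrv_graph_wrlock(void)
{
#if GRAPH_LOCK_DISABLED
    /* no-op: graph changes remain protected by the BQL/AioContext lock */
#else
    /* original logic: block new readers, then poll until reader_count() == 0 */
#endif
}

void bdrv_graph_wrunlock(void)
{
#if GRAPH_LOCK_DISABLED
    /* no-op */
#else
    /* original logic: allow readers again and wake coroutines queued on the lock */
#endif
}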

> Now, the remaining hangs all don’t hang in bdrv_graph_wrlock()’s
> AIO_WAIT_WHILE()’s loop, but in bdrv_drain_all_end()’s
> aio_context_acquire(), which might indicate that there’s just yet another
> AioContext that must be released (though I don’t know which)?

That is interesting, considering that the bdrv_drain_all_begin_nopoll() before didn't hang. The only thing that would make some sense to me is if we just added a BlockDriverState to the locked AioContext while waiting for reader_count() to become zero. That would be two graph changes happening at the same time.

Do you see another scenario where this would happen?

Comment 8 Hanna Czenczek 2023-04-20 09:33:27 UTC
(In reply to Kevin Wolf from comment #7)
> (In reply to Hanna Czenczek from comment #6)
> > Now, many QMP commands lock the concerned subtree’s AioContext, and
> > historically, many block layer functions have required you to lock some
> > context, so this might cause a conflict.
> 
> That's a good point actually... Maybe we should #ifdef out all of the actual
> logic in the graph locking functions for 9.3. It's not very mature and
> doesn't really add any safety as long as we still have the AioContext lock.

We could do that for 9.3, but I feel like we need to do something for upstream 8.0.1, too.  Is #ifdef-ing out everything an option for upstream?  (Cc-ing Paolo and Emanuele.)

> > Now, the remaining hangs all don’t hang in bdrv_graph_wrlock()’s
> > AIO_WAIT_WHILE()’s loop, but in bdrv_drain_all_end()’s
> > aio_context_acquire(), which might indicate that there’s just yet another
> > AioContext that must be released (though I don’t know which)?
> 
> That is interesting, considering that the bdrv_drain_all_begin_nopoll()
> before didn't hang. The only thing that would make some sense to me is if we
> just added a BlockDriverState to the locked AioContext while waiting for
> reader_count() to become zero. That would be two graph changes happening at
> the same time.
> 
> Do you see another scenario where this would happen?

Not yet, but bug 2185688 shows us a different scenario where aio_context_acquire() hangs.  These could be related, and bug 2185688 is much easier to reproduce (doesn’t require I/O).

Comment 9 Hanna Czenczek 2023-04-20 10:26:20 UTC
So after investigation the hang in bug 2185688 is probably because of an aio_context_acquire() in bdrv_drain_all_begin_nopoll().  The hang in bdrv_drain_all_end() seems unrelated to that.

Comment 10 Emanuele Giuseppe Esposito 2023-04-20 13:26:33 UTC
(In reply to Kevin Wolf from comment #7)
> (In reply to Hanna Czenczek from comment #6)
> > Now, many QMP commands lock the concerned subtree’s AioContext, and
> > historically, many block layer functions have required you to lock some
> > context, so this might cause a conflict.
> 
> That's a good point actually... Maybe we should #ifdef out all of the actual
> logic in the graph locking functions for 9.3. It's not very mature and
> doesn't really add any safety as long as we still have the AioContext lock.
> 
Fine by me, as long as we didn't remove the AioContext lock from some places where we thought the graph lock can already take its place.

Comment 17 Yanan Fu 2023-06-15 03:28:23 UTC
QE bot (pre-verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 20 aihua liang 2023-06-19 02:05:58 UTC
Ran test cases blockdev_inc_backup_pull_mode_vm_reboot and blockdev_commit_standby 100 times; all passed.
(099/100) repeat50.Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.0.x86_64.io-github-autotest-qemu.blockdev_commit_standby.q35: PASS (75.74 s)
 (100/100) repeat50.Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.0.x86_64.io-github-autotest-qemu.blockdev_inc_backup_pull_mode_vm_reboot.q35: STARTED
 (100/100) repeat50.Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.0.x86_64.io-github-autotest-qemu.blockdev_inc_backup_pull_mode_vm_reboot.q35: PASS (263.06 s)
RESULTS    : PASS 100 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /root/avocado/job-results/job-2023-06-18T07.04-5d0f7de/results.html
JOB TIME   : 17208.59 s

Also ran case blockdev_commit_fio; it passed.
(1/1) Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.0.x86_64.io-github-autotest-qemu.blockdev_commit_fio.q35: PASS (385.75 s)


Also ran the regression tests; all passed. Setting the status to "VERIFIED".

Comment 21 aihua liang 2023-06-19 02:21:27 UTC
(In reply to aihua liang from comment #20)
> Test run cases:blockdev_inc_backup_pull_mode_vm_reboot and
> blockdev_commit_standby with 100 times, all pass.
> (099/100)
> repeat50.Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.
> 0.x86_64.io-github-autotest-qemu.blockdev_commit_standby.q35: PASS (75.74 s)
>  (100/100)
> repeat50.Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.
> 0.x86_64.io-github-autotest-qemu.blockdev_inc_backup_pull_mode_vm_reboot.q35:
> STARTED
>  (100/100)
> repeat50.Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.
> 0.x86_64.io-github-autotest-qemu.blockdev_inc_backup_pull_mode_vm_reboot.q35:
> PASS (263.06 s)
> RESULTS    : PASS 100 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 |
> CANCEL 0
> JOB HTML   :
> /root/avocado/job-results/job-2023-06-18T07.04-5d0f7de/results.html
> JOB TIME   : 17208.59 s
> 
> Also run case: blockdev_commit_fio, it pass.
> (1/1)
> Host_RHEL.m9.u3.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.3.0.x86_64.
> io-github-autotest-qemu.blockdev_commit_fio.q35: PASS (385.75 s)
> 
> 
> And also run regression test, all pass, set the status to "VERIFIED".

Test qemu version: qemu-kvm-8.0.0-5.el9

Comment 23 errata-xmlrpc 2023-11-07 08:27:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: qemu-kvm security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6368

