Bug 2021778 - QEMU core dump when doing a full backup during system reset
Summary: QEMU core dump when doing a full backup during system reset
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: qemu-kvm
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Stefan Hajnoczi
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks: 2021820
 
Reported: 2021-11-10 07:59 UTC by aihua liang
Modified: 2022-05-11 03:05 UTC
CC List: 11 users

Fixed In Version: qemu-kvm-6.2.0-6.module+el8.6.0+14165+5e5e76ac
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2021820
Environment:
Last Closed: 2022-05-10 13:24:11 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-102345 0 None None None 2021-11-10 15:33:01 UTC
Red Hat Product Errata RHSA-2022:1759 0 None None None 2022-05-10 13:25:02 UTC

Description aihua liang 2021-11-10 07:59:48 UTC
Description of problem:
QEMU core dumps when doing a full backup during a system reset.

Version-Release number of selected component (if applicable):
Kernel version: 4.18.0-348.4.el8.kpq0.x86_64
qemu-kvm version: qemu-kvm-6.1.0-4.module+el8.6.0+13039+4b81a1dc


How reproducible:
20%


Steps to Reproduce:
1. Start the guest with the following qemu command:
   /usr/libexec/qemu-kvm \
   -S  \
   -name 'avocado-vt-vm1'  \
   -sandbox on  \
   -machine q35,memory-backend=mem-machine_mem \
   -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
   -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
   -nodefaults \
   -device VGA,bus=pcie.0,addr=0x2 \
   -m 30720 \
   -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
   -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
   -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
   -chardev socket,wait=off,server=on,id=qmp_id_qmpmonitor1,path=/tmp/avocado_oxpfhqt7/monitor-qmpmonitor1-20211110-012521-TNCkxDmn  \
   -mon chardev=qmp_id_qmpmonitor1,mode=control \
   -chardev socket,wait=off,server=on,id=qmp_id_catch_monitor,path=/tmp/avocado_oxpfhqt7/monitor-catch_monitor-20211110-012521-TNCkxDmn  \
   -mon chardev=qmp_id_catch_monitor,mode=control \
   -device pvpanic,ioport=0x505,id=idgKHYrQ \
   -chardev socket,wait=off,server=on,id=chardev_serial0,path=/tmp/avocado_oxpfhqt7/serial-serial0-20211110-012521-TNCkxDmn \
   -device isa-serial,id=serial0,chardev=chardev_serial0  \
   -chardev socket,id=seabioslog_id_20211110-012521-TNCkxDmn,path=/tmp/avocado_oxpfhqt7/seabios-20211110-012521-TNCkxDmn,server=on,wait=off \
   -device isa-debugcon,chardev=seabioslog_id_20211110-012521-TNCkxDmn,iobase=0x402 \
   -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
   -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
   -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
   -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
   -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
   -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel860-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
   -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
   -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
   -blockdev node-name=file_src1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/root/avocado/data/avocado-vt/sr1.qcow2,cache.direct=on,cache.no-flush=off \
   -blockdev node-name=drive_src1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_src1 \
   -device scsi-hd,id=src1,drive=drive_src1,write-cache=on \
   -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
   -device virtio-net-pci,mac=9a:11:64:b0:5d:a8,id=idxnEEYY,netdev=idBjpylo,bus=pcie-root-port-3,addr=0x0  \
   -netdev tap,id=idBjpylo,vhost=on,vhostfd=22,fd=11  \
   -vnc :0  \
   -rtc base=utc,clock=host,driftfix=slew  \
   -boot menu=off,order=cdn,once=c,strict=off \
   -enable-kvm \
   -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5
     
2. Create backup target node:
    {'execute': 'blockdev-create', 'arguments': {'options': {'driver': 'file', 'filename': '/root/avocado/data/avocado-vt/dst1.qcow2', 'size': 209715200}, 'job-id': 'file_dst1'}, 'id': 'Fk1bF3FV'}
    {'execute': 'job-dismiss', 'arguments': {'id': 'file_dst1'}, 'id': '13R5TDSj'}
    {'execute': 'blockdev-add', 'arguments': {'node-name': 'file_dst1', 'driver': 'file', 'filename': '/root/avocado/data/avocado-vt/dst1.qcow2', 'aio': 'threads', 'auto-read-only': True, 'discard': 'unmap'}, 'id': 'VIzrN0zy'}
    {'execute': 'blockdev-create', 'arguments': {'options': {'driver': 'qcow2', 'file': 'file_dst1', 'size': 209715200}, 'job-id': 'drive_dst1'}, 'id': 'YX8t8hBs'}
    {'execute': 'job-dismiss', 'arguments': {'id': 'drive_dst1'}, 'id': 'OTZwYb7J'}
    {'execute': 'blockdev-add', 'arguments': {'node-name': 'drive_dst1', 'driver': 'qcow2', 'file': 'file_dst1', 'read-only': False}, 'id': 'QHyUxtql'}

3. Reset system:
   {'execute': 'system_reset', 'id': 'OREutgnz'}
   {"return": {}, "id": "OREutgnz"}

4. During the system reset, start a full backup (a scripted QMP sketch of steps 2-4 follows the event log below):
   {'execute': 'blockdev-backup', 'arguments': {'device': 'drive_src1', 'target': 'drive_dst1', 'job-id': 'drive_src1_qnFF', 'sync': 'full', 'speed': 0, 'compress': False, 'auto-finalize': True, 'auto-dismiss': True, 'on-source-error': 'report', 'on-target-error': 'report'}, 'id': 'WbDARa8c'}
   {"timestamp": {"seconds": 1636525643, "microseconds": 980645}, "event": "RESET", "data": {"guest": false, "reason": "host-qmp-system-reset"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 981386}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 981429}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 981454}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 981478}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_src1_qnFF"}}
   {"return": {}, "id": "WbDARa8c"}
   {"timestamp": {"seconds": 1636525643, "microseconds": 993102}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 993153}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 993187}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 993208}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 993265}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_src1_qnFF", "len": 209715200, "offset": 209715200, "speed": 0, "type": "backup"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 993293}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive_src1_qnFF"}}
   {"timestamp": {"seconds": 1636525643, "microseconds": 993313}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive_src1_qnFF"}}

Actual results:
After step 4, QEMU core dumps with the following info:
  01:28:07 WARNI| Could not receive data from monitor    ([Errno 104] Connection reset by peer)
01:28:07 DEBUG| [stderr] INFO:root:[qemu output] /tmp/aexpect_r4ILV5js/aexpect-2lmeo0i1.sh: line 1: 392547 Segmentation fault      (core dumped) MALLOC_PERTURB_=1 /usr/libexec/qemu-kvm -S -name 'avocado-vt-vm1' -sandbox on -machine q35,memory-backend=mem-machine_mem -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0 -nodefaults ...

Coredump info as below:
 (gdb) bt
#0  0x00005579653d53f0 in bdrv_parent_drained_poll (ignore_bds_parents=false, ignore=0x0, bs=0x557967e6d5d0) at ../block/io.c:306
#1  bdrv_drain_poll (bs=0x557967e6d5d0, recursive=<optimized out>, ignore_parent=0x0, ignore_bds_parents=false) at ../block/io.c:306
#2  0x00005579653d5735 in bdrv_drain_poll_top_level (ignore_parent=0x0, recursive=false, bs=0x557967e6d5d0) at ../block/io.c:473
#3  bdrv_do_drained_begin (poll=true, ignore_bds_parents=false, parent=0x0, recursive=false, bs=0x557967e6d5d0) at ../block/io.c:473
#4  bdrv_do_drained_begin (bs=0x557967e6d5d0, recursive=<optimized out>, parent=0x0, ignore_bds_parents=<optimized out>, poll=<optimized out>) at ../block/io.c:439
#5  0x00005579653dfa3b in blk_drain (blk=0x557968fa24a0) at ../block/block-backend.c:1732
#6  0x00005579652075c7 in scsi_device_purge_requests (sdev=sdev@entry=0x557968fa20b0, sense=...) at ../hw/scsi/scsi-bus.c:1638
#7  0x000055796512f674 in scsi_disk_reset (dev=0x557968fa20b0) at ../hw/scsi/scsi-disk.c:2248
#8  0x00005579653a926d in qdev_reset_one (dev=<optimized out>, opaque=<optimized out>) at ../hw/core/qdev.c:303
#9  0x00005579653a6231 in qbus_walk_children (bus=<optimized out>, pre_devfn=0x5579653a7a10 <qdev_prereset>, pre_busfn=0x5579653a79f0 <qbus_prereset>, post_devfn=0x5579653a9260 <qdev_reset_one>, 
    post_busfn=0x5579653a80a0 <qbus_reset_one>, opaque=0x0) at ../hw/core/bus.c:54
#10 0x00005579653a87e6 in qbus_reset_all (bus=<optimized out>) at ../hw/core/qdev.c:333
#11 0x000055796532dc42 in virtio_scsi_reset (vdev=<optimized out>) at /usr/src/debug/qemu-kvm-6.1.0-4.module+el8.6.0+13039+4b81a1dc.x86_64/include/hw/qdev-core.h:210
#12 0x00005579652d0269 in virtio_reset (opaque=0x557968e92f00) at ../hw/virtio/virtio.c:1998
#13 0x00005579651dffeb in virtio_bus_reset (bus=bus@entry=0x557968e92e78) at ../hw/virtio/virtio-bus.c:100
#14 0x00005579651ebcc4 in virtio_pci_reset (qdev=<optimized out>) at ../hw/virtio/virtio-pci.c:1932
#15 0x00005579653a9f6b in resettable_phase_hold (obj=0x557968e8ac80, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:182
#16 0x00005579653a5f54 in bus_reset_child_foreach (obj=<optimized out>, cb=0x5579653a9ec0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#17 0x00005579653a9f38 in resettable_child_foreach (rc=0x557967c1cd50, type=RESET_TYPE_COLD, opaque=0x0, cb=0x5579653a9ec0 <resettable_phase_hold>, obj=0x557968dc9b20) at ../hw/core/resettable.c:96
#18 resettable_phase_hold (obj=0x557968dc9b20, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#19 0x00005579653a7beb in device_reset_child_foreach (obj=<optimized out>, cb=0x5579653a9ec0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/qdev.c:366
#20 0x00005579653a9f38 in resettable_child_foreach (rc=0x557967d32df0, type=RESET_TYPE_COLD, opaque=0x0, cb=0x5579653a9ec0 <resettable_phase_hold>, obj=0x557968dc91c0) at ../hw/core/resettable.c:96
#21 resettable_phase_hold (obj=0x557968dc91c0, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#22 0x00005579653a5f54 in bus_reset_child_foreach (obj=<optimized out>, cb=0x5579653a9ec0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#23 0x00005579653a9f38 in resettable_child_foreach (rc=0x557967c1cd50, type=RESET_TYPE_COLD, opaque=0x0, cb=0x5579653a9ec0 <resettable_phase_hold>, obj=0x557967fc02e0) at ../hw/core/resettable.c:96
#24 resettable_phase_hold (obj=0x557967fc02e0, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#25 0x00005579653a7beb in device_reset_child_foreach (obj=<optimized out>, cb=0x5579653a9ec0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/qdev.c:366
#26 0x00005579653a9f38 in resettable_child_foreach (rc=0x557967c996c0, type=RESET_TYPE_COLD, opaque=0x0, cb=0x5579653a9ec0 <resettable_phase_hold>, obj=0x557967f82410) at ../hw/core/resettable.c:96
#27 resettable_phase_hold (obj=0x557967f82410, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#28 0x00005579653a5f54 in bus_reset_child_foreach (obj=<optimized out>, cb=0x5579653a9ec0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#29 0x00005579653a9f38 in resettable_child_foreach (rc=0x557967ccc240, type=RESET_TYPE_COLD, opaque=0x0, cb=0x5579653a9ec0 <resettable_phase_hold>, obj=0x557967d58890) at ../hw/core/resettable.c:96
#30 resettable_phase_hold (obj=obj@entry=0x557967d58890, opaque=opaque@entry=0x0, type=type@entry=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#31 0x00005579653aa119 in resettable_assert_reset (obj=0x557967d58890, type=<optimized out>) at ../hw/core/resettable.c:60
#32 0x00005579653aa1e5 in resettable_reset (obj=0x557967d58890, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:45
#33 0x00005579653a9d12 in qemu_devices_reset () at ../hw/core/reset.c:69
#34 0x000055796527754f in pc_machine_reset (machine=<optimized out>) at ../hw/i386/pc.c:1945
#35 0x00005579653600e1 in qemu_system_reset (reason=reason@entry=SHUTDOWN_CAUSE_GUEST_RESET) at ../softmmu/runstate.c:443
#36 0x00005579653607b8 in main_loop_should_exit () at ../softmmu/runstate.c:688
#37 qemu_main_loop () at ../softmmu/runstate.c:722
#38 0x0000557965111402 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:50

Comment 6 Stefan Hajnoczi 2021-12-09 14:30:20 UTC
RFC patch posted upstream:
https://patchew.org/QEMU/20211209142304.381253-1-stefanha@redhat.com/

I'm concerned that this might be a whole class of bugs and my patch only fixes some instances of the bug. Will discuss with the community and hopefully come up with a general fix.

Comment 8 Klaus Heinrich Kiwi 2022-01-27 17:41:38 UTC
@aliang are you able to test the patches that Stefan posted? Do you need help with a scratch build?

Comment 9 aihua liang 2022-01-28 11:11:34 UTC
Hi, Stefan

  Can you help to provide a scratch build of the patch in comment 6?

BR,
Aliang

Comment 10 Klaus Heinrich Kiwi 2022-01-28 11:47:30 UTC
(In reply to aihua liang from comment #9)
> Hi, Stefan
> 
>   Can you help to provide a scratch build of the patch in comment 6?
> 
> BR,
> Aliang

With Stefan out today, maybe @mrezanin can help?

Comment 11 Miroslav Rezanina 2022-01-31 10:40:31 UTC
Brew build available at:

http://batcave.lab.eng.brq.redhat.com/repos/test/bz2021778/

Comment 13 aihua liang 2022-02-07 05:56:14 UTC
Ran the automated test with qemu-kvm-6.2.0-4.el8.next.candidate; the issue no longer occurs.

(01/10) repeat1.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (111.66 s)
 (02/10) repeat2.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (114.86 s)
 (03/10) repeat3.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (110.99 s)
 (04/10) repeat4.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (115.94 s)
 (05/10) repeat5.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (114.30 s)
 (06/10) repeat6.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (113.27 s)
 (07/10) repeat7.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (111.21 s)
 (08/10) repeat8.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (112.84 s)
 (09/10) repeat9.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (113.00 s)
 (10/10) repeat10.Host_RHEL.m8.u6.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (113.87 s)

Comment 17 aihua liang 2022-02-09 03:22:23 UTC
Ran the test 20 times with qemu-kvm-6.2.0-6.module+el8.6.0+14165+5e5e76ac; all passed.
 (01/20) repeat1.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.25 s)
 (02/20) repeat2.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (118.62 s)
 (03/20) repeat3.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (118.43 s)
 (04/20) repeat4.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (119.37 s)
 (05/20) repeat5.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.88 s)
 (06/20) repeat6.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.76 s)
 (07/20) repeat7.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.97 s)
 (08/20) repeat8.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (119.54 s)
 (09/20) repeat9.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.71 s)
 (10/20) repeat10.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.64 s)
 (11/20) repeat11.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (119.88 s)
 (12/20) repeat12.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (119.19 s)
 (13/20) repeat13.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (118.52 s)
 (14/20) repeat14.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.46 s)
 (15/20) repeat15.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (114.31 s)
 (16/20) repeat16.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (118.20 s)
 (17/20) repeat17.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (118.28 s)
 (18/20) repeat18.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (118.56 s)
 (19/20) repeat19.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.96 s)
 (20/20) repeat20.Host_RHEL.m8.u6.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.8.6.0.x86_64.io-github-autotest-qemu.blockdev_full_backup.during_reboot.q35: PASS (117.66 s)

Comment 18 Yanan Fu 2022-02-09 06:14:12 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 19 aihua liang 2022-02-09 06:41:44 UTC
Per comment 17 and comment 18, setting the bug's status to "VERIFIED".

Comment 21 errata-xmlrpc 2022-05-10 13:24:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759

