Bug 2036178 - QEMU core dumped when doing block-stream to a snapshot node on storage without enough space
Summary: QEMU core dumped when doing block-stream to a snapshot node on storage without enough space
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: qemu-kvm
Version: 8.6
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.6
Assignee: Stefan Hajnoczi
QA Contact: aihua liang
URL:
Whiteboard:
Duplicates: 2036289 2036801 (view as bug list)
Depends On:
Blocks: 2040123
 
Reported: 2021-12-30 09:07 UTC by Gu Nini
Modified: 2022-05-11 02:42 UTC (History)
CC: 11 users

Fixed In Version: qemu-kvm-6.2.0-6.module+el8.6.0+14165+5e5e76ac
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2040123 (view as bug list)
Environment:
Last Closed: 2022-05-10 13:25:20 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-106679 0 None None None 2021-12-30 09:10:39 UTC
Red Hat Product Errata RHSA-2022:1759 0 None None None 2022-05-10 13:25:59 UTC

Description Gu Nini 2021-12-30 09:07:48 UTC
Description of problem:
When doing block-stream to a snapshot node on storage without enough space, QEMU core dumped:

[qemu output] /tmp/aexpect_F5uR6LZa/aexpect-2iijnxh_.sh: line 1: 20576 Segmentation fault      (core dumped) MALLOC_PERTURB_=1 /usr/libexec/qemu-kvm -S -name 'avocado-vt-vm1' -sandbox on -machine s390-ccw-virtio,memory-backend=mem-machine_mem -nodefaults -vga none -m 7168 -object memory-backend-ram,size=7168M,id=mem-machine_mem -smp 4,maxcpus=4,cores=2,threads=1,sockets=2 -cpu 'host' -chardev socket,path=/tmp/avocado_7ruqavcp/monitor-qmpmonitor1-20211230-030424-XB4N5POG,server=on,wait=off,id=qmp_id_qmpmonitor1 -mon chardev=qmp_id_qmpmonitor1,mode=control -chardev socket,path=/tmp/avocado_7ruqavcp/monitor-catch_monitor-20211230-030424-XB4N5POG,server=on,wait=off,id=qmp_id_catch_monitor -mon chardev=qmp_id_catch_monitor,mode=control -chardev socket,path=/tmp/avocado_7ruqavcp/serial-serial0-20211230-030424-XB4N5POG,server=on,wait=off,id=chardev_serial0 -device sclpconsole,id=serial0,chardev=chardev_serial0 -device virtio-scsi-ccw,id=virtio_scsi_ccw0 -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/ngu/kar/vt_test_images/rhel860-s390x-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 -device scsi-hd,id=image1,drive=drive_image1,write-cache=on -device virtio-net-ccw,mac=9a:a8:fc:e2:16:36,id=idLqLgts,netdev=id3tWnzy -netdev tap,id=id3tWnzy,vhost=on,vhostfd=24,fd=8 -nographic -rtc base=utc,clock=host,driftfix=slew -boot strict=on -enable-kvm -device virtio-mouse-ccw,id=input_mouse1 -device virtio-keyboard-ccw,id=input_keyboard1


Version-Release number of selected component (if applicable):
Host kernel: 4.18.0-357.el8.s390x
Guest kernel: 4.18.0-356.el8.s390x/4.18.0-348.7.1.el8_5.s390x
Qemu: qemu-kvm-6.2.0-1.module+el8.6.0+13725+61ae1949.s390x/qemu-kvm-6.0.0-33.module+el8.5.0+13514+2c386966.1.s390x

How reproducible:
4/10

Steps to Reproduce:
1. Boot up a guest:

/usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine s390-ccw-virtio,memory-backend=mem-machine_mem  \
    -nodefaults  \
    -vga none \
    -m 7168 \
    -object memory-backend-ram,size=7168M,id=mem-machine_mem  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'host' \
    -chardev socket,path=/tmp/avocado_1,server=on,wait=off,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,path=/tmp/avocado_2,server=on,wait=off,id=chardev_serial0 \
    -device sclpconsole,id=serial0,chardev=chardev_serial0 \
    -device virtio-scsi-ccw,id=virtio_scsi_ccw0 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/ngu/kar/vt_test_images/rhel860-s390x-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device virtio-net-ccw,mac=9a:a8:fc:e2:16:36,id=idLqLgts,netdev=id3tWnzy  \
    -netdev tap,id=id3tWnzy,vhost=on  \
    -nographic  \
    -rtc base=utc,clock=host,driftfix=slew \
    -boot strict=on \
    -enable-kvm \
    -device virtio-mouse-ccw,id=input_mouse1 \
    -device virtio-keyboard-ccw,id=input_keyboard1 \
    -monitor stdio

2. Create storage without enough space using the following commands:

# mkdir -p /tmp/tmp_target_path && dd if=/dev/urandom of=/tmp/tmp_image_file bs=1M count=100 && mkfs.ext4 -F /tmp/tmp_image_file && mount -o loop /tmp/tmp_image_file /tmp/tmp_target_path

3. Resume the guest with 'cont', then create an image on the above storage and hot-plug it into the guest via QMP:

#nc -U /tmp/avocado_1
{'execute':'qmp_capabilities'}
{'execute': 'cont', 'id': 'mcUXWCPM'}
{'execute': 'blockdev-create', 'arguments': {'options': {'driver': 'file', 'filename': '/tmp/tmp_target_path/image1sn.qcow2', 'size': 21474836480}, 'job-id': 'file_image1sn'}, 'id': 'JocgSYcJ'}
{'execute': 'job-dismiss', 'arguments': {'id': 'file_image1sn'}, 'id': 'dt7FkI1w'}
{'execute': 'blockdev-add', 'arguments': {'node-name': 'file_image1sn', 'driver': 'file', 'filename': '/tmp/tmp_target_path/image1sn.qcow2', 'aio': 'threads', 'auto-read-only': true, 'discard': 'unmap'}, 'id': '0zdoCMBg'}
{'execute': 'blockdev-create', 'arguments': {'options': {'driver': 'qcow2', 'file': 'file_image1sn', 'size': 21474836480}, 'job-id': 'drive_image1sn'}, 'id': 'Ut8bBFRp'}
{'execute': 'job-dismiss', 'arguments': {'id': 'drive_image1sn'}, 'id': 'pAO1hxme'}
{'execute': 'blockdev-add', 'arguments': {'node-name': 'drive_image1sn', 'driver': 'qcow2', 'file': 'file_image1sn', 'read-only': false}, 'id': 'QJtv7WvD'}

4. Create a snapshot on the above image node, then run block-stream on it:

{'execute': 'blockdev-snapshot', 'arguments': {'node': 'drive_image1', 'overlay': 'drive_image1sn'}, 'id': 'wcxosVSe'}
{'execute': 'block-stream', 'arguments': {'device': 'drive_image1sn', 'job-id': 'drive_image1sn_YNFd', 'on-error': 'report'}, 'id': '8HbfzhFV'}


Actual results:
QEMU core dumped as shown in the description.


Expected results:
QEMU should not core dump, even though the following "No space left on device" errors appear in QMP:

2021-12-30 03:04:30: {"execute": "block-stream", "arguments": {"device": "drive_image1sn", "job-id": "drive_image1sn_YNFd", "on-error": "report"}, "id": "8HbfzhFV"}
2021-12-30 03:04:30: {"timestamp": {"seconds": 1640851470, "microseconds": 231090}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_image1sn_YNFd"}}
2021-12-30 03:04:30: {"timestamp": {"seconds": 1640851470, "microseconds": 231127}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_image1sn_YNFd"}}
2021-12-30 03:04:30: {"return": {}, "id": "8HbfzhFV"}
2021-12-30 03:04:31: {"timestamp": {"seconds": 1640851471, "microseconds": 110342}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_image1sn_YNFd", "operation": "read", "action": "report"}}
2021-12-30 03:04:31: {"timestamp": {"seconds": 1640851471, "microseconds": 110398}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "drive_image1sn_YNFd"}}
2021-12-30 03:04:31: {"timestamp": {"seconds": 1640851471, "microseconds": 110440}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_image1sn_YNFd", "len": 21474836480, "offset": 94175232, "speed": 0, "type": "stream", "error": "No space left on device"}}
2021-12-30 03:04:31: {"timestamp": {"seconds": 1640851471, "microseconds": 110468}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive_image1sn_YNFd"}}
2021-12-30 03:04:31: {"timestamp": {"seconds": 1640851471, "microseconds": 110488}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive_image1sn_YNFd"}}


Additional info:
1. Please perform steps 3 and 4 immediately after issuing 'cont' in QMP, while the guest is still booting up; otherwise the bug might not be triggered.
2. The same test was tried on both x86_64 and ppc64le; the issue did not reproduce there.
3. Please note this is not a regression, since it occurred on both qemu-kvm-6.2.0-1.module+el8.6.0+13725+61ae1949.s390x and qemu-kvm-6.0.0-33.module+el8.5.0+13514+2c386966.1.s390x.

Comment 3 Klaus Heinrich Kiwi 2021-12-30 14:19:31 UTC
I'd normally classify a crash as high severity, but since it apparently happens on an error path (short disk space), I feel medium is adequate.

Submitter, do you have evidence that this is exclusive to s390x? I.e., does the same test succeed on other arches? Thomas & Kevin, FYI for now.

 -Klaus

Comment 4 Gu Nini 2021-12-31 01:09:05 UTC
*** Bug 2036289 has been marked as a duplicate of this bug. ***

Comment 5 Gu Nini 2021-12-31 01:23:19 UTC
(In reply to Klaus Heinrich Kiwi from comment #3)
> I'd normally classify a crash at the high severity, but since it apparently
> happens on an error path (short disk space) I feel medium is adequated.
> 
> Submitter, do you have evidence that this is exclusive to s390x? i.e., does
> the same test succeeds in other arches? Thomas & Kevin, fyi for now.
> 
>  -Klaus

Agreed; from the gdb debug log, the crash is not related to the block-stream feature, although it's triggered in that scenario.

I have tried the same test on both ppc64le and x86_64 with the same host kernel and QEMU versions, and failed to reproduce the bug.

Comment 8 Gu Nini 2022-01-04 03:03:21 UTC
*** Bug 2036801 has been marked as a duplicate of this bug. ***

Comment 9 Klaus Heinrich Kiwi 2022-01-06 17:49:47 UTC
Thomas, I'm a bit confused about the next steps for the virt-storage-sst team to assist with. Is there anyone we should tag-team with to reproduce this on s390x?

Hanna, can you take this one?

Comment 10 Hanna Czenczek 2022-01-06 19:21:32 UTC
From the back trace and the QMP log, it looks to me like the block job has completed before/while the scsi-hd device (attached via virtio-scsi) is reset and drained.  Could be that this has nothing to do with ENOSPC, and the problem simply is that the block job completes and vanishes in the blk_drain() invoked through virtio_reset() -> scsi_disk_reset().  (ENOSPC is just the reason why it completes so quickly and can coincide with the virtio-scsi reset, which also comes very quickly after `cont`.)
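
The suspected sequence can be sketched roughly as follows (pseudocode, pieced together from the back traces in this bug; not the exact QEMU call chain):

```
system reset (firmware reset after `cont`, or QMP system_reset)
  -> virtio_reset() -> scsi_disk_reset()
       -> scsi_device_purge_requests() -> blk_drain(blk)
            -> bdrv_drained_begin(bs)   # bs = the BDS the job is using
            -> aio_poll(...)            # stream job hits ENOSPC, aborts,
                                        # concludes, and drops its reference;
                                        # bs may go away in the meantime
            -> bdrv_drained_end(bs)     # dangling BDS pointer -> SIGSEGV /
                                        # "quiesce_counter > 0" assertion
```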

I would have liked to say that this could have something to do with what was fixed in the “block: Attempt on fixing 030-reported errors” series (because that also concerned itself with a stream job completing, and this breaking the block graph for another user)...  But that is part of 6.2.0, so should be included in the version for which the bug was reported. :/

Regarding s390x, the only connection I can think of is CCW vs. PCI.  I wouldn’t put this as the culprit, but maybe CCW does make reproducing the crash easier.

Hanna

Comment 11 Hanna Czenczek 2022-01-10 18:51:40 UTC
I believe I’ve been able to come up with a reproducer (on x86) that works as follows:
- Create the backing configuration as reported (one base image with some data, and one top image)
- Add them to the VM, but ensure that the stream job will encounter an error when it writes to the top[1]
- Attach the virtio-scsi scsi-hd device to any of those images[2]
- Trigger a virtio-scsi reset[3] after the job has started, but before it has completed[4]

[1] Comment 0 described an ENOSPC case, but a blkdebug node that returns EIO on the first write access works just as well.
[2] Comment 0 described attaching it to the base node, but attaching it to the top node is a bit simpler, because then you can save yourself the blockdev-snapshot command, and attach all nodes at VM startup.
[3] I believe in comment 0, this was triggered by the firmware resetting the device (after `cont`), but you can also just reset the whole system, so there’s one timing window less to worry about.
[4] To hit this other timing window, you can throttle the base image, such that there is some time to get in between the stream job having started and it encountering the error when writing to the target.
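
Under those assumptions, the node setup for [1] and [4] might look like the following QMP fragment (filenames, node names, and the throttle limit are illustrative):

```
{ "execute": "object-add", "arguments": { "qom-type": "throttle-group", "id": "thrgr", "x-bps-total": 16384 } }
{ "execute": "blockdev-add", "arguments": { "driver": "qcow2", "node-name": "base", "file": { "driver": "throttle", "throttle-group": "thrgr", "file": { "driver": "file", "filename": "base.img" } } } }
{ "execute": "blockdev-add", "arguments": { "driver": "qcow2", "node-name": "top", "file": { "driver": "blkdebug", "inject-error": [ { "event": "pwritev", "immediately": true, "once": true } ], "image": { "driver": "file", "filename": "top.img" } }, "backing": "base" } }
```

The throttle node on the base slows the copy so a reset can land inside the job's run time, and the blkdebug node makes the first write to the top fail with EIO.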

I’ve written an iotest for this:
https://gitlab.com/hreitz/qemu/-/blob/stefans-fix-and-a-test/tests/qemu-iotests/tests/stream-error-on-reset

And at least this test no longer segfaults after applying Stefan’s upstream patch “[PATCH v2] block-backend: prevent dangling BDS pointers across aio_poll()”: https://lists.nongnu.org/archive/html/qemu-block/2021-12/msg00249.html

I’ll try to provide a brew build with this patch included for testing tomorrow.

Hanna

Comment 12 Hanna Czenczek 2022-01-11 11:01:59 UTC
Hi,

I’ve run a brew build with Stefan’s patch included, the resulting repo is here:

http://brew-task-repos.usersys.redhat.com/repos/scratch/hreitz/qemu-kvm/6.2.0/2.el8.hreitz202201111001/

Can you please try whether the bug still appears with that build?

Thanks!
Hanna

Comment 13 Gu Nini 2022-01-11 11:22:36 UTC
(In reply to Hanna Reitz from comment #12)
> Hi,
> 
> I’ve run a brew build with Stefan’s patch included, the resulting repo is
> here:
> 
> http://brew-task-repos.usersys.redhat.com/repos/scratch/hreitz/qemu-kvm/6.2.
> 0/2.el8.hreitz202201111001/
> 
> Can you please try whether the bug still appears with that build?
> 
> Thanks!
> Hanna

OK, I will try to test the original scenario on s390x.

Comment 14 Stefan Hajnoczi 2022-01-11 15:44:05 UTC
I have sent v3 of the QEMU patch series with Hanna's test case for this BZ:
https://patchew.org/QEMU/20220111153613.25453-1-stefanha@redhat.com/

Hanna: Feel free to assign this BZ to me if you want. I think a single backport will handle both bz2021778 and this BZ.

Comment 15 aihua liang 2022-01-12 10:12:40 UTC
Can reproduce this issue with Hanna's iotest script.

Test Env:
  kernel version: 4.18.0-357.el8.x86_64
  qemu-kvm version: qemu-kvm-6.2.0-2.module+el8.6.0+13738+17338784

Test steps:
1.Create base.img and top.img
 #qemu-img create -f qcow2 base.img 2G
 #qemu-img create -f qcow2 top.img 2G
 
2.Write data to base.img
 #qemu-io -c 'write 0 2147483136' base.img

3.Start vm with accel=tcg
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem,accel=tcg \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20220105-012140-9igI6qZZ,server=on  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,path=/tmp/monitor-catch_monitor-20220105-012140-9igI6qZZ,server=on  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idsULHUn \
    -chardev socket,wait=off,id=chardev_serial0,path=/tmp/serial-serial0-20220105-012140-9igI6qZZ,server=on \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220105-012140-9igI6qZZ,path=/tmp/seabios-20220105-012140-9igI6qZZ,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220105-012140-9igI6qZZ,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:d5:a4:20:d5:09,id=idVy3YIg,netdev=idXjRF2L,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idXjRF2L,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \


4.Add throttle group
  { "execute" : "object-add", "arguments": { "qom-type" : "throttle-group","id": "thrgr","x-bps-total":16384}}

5.Add base node and top node
  {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'base','file':{'driver':'throttle','throttle-group':'thrgr','file':{'driver':'file','filename':'/home/base.img'}}}}
{"return": {}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'top','file':{'driver':'blkdebug','inject-error':[{'event':'pwritev','immediately':true,'once':true}],'image':{'driver':'file','filename':'/home/top.img'}},'backing':'base'}}
{"return": {}}

6.Add scsi device
  {'execute':'device_add','arguments':{'driver': 'virtio-scsi-pci','id':'vscsi','bus':'pcie-root-port-2'}}
{"return": {}}
{'execute':'device_add','arguments':{'driver': 'scsi-hd','bus':'vscsi.0','drive':'top'}}
{"return": {}}

7.Do stream
  {'execute':'block-stream','arguments':{'device':'top','job-id':'j1'}}
{"timestamp": {"seconds": 1641980367, "microseconds": 374744}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1641980367, "microseconds": 374806}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}

8. During step7, reset vm
  {'execute':'system_reset'}

After step 8, the VM core dumped with:
  (qemu) qemu-kvm: ../block/io.c:516: bdrv_do_drained_end: Assertion `bs->quiesce_counter > 0' failed.
bug.txt: line 35: 747344 Aborted                 (core dumped) /usr/libexec/qemu-kvm -name 'avocado-vt-vm1' -sandbox on -machine q35,memory-backend=mem-machine_mem,accel=tcg -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0 -nodefaults -device VGA,bus=pcie.0,addr=0x2 -m 30720 -object memory-backend-ram,size=30720M,id=mem-machine_mem -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2 -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on ...

And the output of step 7 is:
  {"timestamp": {"seconds": 1641980375, "microseconds": 277732}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1641980375, "microseconds": 277777}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "j1"}}
{"timestamp": {"seconds": 1641980375, "microseconds": 277830}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 2147483648, "offset": 0, "speed": 0, "type": "stream", "error": "Input/output error"}}
{"timestamp": {"seconds": 1641980375, "microseconds": 277858}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1641980375, "microseconds": 277878}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}


Coredump info:
 (gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007fea9868ddb5 in __GI_abort () at abort.c:79
#2  0x00007fea9868dc89 in __assert_fail_base (fmt=0x7fea987f67d8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x56171a20d532 "bs->quiesce_counter > 0", file=0x56171a2178f6 "../block/io.c", line=516, function=<optimized out>)
    at assert.c:92
#3  0x00007fea986b33a6 in __GI___assert_fail (assertion=assertion@entry=0x56171a20d532 "bs->quiesce_counter > 0", file=file@entry=0x56171a2178f6 "../block/io.c", line=line@entry=516, 
    function=function@entry=0x56171a218700 <__PRETTY_FUNCTION__.30959> "bdrv_do_drained_end") at assert.c:101
#4  0x000056171a022dcd in bdrv_do_drained_end (bs=bs@entry=0x56171c41cc60, recursive=recursive@entry=false, parent=parent@entry=0x0, ignore_bds_parents=ignore_bds_parents@entry=false, 
    drained_end_counter=drained_end_counter@entry=0x7ffda2f859a4) at ../block/io.c:516
#5  0x000056171a0239b5 in bdrv_drained_end (bs=0x56171c41cc60) at ../block/io.c:541
#6  0x000056171a019eb4 in blk_drain (blk=<optimized out>) at ../block/block-backend.c:1716
#7  0x0000561719e46157 in scsi_device_purge_requests (sdev=sdev@entry=0x56171c8a2550, sense=...) at ../hw/scsi/scsi-bus.c:1638
#8  0x0000561719e47c34 in scsi_disk_reset (dev=0x56171c8a2550) at ../hw/scsi/scsi-disk.c:2254
#9  0x0000561719fcf78b in resettable_phase_hold (obj=0x56171c8a2550, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:182
#10 0x0000561719fcbf74 in bus_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#11 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171bea1b50, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171bf8b898) at ../hw/core/resettable.c:96
#12 resettable_phase_hold (obj=0x56171bf8b898, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#13 0x0000561719fcda9b in device_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/qdev.c:317
#14 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171be0f9c0, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171bf8b640) at ../hw/core/resettable.c:96
#15 resettable_phase_hold (obj=0x56171bf8b640, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#16 0x0000561719fcbf74 in bus_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#17 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171be4a6b0, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171bf8b5b8) at ../hw/core/resettable.c:96
#18 resettable_phase_hold (obj=0x56171bf8b5b8, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#19 0x0000561719fcda9b in device_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/qdev.c:317
#20 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171be4b830, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171bf833b0) at ../hw/core/resettable.c:96
#21 resettable_phase_hold (obj=0x56171bf833b0, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#22 0x0000561719fcbf74 in bus_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#23 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171be11760, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171d08e240) at ../hw/core/resettable.c:96
#24 resettable_phase_hold (obj=0x56171d08e240, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#25 0x0000561719fcda9b in device_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/qdev.c:317
#26 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171bf27af0, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171d08d8d0) at ../hw/core/resettable.c:96
#27 resettable_phase_hold (obj=0x56171d08d8d0, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#28 0x0000561719fcbf74 in bus_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#29 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171be11760, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171c17a700) at ../hw/core/resettable.c:96
#30 resettable_phase_hold (obj=0x56171c17a700, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#31 0x0000561719fcda9b in device_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/qdev.c:317
#32 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171be8e9d0, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171c13d840) at ../hw/core/resettable.c:96
#33 resettable_phase_hold (obj=0x56171c13d840, opaque=<optimized out>, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#34 0x0000561719fcbf74 in bus_reset_child_foreach (obj=<optimized out>, cb=0x561719fcf6e0 <resettable_phase_hold>, opaque=0x0, type=RESET_TYPE_COLD) at ../hw/core/bus.c:97
#35 0x0000561719fcf758 in resettable_child_foreach (rc=0x56171bec28b0, type=RESET_TYPE_COLD, opaque=0x0, cb=0x561719fcf6e0 <resettable_phase_hold>, obj=0x56171bf4bbe0) at ../hw/core/resettable.c:96
#36 resettable_phase_hold (obj=obj@entry=0x56171bf4bbe0, opaque=opaque@entry=0x0, type=type@entry=RESET_TYPE_COLD) at ../hw/core/resettable.c:173
#37 0x0000561719fcf939 in resettable_assert_reset (obj=0x56171bf4bbe0, type=<optimized out>) at ../hw/core/resettable.c:60
#38 0x0000561719fcfa05 in resettable_reset (obj=0x56171bf4bbe0, type=RESET_TYPE_COLD) at ../hw/core/resettable.c:45
#39 0x0000561719fcf532 in qemu_devices_reset () at ../hw/core/reset.c:69
#40 0x0000561719e9890f in pc_machine_reset (machine=<optimized out>) at ../hw/i386/pc.c:1948
#41 0x0000561719f02271 in qemu_system_reset (reason=reason@entry=SHUTDOWN_CAUSE_HOST_QMP_SYSTEM_RESET) at ../softmmu/runstate.c:443
#42 0x0000561719f02948 in main_loop_should_exit () at ../softmmu/runstate.c:688
#43 qemu_main_loop () at ../softmmu/runstate.c:722
#44 0x0000561719d365d2 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:50

Comment 18 aihua liang 2022-01-13 03:25:40 UTC
Tested on qemu-kvm-6.2.0-2.el8.hreitz202201111001; no longer hitting this issue.

Test Steps:
 1.Create base.img and top.img
 #qemu-img create -f qcow2 base.img 2G
 #qemu-img create -f qcow2 top.img 2G
 
2.Write data to base.img
 #qemu-io -c 'write 0 2147483136' base.img

3.Start vm with accel=tcg
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem,accel=tcg \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20220105-012140-9igI6qZZ,server=on  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,path=/tmp/monitor-catch_monitor-20220105-012140-9igI6qZZ,server=on  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idsULHUn \
    -chardev socket,wait=off,id=chardev_serial0,path=/tmp/serial-serial0-20220105-012140-9igI6qZZ,server=on \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220105-012140-9igI6qZZ,path=/tmp/seabios-20220105-012140-9igI6qZZ,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220105-012140-9igI6qZZ,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:d5:a4:20:d5:09,id=idVy3YIg,netdev=idXjRF2L,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idXjRF2L,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \


4.Add throttle group
  { "execute" : "object-add", "arguments": { "qom-type" : "throttle-group","id": "thrgr","x-bps-total":16384}}

5.Add base node and top node
  {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'base','file':{'driver':'throttle','throttle-group':'thrgr','file':{'driver':'file','filename':'/home/base.img'}}}}
{"return": {}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'top','file':{'driver':'blkdebug','inject-error':[{'event':'pwritev','immediately':true,'once':true}],'image':{'driver':'file','filename':'/home/top.img'}},'backing':'base'}}
{"return": {}}

6.Add scsi device
  {'execute':'device_add','arguments':{'driver': 'virtio-scsi-pci','id':'vscsi','bus':'pcie-root-port-2'}}
{"return": {}}
{'execute':'device_add','arguments':{'driver': 'scsi-hd','bus':'vscsi.0','drive':'top'}}
{"return": {}}

7.Do stream
  {'execute':'block-stream','arguments':{'device':'top','job-id':'j1'}}
{"timestamp": {"seconds": 1641980367, "microseconds": 374744}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1641980367, "microseconds": 374806}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}

8. During step7, reset vm
  {'execute':'system_reset'}

After step 8, the block job completed with an "Input/output error", and then the system_reset command was executed:
 {"timestamp": {"seconds": 1642042558, "microseconds": 778476}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1642042558, "microseconds": 778526}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "j1"}}
{"timestamp": {"seconds": 1642042558, "microseconds": 778587}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 2147483648, "offset": 9961472, "speed": 0, "type": "stream", "error": "Input/output error"}}
{"timestamp": {"seconds": 1642042558, "microseconds": 778615}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1642042558, "microseconds": 778634}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}
{"timestamp": {"seconds": 1642042558, "microseconds": 783202}, "event": "RESET", "data": {"guest": false, "reason": "host-qmp-system-reset"}}
{"timestamp": {"seconds": 1642042558, "microseconds": 785442}, "event": "RESET", "data": {"guest": true, "reason": "guest-reset"}}
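The event stream above shows the expected job lifecycle (created, running, aborting, concluded, null) with a BLOCK_JOB_COMPLETED carrying the error. A minimal sketch (not part of the bug report) for scanning such QMP event lines and detecting whether a job ended in error, assuming one JSON object per line:

```python
import json

def job_failed(lines, job_id="j1"):
    # Return True if BLOCK_JOB_COMPLETED for job_id carries an "error" field.
    # Lines that are not valid JSON are skipped.
    for line in lines:
        try:
            event = json.loads(line)
        except ValueError:
            continue
        if (event.get("event") == "BLOCK_JOB_COMPLETED"
                and event.get("data", {}).get("device") == job_id):
            return "error" in event["data"]
    return False

# Sample events, trimmed from the log above (timestamps omitted):
sample = [
    '{"event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "read", "action": "report"}}',
    '{"event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 2147483648, "offset": 9961472, "speed": 0, "type": "stream", "error": "Input/output error"}}',
]
```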

Comment 19 Thomas Huth 2022-01-13 10:05:15 UTC
(In reply to Klaus Heinrich Kiwi from comment #9)
> Thomas, I'm a bit confused in the next steps for the virt-storage-sst team
> to assist. Is there anyone who we should tag-team to reproduce this on s390x?

Sorry for the late reply - I've been on vacation for the last few days. Anyway, it seems Hanna reproduced this on x86 as well, so I'm clearing the NeedInfo on myself.

I'm also setting ITR to 8.6.0 now, since it sounds like a fix is in the works and we will be able to provide it in this release, right? Hanna, Stefan, could you please adjust DTM as well?

Comment 20 Hanna Czenczek 2022-01-31 14:13:34 UTC
After deliberating for much too long, I think it’s indeed better if you do the backport, Stefan, just because it’ll be better if I do a downstream review of your fix rather than you reviewing your own patch. O:)

(And so I’ll also leave the DTM to you.)

Comment 24 Yanan Fu 2022-02-09 06:13:52 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 25 aihua liang 2022-02-09 08:06:22 UTC
Tested with qemu-kvm-6.2.0-6.module+el8.6.0+14165+5e5e76ac; the problem has been resolved.

Test Steps:
 1.Create base.img and top.img
 #qemu-img create -f qcow2 base.img 2G
 #qemu-img create -f qcow2 top.img 2G
 
 2.Write data to base.img
 #qemu-io -c 'write 0 2147483136' base.img

 3.Start vm with accel=tcg
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem,accel=tcg \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -chardev socket,wait=off,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20220105-012140-9igI6qZZ,server=on  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,id=qmp_id_catch_monitor,path=/tmp/monitor-catch_monitor-20220105-012140-9igI6qZZ,server=on  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idsULHUn \
    -chardev socket,wait=off,id=chardev_serial0,path=/tmp/serial-serial0-20220105-012140-9igI6qZZ,server=on \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220105-012140-9igI6qZZ,path=/tmp/seabios-20220105-012140-9igI6qZZ,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220105-012140-9igI6qZZ,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:d5:a4:20:d5:09,id=idVy3YIg,netdev=idXjRF2L,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idXjRF2L,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio


 4.Add throttle group
  { "execute" : "object-add", "arguments": { "qom-type" : "throttle-group","id": "thrgr","x-bps-total":16384}}

 5.Add base node and top node
  {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'base','file':{'driver':'throttle','throttle-group':'thrgr','file':{'driver':'file','filename':'/home/base.img'}}}}
{"return": {}}
{'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'top','file':{'driver':'blkdebug','inject-error':[{'event':'pwritev','immediately':true,'once':true}],'image':{'driver':'file','filename':'/home/top.img'}},'backing':'base'}}
{"return": {}}

 6.Add scsi device
  {'execute':'device_add','arguments':{'driver': 'virtio-scsi-pci','id':'vscsi','bus':'pcie-root-port-2'}}
{"return": {}}
{'execute':'device_add','arguments':{'driver': 'scsi-hd','bus':'vscsi.0','drive':'top'}}
{"return": {}}

 7.Do stream
  {'execute':'block-stream','arguments':{'device':'top','job-id':'j1'}}
{"timestamp": {"seconds": 1644393766, "microseconds": 506482}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1644393766, "microseconds": 506544}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}

 8. During step7, reset vm
  {'execute':'system_reset'}

After step 8, the block job completed with an "Input/output error", and the subsequent reset proceeded without a crash:
  {"timestamp": {"seconds": 1644393774, "microseconds": 411343}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1644393774, "microseconds": 411386}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "j1"}}
{"timestamp": {"seconds": 1644393774, "microseconds": 411439}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 2147483648, "offset": 0, "speed": 0, "type": "stream", "error": "Input/output error"}}
{"timestamp": {"seconds": 1644393774, "microseconds": 411466}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1644393774, "microseconds": 411486}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}
{"timestamp": {"seconds": 1644393774, "microseconds": 415731}, "event": "RESET", "data": {"guest": false, "reason": "host-qmp-system-reset"}}
{"timestamp": {"seconds": 1644393774, "microseconds": 418562}, "event": "RESET", "data": {"guest": true, "reason": "guest-reset"}}

Comment 26 aihua liang 2022-02-09 08:07:49 UTC
Per comment 24 and comment 25, setting the bug's status to "VERIFIED".

Comment 28 errata-xmlrpc 2022-05-10 13:25:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759

