Bug 1977549 - Qemu hang when do stream with backing-file whose node behind the base node(nbd+iothread enable)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: beta
Target Release: ---
Assignee: Sergio Lopez
QA Contact: aihua liang
URL:
Whiteboard:
Depends On: 1976149 1997408
Blocks:
 
Reported: 2021-06-30 02:21 UTC by aihua liang
Modified: 2022-05-17 12:25 UTC
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1976149
Environment:
Last Closed: 2022-05-17 12:23:27 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2022:2307 0 None None None 2022-05-17 12:23:56 UTC

Description aihua liang 2021-06-30 02:21:20 UTC
+++ This bug was initially created as a clone of Bug #1976149 +++

Description of problem:
 Qemu hangs when doing stream with a backing-file whose node is behind the base node (iothread enabled)

Version-Release number of selected component (if applicable):
 Kernel version:4.18.0-315.el8.x86_64
 qemu-kvm version:qemu-kvm-6.0.0-21.module+el8.5.0+11555+e0ab0d09


How reproducible:
 100%


Steps to Reproduce:
1.Expose image via qemu-nbd
   #qemu-nbd -f qcow2 /home/kvm_autotest_root/images/rhel850-64-virtio-scsi.qcow2 -p 9000 -t
2.Start guest with qemu cmd:
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,wait=off,path=/tmp/monitor-qmpmonitor1-20210623-231231-bgIzjYFA,server=on,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,path=/tmp/monitor-catch_monitor-20210623-231231-bgIzjYFA,server=on,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id1Mu1Au \
    -chardev socket,wait=off,path=/tmp/serial-serial0-20210623-231231-bgIzjYFA,server=on,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210623-231231-bgIzjYFA,path=/tmp/seabios-20210623-231231-bgIzjYFA,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210623-231231-bgIzjYFA,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=nbd,auto-read-only=on,discard=unmap,server.host=10.73.196.25,server.port=9000,server.type=inet,cache.direct=on,cache.no-flush=off \
    -object iothread,id=iothread0 \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:bf:8a:84:7c:8e,id=idNeSCU2,netdev=id0TINZs,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=id0TINZs,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -qmp tcp:0:3000,server=on,wait=off \

3. Create snapshot chain: base->sn1->sn2->sn3
    #create snapshot nodes (repeat for i = 1, 2, 3, substituting $i below)
    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn$i','filename':'/root/sn$i'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
    {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}
   
   #do snapshot
    {'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn1','overlay':'sn2'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn2','overlay':'sn3'}}
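The loop in step 3 can be sketched as a small Python helper that builds the full QMP command sequence (a sketch, not part of the original report; the node names, sizes, and job IDs mirror the commands above, with sn1 layered on drive_image1):

```python
def qmp_snapshot_chain_cmds(n=3, size=21474836480):
    """Build the QMP command list that creates sn1..snN and chains
    them as snapshots on top of drive_image1, as in step 3."""
    cmds = []
    for i in range(1, n + 1):
        cmds += [
            {"execute": "blockdev-create",
             "arguments": {"options": {"driver": "file",
                                       "filename": f"/root/sn{i}",
                                       "size": size},
                           "job-id": "job1"}},
            {"execute": "blockdev-add",
             "arguments": {"driver": "file",
                           "node-name": f"drive_sn{i}",
                           "filename": f"/root/sn{i}"}},
            {"execute": "blockdev-create",
             "arguments": {"options": {"driver": "qcow2",
                                       "file": f"drive_sn{i}",
                                       "size": size},
                           "job-id": "job2"}},
            {"execute": "blockdev-add",
             "arguments": {"driver": "qcow2",
                           "node-name": f"sn{i}",
                           "file": f"drive_sn{i}"}},
            {"execute": "job-dismiss", "arguments": {"id": "job1"}},
            {"execute": "job-dismiss", "arguments": {"id": "job2"}},
        ]
    # chain the snapshots: drive_image1 -> sn1 -> sn2 -> sn3
    parents = ["drive_image1"] + [f"sn{i}" for i in range(1, n)]
    for parent, i in zip(parents, range(1, n + 1)):
        cmds.append({"execute": "blockdev-snapshot",
                     "arguments": {"node": parent,
                                   "overlay": f"sn{i}"}})
    return cmds
```

Each dict would then be sent as one line of JSON over the QMP socket.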

4.Check block info
  (qemu)info block
   sn3: json:{"backing": {"backing": {"backing": {"driver": "raw", "file": {"server.port": "9000", "server.host": "10.73.196.25", "driver": "nbd", "server.type": "inet"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn3"}} (qcow2)
    Attached to:      /machine/peripheral/image1/virtio-backend
    Cache mode:       writeback
    Backing file:     json:{"backing": {"backing": {"driver": "raw", "file": {"server.port": "9000", "server.host": "10.73.196.25", "driver": "nbd", "server.type": "inet"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}} (chain depth: 3)

5.Do stream with base-node:"sn2", backing-file:"/root/sn1"
  {"execute":"block-stream","arguments":{"device":"sn3","base-node":"sn2","job-id":"j1","backing-file":"/root/sn1"}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911334}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911381}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911439}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911464}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}

6.Check block job status
  {"execute":"query-block-jobs"}

Actual results:

  After step 6, there is no response from QMP.

Expected result:
  Stream can be executed successfully.
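Since the hang manifests as QMP simply never replying, a small probe script (a sketch, not from the report; the host and port assume the `-qmp tcp:0:3000,server=on,wait=off` option in the command line above) can turn the indefinite wait into an observable timeout:

```python
import json
import socket

def _read_reply(f):
    # Skip asynchronous QMP events; return the first command reply.
    while True:
        msg = json.loads(f.readline())
        if "return" in msg or "error" in msg:
            return msg

def query_block_jobs(host="127.0.0.1", port=3000, timeout=5.0):
    """Issue query-block-jobs over the QMP TCP socket with a hard
    timeout, so a hung monitor raises socket.timeout instead of
    blocking forever."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        f = s.makefile("rw", encoding="utf-8")
        json.loads(f.readline())                      # QMP greeting
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        _read_reply(f)                                # {"return": {}}
        f.write(json.dumps({"execute": "query-block-jobs"}) + "\n")
        f.flush()
        return _read_reply(f)
```

On a healthy guest this returns the job list; in the hung state described above, the final read raises socket.timeout.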

Gdb info:
 #gdb -p 147326
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-15.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 147326
[New LWP 147327]
[New LWP 147328]
[New LWP 147338]
[New LWP 147339]
[New LWP 147340]
[New LWP 147341]
[New LWP 147342]
[New LWP 147343]
[New LWP 147344]
[New LWP 147345]
[New LWP 147346]
[New LWP 147347]
[New LWP 147348]
[New LWP 147388]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f0288af4b36 in __GI_ppoll (fds=0x564ca211ef80, nfds=1, timeout=<optimized out>, timeout@entry=0x7ffee80770b0, sigmask=sigmask@entry=0x0)
    at ../sysdeps/unix/sysv/linux/ppoll.c:39
39	  return SYSCALL_CANCEL (ppoll, fds, nfds, timeout, sigmask, _NSIG / 8);
(gdb) bt
#0  0x00007f0288af4b36 in __GI_ppoll (fds=0x564ca211ef80, nfds=1, timeout=<optimized out>, timeout@entry=0x7ffee80770b0, sigmask=sigmask@entry=0x0)
    at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x0000564c9fa19a45 in ppoll (__ss=0x0, __timeout=0x7ffee80770b0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=599999822667) at ../util/qemu-timer.c:348
#3  0x0000564c9fa31279 in fdmon_poll_wait (ctx=0x564ca1f8b770, ready_list=0x7ffee8077130, timeout=599999822667) at ../util/fdmon-poll.c:80
#4  0x0000564c9fa1f9b1 in aio_poll (ctx=0x564ca1f8b770, blocking=blocking@entry=true) at ../util/aio-posix.c:607
#5  0x0000564c9f938267 in bdrv_drained_end (bs=bs@entry=0x564ca211f540) at ../block/io.c:509
#6  0x0000564c9f97eb6a in bdrv_set_aio_context_ignore (bs=0x564ca211f540, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6574
#7  0x0000564c9f97e97b in bdrv_set_aio_context_ignore (bs=0x564ca2125c70, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6542
#8  0x0000564c9f97e97b in bdrv_set_aio_context_ignore (bs=0x564ca22e9a00, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6542
#9  0x0000564c9f97e97b in bdrv_set_aio_context_ignore (bs=bs@entry=0x564ca2113830, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6542
#10 0x0000564c9f97ef63 in bdrv_child_try_set_aio_context (bs=bs@entry=0x564ca2113830, ctx=ctx@entry=0x564ca2119d40, ignore_child=ignore_child@entry=0x0, 
    errp=errp@entry=0x7ffee8077358) at ../block.c:6659
#11 0x0000564c9f97ff17 in bdrv_try_set_aio_context (errp=0x7ffee8077358, ctx=0x564ca2119d40, bs=0x564ca2113830) at ../block.c:6668
#12 bdrv_root_attach_child (child_bs=child_bs@entry=0x564ca2113830, child_name=child_name@entry=0x564c9fb265e8 "backing", 
    child_class=child_class@entry=0x564ca01a5280 <child_of_bds>, child_role=child_role@entry=8, ctx=0x564ca2119d40, perm=1, shared_perm=21, opaque=0x564ca3036010, 
    errp=0x7ffee8077460) at ../block.c:2720
#13 0x0000564c9f9800ff in bdrv_attach_child (parent_bs=parent_bs@entry=0x564ca3036010, child_bs=child_bs@entry=0x564ca2113830, 
    child_name=child_name@entry=0x564c9fb265e8 "backing", child_class=child_class@entry=0x564ca01a5280 <child_of_bds>, child_role=8, errp=errp@entry=0x7ffee8077460)
    at ../block.c:6373
#14 0x0000564c9f980e29 in bdrv_set_backing_hd (bs=bs@entry=0x564ca3036010, backing_hd=backing_hd@entry=0x564ca2113830, errp=errp@entry=0x7ffee8077460) at ../block.c:2875
#15 0x0000564c9f985b6c in stream_prepare (job=0x564ca2224170) at ../block/stream.c:74
#16 0x0000564c9f97550e in job_prepare (job=0x564ca2224170) at ../job.c:787
#17 0x0000564c9f976051 in job_txn_apply (job=job@entry=0x564ca2224170, fn=fn@entry=0x564c9f9754f0 <job_prepare>) at ../job.c:158
#18 0x0000564c9f976a2f in job_do_finalize (job=0x564ca2224170) at ../job.c:804
#19 0x0000564c9f976c15 in job_exit (opaque=0x564ca2224170) at ../job.c:891
#20 0x0000564c9fa239ed in aio_bh_call (bh=0x7f0274003530) at ../util/async.c:164
#21 aio_bh_poll (ctx=ctx@entry=0x564ca1f8b770) at ../util/async.c:164
#22 0x0000564c9fa1f482 in aio_dispatch (ctx=0x564ca1f8b770) at ../util/aio-posix.c:381
#23 0x0000564c9fa238d2 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at ../util/async.c:306
#24 0x00007f02897d287d in g_main_dispatch (context=0x564ca1f8b8d0) at gmain.c:3193
#25 g_main_context_dispatch (context=context@entry=0x564ca1f8b8d0) at gmain.c:3873
#26 0x0000564c9fa1d810 in glib_pollfds_poll () at ../util/main-loop.c:231
#27 os_host_main_loop_wait (timeout=<optimized out>) at ../util/main-loop.c:254
#28 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:530
#29 0x0000564c9f8713d9 in qemu_main_loop () at ../softmmu/runstate.c:725
#30 0x0000564c9f656512 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:50
 

 Note: 
   1. When testing without the iothread setting, this issue is not hit and the stream completes successfully.
    #device without iothread setting
    ....
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    ....

  2. Both virtio_blk and virtio_scsi can hit this issue.

--- Additional comment from John Ferlan on 2021-06-28 21:03:47 UTC ---

May be interesting to know if this same sequence occurs in the previous packages (e.g., qemu-kvm-6.0.0-20.module+el8.5.0+11499+199527ef) or earlier.

There was a change in -21 for IOThreads.

I'm going to set ITR=8.5.0 to plan to fix for the release.

--- Additional comment from aihua liang on 2021-06-30 02:19:41 UTC ---

Keypoint to reproduce this bug is: NBD+iothread.

Comment 1 John Ferlan 2021-06-30 15:56:14 UTC
Changing ITR=9-Beta since this is a RHEL9 bug (not clearing ITR on the clone is a bugzilla-ism)

Sergio - assigning to you just for completeness, if the bug is fixed downstream in/for 8.5.0, then this bug can move directly to POST referencing the 8.5.0 downstream commit and using bug 1957194 as a depends on (e.g. the bug Mirek is using to "mirror" all 8.5.0 downstream commits).

Comment 2 John Ferlan 2021-07-20 14:01:36 UTC
See discussion in the cloned from bug 1976149

Similarly moving this bug to 9.0.0 with the expectation the fix is picked up by the planned qemu-6.1 rebase.

Comment 3 John Ferlan 2021-09-09 11:51:11 UTC
Please provide a qa_ack+/ITM - this is resolved by the qemu-6.1 rebase

Comment 5 aihua liang 2021-09-22 03:22:57 UTC
Tested with qemu-kvm-6.1.0-2.el9; the problem has been resolved.

Test Env:
  kernel version:5.14.0-2.el9.x86_64
  qemu-kvm version:qemu-kvm-6.1.0-2.el9

Test Steps:
1.Expose image via qemu-nbd
   #qemu-nbd -f qcow2 /home/kvm_autotest_root/images/rhel900-64-virtio-scsi.qcow2 -p 9000 -t
2.Start guest with qemu cmd:
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,wait=off,path=/tmp/monitor-qmpmonitor1-20210623-231231-bgIzjYFA,server=on,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,path=/tmp/monitor-catch_monitor-20210623-231231-bgIzjYFA,server=on,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id1Mu1Au \
    -chardev socket,wait=off,path=/tmp/serial-serial0-20210623-231231-bgIzjYFA,server=on,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210623-231231-bgIzjYFA,path=/tmp/seabios-20210623-231231-bgIzjYFA,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210623-231231-bgIzjYFA,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=nbd,auto-read-only=on,discard=unmap,server.host=10.73.114.14,server.port=9000,server.type=inet,cache.direct=on,cache.no-flush=off \
    -object iothread,id=iothread0 \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:bf:8a:84:7c:8e,id=idNeSCU2,netdev=id0TINZs,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=id0TINZs,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -qmp tcp:0:3000,server=on,wait=off \

3. Create snapshot chain: base->sn1->sn2->sn3
    #create snapshot nodes (repeat for i = 1, 2, 3, substituting $i below)
    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn$i','filename':'/root/sn$i'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
    {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}
   
   #do snapshot
    {'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn1','overlay':'sn2'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn2','overlay':'sn3'}}

4.Check block info
  (qemu)info block
   sn3: json:{"backing": {"backing": {"backing": {"driver": "raw", "file": {"server.port": "9000", "server.host": "10.73.114.14", "driver": "nbd", "server.type": "inet"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn3"}} (qcow2)
    Attached to:      /machine/peripheral/image1/virtio-backend
    Cache mode:       writeback
    Backing file:     json:{"backing": {"backing": {"driver": "raw", "file": {"server.port": "9000", "server.host": "10.73.114.14", "driver": "nbd", "server.type": "inet"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}} (chain depth: 3)

5.Do stream with base-node:"sn2", backing-file:"/root/sn1"
  {"execute":"block-stream","arguments":{"device":"sn3","base-node":"sn2","job-id":"j1","backing-file":"/root/sn1"}}
{"timestamp": {"seconds": 1632280729, "microseconds": 200089}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1632280729, "microseconds": 200145}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1632280729, "microseconds": 200198}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1632280729, "microseconds": 200224}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}
{"timestamp": {"seconds": 1632280729, "microseconds": 200359}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 0, "offset": 0, "speed": 0, "type": "stream"}}
{"timestamp": {"seconds": 1632280729, "microseconds": 200391}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1632280729, "microseconds": 200414}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}

Comment 6 aihua liang 2021-09-22 03:29:12 UTC
As per comment 5, set bug status to "Verified".

Comment 9 errata-xmlrpc 2022-05-17 12:23:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: qemu-kvm), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2307

