Bug 1976149 - QEMU hangs when doing stream with a backing-file whose node is behind the base node (iothread enabled)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: qemu-kvm
Version: 8.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.6
Assignee: Sergio Lopez
QA Contact: aihua liang
URL:
Whiteboard:
Depends On: 1997410
Blocks: 1977549
 
Reported: 2021-06-25 10:22 UTC by aihua liang
Modified: 2021-12-07 22:45 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1977549
Environment:
Last Closed: 2021-09-18 10:43:22 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none

Description aihua liang 2021-06-25 10:22:08 UTC
Description of problem:
 QEMU hangs when doing stream with a backing-file whose node is behind the base node (iothread enabled)

Version-Release number of selected component (if applicable):
 Kernel version: 4.18.0-315.el8.x86_64
 qemu-kvm version: qemu-kvm-6.0.0-21.module+el8.5.0+11555+e0ab0d09


How reproducible:
 100%


Steps to Reproduce:
1.Expose image via qemu-nbd
   #qemu-nbd -f qcow2 /home/kvm_autotest_root/images/rhel850-64-virtio-scsi.qcow2 -p 9000 -t
2.Start guest with qemu cmd:
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,wait=off,path=/tmp/monitor-qmpmonitor1-20210623-231231-bgIzjYFA,server=on,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,path=/tmp/monitor-catch_monitor-20210623-231231-bgIzjYFA,server=on,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id1Mu1Au \
    -chardev socket,wait=off,path=/tmp/serial-serial0-20210623-231231-bgIzjYFA,server=on,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210623-231231-bgIzjYFA,path=/tmp/seabios-20210623-231231-bgIzjYFA,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210623-231231-bgIzjYFA,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=nbd,auto-read-only=on,discard=unmap,server.host=10.73.196.25,server.port=9000,server.type=inet,cache.direct=on,cache.no-flush=off \
    -object iothread,id=iothread0 \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:bf:8a:84:7c:8e,id=idNeSCU2,netdev=id0TINZs,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=id0TINZs,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -qmp tcp:0:3000,server=on,wait=off \

3. Create snapshot chain: base->sn1->sn2->sn3
    #create snapshot nodes
     for i in range(1,4)
    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn$i','filename':'/root/sn$i'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
    {'execute':'blockdev-a{'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}dd','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}
   
   #do snapshot
    {'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn1','overlay':'sn2'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn2','overlay':'sn3'}}
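
   For reference, step 3 can be scripted end-to-end over the QMP socket opened in step 2 (-qmp tcp:0:3000). A minimal sketch, assuming the guest runs on the local host and using the corrected blockdev-add from comment 4 below (the qmp() helper name is illustrative, not part of the report):

    import json
    import socket

    # Connect to the QMP TCP socket from step 2 and read the greeting.
    sock = socket.create_connection(('127.0.0.1', 3000))
    rd = sock.makefile('r')
    json.loads(rd.readline())          # QMP greeting banner

    def qmp(execute, arguments=None):
        # Send one command, skip asynchronous events, return the reply.
        cmd = {'execute': execute}
        if arguments is not None:
            cmd['arguments'] = arguments
        sock.sendall((json.dumps(cmd) + '\n').encode())
        while True:
            reply = json.loads(rd.readline())
            if 'event' not in reply:
                return reply

    qmp('qmp_capabilities')            # capability negotiation is mandatory

    for i in range(1, 4):
        # Create /root/sn1../root/sn3 and add them as qcow2 nodes
        # (this uses the corrected blockdev-add from comment 4).
        qmp('blockdev-create', {'options': {'driver': 'file',
            'filename': '/root/sn%d' % i, 'size': 21474836480},
            'job-id': 'job1'})
        qmp('blockdev-add', {'driver': 'file',
            'node-name': 'drive_sn%d' % i, 'filename': '/root/sn%d' % i})
        qmp('blockdev-create', {'options': {'driver': 'qcow2',
            'file': 'drive_sn%d' % i, 'size': 21474836480},
            'job-id': 'job2'})
        qmp('blockdev-add', {'driver': 'qcow2', 'node-name': 'sn%d' % i,
            'file': 'drive_sn%d' % i})
        # As in the report, assume each create job has concluded by now.
        qmp('job-dismiss', {'id': 'job1'})
        qmp('job-dismiss', {'id': 'job2'})

    # Install the overlays: base -> sn1 -> sn2 -> sn3.
    for node, overlay in [('drive_image1', 'sn1'),
                          ('sn1', 'sn2'), ('sn2', 'sn3')]:
        qmp('blockdev-snapshot', {'node': node, 'overlay': overlay})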

4.Check block info
  (qemu)info block
   sn3: json:{"backing": {"backing": {"backing": {"driver": "raw", "file": {"server.port": "9000", "server.host": "10.73.196.25", "driver": "nbd", "server.type": "inet"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn3"}} (qcow2)
    Attached to:      /machine/peripheral/image1/virtio-backend
    Cache mode:       writeback
    Backing file:     json:{"backing": {"backing": {"driver": "raw", "file": {"server.port": "9000", "server.host": "10.73.196.25", "driver": "nbd", "server.type": "inet"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn1"}}, "driver": "qcow2", "file": {"driver": "file", "filename": "/root/sn2"}} (chain depth: 3)

5.Do stream with base-node:"sn2", backing-file:"/root/sn1"
  {"execute":"block-stream","arguments":{"device":"sn3","base-node":"sn2","job-id":"j1","backing-file":"/root/sn1"}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911334}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911381}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911439}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1624614831, "microseconds": 911464}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}

6.Check block job status
  {"execute":"query-block-jobs"}

Actual results:

  After step 6, there is no response from QMP.

Expected results:
  Stream completes successfully.
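
A hedged sketch of steps 5-6 on the same connection, reusing the qmp() helper and sock from the step 3 sketch; a socket read timeout turns the hang into a detectable failure (the 30-second window is an arbitrary choice, not from the report):

    # Step 5: start the stream job; qmp() skips the JOB_STATUS_CHANGE events.
    qmp('block-stream', {'device': 'sn3', 'base-node': 'sn2',
                         'job-id': 'j1', 'backing-file': '/root/sn1'})

    # Step 6: on the affected build this query never gets an answer.
    sock.settimeout(30)
    try:
        print(qmp('query-block-jobs'))
    except socket.timeout:
        print('Reproduced: QMP unresponsive after block-stream')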

Gdb info:
 #gdb -p 147326
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-15.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 147326
[New LWP 147327]
[New LWP 147328]
[New LWP 147338]
[New LWP 147339]
[New LWP 147340]
[New LWP 147341]
[New LWP 147342]
[New LWP 147343]
[New LWP 147344]
[New LWP 147345]
[New LWP 147346]
[New LWP 147347]
[New LWP 147348]
[New LWP 147388]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f0288af4b36 in __GI_ppoll (fds=0x564ca211ef80, nfds=1, timeout=<optimized out>, timeout@entry=0x7ffee80770b0, sigmask=sigmask@entry=0x0)
    at ../sysdeps/unix/sysv/linux/ppoll.c:39
39	  return SYSCALL_CANCEL (ppoll, fds, nfds, timeout, sigmask, _NSIG / 8);
(gdb) bt
#0  0x00007f0288af4b36 in __GI_ppoll (fds=0x564ca211ef80, nfds=1, timeout=<optimized out>, timeout@entry=0x7ffee80770b0, sigmask=sigmask@entry=0x0)
    at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x0000564c9fa19a45 in ppoll (__ss=0x0, __timeout=0x7ffee80770b0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=599999822667) at ../util/qemu-timer.c:348
#3  0x0000564c9fa31279 in fdmon_poll_wait (ctx=0x564ca1f8b770, ready_list=0x7ffee8077130, timeout=599999822667) at ../util/fdmon-poll.c:80
#4  0x0000564c9fa1f9b1 in aio_poll (ctx=0x564ca1f8b770, blocking=blocking@entry=true) at ../util/aio-posix.c:607
#5  0x0000564c9f938267 in bdrv_drained_end (bs=bs@entry=0x564ca211f540) at ../block/io.c:509
#6  0x0000564c9f97eb6a in bdrv_set_aio_context_ignore (bs=0x564ca211f540, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6574
#7  0x0000564c9f97e97b in bdrv_set_aio_context_ignore (bs=0x564ca2125c70, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6542
#8  0x0000564c9f97e97b in bdrv_set_aio_context_ignore (bs=0x564ca22e9a00, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6542
#9  0x0000564c9f97e97b in bdrv_set_aio_context_ignore (bs=bs@entry=0x564ca2113830, new_context=new_context@entry=0x564ca2119d40, ignore=ignore@entry=0x7ffee80772f0)
    at ../block.c:6542
#10 0x0000564c9f97ef63 in bdrv_child_try_set_aio_context (bs=bs@entry=0x564ca2113830, ctx=ctx@entry=0x564ca2119d40, ignore_child=ignore_child@entry=0x0, 
    errp=errp@entry=0x7ffee8077358) at ../block.c:6659
#11 0x0000564c9f97ff17 in bdrv_try_set_aio_context (errp=0x7ffee8077358, ctx=0x564ca2119d40, bs=0x564ca2113830) at ../block.c:6668
#12 bdrv_root_attach_child (child_bs=child_bs@entry=0x564ca2113830, child_name=child_name@entry=0x564c9fb265e8 "backing", 
    child_class=child_class@entry=0x564ca01a5280 <child_of_bds>, child_role=child_role@entry=8, ctx=0x564ca2119d40, perm=1, shared_perm=21, opaque=0x564ca3036010, 
    errp=0x7ffee8077460) at ../block.c:2720
#13 0x0000564c9f9800ff in bdrv_attach_child (parent_bs=parent_bs@entry=0x564ca3036010, child_bs=child_bs@entry=0x564ca2113830, 
    child_name=child_name@entry=0x564c9fb265e8 "backing", child_class=child_class@entry=0x564ca01a5280 <child_of_bds>, child_role=8, errp=errp@entry=0x7ffee8077460)
    at ../block.c:6373
#14 0x0000564c9f980e29 in bdrv_set_backing_hd (bs=bs@entry=0x564ca3036010, backing_hd=backing_hd@entry=0x564ca2113830, errp=errp@entry=0x7ffee8077460) at ../block.c:2875
#15 0x0000564c9f985b6c in stream_prepare (job=0x564ca2224170) at ../block/stream.c:74
#16 0x0000564c9f97550e in job_prepare (job=0x564ca2224170) at ../job.c:787
#17 0x0000564c9f976051 in job_txn_apply (job=job@entry=0x564ca2224170, fn=fn@entry=0x564c9f9754f0 <job_prepare>) at ../job.c:158
#18 0x0000564c9f976a2f in job_do_finalize (job=0x564ca2224170) at ../job.c:804
#19 0x0000564c9f976c15 in job_exit (opaque=0x564ca2224170) at ../job.c:891
#20 0x0000564c9fa239ed in aio_bh_call (bh=0x7f0274003530) at ../util/async.c:164
#21 aio_bh_poll (ctx=ctx@entry=0x564ca1f8b770) at ../util/async.c:164
#22 0x0000564c9fa1f482 in aio_dispatch (ctx=0x564ca1f8b770) at ../util/aio-posix.c:381
#23 0x0000564c9fa238d2 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at ../util/async.c:306
#24 0x00007f02897d287d in g_main_dispatch (context=0x564ca1f8b8d0) at gmain.c:3193
#25 g_main_context_dispatch (context=context@entry=0x564ca1f8b8d0) at gmain.c:3873
#26 0x0000564c9fa1d810 in glib_pollfds_poll () at ../util/main-loop.c:231
#27 os_host_main_loop_wait (timeout=<optimized out>) at ../util/main-loop.c:254
#28 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:530
#29 0x0000564c9f8713d9 in qemu_main_loop () at ../softmmu/runstate.c:725
#30 0x0000564c9f656512 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:50
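
Reading the trace: the main loop is blocked in aio_poll() under bdrv_drained_end() while stream_prepare() -> bdrv_set_backing_hd() reattaches the backing chain and tries to move the nodes into another AioContext. To see what the iothread is doing at the same moment, per-thread backtraces from the same gdb session help (standard gdb commands; their output was not captured in this report):

    (gdb) info threads
    (gdb) thread apply all bt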
 

 Note: 
   1. When testing without the iothread setting, this issue is not hit; the stream completes successfully.
    #device without iothread setting
    ....
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    ....

  2. Both virtio_blk and virtio_scsi can hit this issue.

Comment 1 John Ferlan 2021-06-28 21:03:47 UTC
It may be interesting to know whether this same sequence occurs with the previous packages (e.g., qemu-kvm-6.0.0-20.module+el8.5.0+11499+199527ef) or earlier.

There was a change in -21 for IOThreads.

I'm going to set ITR=8.5.0 to plan to fix for the release.

Comment 2 aihua liang 2021-06-30 02:19:41 UTC
The key point to reproduce this bug is NBD + iothread.
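
Concretely, the fragment of the command line in the description that this points at is the NBD-backed node attached to a virtio-blk device running in an iothread (a trimmed extract; the cache, discard, and PCI-topology options from the full command line are omitted):

    -object iothread,id=iothread0 \
    -blockdev node-name=file_image1,driver=nbd,server.type=inet,server.host=10.73.196.25,server.port=9000 \
    -blockdev node-name=drive_image1,driver=raw,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,iothread=iothread0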

Comment 3 Sergio Lopez 2021-07-09 10:34:21 UTC
Seems there's been a copy/paste issue in the reproducer:

    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn$i','filename':'/root/sn$i'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
    {'execute':'blockdev-a{'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}dd','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}

Note the broken 'blockdev-a' command in the fourth line. Could you please paste this section of the reproducer again?

Comment 4 aihua liang 2021-07-09 10:43:44 UTC
(In reply to Sergio Lopez from comment #3)
> Seems there's been a copy/paste issue in the reproducer:
> 
>     {'execute':'blockdev-create','arguments':{'options':
> {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':
> 'job1'}}
>    
> {'execute':'blockdev-add','arguments':{'driver':'file','node-name':
> 'drive_sn$i','filename':'/root/sn$i'}}
>     {'execute':'blockdev-create','arguments':{'options': {'driver':
> 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
>    
> {'execute':'blockdev-a{'execute':'blockdev-snapshot','arguments':{'node':
> 'drive_image1','overlay':'sn1'}}dd','arguments':{'driver':'qcow2','node-
> name':'sn$i','file':'drive_sn$i'}}
>     {'execute':'job-dismiss','arguments':{'id':'job1'}}
>     {'execute':'job-dismiss','arguments':{'id':'job2'}}

correct cmd:
 {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}

Sorry, I pasted the snapshot cmd by mistake and truncated the blockdev-add cmd.
> 
> Note the broken 'blockdev-a' command in the fourth line. Could you please
> paste this section of the reproducer again?

3. Create snapshot chain: base->sn1->sn2->sn3
    #create snapshot nodes
     for i in range(1,4)
    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn$i','filename':'/root/sn$i'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
    {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}
   
   #do snapshot
    {'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn1','overlay':'sn2'}}
    {'execute':'blockdev-snapshot','arguments':{'node':'sn2','overlay':'sn3'}}

Comment 7 Sergio Lopez 2021-07-19 12:35:44 UTC
Thanks, I was able to reproduce the problem with qemu-img-6.0.0-23.module+el8.5.0+11740+35571f13.x86_64.

With current upstream, the issue is not reproducible. I've been tracing back to find when it was fixed, and found it to be part of this series:

- https://patchew.org/QEMU/20210428151804.439460-1-vsementsov@virtuozzo.com/

Comment 12 John Ferlan 2021-07-20 13:59:50 UTC
As noted in comment 7, the issue is fixed by a specific upstream series that was determined to be too risky to backport into 8.5.0.

Moving this to 8.6.0, where qemu-6.1 is expected to be picked up as part of a planned rebase.

Comment 13 John Ferlan 2021-09-09 11:52:25 UTC
Bulk update: Move RHEL-AV bugs to RHEL8 with existing RHEL9 clone.

Comment 14 John Ferlan 2021-09-09 11:53:40 UTC
Please provide the qa_ack+/ITM; this should be resolved by the qemu-6.1 rebase, bug 1997410.

Comment 16 aihua liang 2021-09-18 08:01:31 UTC
Tested on qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250: both virtio_blk+iothread+nbd and virtio_scsi+iothread+nbd no longer hit this issue.

Test Env:
  kernel version: 4.18.0-340.el8.x86_64
  qemu-kvm version: qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250

Test Steps:
  1.Expose image via nbd
    #qemu-nbd -f qcow2 /home/kvm_autotest_root/images/rhel860-64-virtio-scsi.qcow2 -p 9000 -t

  2.Start guest with this exposed image
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20210913-223137-drnRdJs8,wait=off  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20210913-223137-drnRdJs8,wait=off  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idM2Q7IB \
    -chardev socket,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20210913-223137-drnRdJs8,wait=off \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210913-223137-drnRdJs8,path=/tmp/seabios-20210913-223137-drnRdJs8,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210913-223137-drnRdJs8,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -blockdev node-name=file_image1,driver=nbd,auto-read-only=on,discard=unmap,server.host=10.73.114.14,server.port=9000,server.type=inet,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=raw,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:fb:5a:e8:4b:7b,id=idVnkhgS,netdev=id9XmH0X,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=id9XmH0X,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \

 3. Create snapshot chain:base->sn1->sn2->sn3
    3.1 create snapshot targets
     for i in range(1,4)
      {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn$i','size':21474836480},'job-id':'job1'}}
      {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn$i','filename':'/root/sn$i'}}
      {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn$i','size':21474836480},'job-id':'job2'}}
      {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn$i','file':'drive_sn$i'}}
      {'execute':'job-dismiss','arguments':{'id':'job1'}}
      {'execute':'job-dismiss','arguments':{'id':'job2'}}
     3.2 do snapshot
      {'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}
      {'execute':'blockdev-snapshot','arguments':{'node':'sn1','overlay':'sn2'}}
      {'execute':'blockdev-snapshot','arguments':{'node':'sn2','overlay':'sn3'}}

 4. Do stream with a backing file that sits behind the base node
     {"execute":"block-stream","arguments":{"device":"sn3","base-node":"sn2","job-id":"j1","backing-file":"/root/sn1"}}
     {"timestamp": {"seconds": 1631950557, "microseconds": 133915}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1631950557, "microseconds": 133968}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1631950557, "microseconds": 134022}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1631950557, "microseconds": 134047}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}
{"timestamp": {"seconds": 1631950557, "microseconds": 134186}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 0, "offset": 0, "speed": 0, "type": "stream"}}
{"timestamp": {"seconds": 1631950557, "microseconds": 134224}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1631950557, "microseconds": 134247}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}


Test Result:
 In step 4, the stream finishes successfully.
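
 On the fixed build the same scripted flow completes, and the BLOCK_JOB_COMPLETED event can be awaited explicitly. A sketch reusing qmp() and rd from the step 3 sketch in the description (wait_event is illustrative, not from the report):

    def wait_event(name):
        # Read QMP messages until the named asynchronous event arrives.
        while True:
            msg = json.loads(rd.readline())
            if msg.get('event') == name:
                return msg

    qmp('block-stream', {'device': 'sn3', 'base-node': 'sn2',
                         'job-id': 'j1', 'backing-file': '/root/sn1'})
    wait_event('BLOCK_JOB_COMPLETED')  # arrives promptly on qemu-kvm-6.1.0-1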

