Bug 1746217 - Src QEMU hangs when doing storage VM migration during guest installation
Summary: Src QEMU hangs when doing storage VM migration during guest installation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.1
Hardware: Unspecified
OS: Unspecified
medium
unspecified
Target Milestone: rc
: 8.1
Assignee: Sergio Lopez
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks: 1758964
 
Reported: 2019-08-28 01:55 UTC by aihua liang
Modified: 2020-05-05 09:50 UTC (History)
9 users (show)

Fixed In Version: qemu-kvm-4.2.0-10.module+el8.2.0+5740+c3dff59e
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 09:49:40 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:2017 0 None None None 2020-05-05 09:50:56 UTC

Description aihua liang 2019-08-28 01:55:07 UTC
Description of problem:
  Src QEMU hangs when doing storage VM migration during guest installation

Version-Release number of selected component (if applicable):
 kernel version: 4.18.0-134.el8.x86_64
 qemu-kvm version: qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64
 backend: gluster(mounted)

How reproducible:
 50%

Steps to Reproduce:
1. Create an empty image "/mnt/nfs/install.qcow2" and start the src guest with the following QEMU command:
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190820-032540-OesJUJdj,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190820-032540-OesJUJdj,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idbJPqrG \
    -chardev socket,id=chardev_serial0,server,path=/var/tmp/serial-serial0-20190820-032540-OesJUJdj,nowait \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20190820-032540-OesJUJdj,path=/var/tmp/seabios-20190820-032540-OesJUJdj,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190820-032540-OesJUJdj,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -drive id=drive_image1,if=none,snapshot=off,cache=none,format=qcow2,file=/mnt/nfs/install.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pcie.0-root-port-3,addr=0x0,iothread=iothread0 \
    -drive id=drive_cd1,if=none,snapshot=off,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/linux/RHEL8.1.0-BaseOS-x86_64.iso \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=2,bus=ide.0,unit=0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -drive id=drive_data1,if=none,snapshot=off,cache=none,format=qcow2,file=/mnt/nfs/data.qcow2 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,iothread=iothread1,bus=pcie.0-root-port-6,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:19:6a:3c:a6:a5,id=idq14C2Q,netdev=idHzG7Zk,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idHzG7Zk,vhost=on \
    -m 2048  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -monitor stdio \
    -device virtio-serial-pci,id=virtio-serial0,bus=pcie_extra_root_port_0,addr=0x0 \
    -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 \
    -device virtserialport,bus=virtio-serial0.0,chardev=qga0,id=qemu-ga0,name=org.qemu.guest_agent.0 \
    -qmp tcp:0:3000,server,nowait \

2. Create an empty image "rhel810-64-virtio.qcow2" and start the dst guest with the following QEMU command:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190820-032540-OesJUJdk,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190820-032540-OesJUJdj,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idbJPqrG \
    -chardev socket,id=chardev_serial0,server,path=/var/tmp/serial-serial0-20190820-032540-OesJUJdj,nowait \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20190820-032540-OesJUJdj,path=/var/tmp/seabios-20190820-032540-OesJUJdj,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190820-032540-OesJUJdj,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -drive id=drive_image1,if=none,snapshot=off,cache=none,format=qcow2,file=/home/rhel810-64-virtio.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pcie.0-root-port-3,addr=0x0,iothread=iothread0 \
    -drive id=drive_cd1,if=none,snapshot=off,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/linux/RHEL8.1.0-BaseOS-x86_64.iso \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=2,bus=ide.0,unit=0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -drive id=drive_data1,if=none,snapshot=off,cache=none,format=qcow2,file=/mnt/nfs/data.qcow2 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,iothread=iothread1,bus=pcie.0-root-port-6,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:19:6a:3c:a6:a5,id=idq14C2Q,netdev=idHzG7Zk,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idHzG7Zk,vhost=on \
    -m 2048  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :1  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -monitor stdio \
    -device virtio-serial-pci,id=virtio-serial0,bus=pcie_extra_root_port_0,addr=0x0 \
    -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 \
    -device virtserialport,bus=virtio-serial0.0,chardev=qga0,id=qemu-ga0,name=org.qemu.guest_agent.0 \
    -qmp tcp:0:3001,server,nowait \
    -incoming tcp:0:5000 \

3. On the dst, start the NBD server and expose drive_image1:
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet","data": { "host": "10.73.224.68", "port": "3333" } } } }
    {"return": {}}
    { "execute": "nbd-server-add", "arguments": { "device": "drive_image1", "writable": true } }
    {"return": {}}

4. During guest installation, start mirroring from src to dst:
     { "execute": "drive-mirror", "arguments": { "device": "drive_image1","target": "nbd://10.73.224.68:3333/drive_image1", "sync": "full","format": "raw", "mode": "existing" } }
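
For reference, the QMP exchange in steps 3-4 can be built programmatically. The sketch below only constructs and serializes the three commands used above (the host, port, and device name are the values from this report; actual socket handling against the QMP monitor is omitted):

```python
import json

# Values taken from this bug report.
NBD_HOST = "10.73.224.68"
NBD_PORT = "3333"
DEVICE = "drive_image1"

def nbd_server_start(host, port):
    # Step 3: start an NBD server on the destination QEMU.
    return {"execute": "nbd-server-start",
            "arguments": {"addr": {"type": "inet",
                                   "data": {"host": host, "port": port}}}}

def nbd_server_add(device):
    # Step 3: expose the destination drive, writable, over NBD.
    return {"execute": "nbd-server-add",
            "arguments": {"device": device, "writable": True}}

def drive_mirror(device, host, port):
    # Step 4: mirror the source drive into the NBD export.
    return {"execute": "drive-mirror",
            "arguments": {"device": device,
                          "target": "nbd://%s:%s/%s" % (host, port, device),
                          "sync": "full", "format": "raw",
                          "mode": "existing"}}

for cmd in (nbd_server_start(NBD_HOST, NBD_PORT),
            nbd_server_add(DEVICE),
            drive_mirror(DEVICE, NBD_HOST, NBD_PORT)):
    print(json.dumps(cmd))
```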

Actual results:
  After step 4, there is no response from the src QEMU; the src QEMU process hangs.
  (gdb) bt
#0  0x00007f21e82de306 in __GI_ppoll (fds=0x55ea8ea63850, nfds=1, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x000055ea8c4881b9 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055ea8c4881b9 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055ea8c48a1d4 in aio_poll (ctx=0x55ea8ea3c4e0, blocking=blocking@entry=true) at util/aio-posix.c:669
#4  0x000055ea8c4874da in aio_wait_bh_oneshot (ctx=0x55ea8ea4d620, cb=<optimized out>, opaque=<optimized out>) at util/aio-wait.c:71
#5  0x000055ea8c3c36e8 in bdrv_attach_aio_context (new_context=0x55ea8ea4d620, bs=0x55ea8eb8d250) at block.c:5898
#6  0x000055ea8c3c36e8 in bdrv_set_aio_context_ignore (bs=0x55ea8eb8d250, new_context=new_context@entry=0x55ea8ea4d620, ignore=ignore@entry=0x7ffdf7bbe8d0) at block.c:5963
#7  0x000055ea8c3c37bc in bdrv_set_aio_context_ignore (bs=bs@entry=0x55ea8f6053a0, new_context=new_context@entry=0x55ea8ea4d620, ignore=ignore@entry=0x7ffdf7bbe8d0) at block.c:5945
#8  0x000055ea8c3c3b33 in bdrv_child_try_set_aio_context (bs=bs@entry=0x55ea8f6053a0, ctx=ctx@entry=0x55ea8ea4d620, ignore_child=ignore_child@entry=0x0, errp=errp@entry=0x7ffdf7bbe9b8)
    at block.c:6058
#9  0x000055ea8c3c522e in bdrv_try_set_aio_context (bs=bs@entry=0x55ea8f6053a0, ctx=ctx@entry=0x55ea8ea4d620, errp=errp@entry=0x7ffdf7bbe9b8) at block.c:6067
#10 0x000055ea8c26a211 in qmp_drive_mirror (arg=arg@entry=0x7ffdf7bbe9c0, errp=errp@entry=0x7ffdf7bbe9b8) at blockdev.c:3933
#11 0x000055ea8c380bb9 in qmp_marshal_drive_mirror (args=<optimized out>, ret=<optimized out>, errp=0x7ffdf7bbeab8) at qapi/qapi-commands-block-core.c:619
#12 0x000055ea8c43fecc in do_qmp_dispatch (errp=0x7ffdf7bbeab0, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55ea8cd1b7a0 <qmp_commands>) at qapi/qmp-dispatch.c:131
#13 0x000055ea8c43fecc in qmp_dispatch (cmds=0x55ea8cd1b7a0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:174
#14 0x000055ea8c3624f1 in monitor_qmp_dispatch (mon=0x55ea8ea77600, req=<optimized out>) at monitor/qmp.c:120
#15 0x000055ea8c362b3a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
#16 0x000055ea8c486c26 in aio_bh_call (bh=0x55ea8e9b4b20) at util/async.c:117
#17 0x000055ea8c486c26 in aio_bh_poll (ctx=ctx@entry=0x55ea8e9b36d0) at util/async.c:117
#18 0x000055ea8c48a064 in aio_dispatch (ctx=0x55ea8e9b36d0) at util/aio-posix.c:459
#19 0x000055ea8c486b02 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
#20 0x00007f21ed0a267d in g_main_dispatch (context=0x55ea8ea3d880) at gmain.c:3176
#21 0x00007f21ed0a267d in g_main_context_dispatch (context=context@entry=0x55ea8ea3d880) at gmain.c:3829
#22 0x000055ea8c489118 in glib_pollfds_poll () at util/main-loop.c:218
#23 0x000055ea8c489118 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:241
#24 0x000055ea8c489118 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:517
#25 0x000055ea8c272169 in main_loop () at vl.c:1809
#26 0x000055ea8c121fd3 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4506


Expected results:
 Storage VM migration executes successfully.

Additional info:
 pstack 12286
Thread 10 (Thread 0x7f21babff700 (LWP 12313)):
#0  0x00007f21e85be47c in futex_wait_cancelable (private=0, expected=0, futex_word=0x55ea8f77331c) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  0x00007f21e85be47c in __pthread_cond_wait_common (abstime=0x0, mutex=0x55ea8f773328, cond=0x55ea8f7732f0) at pthread_cond_wait.c:502
#2  0x00007f21e85be47c in __pthread_cond_wait (cond=0x55ea8f7732f0, mutex=mutex@entry=0x55ea8f773328) at pthread_cond_wait.c:655
#3  0x000055ea8c48c86d in qemu_cond_wait_impl (cond=<optimized out>, mutex=0x55ea8f773328, file=0x55ea8c608c37 "ui/vnc-jobs.c", line=214) at util/qemu-thread-posix.c:161
#4  0x000055ea8c3b5d71 in vnc_worker_thread_loop (queue=queue@entry=0x55ea8f7732f0) at ui/vnc-jobs.c:214
#5  0x000055ea8c3b6330 in vnc_worker_thread (arg=0x55ea8f7732f0) at ui/vnc-jobs.c:324
#6  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8edd3ee0) at util/qemu-thread-posix.c:502
#7  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 9 (Thread 0x7f21d1376700 (LWP 12305)):
#0  0x00007f21e85c18dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f21e85baaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55ea8cce8f60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055ea8c48c59d in qemu_mutex_lock_impl (mutex=0x55ea8cce8f60 <qemu_global_mutex>, file=0x55ea8c531a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=2353) at util/qemu-thread-posix.c:66
#3  0x000055ea8c16d39e in qemu_mutex_lock_iothread_impl (file=file@entry=0x55ea8c531a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=line@entry=2353) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055ea8c188408 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2353
#5  0x000055ea8c16d56e in qemu_kvm_cpu_thread_fn (arg=0x55ea8eb59c80) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#6  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8eb7cc80) at util/qemu-thread-posix.c:502
#7  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 8 (Thread 0x7f21d1b77700 (LWP 12304)):
#0  0x00007f21e85c18dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f21e85baaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55ea8cce8f60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055ea8c48c59d in qemu_mutex_lock_impl (mutex=0x55ea8cce8f60 <qemu_global_mutex>, file=0x55ea8c531a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=2353) at util/qemu-thread-posix.c:66
#3  0x000055ea8c16d39e in qemu_mutex_lock_iothread_impl (file=file@entry=0x55ea8c531a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=line@entry=2353) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055ea8c188408 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2353
#5  0x000055ea8c16d56e in qemu_kvm_cpu_thread_fn (arg=0x55ea8eb36600) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#6  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8eb59440) at util/qemu-thread-posix.c:502
#7  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 7 (Thread 0x7f21d2378700 (LWP 12303)):
#0  0x00007f21e85c18dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f21e85baaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55ea8cce8f60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055ea8c48c59d in qemu_mutex_lock_impl (mutex=0x55ea8cce8f60 <qemu_global_mutex>, file=0x55ea8c531a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=2353) at util/qemu-thread-posix.c:66
#3  0x000055ea8c16d39e in qemu_mutex_lock_iothread_impl (file=file@entry=0x55ea8c531a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=line@entry=2353) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055ea8c188408 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2353
#5  0x000055ea8c16d56e in qemu_kvm_cpu_thread_fn (arg=0x55ea8eb12250) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#6  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8eb35dc0) at util/qemu-thread-posix.c:502
#7  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 6 (Thread 0x7f21d2b79700 (LWP 12302)):
#0  0x00007f21e85c18dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f21e85baaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55ea8cce8f60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055ea8c48c59d in qemu_mutex_lock_impl (mutex=0x55ea8cce8f60 <qemu_global_mutex>, file=0x55ea8c526068 "/builddir/build/BUILD/qemu-4.1.0/exec.c", line=3301) at util/qemu-thread-posix.c:66
#3  0x000055ea8c16d39e in qemu_mutex_lock_iothread_impl (file=<optimized out>, line=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055ea8c1258f9 in prepare_mmio_access (mr=<optimized out>, mr=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3301
#5  0x000055ea8c126990 in flatview_write_continue (fv=0x7f21c82a0230, addr=4271993144, attrs=..., buf=0x7f21ed9ee028 "\200", len=4, addr1=<optimized out>, l=<optimized out>, mr=0x55ea8f3d0c20) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3332
#6  0x000055ea8c126b46 in flatview_write (fv=0x7f21c82a0230, addr=4271993144, attrs=..., buf=0x7f21ed9ee028 "\200", len=4) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3376
#7  0x000055ea8c12ad6f in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3466
#8  0x000055ea8c1884ca in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2298
#9  0x000055ea8c16d56e in qemu_kvm_cpu_thread_fn (arg=0x55ea8eac4730) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#10 0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8eae72b0) at util/qemu-thread-posix.c:502
#11 0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#12 0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 5 (Thread 0x7f21d337a700 (LWP 12301)):
#0  0x00007f21e82de211 in __GI___poll (fds=0x55ea8ea66ac0, nfds=5, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007f21ed0a29b6 in g_main_context_poll (priority=<optimized out>, n_fds=5, fds=0x55ea8ea66ac0, timeout=<optimized out>, context=0x55ea8ea77be0) at gmain.c:4203
#2  0x00007f21ed0a29b6 in g_main_context_iterate (context=0x55ea8ea77be0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3897
#3  0x00007f21ed0a2d72 in g_main_loop_run (loop=0x55ea8ea77d20) at gmain.c:4098
#4  0x000055ea8c26cb31 in iothread_run (opaque=0x55ea8e9de500) at iothread.c:82
#5  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8ea77d60) at util/qemu-thread-posix.c:502
#6  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#7  0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 4 (Thread 0x7f21e0868700 (LWP 12289)):
#0  0x00007f21e82de306 in __GI_ppoll (fds=0x7f21d8001fb0, nfds=2, timeout=<optimized out>, timeout@entry=0x7f21e0867640, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x000055ea8c488175 in ppoll (__ss=0x0, __timeout=0x7f21e0867640, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055ea8c488175 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:334
#3  0x000055ea8c48a1d4 in aio_poll (ctx=0x55ea8ea4e000, blocking=blocking@entry=true) at util/aio-posix.c:669
#4  0x000055ea8c26cb04 in iothread_run (opaque=0x55ea8e9f8760) at iothread.c:75
#5  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8ea4e4e0) at util/qemu-thread-posix.c:502
#6  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#7  0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 3 (Thread 0x7f21e1069700 (LWP 12288)):
#0  0x00007f21e85c18dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f21e85babc4 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55ea8ea4d680) at ../nptl/pthread_mutex_lock.c:115
#2  0x000055ea8c48c59d in qemu_mutex_lock_impl (mutex=0x55ea8ea4d680, file=0x55ea8c62efdf "util/async.c", line=510) at util/qemu-thread-posix.c:66
#3  0x000055ea8c4878e3 in thread_pool_completion_bh (opaque=0x7f21d4007840) at util/thread-pool.c:167
#4  0x000055ea8c486c26 in aio_bh_call (bh=0x7f21d4006370) at util/async.c:117
#5  0x000055ea8c486c26 in aio_bh_poll (ctx=ctx@entry=0x55ea8ea4d620) at util/async.c:117
#6  0x000055ea8c48a2bc in aio_poll (ctx=0x55ea8ea4d620, blocking=blocking@entry=true) at util/aio-posix.c:728
#7  0x000055ea8c26cb04 in iothread_run (opaque=0x55ea8ea3cc00) at iothread.c:75
#8  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8ea4db30) at util/qemu-thread-posix.c:502
#9  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#10 0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 2 (Thread 0x7f21e186a700 (LWP 12287)):
#0  0x00007f21e82e399d in syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x000055ea8c48ccdf in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at util/qemu-thread-posix.c:438
#2  0x000055ea8c48ccdf in qemu_event_wait (ev=ev@entry=0x55ea8cd1dec8 <rcu_call_ready_event>) at util/qemu-thread-posix.c:442
#3  0x000055ea8c49e862 in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:260
#4  0x000055ea8c48c4b4 in qemu_thread_start (args=0x55ea8e9445a0) at util/qemu-thread-posix.c:502
#5  0x00007f21e85b82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f21e82e9133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 1 (Thread 0x7f21ed9b8ec0 (LWP 12286)):
#0  0x00007f21e82de306 in __GI_ppoll (fds=0x55ea8ea63850, nfds=1, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x000055ea8c4881b9 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055ea8c4881b9 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055ea8c48a1d4 in aio_poll (ctx=0x55ea8ea3c4e0, blocking=blocking@entry=true) at util/aio-posix.c:669
#4  0x000055ea8c4874da in aio_wait_bh_oneshot (ctx=0x55ea8ea4d620, cb=<optimized out>, opaque=<optimized out>) at util/aio-wait.c:71
#5  0x000055ea8c3c36e8 in bdrv_attach_aio_context (new_context=0x55ea8ea4d620, bs=0x55ea8eb8d250) at block.c:5898
#6  0x000055ea8c3c36e8 in bdrv_set_aio_context_ignore (bs=0x55ea8eb8d250, new_context=new_context@entry=0x55ea8ea4d620, ignore=ignore@entry=0x7ffdf7bbe8d0) at block.c:5963
#7  0x000055ea8c3c37bc in bdrv_set_aio_context_ignore (bs=bs@entry=0x55ea8f6053a0, new_context=new_context@entry=0x55ea8ea4d620, ignore=ignore@entry=0x7ffdf7bbe8d0) at block.c:5945
#8  0x000055ea8c3c3b33 in bdrv_child_try_set_aio_context (bs=bs@entry=0x55ea8f6053a0, ctx=ctx@entry=0x55ea8ea4d620, ignore_child=ignore_child@entry=0x0, errp=errp@entry=0x7ffdf7bbe9b8) at block.c:6058
#9  0x000055ea8c3c522e in bdrv_try_set_aio_context (bs=bs@entry=0x55ea8f6053a0, ctx=ctx@entry=0x55ea8ea4d620, errp=errp@entry=0x7ffdf7bbe9b8) at block.c:6067
#10 0x000055ea8c26a211 in qmp_drive_mirror (arg=arg@entry=0x7ffdf7bbe9c0, errp=errp@entry=0x7ffdf7bbe9b8) at blockdev.c:3933
#11 0x000055ea8c380bb9 in qmp_marshal_drive_mirror (args=<optimized out>, ret=<optimized out>, errp=0x7ffdf7bbeab8) at qapi/qapi-commands-block-core.c:619
#12 0x000055ea8c43fecc in do_qmp_dispatch (errp=0x7ffdf7bbeab0, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55ea8cd1b7a0 <qmp_commands>) at qapi/qmp-dispatch.c:131
#13 0x000055ea8c43fecc in qmp_dispatch (cmds=0x55ea8cd1b7a0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:174
#14 0x000055ea8c3624f1 in monitor_qmp_dispatch (mon=0x55ea8ea77600, req=<optimized out>) at monitor/qmp.c:120
#15 0x000055ea8c362b3a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
#16 0x000055ea8c486c26 in aio_bh_call (bh=0x55ea8e9b4b20) at util/async.c:117
#17 0x000055ea8c486c26 in aio_bh_poll (ctx=ctx@entry=0x55ea8e9b36d0) at util/async.c:117
#18 0x000055ea8c48a064 in aio_dispatch (ctx=0x55ea8e9b36d0) at util/aio-posix.c:459
#19 0x000055ea8c486b02 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
#20 0x00007f21ed0a267d in g_main_dispatch (context=0x55ea8ea3d880) at gmain.c:3176
#21 0x00007f21ed0a267d in g_main_context_dispatch (context=context@entry=0x55ea8ea3d880) at gmain.c:3829
#22 0x000055ea8c489118 in glib_pollfds_poll () at util/main-loop.c:218
#23 0x000055ea8c489118 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:241
#24 0x000055ea8c489118 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:517
#25 0x000055ea8c272169 in main_loop () at vl.c:1809
#26 0x000055ea8c121fd3 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4506

Comment 1 aihua liang 2019-08-28 01:58:49 UTC
The reproduction rate of this bug is not 100%, so I set its priority to "medium".

Comment 4 John Ferlan 2019-09-12 17:54:07 UTC
If IOThreads weren't configured - is this still reproducible? 

IOW: trying to determine whether this is an IOThreads problem or an NBD problem.

FWIW: indicating "backend: gluster" while using file=/mnt/nfs/install.qcow2 in the src command is confusing

Comment 5 aihua liang 2019-09-16 07:50:18 UTC
(In reply to John Ferlan from comment #4)
> If IOThreads weren't configured - is this still reproducible? 

The issue does not exist when iothreads are not configured.
When iothreads are configured, both virtio_blk and virtio_scsi have this issue.

> 
> IOW: Trying to determine IOThreads or NBD type problem.
> 
Drive-mirror works OK to a localfs target (iothreads configured) via the command:
{ "execute": "drive-mirror", "arguments": { "device": "drive_image1","target": "/home/rhel810-64-virtio.qcow2", "sync": "full","format": "qcow2", "mode": "existing" } }

> FWIW: indicating "backend:gluster" and usage of file=/mnt/nfs/install.qcow
> in src is confusing

"backend: gluster (mounted)" means the backend is gluster; I mounted it via:
  "mount.glusterfs intel-5405-32-2.englab.nay.redhat.com:/aliang /mnt/nfs",
  then created the system disk and data disk images on it.
  Sorry for the confusion.

Comment 6 Sergio Lopez 2019-10-03 09:58:37 UTC
It looks like bdrv_try_set_aio_context() is called with the wrong context acquired. We have a patch upstream addressing this issue, but it has not been merged yet.

https://lists.gnu.org/archive/html/qemu-block/2019-09/msg00643.html
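
The rule being violated here is that each block device belongs to one AioContext (iothread), and synchronous waits on that context from the main loop must be done with the correct context acquired. The sketch below is a generic Python analogy of that ownership/dispatch pattern, not QEMU code: each "context" is a worker thread with its own job queue, and work on an object it owns is dispatched to it and then waited on (comparable to aio_wait_bh_oneshot() in the backtraces above):

```python
import queue
import threading

class Context:
    """Analogy of an AioContext: a thread that owns a job queue."""

    def __init__(self, name):
        self.name = name
        self.jobs = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        # Event loop: run jobs posted to this context, in this thread.
        while True:
            fn, done = self.jobs.get()
            if fn is None:
                break
            done["result"] = fn()
            done["event"].set()

    def run_in_context(self, fn):
        # Dispatch fn to the owning thread and block until it finishes.
        # The caller must not hold any lock the worker itself needs,
        # or this wait deadlocks -- the failure mode in this bug.
        done = {"event": threading.Event()}
        self.jobs.put((fn, done))
        done["event"].wait()
        return done["result"]

    def stop(self):
        self.jobs.put((None, None))
        self.thread.join()

iothread = Context("iothread0")
# The operation runs in the context's own thread, not the caller's.
assert iothread.run_in_context(lambda: threading.current_thread()) is iothread.thread
iothread.stop()
```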

Comment 8 John Ferlan 2019-11-13 16:45:07 UTC
Update for upstream posting:

https://lists.nongnu.org/archive/html/qemu-devel/2019-11/msg01657.html

Comment 9 aihua liang 2019-11-18 08:40:32 UTC
Hi, Sergio

   I hit this issue in my RHEL8.2.0 test, and its reproduction rate is 100%.
   
   The gdb info looks a little different from that in the description; can you help check whether it is the same issue? Thanks.

   (gdb) bt
#0  0x00007f4b71412306 in __GI_ppoll (fds=0x559b919c39b0, nfds=1, timeout=<optimized out>, 
    timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x0000559b9082a909 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>)
    at /usr/include/bits/poll2.h:77
#2  0x0000559b9082a909 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>)
    at util/qemu-timer.c:336
#3  0x0000559b9082c8c4 in aio_poll (ctx=0x559b9199a570, blocking=blocking@entry=true) at util/aio-posix.c:669
#4  0x0000559b907a815f in bdrv_drained_end (bs=bs@entry=0x559b91cbb510) at block/io.c:497
#5  0x0000559b90761a8b in bdrv_set_aio_context_ignore
    (bs=0x559b91cbb510, new_context=new_context@entry=0x559b919b01e0, ignore=ignore@entry=0x7ffccf8f9f60) at block.c:6019
#6  0x0000559b90761adc in bdrv_set_aio_context_ignore
    (bs=bs@entry=0x559b91b08200, new_context=new_context@entry=0x559b919b01e0, ignore=ignore@entry=0x7ffccf8f9f60)
    at block.c:5989
#7  0x0000559b90761e53 in bdrv_child_try_set_aio_context
    (bs=bs@entry=0x559b91b08200, ctx=ctx@entry=0x559b919b01e0, ignore_child=ignore_child@entry=0x0, errp=errp@entry=0x7ffccf8fa048) at block.c:6102
#8  0x0000559b9076346e in bdrv_try_set_aio_context
    (bs=bs@entry=0x559b91b08200, ctx=ctx@entry=0x559b919b01e0, errp=errp@entry=0x7ffccf8fa048) at block.c:6111
#9  0x0000559b90604f8e in qmp_drive_mirror (arg=arg@entry=0x7ffccf8fa050, errp=errp@entry=0x7ffccf8fa048) at blockdev.c:3996
#10 0x0000559b9071e6d9 in qmp_marshal_drive_mirror (args=<optimized out>, ret=<optimized out>, errp=0x7ffccf8fa148)
    at qapi/qapi-commands-block-core.c:619
#11 0x0000559b907e198c in do_qmp_dispatch
    (errp=0x7ffccf8fa140, allow_oob=<optimized out>, request=<optimized out>, cmds=0x559b910cdcc0 <qmp_commands>)
    at qapi/qmp-dispatch.c:132
#12 0x0000559b907e198c in qmp_dispatch
    (cmds=0x559b910cdcc0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
#13 0x0000559b90700141 in monitor_qmp_dispatch (mon=0x559b919bb340, req=<optimized out>) at monitor/qmp.c:120
#14 0x0000559b9070078a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
#15 0x0000559b90829366 in aio_bh_call (bh=0x559b91911c60) at util/async.c:117
#16 0x0000559b90829366 in aio_bh_poll (ctx=ctx@entry=0x559b91910840) at util/async.c:117
#17 0x0000559b9082c754 in aio_dispatch (ctx=0x559b91910840) at util/aio-posix.c:459
#18 0x0000559b90829242 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>)
    at util/async.c:260
#19 0x00007f4b75bda67d in g_main_dispatch (context=0x559b9199b9c0) at gmain.c:3176
#20 0x00007f4b75bda67d in g_main_context_dispatch (context=context@entry=0x559b9199b9c0) at gmain.c:3829
#21 0x0000559b9082b808 in glib_pollfds_poll () at util/main-loop.c:219
#22 0x0000559b9082b808 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
#23 0x0000559b9082b808 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
#24 0x0000559b9060d201 in main_loop () at vl.c:1828
#25 0x0000559b904b9b82 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4504

Test Env:
   kernel version: 4.18.0-147.el8.x86_64
   qemu-kvm version: qemu-kvm-4.2.0-0.module+el8.2.0+4714+8670762e.x86_64

Reproduce Rate:
   100%

Test steps:
  1. Create an empty disk on the dst, start the guest with it, and expose it:
     #qemu-img create -f qcow2 /home/aliang/mirror.qcow2
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -m 7168  \
    -smp 4,maxcpus=4,cores=2,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20191118-011823-gEG3j1mt,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20191118-011823-gEG3j1mt,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id4p8G4l \
    -chardev socket,server,id=chardev_serial0,path=/var/tmp/serial-serial0-20191118-011823-gEG3j1mt,nowait \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20191118-011823-gEG3j1mt,path=/var/tmp/seabios-20191118-011823-gEG3j1mt,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20191118-011823-gEG3j1mt,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/aliang/mirror.qcow2 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device virtio-scsi-pci,bus=pcie.0-root-port-3,addr=0x0,id=scsi0,iothread=iothread0 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:4f:f4:e5:bd:67,id=idkQvhgf,netdev=idnMcj5J,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idnMcj5J,vhost=on \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :1  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -monitor stdio \
    -incoming tcp:0:5000 \
   
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet","data": { "host": "10.73.224.68", "port": "3333" } } } }
{"return": {}}
{ "execute": "nbd-server-add", "arguments": { "device": "drive_image1","writable": true } }
{"return": {}}

  2. In src, start guest with qemu cmds:
      /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -m 7168  \
    -smp 4,maxcpus=4,cores=2,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20191118-011823-gEG3j1ms,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20191118-011823-gEG3j1mt,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id4p8G4l \
    -chardev socket,server,id=chardev_serial0,path=/var/tmp/serial-serial0-20191118-011823-gEG3j1mt,nowait \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20191118-011823-gEG3j1mt,path=/var/tmp/seabios-20191118-011823-gEG3j1mt,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20191118-011823-gEG3j1mt,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel820-64-virtio.qcow2 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device virtio-scsi-pci,id=scsi0,bus=pcie.0-root-port-3,addr=0x0,iothread=iothread0 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:4f:f4:e5:bd:67,id=idkQvhgf,netdev=idnMcj5J,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idnMcj5J,vhost=on \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -monitor stdio \

  3. Do mirror from src to dst.
     { "execute": "drive-mirror", "arguments": { "device": "drive_image1","target": "nbd://10.73.224.68:3333/drive_image1", "sync": "full","format": "raw", "mode": "existing" } }

  After step 3, the src qemu hangs.
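Because the failure mode is the source QEMU hanging after drive-mirror, wrapping the QMP exchange in a socket timeout makes the hang detectable from a test script. A rough sketch, assuming a QMP socket already past the capabilities handshake — the `qmp_send` helper is illustrative, not part of any QEMU tooling:

```python
import json
import socket

def qmp_send(sock, command, timeout=10.0):
    """Send one QMP command and wait for the reply; socket.timeout is
    raised if the monitor stops responding, as happens in this bug."""
    sock.settimeout(timeout)
    sock.sendall((json.dumps(command) + "\r\n").encode())
    return json.loads(sock.recv(65536).decode())

# The drive-mirror command from step 3:
mirror = {"execute": "drive-mirror",
          "arguments": {"device": "drive_image1",
                        "target": "nbd://10.73.224.68:3333/drive_image1",
                        "sync": "full", "format": "raw",
                        "mode": "existing"}}
```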
 
 Additional info: 
   Both virtio_blk+dataplane+NBD and virtio_scsi+dataplane+NBD hit this issue.
   When dataplane is disabled, it works OK.
   When mirroring to an image on a local filesystem, it works OK.

Comment 10 Sergio Lopez 2019-11-18 09:49:34 UTC
Hi,

Looking at the backtrace, it looks like a slightly different issue that should be fixed by the same patch series.

Thanks,
Sergio.

Comment 11 aihua liang 2019-11-18 10:57:20 UTC
(In reply to Sergio Lopez from comment #10)
> Hi,
> 
> Looking at the backtrace, it looks like a slightly different issue that
> should be fixed by the same patch series.
> 
> Thanks,
> Sergio.

Hi, Sergio
 
  The new issue blocks all my storage_vm_migration tests, so it has high priority.

  Filed a new bug, bz#1773517, to track the new issue, leaving the original one to cover a different test scenario.

  As the new issue can be fixed by the same patch series, Sergio, can you help update the patch info in bz#1773517?


Thanks,
aliang

Comment 30 Ademar Reis 2020-02-05 23:04:08 UTC
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks

Comment 35 aihua liang 2020-02-19 07:04:41 UTC
Verified on qemu-kvm-4.2.0-10.module+el8.2.0+5740+c3dff59e: the issue has been resolved, so the bug's status is set to "Verified".

Comment 37 errata-xmlrpc 2020-05-05 09:49:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

