Bug 1634219 - src qemu hangs with data plane enabled when shutting down the src guest after mirroring to an NBD target
Summary: src qemu hangs with data plane enabled when shutting down the src guest after mirroring to an NBD target
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Sergio Lopez
QA Contact: aihua liang
URL:
Whiteboard:
Duplicates: 1503437 1539530 1602264
Depends On:
Blocks: 1503437 1649160
 
Reported: 2018-09-29 03:18 UTC by CongLi
Modified: 2019-11-06 07:12 UTC
CC List: 14 users

Fixed In Version: qemu-kvm-4.1.0-10.module+el8.1.0+4234+33aa4f57
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-06 07:12:03 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
trace log of src qemu (694.45 KB, text/plain), 2018-09-29 03:18 UTC, CongLi


Links
Red Hat Product Errata RHBA-2019:3723 (last updated 2019-11-06 07:12:47 UTC)

Description CongLi 2018-09-29 03:18:04 UTC
Created attachment 1488263 [details]
trace log of src qemu

Description of problem:
The src qemu hangs with data plane enabled when shutting down the src guest after the mirror to an NBD target has finished.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.12.0-18.el7.x86_64

How reproducible:
80%

Steps to Reproduce:
1. Boot the dst guest with an empty disk and data plane enabled.
# qemu-img create -f qcow2 mirror.qcow2 20G
qemu command line:
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,iothread=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/root/mirror.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,werror=stop,rerror=stop \


2. Export the empty disk over NBD.
{ "execute": "qmp_capabilities" }
{ "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet","data": { "host":"127.0.0.1", "port": "9000" } } } }
{"execute":"nbd-server-add","arguments":{"device":"drive_image1", "writable": true}}

3. Boot the src guest with data plane enabled, using a gluster backend.
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,iothread=iothread0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=gluster://10.73.196.67/aliang/coli/rhel76-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,werror=stop,rerror=stop \

4. Mirror the src image to the dst NBD export.
{ "execute": "qmp_capabilities" }
{ "execute": "drive-mirror", "arguments": { "device": "drive_image1", "target": "nbd://127.0.0.1:9000/drive_image1", "sync": "full", "format": "raw", "mode": "existing" } }

5. After the mirror on src completes and the 'BLOCK_JOB_READY' event is received in QMP, shut down the src guest.

Actual results:
After step 5, the src qemu hangs.

Expected results:
After step 5, the src qemu should quit successfully, as it does without data plane.

Additional info:
1. No hang occurs without data plane enabled.
2. The trace log of the src qemu is attached (captured starting from the src guest shutdown).
3. A minimal QMP sketch that automates steps 4-5 is included below.
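
Editorial note: the following Python sketch (not part of the original report) drives steps 4-5 over QMP. It assumes the src QEMU exposes a QMP monitor on a UNIX socket at the path below (the path is hypothetical); the device name and NBD target are taken from the steps above, and the dst NBD export from steps 1-2 must already be running.

#!/usr/bin/env python3
# Minimal QMP driver for steps 4-5 (sketch only; the socket path is hypothetical).
import json
import socket

QMP_SOCK = "/var/tmp/qmp-src.sock"               # hypothetical src QMP monitor socket
DEVICE = "drive_image1"                          # drive id from the command line above
TARGET = "nbd://127.0.0.1:9000/drive_image1"     # dst NBD export from step 2

sock = socket.socket(socket.AF_UNIX)
sock.connect(QMP_SOCK)
f = sock.makefile("r")

def send(execute, arguments=None):
    msg = {"execute": execute}
    if arguments is not None:
        msg["arguments"] = arguments
    sock.sendall((json.dumps(msg) + "\n").encode())

def recv():
    return json.loads(f.readline())

def command(execute, arguments=None):
    send(execute, arguments)
    while True:                                  # skip asynchronous events until the reply
        reply = recv()
        if "return" in reply or "error" in reply:
            return reply

recv()                                           # QMP greeting banner
command("qmp_capabilities")

# Step 4: start the mirror to the dst NBD export.
command("drive-mirror", {"device": DEVICE, "target": TARGET,
                         "sync": "full", "format": "raw", "mode": "existing"})

# Step 5: wait for BLOCK_JOB_READY, then ask the src guest to quit.
while True:
    msg = recv()
    if msg.get("event") == "BLOCK_JOB_READY":
        break
send("quit")
# On a fixed build the process exits here; on affected builds it hangs in the
# exit path (job_cancel_sync_all -> nbd_client_close), as the backtraces below show.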

Comment 2 Gu Nini 2018-09-29 05:04:19 UTC
We already have bz1503437, which was also reported by lolyu. Please check the gdb info with 'gdb -batch -ex bt -p PID' to decide whether it is the same issue.

Please note that the test scenario in this bz is a negative one, since a full storage VM migration involves further steps. Please refer to the following case for details:
https://polarion.engineering.redhat.com/polarion/#/project/RedHatEnterpriseLinux7/workitem?id=RHEL7-62535

Comment 3 CongLi 2018-09-29 05:29:53 UTC
(In reply to Gu Nini from comment #2)
> We already have bz1503437, which was also reported by lolyu. Please check
> the gdb info with 'gdb -batch -ex bt -p PID' to decide whether it is the
> same issue.

(gdb) bt
#0  0x00007fcaffbca2cf in __GI_ppoll (fds=0x563c33179500, nfds=1, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0)
    at ../sysdeps/unix/sysv/linux/ppoll.c:56
#1  0x0000563c2fcfc85b in qemu_poll_ns (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x0000563c2fcfc85b in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x0000563c2fcfe5d7 in aio_poll (ctx=0x563c33001b80, blocking=blocking@entry=true) at util/aio-posix.c:645
#4  0x0000563c2fc8179a in nbd_client_close (bs=0x563c33bd2800) at block/nbd-client.c:62
#5  0x0000563c2fc8179a in nbd_client_close (bs=0x563c33bd2800) at block/nbd-client.c:961
#6  0x0000563c2fc7f0ca in nbd_close (bs=<optimized out>) at block/nbd.c:491
#7  0x0000563c2fc28142 in bdrv_unref (bs=0x563c33bd2800) at block.c:3358
#8  0x0000563c2fc28142 in bdrv_unref (bs=0x563c33bd2800) at block.c:3542
#9  0x0000563c2fc28142 in bdrv_unref (bs=0x563c33bd2800) at block.c:4598
#10 0x0000563c2fc2816f in bdrv_unref (bs=0x563c33198800) at block.c:3365
#11 0x0000563c2fc2816f in bdrv_unref (bs=0x563c33198800) at block.c:3542
#12 0x0000563c2fc2816f in bdrv_unref (bs=0x563c33198800) at block.c:4598
#13 0x0000563c2fc2b774 in block_job_remove_all_bdrv (job=job@entry=0x563c331918c0) at blockjob.c:200
#14 0x0000563c2fc71bcd in mirror_exit_common (job=0x563c331918c0) at block/mirror.c:577
#15 0x0000563c2fc2d682 in job_do_finalize (job=0x563c331918c0) at job.c:766
#16 0x0000563c2fc2d682 in job_do_finalize (txn=<optimized out>, fn=<optimized out>) at job.c:146
#17 0x0000563c2fc2d682 in job_do_finalize (job=0x563c331918c0) at job.c:783
#18 0x0000563c2fc2d940 in job_exit (opaque=0x563c331918c0) at job.c:869
#19 0x0000563c2fcfb3c1 in aio_bh_poll (bh=0x563c34c3d1a0) at util/async.c:90
#20 0x0000563c2fcfb3c1 in aio_bh_poll (ctx=ctx@entry=0x563c33001b80) at util/async.c:118
#21 0x0000563c2fcfe984 in aio_poll (ctx=0x563c33001b80, blocking=blocking@entry=true) at util/aio-posix.c:704
#22 0x0000563c2fc2d457 in job_finish_sync (job=job@entry=0x563c331918c0, finish=finish@entry=
    0x563c2fc2d9f0 <job_cancel_err>, errp=errp@entry=0x0) at job.c:989
#23 0x0000563c2fc2da45 in job_cancel_sync_all (job=0x563c331918c0) at job.c:936
#24 0x0000563c2fc2da45 in job_cancel_sync_all () at job.c:947
#25 0x0000563c2f9a0896 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4775

> 
> Please note that the test scenario in this bz is a negative one, since a
> full storage VM migration involves further steps. Please refer to the
> following case for details:
> https://polarion.engineering.redhat.com/polarion/#/project/
> RedHatEnterpriseLinux7/workitem?id=RHEL7-62535

Yes, I hit this issue while verifying BZ1503480, so I did not perform the subsequent steps during that verification.

Thanks.

Comment 4 Gu Nini 2018-09-30 09:19:04 UTC
I could reproduce the bug on a local filesystem, so it is not necessarily related to glusterfs; this should be the same issue as the one in https://bugzilla.redhat.com/show_bug.cgi?id=1503437#c10.


# gdb -batch -ex bt -p 23250
[New LWP 23289]
[New LWP 23275]
[New LWP 23274]
[New LWP 23273]
[New LWP 23271]
[New LWP 23270]
[New LWP 23252]
[New LWP 23251]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f059184e2cf in ppoll () from /lib64/libc.so.6
#0  0x00007f059184e2cf in ppoll () at /lib64/libc.so.6
#1  0x000055e64848085b in qemu_poll_ns (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055e64848085b in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055e6484825d7 in aio_poll (ctx=0x55e649c96500, blocking=blocking@entry=true) at util/aio-posix.c:645
#4  0x000055e64840579a in nbd_client_close (bs=0x55e649e84000) at block/nbd-client.c:62
#5  0x000055e64840579a in nbd_client_close (bs=0x55e649e84000) at block/nbd-client.c:961
#6  0x000055e6484030ca in nbd_close (bs=<optimized out>) at block/nbd.c:491
#7  0x000055e6483ac142 in bdrv_unref (bs=0x55e649e84000) at block.c:3358
#8  0x000055e6483ac142 in bdrv_unref (bs=0x55e649e84000) at block.c:3542
#9  0x000055e6483ac142 in bdrv_unref (bs=0x55e649e84000) at block.c:4598
#10 0x000055e6483ac16f in bdrv_unref (bs=0x55e649cd2800) at block.c:3365
#11 0x000055e6483ac16f in bdrv_unref (bs=0x55e649cd2800) at block.c:3542
#12 0x000055e6483ac16f in bdrv_unref (bs=0x55e649cd2800) at block.c:4598
#13 0x000055e6483af774 in block_job_remove_all_bdrv (job=job@entry=0x55e649ccb8c0) at blockjob.c:200
#14 0x000055e6483f5bcd in mirror_exit_common (job=0x55e649ccb8c0) at block/mirror.c:577
#15 0x000055e6483b1682 in job_do_finalize (job=0x55e649ccb8c0) at job.c:766
#16 0x000055e6483b1682 in job_do_finalize (txn=<optimized out>, fn=<optimized out>) at job.c:146
#17 0x000055e6483b1682 in job_do_finalize (job=0x55e649ccb8c0) at job.c:783
#18 0x000055e6483b1940 in job_exit (opaque=0x55e649ccb8c0) at job.c:869
#19 0x000055e64847f3c1 in aio_bh_poll (bh=0x55e64bbd7830) at util/async.c:90
#20 0x000055e64847f3c1 in aio_bh_poll (ctx=ctx@entry=0x55e649c96500) at util/async.c:118
#21 0x000055e648482984 in aio_poll (ctx=0x55e649c96500, blocking=blocking@entry=true) at util/aio-posix.c:704
#22 0x000055e6483b1457 in job_finish_sync (job=job@entry=0x55e649ccb8c0, finish=finish@entry=0x55e6483b19f0 <job_cancel_err>, errp=errp@entry=0x0) at job.c:989
#23 0x000055e6483b1a45 in job_cancel_sync_all (job=0x55e649ccb8c0) at job.c:936
#24 0x000055e6483b1a45 in job_cancel_sync_all () at job.c:947
#25 0x000055e648124896 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4775
[root@intel-e52650-16-2 home]#

Comment 5 Kevin Wolf 2018-12-12 15:46:43 UTC
*** Bug 1503437 has been marked as a duplicate of this bug. ***

Comment 6 Kevin Wolf 2018-12-12 16:05:35 UTC
This looks like an NBD client bug, probably a missing bdrv_wakeup() call (or a direct aio_wait_kick()) somewhere, so the main thread doesn't notice when another thread completes the shutdown request.

Another option could be double AioContext locking somewhere in the call path, which would prevent nbd_read_reply_entry() from running and completing the shutdown, though a quick look did not reveal any obvious double locking.

If we had backtraces of all threads, these cases could be distinguished (is the iothread idle, or waiting for the lock?).

Comment 9 Kevin Wolf 2018-12-13 10:08:02 UTC
We got a new backtrace in bug 1602264 that looks very similar. It shows that the iothreads are not waiting for a lock but are idle. This means we are probably indeed missing a notification (aio_wait_kick) in the NBD code.
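
Editorial note: to illustrate the class of bug described in comments 6 and 9, here is a small Python analogy (not QEMU code; the function names in the comments are only stand-ins). The "main loop" blocks in a poll waiting to be notified; if the thread that completes the work never kicks the notifier, the wait never returns even though the condition it is waiting for has already become true.

import os
import select
import threading
import time

def demo(kick, timeout=1.0):
    """Return True if the 'main loop' saw the completion, False if it 'hung'."""
    rfd, wfd = os.pipe()              # stands in for the AioContext event notifier
    state = {"done": False}           # the condition the main loop is waiting on

    def iothread():
        time.sleep(0.1)
        state["done"] = True          # the work completes in another thread...
        if kick:
            os.write(wfd, b"\x01")    # ...and this plays the role of aio_wait_kick()

    t = threading.Thread(target=iothread)
    t.start()
    # aio_poll() analogue: block until the notifier is written to.  Without the
    # kick we sit here for the whole timeout (in qemu, indefinitely) even though
    # the work finished long ago, which is the shape of the backtraces above.
    ready, _, _ = select.select([rfd], [], [], timeout)
    t.join()
    os.close(rfd)
    os.close(wfd)
    return bool(ready) and state["done"]

print("with kick:   ", demo(kick=True))    # True, returns in about 0.1s
print("without kick:", demo(kick=False))   # False, only after the 1s timeout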

Comment 10 aihua liang 2018-12-14 03:03:11 UTC
Tested on qemu-kvm-rhev-2.12.0-18.el7_6.3.x86_64 and hit this issue as well:
 Gdb info:
  [root@ibm-x3250m4-05 images]#  gdb -batch -ex bt -p 29682
[New LWP 29820]
[New LWP 29819]
[New LWP 29818]
[New LWP 29817]
[New LWP 29816]
[New LWP 29814]
[New LWP 29813]
[New LWP 29812]
[New LWP 29811]
[New LWP 29810]
[New LWP 29809]
[New LWP 29808]
[New LWP 29807]
[New LWP 29806]
[New LWP 29803]
[New LWP 29799]
[New LWP 29798]
[New LWP 29797]
[New LWP 29796]
[New LWP 29795]
[New LWP 29794]
[New LWP 29793]
[New LWP 29790]
[New LWP 29788]
[New LWP 29787]
[New LWP 29786]
[New LWP 29715]
[New LWP 29714]
[New LWP 29709]
[New LWP 29708]
[New LWP 29707]
[New LWP 29706]
[New LWP 29705]
[New LWP 29704]
[New LWP 29703]
[New LWP 29702]
[New LWP 29684]
[New LWP 29683]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f379a88d2cf in ppoll () from /lib64/libc.so.6
#0  0x00007f379a88d2cf in ppoll () at /lib64/libc.so.6
#1  0x000055de7f6eb40b in qemu_poll_ns (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055de7f6eb40b in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055de7f6ed187 in aio_poll (ctx=0x55de80d197c0, blocking=blocking@entry=true) at util/aio-posix.c:645
#4  0x000055de7f6702fa in nbd_client_close (bs=0x55de81842800) at block/nbd-client.c:62
#5  0x000055de7f6702fa in nbd_client_close (bs=0x55de81842800) at block/nbd-client.c:961
#6  0x000055de7f66dc2a in nbd_close (bs=<optimized out>) at block/nbd.c:491
#7  0x000055de7f616c82 in bdrv_unref (bs=0x55de81842800) at block.c:3392
#8  0x000055de7f616c82 in bdrv_unref (bs=0x55de81842800) at block.c:3576
#9  0x000055de7f616c82 in bdrv_unref (bs=0x55de81842800) at block.c:4654
#10 0x000055de7f616caf in bdrv_unref (bs=0x55de80df8800) at block.c:3399
#11 0x000055de7f616caf in bdrv_unref (bs=0x55de80df8800) at block.c:3576
#12 0x000055de7f616caf in bdrv_unref (bs=0x55de80df8800) at block.c:4654
#13 0x000055de7f6587e1 in blk_remove_bs (blk=blk@entry=0x55de83268dc0) at block/block-backend.c:784
#14 0x000055de7f658a5f in blk_unref (blk=0x55de83268dc0) at block/block-backend.c:402
#15 0x000055de7f658a5f in blk_unref (blk=0x55de83268dc0) at block/block-backend.c:458
#16 0x000055de7f61bf58 in job_finish_sync (job=job@entry=0x55de832698c0, finish=finish@entry=0x55de7f61c550 <job_cancel_err>, errp=errp@entry=0x0) at job.c:989
#17 0x000055de7f61c5a5 in job_cancel_sync_all (job=0x55de832698c0) at job.c:936
#18 0x000055de7f61c5a5 in job_cancel_sync_all () at job.c:947
#19 0x000055de7f38eb26 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4775


Reproduction steps:
 1. On dst, start the guest with these qemu command lines:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20181107-005924-PkIxnG9p,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20181107-005924-PkIxnG9p,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idkp9HYI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20181107-005924-PkIxnG9p,path=/var/tmp/seabios-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20181107-005924-PkIxnG9p,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -device virtio-net-pci,mac=9a:44:45:46:47:48,id=iddDGLIi,vectors=4,netdev=idDdrbRp,bus=pci.0,addr=0x7  \
    -netdev tap,id=idDdrbRp,vhost=on \
    -m 2048  \
    -smp 10,maxcpus=10,cores=5,threads=1,sockets=2  \
    -cpu SandyBridge \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
     -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
    -drive if=none,id=drive_image1,aio=threads,cache=none,format=qcow2,file=/home/mirror.qcow2 \

  2. On dst, start the NBD server and expose the empty system disk:
     { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet","data": { "host": "10.66.144.34", "port": "3333"}}}}
     { "execute": "nbd-server-add", "arguments": { "device": "drive_image1","writable": true } }

  3. On src, start the guest with the gluster backend:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20181107-005924-PkIxnG9p,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20181107-005924-PkIxnG9p,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idkp9HYI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20181107-005924-PkIxnG9p,path=/var/tmp/seabios-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20181107-005924-PkIxnG9p,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -device virtio-net-pci,mac=9a:44:45:46:47:48,id=iddDGLIi,vectors=4,netdev=idDdrbRp,bus=pci.0,addr=0x7  \
    -netdev tap,id=idDdrbRp,vhost=on \
    -m 2048  \
    -smp 10,maxcpus=10,cores=5,threads=1,sockets=2  \
    -cpu SandyBridge \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
     -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
    -drive if=none,id=drive_image1,aio=threads,cache=none,format=qcow2,file=gluster://intel-e52650-16-4.englab.nay.redhat.com/aliang/rhel76-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \

  4. On src, mirror the block device to dst:
      { "execute": "drive-mirror", "arguments": { "device": "drive_image1", "target": "nbd://10.66.144.34:3333/drive_image1", "sync": "full", "format": "raw", "mode": "existing" } }

  5. On src, after the mirror reaches ready status, quit the VM:
     (qemu) quit
   ---> qemu hangs, with this pstack info:
 # pstack 29682
Thread 25 (Thread 0x7f37930c7700 (LWP 29683)):
#0  0x00007f379a8921c9 in syscall () at /lib64/libc.so.6
#1  0x000055de7f6ef410 in qemu_event_wait (val=<optimized out>, f=<optimized out>) at /usr/src/debug/qemu-2.12.0/include/qemu/futex.h:29
#2  0x000055de7f6ef410 in qemu_event_wait (ev=ev@entry=0x55de80376be8 <rcu_call_ready_event>) at util/qemu-thread-posix.c:445
#3  0x000055de7f6ff93e in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:261
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 24 (Thread 0x7f37928c6700 (LWP 29684)):
#0  0x00007f379a88d2cf in ppoll () at /lib64/libc.so.6
#1  0x000055de7f6eb40b in qemu_poll_ns (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055de7f6eb40b in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055de7f6ed187 in aio_poll (ctx=0x55de80d19900, blocking=blocking@entry=true) at util/aio-posix.c:645
#4  0x000055de7f4bcd5e in iothread_run (opaque=0x55de80d37ce0) at iothread.c:64
#5  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#6  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 23 (Thread 0x7f37920c5700 (LWP 29702)):
#0  0x00007f379a85ee2d in nanosleep () at /lib64/libc.so.6
#1  0x00007f379a85ecc4 in sleep () at /lib64/libc.so.6
#2  0x00007f379e48220d in pool_sweeper () at /lib64/libglusterfs.so.0
#3  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 22 (Thread 0x7f3790dc4700 (LWP 29703)):
#0  0x00007f379ab72d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f379e494e68 in syncenv_task () at /lib64/libglusterfs.so.0
#2  0x00007f379e495d30 in syncenv_processor () at /lib64/libglusterfs.so.0
#3  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 21 (Thread 0x7f37905c3700 (LWP 29704)):
#0  0x00007f379ab72d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f379e494e68 in syncenv_task () at /lib64/libglusterfs.so.0
#2  0x00007f379e495d30 in syncenv_processor () at /lib64/libglusterfs.so.0
#3  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 20 (Thread 0x7f378f182700 (LWP 29705)):
#0  0x00007f379ab72d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f379e494e68 in syncenv_task () at /lib64/libglusterfs.so.0
#2  0x00007f379e495d30 in syncenv_processor () at /lib64/libglusterfs.so.0
#3  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 19 (Thread 0x7f378e981700 (LWP 29706)):
#0  0x00007f379ab72d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f379e494e68 in syncenv_task () at /lib64/libglusterfs.so.0
#2  0x00007f379e495d30 in syncenv_processor () at /lib64/libglusterfs.so.0
#3  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 18 (Thread 0x7f3787c56700 (LWP 29707)):
#0  0x00007f379ab75e3d in nanosleep () at /lib64/libpthread.so.0
#1  0x00007f379e4679d6 in gf_timer_proc () at /lib64/libglusterfs.so.0
#2  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#3  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 17 (Thread 0x7f3787252700 (LWP 29708)):
#0  0x00007f379ab6ff47 in pthread_join () at /lib64/libpthread.so.0
#1  0x00007f379e4b7af8 in event_dispatch_epoll () at /lib64/libglusterfs.so.0
#2  0x00007f379e735634 in glfs_poller () at /lib64/libgfapi.so.0
#3  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#4  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 16 (Thread 0x7f3786a51700 (LWP 29709)):
#0  0x00007f379a898483 in epoll_wait () at /lib64/libc.so.6
#1  0x00007f379e4b7392 in event_dispatch_epoll_worker () at /lib64/libglusterfs.so.0
#2  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#3  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 15 (Thread 0x7f3784222700 (LWP 29715)):
#0  0x00007f379a898483 in epoll_wait () at /lib64/libc.so.6
#1  0x00007f379e4b7392 in event_dispatch_epoll_worker () at /lib64/libglusterfs.so.0
#2  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#3  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 14 (Thread 0x7f3783920700 (LWP 29786)):
#0  0x00007f379a88d2cf in ppoll () at /lib64/libc.so.6
#1  0x000055de7f6eb40b in qemu_poll_ns (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055de7f6eb40b in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055de7f6ed187 in aio_poll (ctx=0x55de80d19cc0, blocking=blocking@entry=true) at util/aio-posix.c:645
#4  0x000055de7f4bcd5e in iothread_run (opaque=0x55de80f0a8c0) at iothread.c:64
#5  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#6  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 13 (Thread 0x7f378311f700 (LWP 29787)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81af0000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81af0000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 12 (Thread 0x7f378291e700 (LWP 29788)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81b48000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81b48000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 11 (Thread 0x7f378211d700 (LWP 29790)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81b68000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81b68000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 10 (Thread 0x7f378191c700 (LWP 29793)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81b88000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81b88000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 9 (Thread 0x7f378111b700 (LWP 29794)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81baa000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81baa000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 8 (Thread 0x7f378091a700 (LWP 29795)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81bc6000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81bc6000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 7 (Thread 0x7f3780119700 (LWP 29796)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81be8000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81be8000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 6 (Thread 0x7f377f918700 (LWP 29797)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81c06000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81c06000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 5 (Thread 0x7f377f117700 (LWP 29798)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81c2c000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81c2c000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 4 (Thread 0x7f377e916700 (LWP 29799)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=<optimized out>, mutex=mutex@entry=0x55de7ff3d0e0 <qemu_global_mutex>, file=file@entry=0x55de7f783308 "/builddir/build/BUILD/qemu-2.12.0/cpus.c", line=line@entry=1176) at util/qemu-thread-posix.c:164
#2  0x000055de7f3d0d1f in qemu_wait_io_event (cpu=cpu@entry=0x55de81c4e000) at /usr/src/debug/qemu-2.12.0/cpus.c:1176
#3  0x000055de7f3d2420 in qemu_kvm_cpu_thread_fn (arg=0x55de81c4e000) at /usr/src/debug/qemu-2.12.0/cpus.c:1220
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 3 (Thread 0x7f36fc5ff700 (LWP 29803)):
#0  0x00007f379ab72965 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x000055de7f6eefe9 in qemu_cond_wait_impl (cond=cond@entry=0x55de80ce3cb0, mutex=mutex@entry=0x55de80ce3ce8, file=file@entry=0x55de7f858d07 "ui/vnc-jobs.c", line=line@entry=212) at util/qemu-thread-posix.c:164
#2  0x000055de7f609b1f in vnc_worker_thread_loop (queue=queue@entry=0x55de80ce3cb0) at ui/vnc-jobs.c:212
#3  0x000055de7f60a0e8 in vnc_worker_thread (arg=0x55de80ce3cb0) at ui/vnc-jobs.c:319
#4  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#5  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 2 (Thread 0x7f36fdefe700 (LWP 29806)):
#0  0x00007f379ab72d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1  0x00007f37846aad3c in iot_worker () at /usr/lib64/glusterfs/3.12.2/xlator/performance/io-threads.so
#2  0x00007f379ab6edd5 in start_thread () at /lib64/libpthread.so.0
#3  0x00007f379a897ead in clone () at /lib64/libc.so.6
Thread 1 (Thread 0x7f37b3f7edc0 (LWP 29682)):
#0  0x00007f379a88d2cf in ppoll () at /lib64/libc.so.6
#1  0x000055de7f6eb40b in qemu_poll_ns (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055de7f6eb40b in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055de7f6ed187 in aio_poll (ctx=0x55de80d197c0, blocking=blocking@entry=true) at util/aio-posix.c:645
#4  0x000055de7f6702fa in nbd_client_close (bs=0x55de81842800) at block/nbd-client.c:62
#5  0x000055de7f6702fa in nbd_client_close (bs=0x55de81842800) at block/nbd-client.c:961
#6  0x000055de7f66dc2a in nbd_close (bs=<optimized out>) at block/nbd.c:491
#7  0x000055de7f616c82 in bdrv_unref (bs=0x55de81842800) at block.c:3392
#8  0x000055de7f616c82 in bdrv_unref (bs=0x55de81842800) at block.c:3576
#9  0x000055de7f616c82 in bdrv_unref (bs=0x55de81842800) at block.c:4654
#10 0x000055de7f616caf in bdrv_unref (bs=0x55de80df8800) at block.c:3399
#11 0x000055de7f616caf in bdrv_unref (bs=0x55de80df8800) at block.c:3576
#12 0x000055de7f616caf in bdrv_unref (bs=0x55de80df8800) at block.c:4654
#13 0x000055de7f6587e1 in blk_remove_bs (blk=blk@entry=0x55de83268dc0) at block/block-backend.c:784
#14 0x000055de7f658a5f in blk_unref (blk=0x55de83268dc0) at block/block-backend.c:402
#15 0x000055de7f658a5f in blk_unref (blk=0x55de83268dc0) at block/block-backend.c:458
#16 0x000055de7f61bf58 in job_finish_sync (job=job@entry=0x55de832698c0, finish=finish@entry=0x55de7f61c550 <job_cancel_err>, errp=errp@entry=0x0) at job.c:989
#17 0x000055de7f61c5a5 in job_cancel_sync_all (job=0x55de832698c0) at job.c:936
#18 0x000055de7f61c5a5 in job_cancel_sync_all () at job.c:947
#19 0x000055de7f38eb26 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4775

Comment 12 Sergio Lopez 2019-09-02 07:58:40 UTC
*** Bug 1539530 has been marked as a duplicate of this bug. ***

Comment 13 Sergio Lopez 2019-09-03 11:30:37 UTC
*** Bug 1602264 has been marked as a duplicate of this bug. ***

Comment 15 aihua liang 2019-09-18 04:36:38 UTC
Tested on qemu-kvm-4.1.0-10.module+el8.1.0+4234+33aa4f57.x86_64; the issue no longer reproduces. Setting the bug's status to "Verified".

 Reproduction steps:
 1. On dst, start the guest with these qemu command lines:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20181107-005924-PkIxnG9p,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20181107-005924-PkIxnG9p,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idkp9HYI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20181107-005924-PkIxnG9p,path=/var/tmp/seabios-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20181107-005924-PkIxnG9p,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -device virtio-net-pci,mac=9a:44:45:46:47:48,id=iddDGLIi,vectors=4,netdev=idDdrbRp,bus=pci.0,addr=0x7  \
    -netdev tap,id=idDdrbRp,vhost=on \
    -m 2048  \
    -smp 10,maxcpus=10,cores=5,threads=1,sockets=2  \
    -cpu SandyBridge \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
     -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
    -drive if=none,id=drive_image1,aio=threads,cache=none,format=qcow2,file=/home/mirror.qcow2 \

  2. On dst, start the NBD server and expose the empty system disk:
     { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet","data": { "host": "10.73.224.68", "port": "3333"}}}}
     { "execute": "nbd-server-add", "arguments": { "device": "drive_image1","writable": true } }

  3. On src, start the guest with the gluster backend:
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20181107-005924-PkIxnG9o,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20181107-005924-PkIxnG9p,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idkp9HYI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20181107-005924-PkIxnG9p,path=/var/tmp/seabios-20181107-005924-PkIxnG9p,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20181107-005924-PkIxnG9p,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -device virtio-net-pci,mac=9a:44:45:46:47:48,id=iddDGLIi,vectors=4,netdev=idDdrbRp,bus=pci.0,addr=0x7  \
    -netdev tap,id=idDdrbRp,vhost=on \
    -m 2048  \
    -smp 10,maxcpus=10,cores=5,threads=1,sockets=2  \
    -cpu SandyBridge \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :1  \
     -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
    -drive if=none,id=drive_image1,aio=threads,cache=none,format=qcow2,file=gluster://intel-5405-32-2.englab.nay.redhat.com/aliang/rhel76-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \

  4. On src, mirror the block device to dst:
      { "execute": "drive-mirror", "arguments": { "device": "drive_image1", "target": "nbd://10.73.224.68:3333/drive_image1", "sync": "full", "format": "raw", "mode": "existing" } }

  5. On src, after the mirror reaches ready status, quit the VM:
     (qemu) quit

After step 5, the VM quits successfully without any coredump.

Comment 17 errata-xmlrpc 2019-11-06 07:12:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723

