Bug 1745606 - QEMU hangs when doing an incremental live backup in transaction mode without a bitmap
Summary: QEMU hangs when doing an incremental live backup in transaction mode without a bitmap
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Assignee: Sergio Lopez
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks: 1758964
 
Reported: 2019-08-26 13:30 UTC by aihua liang
Modified: 2020-05-05 09:50 UTC (History)
9 users

Fixed In Version: qemu-kvm-4.2.0-10.module+el8.2.0+5740+c3dff59e
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 09:49:40 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:2017 0 None None None 2020-05-05 09:50:56 UTC

Description aihua liang 2019-08-26 13:30:07 UTC
Description of problem:
  QEMU hangs when doing an incremental live backup in transaction mode without a bitmap.

Version-Release number of selected component (if applicable):
  kernel version: 4.18.0-134.el8.x86_64
  qemu-kvm version: qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64

How reproducible:
  100%

Steps to Reproduce:
1. Start the guest with the following QEMU command line:
   /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190820-032540-OesJUJdj,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190820-032540-OesJUJdj,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idbJPqrG \
    -chardev socket,id=chardev_serial0,server,path=/var/tmp/serial-serial0-20190820-032540-OesJUJdj,nowait \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20190820-032540-OesJUJdj,path=/var/tmp/seabios-20190820-032540-OesJUJdj,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190820-032540-OesJUJdj,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -drive id=drive_image1,if=none,snapshot=off,cache=none,format=qcow2,file=/mnt/nfs/rhel810-64-virtio.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pcie.0-root-port-3,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -drive id=drive_data1,if=none,snapshot=off,cache=none,format=qcow2,file=/mnt/nfs/data.qcow2 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,iothread=iothread1,bus=pcie.0-root-port-6,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:19:6a:3c:a6:a5,id=idq14C2Q,netdev=idHzG7Zk,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idHzG7Zk,vhost=on \
    -m 7168  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -monitor stdio \
    -device virtio-serial-pci,id=virtio-serial0,bus=pcie_extra_root_port_0,addr=0x0 \
    -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 \
    -device virtserialport,bus=virtio-serial0.0,chardev=qga0,id=qemu-ga0,name=org.qemu.guest_agent.0 \
    -qmp tcp:0:3000,server,nowait \

2. Do full backups and add dirty bitmaps in a single transaction:
    { "execute": "transaction",
      "arguments": { "actions": [
        { "type": "drive-backup",
          "data": { "device": "drive_image1", "target": "full_backup0.img", "sync": "full", "format": "qcow2" } },
        { "type": "block-dirty-bitmap-add",
          "data": { "node": "drive_image1", "name": "bitmap0" } },
        { "type": "drive-backup",
          "data": { "device": "drive_data1", "target": "full_backup1.img", "sync": "full", "format": "qcow2" } },
        { "type": "block-dirty-bitmap-add",
          "data": { "node": "drive_data1", "name": "bitmap0" } } ] } }
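The transaction above can also be issued programmatically over the QMP socket that step 1 exposes with -qmp tcp:0:3000. The following is a minimal Python sketch, not part of the original reproducer; the helper names (backup_and_bitmap_actions, transaction_cmd, send_qmp) are illustrative, and the framing assumes QMP's usual one-JSON-object-per-line convention.

```python
import json
import socket


def backup_and_bitmap_actions(node, target):
    """Build the two step-2 transaction actions for one drive:
    a full drive-backup plus a block-dirty-bitmap-add on the same node."""
    return [
        {"type": "drive-backup",
         "data": {"device": node, "target": target,
                  "sync": "full", "format": "qcow2"}},
        {"type": "block-dirty-bitmap-add",
         "data": {"node": node, "name": "bitmap0"}},
    ]


def transaction_cmd(actions):
    """Wrap a list of actions in a QMP 'transaction' command."""
    return {"execute": "transaction", "arguments": {"actions": actions}}


def send_qmp(cmd, host="127.0.0.1", port=3000, timeout=10):
    """Send one QMP command over the -qmp tcp:0:3000 socket and return
    the reply. The timeout guards against the hang described in this bug."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        f = sock.makefile("rw")
        f.readline()                                    # QMP greeting banner
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        f.readline()                                    # capabilities reply
        f.write(json.dumps(cmd) + "\n")
        f.flush()
        return json.loads(f.readline())

# Example (requires a running guest):
# send_qmp(transaction_cmd(
#     backup_and_bitmap_actions("drive_image1", "full_backup0.img")
#     + backup_and_bitmap_actions("drive_data1", "full_backup1.img")))
```

The pure helpers can be reused for step 4 by swapping in incremental actions; send_qmp is the only part that needs a live QEMU.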

3. Write test files in the guest and record their checksums:
   (guest)#dd if=/dev/urandom of=test bs=1M count=1000
          #md5sum test > sum1

          #mount /dev/vdb /mnt 
          #cd /mnt
          #dd if=/dev/urandom of=test bs=1M count=1000
          #md5sum test > sum1

4. Do incremental backups without specifying a bitmap:
    { "execute": "transaction",
      "arguments": { "actions": [
        { "type": "drive-backup",
          "data": { "device": "drive_image1", "target": "inc0.img", "sync": "incremental" } },
        { "type": "drive-backup",
          "data": { "device": "drive_data1", "target": "inc1.img", "sync": "incremental" } } ] } }
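On fixed builds this command fails cleanly (see comment 16); on affected builds QEMU deadlocks instead of replying. A management client can defend itself by validating the actions before sending them. The sketch below is an assumption on the client side, not QEMU code (the real check lives in QEMU's blockdev.c), and the helper names incremental_action and validate_actions are hypothetical.

```python
def incremental_action(device, target, bitmap=None):
    """Build one drive-backup transaction action with sync=incremental.
    QEMU requires a 'bitmap' name for this sync mode; the reproducer
    above deliberately omits it."""
    data = {"device": device, "target": target, "sync": "incremental"}
    if bitmap is not None:
        data["bitmap"] = bitmap
    return {"type": "drive-backup", "data": data}


def validate_actions(actions):
    """Client-side guard (an assumption, not part of QEMU): refuse to
    send incremental drive-backup actions that lack a bitmap, so an
    unfixed QEMU is never asked to execute them and cannot hang."""
    for action in actions:
        data = action.get("data", {})
        if (action.get("type") == "drive-backup"
                and data.get("sync") == "incremental"
                and "bitmap" not in data):
            raise ValueError("incremental backup of %r requires a bitmap"
                             % data.get("device"))
```

With this guard in place, the step-4 transaction is rejected locally with a clear error rather than triggering the server-side deadlock.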

Actual results:
 After step 4, QEMU hangs:
 (gdb) bt
#0  0x00007f2f60a32306 in __GI_ppoll (fds=0x55a540bf0fa0, nfds=1, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x000055a53e5ff1b9 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055a53e5ff1b9 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055a53e6011d4 in aio_poll (ctx=0x55a540bc9a20, blocking=blocking@entry=true) at util/aio-posix.c:669
#4  0x000055a53e580e4a in bdrv_flush (bs=bs@entry=0x55a540d188e0) at block/io.c:2698
#5  0x000055a53e53b31e in bdrv_close (bs=0x55a540d188e0) at block.c:4018
#6  0x000055a53e53b31e in bdrv_delete (bs=<optimized out>) at block.c:4264
#7  0x000055a53e53b31e in bdrv_unref (bs=0x55a540d188e0) at block.c:5598
#8  0x000055a53e3db010 in do_drive_backup (backup=backup@entry=0x55a541402cf0, txn=0x0, errp=errp@entry=0x7ffc77a590c0) at blockdev.c:3584
#9  0x000055a53e3db2b4 in drive_backup_prepare (common=0x55a541acaf30, errp=0x7ffc77a59128) at blockdev.c:1790
#10 0x000055a53e3df0e2 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x55a5419c0bf0, errp=errp@entry=0x7ffc77a59198) at blockdev.c:2289
#11 0x000055a53e505575 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffc77a59208) at qapi/qapi-commands-transaction.c:44
#12 0x000055a53e5b6ecc in do_qmp_dispatch (errp=0x7ffc77a59200, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55a53ee927a0 <qmp_commands>) at qapi/qmp-dispatch.c:131
#13 0x000055a53e5b6ecc in qmp_dispatch (cmds=0x55a53ee927a0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:174
#14 0x000055a53e4d94f1 in monitor_qmp_dispatch (mon=0x55a540c282b0, req=<optimized out>) at monitor/qmp.c:120
#15 0x000055a53e4d9b3a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
#16 0x000055a53e5fdc26 in aio_bh_call (bh=0x55a540b42b20) at util/async.c:117
#17 0x000055a53e5fdc26 in aio_bh_poll (ctx=ctx@entry=0x55a540b416d0) at util/async.c:117
#18 0x000055a53e601064 in aio_dispatch (ctx=0x55a540b416d0) at util/aio-posix.c:459
#19 0x000055a53e5fdb02 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
#20 0x00007f2f657f667d in g_main_dispatch (context=0x55a540bcae70) at gmain.c:3176
#21 0x00007f2f657f667d in g_main_context_dispatch (context=context@entry=0x55a540bcae70) at gmain.c:3829
#22 0x000055a53e600118 in glib_pollfds_poll () at util/main-loop.c:218
#23 0x000055a53e600118 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:241
#24 0x000055a53e600118 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:517
#25 0x000055a53e3e9169 in main_loop () at vl.c:1809
#26 0x000055a53e298fd3 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4506

Expected results:
 An error message should be reported if no bitmap is provided when doing an incremental live backup.

Additional info:
 Output of pstack 7076 (all threads):
Thread 10 (Thread 0x7f2f32dff700 (LWP 7132)):
#0  0x00007f2f60d1247c in futex_wait_cancelable (private=0, expected=0, futex_word=0x55a542206c28) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  0x00007f2f60d1247c in __pthread_cond_wait_common (abstime=0x0, mutex=0x55a542206c38, cond=0x55a542206c00) at pthread_cond_wait.c:502
#2  0x00007f2f60d1247c in __pthread_cond_wait (cond=0x55a542206c00, mutex=mutex@entry=0x55a542206c38) at pthread_cond_wait.c:655
#3  0x000055a53e60386d in qemu_cond_wait_impl (cond=<optimized out>, mutex=0x55a542206c38, file=0x55a53e77fc37 "ui/vnc-jobs.c", line=214) at util/qemu-thread-posix.c:161
#4  0x000055a53e52cd71 in vnc_worker_thread_loop (queue=queue@entry=0x55a542206c00) at ui/vnc-jobs.c:214
#5  0x000055a53e52d330 in vnc_worker_thread (arg=0x55a542206c00) at ui/vnc-jobs.c:324
#6  0x000055a53e6034b4 in qemu_thread_start (args=0x55a54124ae50) at util/qemu-thread-posix.c:502
#7  0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 9 (Thread 0x7f2f497fa700 (LWP 7093)):
#0  0x00007f2f60d158dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f2f60d0eaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55a53ee5ff60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055a53e60359d in qemu_mutex_lock_impl (mutex=0x55a53ee5ff60 <qemu_global_mutex>, file=0x55a53e69d068 "/builddir/build/BUILD/qemu-4.1.0/exec.c", line=3301) at util/qemu-thread-posix.c:66
#3  0x000055a53e2e439e in qemu_mutex_lock_iothread_impl (file=<optimized out>, line=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055a53e29c8f9 in prepare_mmio_access (mr=<optimized out>, mr=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3301
#5  0x000055a53e29d990 in flatview_write_continue (fv=0x7f2f382db7c0, addr=4271965456, attrs=..., buf=0x7f2f66139028 "", len=2, addr1=<optimized out>, l=<optimized out>, mr=0x55a5415d17d0) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3332
#6  0x000055a53e29db46 in flatview_write (fv=0x7f2f382db7c0, addr=4271965456, attrs=..., buf=0x7f2f66139028 "", len=2) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3376
#7  0x000055a53e2a1d6f in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3466
#8  0x000055a53e2ff4ca in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2298
#9  0x000055a53e2e456e in qemu_kvm_cpu_thread_fn (arg=0x55a540cd92d0) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#10 0x000055a53e6034b4 in qemu_thread_start (args=0x55a540cfc050) at util/qemu-thread-posix.c:502
#11 0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#12 0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 8 (Thread 0x7f2f49ffb700 (LWP 7092)):
#0  0x00007f2f60d158dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f2f60d0eaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55a53ee5ff60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055a53e60359d in qemu_mutex_lock_impl (mutex=0x55a53ee5ff60 <qemu_global_mutex>, file=0x55a53e69d068 "/builddir/build/BUILD/qemu-4.1.0/exec.c", line=3301) at util/qemu-thread-posix.c:66
#3  0x000055a53e2e439e in qemu_mutex_lock_iothread_impl (file=<optimized out>, line=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055a53e29c8f9 in prepare_mmio_access (mr=<optimized out>, mr=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3301
#5  0x000055a53e2a197f in flatview_read_continue (fv=0x7f2f382d9930, addr=1017, attrs=..., buf=<optimized out>, len=1, addr1=<optimized out>, l=<optimized out>, mr=0x55a541184700) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3396
#6  0x000055a53e2a1ba3 in flatview_read (fv=0x7f2f382d9930, addr=1017, attrs=..., buf=0x7f2f6613d000 "", len=1) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3436
#7  0x000055a53e2a1ccf in address_space_read_full (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3449
#8  0x000055a53e2ff544 in kvm_handle_io (count=1, size=1, direction=<optimized out>, data=<optimized out>, attrs=..., port=1017) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2042
#9  0x000055a53e2ff544 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2288
#10 0x000055a53e2e456e in qemu_kvm_cpu_thread_fn (arg=0x55a540cb5ab0) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#11 0x000055a53e6034b4 in qemu_thread_start (args=0x55a540cd8a90) at util/qemu-thread-posix.c:502
#12 0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#13 0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 7 (Thread 0x7f2f4a7fc700 (LWP 7091)):
#0  0x00007f2f60d158dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f2f60d0eaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55a53ee5ff60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055a53e60359d in qemu_mutex_lock_impl (mutex=0x55a53ee5ff60 <qemu_global_mutex>, file=0x55a53e6a8a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=2353) at util/qemu-thread-posix.c:66
#3  0x000055a53e2e439e in qemu_mutex_lock_iothread_impl (file=file@entry=0x55a53e6a8a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=line@entry=2353) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055a53e2ff408 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2353
#5  0x000055a53e2e456e in qemu_kvm_cpu_thread_fn (arg=0x55a540c91980) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#6  0x000055a53e6034b4 in qemu_thread_start (args=0x55a540cb5270) at util/qemu-thread-posix.c:502
#7  0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 6 (Thread 0x7f2f4affd700 (LWP 7090)):
#0  0x00007f2f60d158dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f2f60d0eaf9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55a53ee5ff60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055a53e60359d in qemu_mutex_lock_impl (mutex=0x55a53ee5ff60 <qemu_global_mutex>, file=0x55a53e6a8a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=2353) at util/qemu-thread-posix.c:66
#3  0x000055a53e2e439e in qemu_mutex_lock_iothread_impl (file=file@entry=0x55a53e6a8a58 "/builddir/build/BUILD/qemu-4.1.0/accel/kvm/kvm-all.c", line=line@entry=2353) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x000055a53e2ff408 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2353
#5  0x000055a53e2e456e in qemu_kvm_cpu_thread_fn (arg=0x55a540c43980) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#6  0x000055a53e6034b4 in qemu_thread_start (args=0x55a540c66300) at util/qemu-thread-posix.c:502
#7  0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 5 (Thread 0x7f2f4b7fe700 (LWP 7089)):
#0  0x00007f2f60a32211 in __GI___poll (fds=0x55a540c08040, nfds=5, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007f2f657f69b6 in g_main_context_poll (priority=<optimized out>, n_fds=5, fds=0x55a540c08040, timeout=<optimized out>, context=0x55a540c06960) at gmain.c:4203
#2  0x00007f2f657f69b6 in g_main_context_iterate (context=0x55a540c06960, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3897
#3  0x00007f2f657f6d72 in g_main_loop_run (loop=0x55a540c06aa0) at gmain.c:4098
#4  0x000055a53e3e3b31 in iothread_run (opaque=0x55a540b6b860) at iothread.c:82
#5  0x000055a53e6034b4 in qemu_thread_start (args=0x55a540c06ae0) at util/qemu-thread-posix.c:502
#6  0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#7  0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 4 (Thread 0x7f2f58fbc700 (LWP 7079)):
#0  0x00007f2f60a32306 in __GI_ppoll (fds=0x7f2f50001fb0, nfds=2, timeout=<optimized out>, timeout@entry=0x7f2f58fbb640, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x000055a53e5ff175 in ppoll (__ss=0x0, __timeout=0x7f2f58fbb640, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055a53e5ff175 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:334
#3  0x000055a53e6011d4 in aio_poll (ctx=0x55a540bdb400, blocking=blocking@entry=true) at util/aio-posix.c:669
#4  0x000055a53e3e3b04 in iothread_run (opaque=0x55a540b94360) at iothread.c:75
#5  0x000055a53e6034b4 in qemu_thread_start (args=0x55a540bdb8e0) at util/qemu-thread-posix.c:502
#6  0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#7  0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 3 (Thread 0x7f2f597bd700 (LWP 7078)):
#0  0x00007f2f60d158dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f2f60d0ebc4 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55a540bdaa80) at ../nptl/pthread_mutex_lock.c:115
#2  0x000055a53e60359d in qemu_mutex_lock_impl (mutex=0x55a540bdaa80, file=0x55a53e7a5fdf "util/async.c", line=510) at util/qemu-thread-posix.c:66
#3  0x000055a53e5fe2a8 in co_schedule_bh_cb (opaque=0x55a540bdaa20) at util/async.c:398
#4  0x000055a53e5fdc26 in aio_bh_call (bh=0x55a540bd0250) at util/async.c:117
#5  0x000055a53e5fdc26 in aio_bh_poll (ctx=ctx@entry=0x55a540bdaa20) at util/async.c:117
#6  0x000055a53e6012bc in aio_poll (ctx=0x55a540bdaa20, blocking=blocking@entry=true) at util/aio-posix.c:728
#7  0x000055a53e3e3b04 in iothread_run (opaque=0x55a540bc1760) at iothread.c:75
#8  0x000055a53e6034b4 in qemu_thread_start (args=0x55a540bdaf30) at util/qemu-thread-posix.c:502
#9  0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#10 0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 2 (Thread 0x7f2f59fbe700 (LWP 7077)):
#0  0x00007f2f60a3799d in syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x000055a53e603cdf in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at util/qemu-thread-posix.c:438
#2  0x000055a53e603cdf in qemu_event_wait (ev=ev@entry=0x55a53ee94ec8 <rcu_call_ready_event>) at util/qemu-thread-posix.c:442
#3  0x000055a53e615862 in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:260
#4  0x000055a53e6034b4 in qemu_thread_start (args=0x55a540ad25a0) at util/qemu-thread-posix.c:502
#5  0x00007f2f60d0c2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f2f60a3d133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 1 (Thread 0x7f2f6610cec0 (LWP 7076)):
#0  0x00007f2f60a32306 in __GI_ppoll (fds=0x55a540bf0fa0, nfds=1, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x000055a53e5ff1b9 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  0x000055a53e5ff1b9 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
#3  0x000055a53e6011d4 in aio_poll (ctx=0x55a540bc9a20, blocking=blocking@entry=true) at util/aio-posix.c:669
#4  0x000055a53e580e4a in bdrv_flush (bs=bs@entry=0x55a540d188e0) at block/io.c:2698
#5  0x000055a53e53b31e in bdrv_close (bs=0x55a540d188e0) at block.c:4018
#6  0x000055a53e53b31e in bdrv_delete (bs=<optimized out>) at block.c:4264
#7  0x000055a53e53b31e in bdrv_unref (bs=0x55a540d188e0) at block.c:5598
#8  0x000055a53e3db010 in do_drive_backup (backup=backup@entry=0x55a541402cf0, txn=0x0, errp=errp@entry=0x7ffc77a590c0) at blockdev.c:3584
#9  0x000055a53e3db2b4 in drive_backup_prepare (common=0x55a541acaf30, errp=0x7ffc77a59128) at blockdev.c:1790
#10 0x000055a53e3df0e2 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x55a5419c0bf0, errp=errp@entry=0x7ffc77a59198) at blockdev.c:2289
#11 0x000055a53e505575 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffc77a59208) at qapi/qapi-commands-transaction.c:44
#12 0x000055a53e5b6ecc in do_qmp_dispatch (errp=0x7ffc77a59200, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55a53ee927a0 <qmp_commands>) at qapi/qmp-dispatch.c:131
#13 0x000055a53e5b6ecc in qmp_dispatch (cmds=0x55a53ee927a0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:174
#14 0x000055a53e4d94f1 in monitor_qmp_dispatch (mon=0x55a540c282b0, req=<optimized out>) at monitor/qmp.c:120
#15 0x000055a53e4d9b3a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
#16 0x000055a53e5fdc26 in aio_bh_call (bh=0x55a540b42b20) at util/async.c:117
#17 0x000055a53e5fdc26 in aio_bh_poll (ctx=ctx@entry=0x55a540b416d0) at util/async.c:117
#18 0x000055a53e601064 in aio_dispatch (ctx=0x55a540b416d0) at util/aio-posix.c:459
#19 0x000055a53e5fdb02 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
#20 0x00007f2f657f667d in g_main_dispatch (context=0x55a540bcae70) at gmain.c:3176
#21 0x00007f2f657f667d in g_main_context_dispatch (context=context@entry=0x55a540bcae70) at gmain.c:3829
#22 0x000055a53e600118 in glib_pollfds_poll () at util/main-loop.c:218
#23 0x000055a53e600118 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:241
#24 0x000055a53e600118 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:517
#25 0x000055a53e3e9169 in main_loop () at vl.c:1809
#26 0x000055a53e298fd3 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4506

Comment 1 aihua liang 2019-08-26 13:46:08 UTC
This is a regression; see https://bugzilla.redhat.com/show_bug.cgi?id=1650493#c8.

Comment 3 Ademar Reis 2019-08-28 14:58:54 UTC
This is a corner case, and I don't know whether libvirt will hit it. It needs fixing, but doesn't seem urgent.

Comment 4 Sergio Lopez 2019-09-12 18:21:39 UTC
I confirmed this is reproducible with the latest upstream, and sent a patch addressing the issue:

 - https://lists.gnu.org/archive/html/qemu-block/2019-09/msg00563.html

Comment 11 Ademar Reis 2020-02-05 23:03:55 UTC
QEMU has recently been split into sub-components. As a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review, and change the sub-component if necessary, the next time you review this BZ. Thanks.

Comment 16 aihua liang 2020-02-19 03:24:59 UTC
Tested on qemu-kvm-4.2.0-10.module+el8.2.0+5740+c3dff59e; the problem has been resolved. Setting the bug's status to "Verified".

Test Steps:
  1. Start the guest with the same QEMU command line as in the Description.

  2. Do full backups and add dirty bitmaps with the same transaction command as in step 2 of the Description.

  3. Write test files in the guest as in step 3 of the Description.

4. Do incremental backups without specifying a bitmap:
    { "execute": "transaction",
      "arguments": { "actions": [
        { "type": "drive-backup",
          "data": { "device": "drive_image1", "target": "inc0.img", "sync": "incremental" } },
        { "type": "drive-backup",
          "data": { "device": "drive_data1", "target": "inc1.img", "sync": "incremental" } } ] } }
    {"error": {"class": "GenericError", "desc": "must provide a valid bitmap name for 'incremental' sync mode"}}

Comment 18 errata-xmlrpc 2020-05-05 09:49:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

