Bug 1745329 - qemu core dump when hotplug and unplug PF for many times in Win2019 guest
Summary: qemu core dump when hotplug and unplug PF for many times in Win2019 guest
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Michael S. Tsirkin
QA Contact: Yanghang Liu
URL:
Whiteboard:
Depends On:
Blocks: 1897025
 
Reported: 2019-08-25 12:26 UTC by Yanghang Liu
Modified: 2022-04-26 07:27 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-26 07:27:37 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none

Description Yanghang Liu 2019-08-25 12:26:10 UTC
Description of problem:
qemu core dumps when a PF is hot-plugged and unplugged many times in a Win2019 guest.


Version-Release number of selected component (if applicable):
host:
qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93
kernel-4.18.0-135.el8.x86_64


How reproducible:
2/6


Steps to Reproduce:
1. Boot up the Win2019 guest:
/usr/libexec/qemu-kvm -name Win2019 \
-M q35,kernel-irqchip=split -m 4G \
-nodefaults \
-cpu Haswell-noTSX,hv_stimer,hv_synic,hv_time,hv_relaxed,hv_vpindex,hv_spinlocks=0xfff,hv_vapic,hv_reset,hv_crash \
-smp 4,sockets=1,cores=4,threads=1 \
-device pcie-root-port,id=root.1,chassis=1 \
-device pcie-root-port,id=root.2,chassis=2 \
-device pcie-root-port,id=root.3,chassis=3 \
-device pcie-root-port,id=root.4,chassis=4 \
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/images/win2019_3.qcow2,node-name=my_file \
-blockdev driver=qcow2,node-name=my,file=my_file \
-device virtio-blk-pci,drive=my,id=virtio-blk0,bus=root.1 \
-drive id=drive_cd1,if=none,snapshot=off,aio=native,cache=none,media=cdrom,file=/home/images/en_windows_server_2019_x64_dvd_4cb967d8.iso \
-device ide-cd,id=cd1,drive=drive_cd1,bus=ide.0,unit=0 \
-drive id=drive_winutils,if=none,snapshot=off,aio=native,cache=none,media=cdrom,file=/usr/share/virtio-win/virtio-win-1.9.8.iso \
-device ide-cd,id=winutils,drive=drive_winutils,bus=ide.1,unit=0 \
-vnc :0 \
-vga qxl \
-monitor unix:/tmp/monitor,server,nowait \
-usb -device usb-tablet \
-boot menu=on \
-qmp tcp:0:5555,server,nowait \

2. Use the following script to hotplug and unplug the PF of the Win2019 guest 500 times (the PF at 07:00.0 is assumed to already be bound to vfio-pci on the host; see the prep sketch after the script):
#!/bin/bash 
for n in {1..500}
do	
    echo "device_add vfio-pci,host=07:00.0,id=pf1,bus=root.3" | nc -U /tmp/monitor
    sleep 10
    echo "device_del pf1" | nc -U /tmp/monitor
    sleep 12
    echo $n
done
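
For reference, the "device_add vfio-pci,host=07:00.0" step assumes the PF at 0000:07:00.0 is already bound to vfio-pci on the host. The original report does not say how that was done; a minimal prep sketch using the sysfs driver_override mechanism (the reporter may instead have used virsh nodedev-detach or another method) could be:

#!/bin/bash
# Hypothetical prep: bind the PF to vfio-pci so that "device_add vfio-pci,host=07:00.0" can attach it.
# Assumption: 0000:07:00.0 matches the host= value used in the monitor command.
PF=0000:07:00.0
modprobe vfio-pci
if [ -e /sys/bus/pci/devices/$PF/driver ]; then
    echo $PF > /sys/bus/pci/devices/$PF/driver/unbind        # detach from the current host driver
fi
echo vfio-pci > /sys/bus/pci/devices/$PF/driver_override      # force vfio-pci on the next probe
echo $PF > /sys/bus/pci/drivers_probe                         # re-probe the device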


Actual results:
qemu core dump happened


Expected results:
qemu should not core dump, and the Win2019 guest should keep working after the PF has been hot-plugged and unplugged many times.


Additional info:
Tried to reproduce this problem by booting a new Win2019 guest and running the hotplug/unplug script 6 times; the qemu core dump happened twice.
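
Not part of the original report: a minimal sketch of how a backtrace like the one below can be collected, assuming the host stores crashes via systemd-coredump and the matching qemu-kvm debuginfo is installed (hosts using abrt or plain core files will differ):

# locate and extract the most recent qemu-kvm core dump
coredumpctl list /usr/libexec/qemu-kvm
coredumpctl dump /usr/libexec/qemu-kvm -o /tmp/qemu.core
# dump backtraces for all threads
gdb /usr/libexec/qemu-kvm /tmp/qemu.core \
    -ex 'set pagination off' \
    -ex 'thread apply all bt' \
    -ex 'quit'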

The backtrace of the qemu core dump is as follows:
(gdb) bt 
#0  0x0000559f6d9ed07b in msix_table_mmio_read (opaque=0x559f6ebea940, addr=0, size=4) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/include/qemu/bswap.h:384
#1  0x0000559f6d857741 in memory_region_read_accessor (mr=0x559f6ebeaed0, addr=0, value=0x7fe0057fe5f0, size=4, shift=0, mask=4294967295, attrs=...) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:444
#2  0x0000559f6d855896 in access_with_adjusted_size (addr=addr@entry=0, value=value@entry=0x7fe0057fe5f0, size=size@entry=4, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=
    0x559f6d857710 <memory_region_read_accessor>, mr=0x559f6ebeaed0, attrs=...) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:574
#3  0x0000559f6d8596db in memory_region_dispatch_read1 (attrs=..., size=4, pval=0x7fe0057fe5f0, addr=0, mr=0x559f6ebeaed0) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:1425
#4  0x0000559f6d8596db in memory_region_dispatch_read (mr=0x559f6ebeaed0, addr=0, pval=0x7fe0057fe5f0, size=4, attrs=...) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:1452
#5  0x0000559f6d80a9f6 in flatview_read_continue (fv=0x7fdff066db70, addr=4235706368, attrs=..., buf=<optimized out>, len=4, addr1=<optimized out>, l=<optimized out>, mr=0x559f6ebeaed0)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3398
#6  0x0000559f6d80aba3 in flatview_read (fv=0x7fdff066db70, addr=4235706368, attrs=..., buf=0x7fe014211028 <error: Cannot access memory at address 0x7fe014211028>, len=4)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3436
#7  0x0000559f6d80accf in address_space_read_full (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3449
#8  0x0000559f6d8684ca in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2298
#9  0x0000559f6d84d56e in qemu_kvm_cpu_thread_fn (arg=0x559f6e6cf430) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#10 0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e6f2fa0) at util/qemu-thread-posix.c:502
#11 0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#12 0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Comment 1 Yanghang Liu 2019-08-26 10:11:48 UTC
Additional info:
(gdb) t a a bt

Thread 15 (Thread 0x7fdfefdff700 (LWP 26826)):
#0  0x00007fe00eb08b2b in ioctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x0000559f6d868259 in kvm_vcpu_ioctl (cpu=cpu@entry=0x559f6e717690, type=type@entry=44672) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2411
#2  0x0000559f6d868319 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2248
#3  0x0000559f6d84d56e in qemu_kvm_cpu_thread_fn (arg=0x559f6e717690) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#4  0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e73a690) at util/qemu-thread-posix.c:502
#5  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 14 (Thread 0x7fe0141dbec0 (LWP 26817)):
#0  0x00007fe00edea8dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007fe00ede3af9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x559f6e3c8f60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x0000559f6db6c59d in qemu_mutex_lock_impl (mutex=0x559f6e3c8f60 <qemu_global_mutex>, file=0x559f6dd0f2c2 "util/main-loop.c", line=239) at util/qemu-thread-posix.c:66
#3  0x0000559f6d84d39e in qemu_mutex_lock_iothread_impl (file=file@entry=0x559f6dd0f2c2 "util/main-loop.c", line=line@entry=239)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x0000559f6db6908d in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:239
#5  0x0000559f6db6908d in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:517
#6  0x0000559f6d952169 in main_loop () at vl.c:1809
#7  0x0000559f6d801fd3 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4506

Thread 13 (Thread 0x7fded37fe700 (LWP 6317)):
#0  0x00007fe00edea072 in futex_abstimed_wait_cancelable (private=0, abstime=0x7fded37fd6d0, expected=0, futex_word=0x559f6e626b88) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  0x00007fe00edea072 in do_futex_wait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded37fd6d0) at sem_waitcommon.c:111
#2  0x00007fe00edea183 in __new_sem_wait_slow (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded37fd6d0) at sem_waitcommon.c:181
#3  0x00007fe00edea211 in sem_timedwait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded37fd6d0) at sem_timedwait.c:39
#4  0x0000559f6db6ca7f in qemu_sem_timedwait (sem=sem@entry=0x559f6e626b88, ms=ms@entry=10000) at util/qemu-thread-posix.c:289
#5  0x0000559f6db677c4 in worker_thread (opaque=0x559f6e626b10) at util/thread-pool.c:91
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x7fde98000b20) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 12 (Thread 0x7fe0062cc700 (LWP 26822)):
#0  0x00007fe00eb08b2b in ioctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x0000559f6d868259 in kvm_vcpu_ioctl (cpu=cpu@entry=0x559f6e680d20, type=type@entry=44672) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2411
#2  0x0000559f6d868319 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2248
#3  0x0000559f6d84d56e in qemu_kvm_cpu_thread_fn (arg=0x559f6e680d20) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#4  0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e6a50a0) at util/qemu-thread-posix.c:502
#5  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 11 (Thread 0x7fded2ffd700 (LWP 6931)):
#0  0x00007fe00edea072 in futex_abstimed_wait_cancelable (private=0, abstime=0x7fded2ffc6d0, expected=0, futex_word=0x559f6e626b88) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  0x00007fe00edea072 in do_futex_wait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded2ffc6d0) at sem_waitcommon.c:111
#2  0x00007fe00edea183 in __new_sem_wait_slow (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded2ffc6d0) at sem_waitcommon.c:181
#3  0x00007fe00edea211 in sem_timedwait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded2ffc6d0) at sem_timedwait.c:39
#4  0x0000559f6db6ca7f in qemu_sem_timedwait (sem=sem@entry=0x559f6e626b88, ms=ms@entry=10000) at util/qemu-thread-posix.c:289
#5  0x0000559f6db677c4 in worker_thread (opaque=0x559f6e626b10) at util/thread-pool.c:91
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x7fdeb0000b20) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 10 (Thread 0x7fdebe3e3700 (LWP 6932)):
#0  0x00007fe00edea072 in futex_abstimed_wait_cancelable (private=0, abstime=0x7fdebe3e26d0, expected=0, futex_word=0x559f6e626b88) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  0x00007fe00edea072 in do_futex_wait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdebe3e26d0) at sem_waitcommon.c:111
#2  0x00007fe00edea183 in __new_sem_wait_slow (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdebe3e26d0) at sem_waitcommon.c:181
#3  0x00007fe00edea211 in sem_timedwait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdebe3e26d0) at sem_timedwait.c:39
#4  0x0000559f6db6ca7f in qemu_sem_timedwait (sem=sem@entry=0x559f6e626b88, ms=ms@entry=10000) at util/qemu-thread-posix.c:289
#5  0x0000559f6db677c4 in worker_thread (opaque=0x559f6e626b10) at util/thread-pool.c:91
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x7fdff8000b40) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 9 (Thread 0x7fe008093700 (LWP 26818)):
#0  0x00007fe00eb0c99d in syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000559f6db6ccdf in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at util/qemu-thread-posix.c:438
#2  0x0000559f6db6ccdf in qemu_event_wait (ev=ev@entry=0x559f6e3fdec0 <rcu_gp_event>) at util/qemu-thread-posix.c:442
#3  0x0000559f6db7e597 in wait_for_readers () at util/rcu.c:134
#4  0x0000559f6db7e597 in synchronize_rcu () at util/rcu.c:170
#5  0x0000559f6db7e875 in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:267
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e52c5a0) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 8 (Thread 0x7fe006acd700 (LWP 26821)):
#0  0x00007fe00eb07211 in __GI___poll (fds=0x7fdffc0023e0, nfds=3, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007fe0138cb9b6 in g_main_context_poll (priority=<optimized out>, n_fds=3, fds=0x7fdffc0023e0, timeout=<optimized out>, context=0x559f6e67fea0) at gmain.c:4203
#2  0x00007fe0138cb9b6 in g_main_context_iterate (context=0x559f6e67fea0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3897
#3  0x00007fe0138cbd72 in g_main_loop_run (loop=0x559f6e67ffe0) at gmain.c:4098
#4  0x0000559f6d94cb31 in iothread_run (opaque=0x559f6e5fe260) at iothread.c:82
#5  0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e680020) at util/qemu-thread-posix.c:502
#6  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#7  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 7 (Thread 0x7fdfede1b700 (LWP 26827)):
#0  0x00007fe00eb07211 in __GI___poll (fds=0x7fded4001fb0, nfds=2, timeout=2147483647) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007fe0138cb9b6 in g_main_context_poll (priority=<optimized out>, n_fds=2, fds=0x7fded4001fb0, timeout=<optimized out>, context=0x559f6f1b0b60) at gmain.c:4203
#2  0x00007fe0138cb9b6 in g_main_context_iterate (context=0x559f6f1b0b60, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3897
#3  0x00007fe0138cbd72 in g_main_loop_run (loop=0x7fded4002100) at gmain.c:4098
#4  0x00007fe010cb947b in red_worker_main (arg=0x559f6f1b0a90) at red-worker.c:1139
#5  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 6 (Thread 0x7fe004bff700 (LWP 26825)):
#0  0x00007fe00edea8dd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007fe00ede3af9 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x559f6e3c8f60 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x0000559f6db6c59d in qemu_mutex_lock_impl (mutex=0x559f6e3c8f60 <qemu_global_mutex>, file=0x559f6dc06068 "/builddir/build/BUILD/qemu-4.1.0/exec.c", line=3301) at util/qemu-thread-posix.c:66
#3  0x0000559f6d84d39e in qemu_mutex_lock_iothread_impl (file=<optimized out>, line=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1859
#4  0x0000559f6d8058f9 in prepare_mmio_access (mr=<optimized out>, mr=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3301
#5  0x0000559f6d80a97f in flatview_read_continue (fv=0x7fdfe81d1240, addr=2952921388, attrs=..., buf=<optimized out>, len=4, addr1=<optimized out>, l=<optimized out>, mr=0x559f6e851960)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3396
#6  0x0000559f6d80aba3 in flatview_read (fv=0x7fdfe81d1240, addr=2952921388, attrs=..., buf=0x7fe01420e028 <error: Cannot access memory at address 0x7fe01420e028>, len=4)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3436
#7  0x0000559f6d80accf in address_space_read_full (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3449
#8  0x0000559f6d8684ca in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2298
#9  0x0000559f6d84d56e in qemu_kvm_cpu_thread_fn (arg=0x559f6e6f4010) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#10 0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e716e50) at util/qemu-thread-posix.c:502
#11 0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#12 0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 5 (Thread 0x7fdeabfff700 (LWP 6930)):
#0  0x00007fe00edea072 in futex_abstimed_wait_cancelable (private=0, abstime=0x7fdeabffe6d0, expected=0, futex_word=0x559f6e626b88) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  0x00007fe00edea072 in do_futex_wait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdeabffe6d0) at sem_waitcommon.c:111
#2  0x00007fe00edea183 in __new_sem_wait_slow (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdeabffe6d0) at sem_waitcommon.c:181
#3  0x00007fe00edea211 in sem_timedwait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdeabffe6d0) at sem_timedwait.c:39
#4  0x0000559f6db6ca7f in qemu_sem_timedwait (sem=sem@entry=0x559f6e626b88, ms=ms@entry=10000) at util/qemu-thread-posix.c:289
#5  0x0000559f6db677c4 in worker_thread (opaque=0x559f6e626b10) at util/thread-pool.c:91
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x7fdec4000b20) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 4 (Thread 0x7fded08dd700 (LWP 2917)):
#0  0x00007fe00edea072 in futex_abstimed_wait_cancelable (private=0, abstime=0x7fded08dc6d0, expected=0, futex_word=0x559f6e626b88) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  0x00007fe00edea072 in do_futex_wait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded08dc6d0) at sem_waitcommon.c:111
#2  0x00007fe00edea183 in __new_sem_wait_slow (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded08dc6d0) at sem_waitcommon.c:181
#3  0x00007fe00edea211 in sem_timedwait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fded08dc6d0) at sem_timedwait.c:39
#4  0x0000559f6db6ca7f in qemu_sem_timedwait (sem=sem@entry=0x559f6e626b88, ms=ms@entry=10000) at util/qemu-thread-posix.c:289
#5  0x0000559f6db677c4 in worker_thread (opaque=0x559f6e626b10) at util/thread-pool.c:91
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x7fdff8000b20) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7fdebcbe0700 (LWP 6813)):
#0  0x00007fe00edea072 in futex_abstimed_wait_cancelable (private=0, abstime=0x7fdebcbdf6d0, expected=0, futex_word=0x559f6e626b88) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  0x00007fe00edea072 in do_futex_wait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdebcbdf6d0) at sem_waitcommon.c:111
#2  0x00007fe00edea183 in __new_sem_wait_slow (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdebcbdf6d0) at sem_waitcommon.c:181
#3  0x00007fe00edea211 in sem_timedwait (sem=sem@entry=0x559f6e626b88, abstime=abstime@entry=0x7fdebcbdf6d0) at sem_timedwait.c:39
#4  0x0000559f6db6ca7f in qemu_sem_timedwait (sem=sem@entry=0x559f6e626b88, ms=ms@entry=10000) at util/qemu-thread-posix.c:289
#5  0x0000559f6db677c4 in worker_thread (opaque=0x559f6e626b10) at util/thread-pool.c:91
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6ebb0e20) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7fdfed5ff700 (LWP 26828)):
#0  0x00007fe00ede747c in futex_wait_cancelable (private=0, expected=0, futex_word=0x559f6e625398) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  0x00007fe00ede747c in __pthread_cond_wait_common (abstime=0x0, mutex=0x559f6e6253a8, cond=0x559f6e625370) at pthread_cond_wait.c:502
#2  0x00007fe00ede747c in __pthread_cond_wait (cond=0x559f6e625370, mutex=mutex@entry=0x559f6e6253a8) at pthread_cond_wait.c:655
#3  0x0000559f6db6c86d in qemu_cond_wait_impl (cond=<optimized out>, mutex=0x559f6e6253a8, file=0x559f6dce8c37 "ui/vnc-jobs.c", line=214) at util/qemu-thread-posix.c:161
#4  0x0000559f6da95d71 in vnc_worker_thread_loop (queue=queue@entry=0x559f6e625370) at ui/vnc-jobs.c:214
#5  0x0000559f6da96330 in vnc_worker_thread (arg=0x559f6e625370) at ui/vnc-jobs.c:324
#6  0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e816410) at util/qemu-thread-posix.c:502
#7  0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7fe0057ff700 (LWP 26824)):
#0  0x0000559f6d9ed07b in msix_table_mmio_read (opaque=0x559f6ebea940, addr=0, size=4) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/include/qemu/bswap.h:384
#1  0x0000559f6d857741 in memory_region_read_accessor (mr=0x559f6ebeaed0, addr=0, value=0x7fe0057fe5f0, size=4, shift=0, mask=4294967295, attrs=...)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:444
#2  0x0000559f6d855896 in access_with_adjusted_size
    (addr=addr@entry=0, value=value@entry=0x7fe0057fe5f0, size=size@entry=4, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=
    0x559f6d857710 <memory_region_read_accessor>, mr=0x559f6ebeaed0, attrs=...) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:574
#3  0x0000559f6d8596db in memory_region_dispatch_read1 (attrs=..., size=4, pval=0x7fe0057fe5f0, addr=0, mr=0x559f6ebeaed0)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:1425
#4  0x0000559f6d8596db in memory_region_dispatch_read (mr=0x559f6ebeaed0, addr=0, pval=0x7fe0057fe5f0, size=4, attrs=...)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/memory.c:1452
#5  0x0000559f6d80a9f6 in flatview_read_continue (fv=0x7fdff066db70, addr=4235706368, attrs=..., buf=<optimized out>, len=4, addr1=<optimized out>, l=<optimized out>, mr=0x559f6ebeaed0)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3398
#6  0x0000559f6d80aba3 in flatview_read (fv=0x7fdff066db70, addr=4235706368, attrs=..., buf=0x7fe014211028 <error: Cannot access memory at address 0x7fe014211028>, len=4)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3436
#7  0x0000559f6d80accf in address_space_read_full (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>)
    at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/exec.c:3449
#8  0x0000559f6d8684ca in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/accel/kvm/kvm-all.c:2298
#9  0x0000559f6d84d56e in qemu_kvm_cpu_thread_fn (arg=0x559f6e6cf430) at /usr/src/debug/qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64/cpus.c:1285
#10 0x0000559f6db6c4b4 in qemu_thread_start (args=0x559f6e6f2fa0) at util/qemu-thread-posix.c:502
#11 0x00007fe00ede12de in start_thread (arg=<optimized out>) at pthread_create.c:486
#12 0x00007fe00eb12133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Comment 3 Ademar Reis 2020-02-05 23:03:48 UTC
QEMU has recently been split into sub-components. As a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review the sub-component and change it if necessary the next time you review this BZ. Thanks.

Comment 6 RHEL Program Management 2021-03-15 07:38:40 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 7 Yanghang Liu 2021-03-17 06:15:09 UTC
Hi Michael,

> qemu core dump when hotplug and unplug PF for 500 times in Win2019 guest

Do you plan to fix this bz? Is it OK to close the bz as WONTFIX?

Comment 8 Yanghang Liu 2021-04-12 07:06:20 UTC
Hi Michael,

Could you please check comment 7 and give some justification for this bug?

QE will modify the test strategy for this test scenario based on your feedback.

Thanks a lot.

Comment 9 Michael S. Tsirkin 2021-04-20 10:20:18 UTC
I think it got fixed with the latest rebase, so it should be OK to keep the test scenario.

Comment 10 Michael S. Tsirkin 2021-04-20 10:21:34 UTC
In other words, retest from 8.5 onward. For now this is a known issue.

Comment 11 Yanghang Liu 2021-04-26 03:59:29 UTC
Reopening this bug based on comment 9 and comment 10.


I will re-test this bug. 
Once my test results show that this bug has been fixed already, I will close this bug as CURRENTRELEASE.

Comment 13 John Ferlan 2021-09-09 13:57:40 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 15 John Ferlan 2021-09-29 11:24:03 UTC
Reset the assignee/QA contact that were lost when the bug moved from AV to RHEL9. Also extended the stale date to allow time to test for the release.

Comment 16 RHEL Program Management 2022-04-26 07:27:37 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

