Bug 1634746
| Field | Value |
|---|---|
| Summary | qemu-system-x86_64 crashes with SIGSEGV |
| Product | Fedora |
| Component | qemu |
| Version | 28 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED EOL |
| Severity | unspecified |
| Priority | unspecified |
| Reporter | Mikkel Lauritsen <renard> |
| Assignee | Fedora Virtualization Maintainers <virt-maint> |
| QA Contact | Fedora Extras Quality Assurance <extras-qa> |
| CC | amit, berrange, cfergeau, crobinso, dwmw2, itamar, pbonzini, renard, rjones, virt-maint |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| Last Closed | 2019-05-21 15:21:02 UTC |
Description (Mikkel Lauritsen, 2018-10-01 14:11:19 UTC)
Sometimes it crashes with SIGABRT as well:

```
Oct 02 08:57:16 localhost.localdomain audit[16738]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=16738 comm=43505520302F4B564D exe="/usr/bin/qemu-system-x86_64" sig=6 res=1
Oct 02 08:57:16 localhost.localdomain systemd[1]: Started Process Core Dump (PID 16787/UID 0).
Oct 02 08:57:16 localhost.localdomain audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@4-16787-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 02 08:57:16 localhost.localdomain systemd-coredump[16788]: Resource limits disable core dumping for process 16738 (qemu-system-x86).
Oct 02 08:57:16 localhost.localdomain systemd-coredump[16788]: Process 16738 (qemu-system-x86) of user 107 dumped core.
Oct 02 08:57:16 localhost.localdomain audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@4-16787-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 02 08:57:16 localhost.localdomain libvirtd[823]: 2018-10-02 06:57:16.393+0000: 823: error : qemuMonitorIO:723 : internal error: End of file from qemu monitor
Oct 02 08:57:16 localhost.localdomain audit[823]: VIRT_CONTROL pid=823 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm op=stop reason=failed vm="Win10" uuid=42446303-20b6-449c-999a-6ab2493f493a vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? r>
Oct 02 08:57:16 localhost.localdomain systemd-machined[722]: Machine qemu-5-Win10 terminated.
Oct 02 08:57:16 localhost.localdomain abrt-dump-journal-core[775]: Failed to obtain all required information from journald
Oct 02 08:57:16 localhost.localdomain abrt-dump-journal-core[775]: Failed to save detect problem data in abrt database
```

You need to catch the core dump and get a stack trace. Take a look at the `coredumpctl` command.

```
Stack trace of thread 17863:
#0  0x00007f05e8887eab raise (libc.so.6)
#1  0x00007f05e88725b9 abort (libc.so.6)
#2  0x000055a7fdb38adf qemu_aio_coroutine_enter (qemu-system-x86_64)
#3  0x000055a7fdb23020 aio_co_enter (qemu-system-x86_64)
#4  0x000055a7fdaa5060 blk_aio_prwv (qemu-system-x86_64)
#5  0x000055a7fdaa5154 blk_aio_pwritev (qemu-system-x86_64)
#6  0x000055a7fd8a535d dma_blk_cb (qemu-system-x86_64)
#7  0x000055a7fd8a572a dma_blk_io (qemu-system-x86_64)
#8  0x000055a7fd8a57ee dma_blk_write (qemu-system-x86_64)
#9  0x000055a7fd95a25f ide_dma_cb (qemu-system-x86_64)
#10 0x000055a7fd95dd1c bmdma_cmd_writeb (qemu-system-x86_64)
#11 0x000055a7fd7c6dd6 memory_region_write_accessor (qemu-system-x86_64)
#12 0x000055a7fd7c5196 access_with_adjusted_size (qemu-system-x86_64)
#13 0x000055a7fd7c8e1e memory_region_dispatch_write (qemu-system-x86_64)
#14 0x000055a7fd7859a1 flatview_write (qemu-system-x86_64)
#15 0x000055a7fd789993 address_space_write (qemu-system-x86_64)
#16 0x000055a7fd7d7270 kvm_cpu_exec (qemu-system-x86_64)
#17 0x000055a7fd7b5720 qemu_kvm_cpu_thread_fn (qemu-system-x86_64)
#18 0x00007f05e8c17594 start_thread (libpthread.so.0)
#19 0x00007f05e894ae6f __clone (libc.so.6)

Stack trace of thread 17864:
#0  0x00007f05e8941c57 ioctl (libc.so.6)
#1  0x000055a7fd7d6fa9 kvm_vcpu_ioctl (qemu-system-x86_64)
#2  0x000055a7fd7d7062 kvm_cpu_exec (qemu-system-x86_64)
#3  0x000055a7fd7b5720 qemu_kvm_cpu_thread_fn (qemu-system-x86_64)
#4  0x00007f05e8c17594 start_thread (libpthread.so.0)
#5  0x00007f05e894ae6f __clone (libc.so.6)

Stack trace of thread 17860:
#0  0x00007f05e8945879 syscall (libc.so.6)
#1  0x000055a7fdb27c3f qemu_event_wait (qemu-system-x86_64)
#2  0x000055a7fdb38778 call_rcu_thread (qemu-system-x86_64)
#3  0x00007f05e8c17594 start_thread (libpthread.so.0)
#4  0x00007f05e894ae6f __clone (libc.so.6)

Stack trace of thread 17866:
#0  0x00007f05e89403e9 __poll (libc.so.6)
#1  0x00007f05f0c90bc6 g_main_context_iterate.isra.21 (libglib-2.0.so.0)
#2  0x00007f05f0c90f82 g_main_loop_run (libglib-2.0.so.0)
#3  0x00007f05ea26c22e red_worker_main (libspice-server.so.1)
#4  0x00007f05e8c17594 start_thread (libpthread.so.0)
#5  0x00007f05e894ae6f __clone (libc.so.6)

Stack trace of thread 17844:
#0  0x00007f05e89404e6 ppoll (libc.so.6)
#1  0x000055a7fdb23bb5 qemu_poll_ns (qemu-system-x86_64)
#2  0x000055a7fdb249e3 main_loop_wait (qemu-system-x86_64)
#3  0x000055a7fd780091 main (qemu-system-x86_64)
#4  0x00007f05e887411b __libc_start_main (libc.so.6)
#5  0x000055a7fd7836ea _start (qemu-system-x86_64)

Stack trace of thread 25846:
#0  0x00007f05e8c1ffc2 do_futex_wait (libpthread.so.0)
#1  0x00007f05e8c200d3 __new_sem_wait_slow (libpthread.so.0)
#2  0x000055a7fdb279df qemu_sem_timedwait (qemu-system-x86_64)
#3  0x000055a7fdb2327c worker_thread (qemu-system-x86_64)
#4  0x00007f05e8c17594 start_thread (libpthread.so.0)
#5  0x00007f05e894ae6f __clone (libc.so.6)
```
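For reference, a typical `coredumpctl` workflow for retrieving a trace like the one above looks roughly like this (a sketch only; exact output and matching behaviour depend on the systemd version, and the commands generally need a machine where systemd-coredump captured the crash):

```shell
# List core dumps captured by systemd-coredump
coredumpctl list

# Show metadata (and a backtrace, if one was stored) for the most
# recent qemu crash, matched by executable path
coredumpctl info /usr/bin/qemu-system-x86_64

# Open the core directly under gdb for a full backtrace
coredumpctl gdb /usr/bin/qemu-system-x86_64
# then at the (gdb) prompt:
#   thread apply all bt
```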
Probably a bug in the IDE drive. If you install the qemu debuginfo package then we would be able to see more detail in the stack trace. https://fedoraproject.org/wiki/StackTraces

Created attachment 1489476 [details]: Stacktrace with debuginfo

Added stacktrace with debuginfo.
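On Fedora, installing the debugging symbols and re-extracting a symbolized trace can be done along these lines (a sketch, not taken from the report; the exact package name can vary by release):

```shell
# Pull in debug symbols for qemu via the dnf debuginfo-install plugin
sudo dnf debuginfo-install qemu-system-x86

# Re-open the captured core with symbols now available
coredumpctl gdb /usr/bin/qemu-system-x86_64
# then at the (gdb) prompt:
#   thread apply all bt full
```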
Certainly something in the block layer gets horribly confused shortly after an IDE disk request is made. I think the most important thing you can try now is to see whether it still happens with qemu 3.0 in Fedora 29:

```
# dnf update --best qemu --releasever=29
```

If it's still happening with qemu 3.0 then we can take it to the upstream list / bugtracker.

Just obtained a couple of stack traces: the SIGABRT is always raised in qemu-coroutine.c:128, and the only difference between the traces is whether frame #9 is dma-helpers.c:245 (dma_blk_read) or dma-helpers.c:263 (dma_blk_write). I'll see if an update makes any difference.

It seems to work a lot better with qemu 3.0. I haven't had a lot of time to test it, but no crashes so far.

This message is a reminder that Fedora 28 is nearing its end of life. On 2019-May-28 Fedora will stop maintaining and issuing updates for Fedora 28. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '28'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 28 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

Sounds like it is fixed with qemu 3.0, which is in f29+.