Bug 1596013 - sometimes L1 qemu process core dumps when rebooting L2 RHEL7.6 guest
Summary: sometimes L1 qemu process core dumps when rebooting L2 RHEL7.6 guest
Keywords:
Status: CLOSED DUPLICATE of bug 1567733
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Bandan Das
QA Contact: FuXiangChun
URL:
Whiteboard:
Depends On:
Blocks: 1599260
 
Reported: 2018-06-28 03:49 UTC by FuXiangChun
Modified: 2019-03-22 11:11 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1599260 (view as bug list)
Environment:
Last Closed: 2018-07-10 17:22:34 UTC
Target Upstream Version:
Embargoed:


Attachments
L1 dmesg log (6.73 KB, text/plain), 2018-06-30 06:10 UTC, FuXiangChun
part of message log (101.14 KB, text/plain), 2018-06-30 06:12 UTC, FuXiangChun

Description FuXiangChun 2018-06-28 03:49:19 UTC
Description of problem:
Rebooting the L2 guest from inside the L1 guest sometimes causes the L1 qemu process to core dump.

Version-Release number of selected component (if applicable):
3.10.0-915.el7.x86_64
qemu-kvm-rhev-2.12.0-5.el7.x86_64

How reproducible:
2/6

Steps to Reproduce:
1. Boot the L1 RHEL7.6 guest:

/usr/libexec/qemu-kvm -enable-kvm -M q35 -cpu host -nodefaults -smp 8,cores=2,threads=2,sockets=2 -m 24G -name vm1 -drive file=rhel7.6-l1.qcow2,if=none,id=guest-img,format=qcow2,werror=stop,rerror=stop -device ide-hd,drive=guest-img,bus=ide.0,unit=0,id=os-disk,bootindex=1 -spice port=5931,disable-ticketing -vga qxl -monitor stdio -boot menu=on,splash-time=10000 -device intel-iommu -vnc :1 -device ahci,id=ahci0 -device virtio-net-pci,netdev=tap10,mac=08:9e:01:c2:6d:6e,disable-legacy=off,disable-modern=off -netdev tap,id=tap10

2. Boot the L2 RHEL7.6 guest:

/usr/libexec/qemu-kvm -name guest=r7,debug-threads=on -enable-kvm -M q35 -cpu Broadwell -m 4096 -realtime mlock=off -smp 4,sockets=1,cores=4,threads=1 -boot strict=on -rtc base=localtime,clock=host,driftfix=slew -drive file=/home/rhel7.6-l2.qcow2,if=none,id=drive-system-disk,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop,serial=QEMU-DISK1 -device virtio-scsi-pci,id=scsi0,ioeventfd=off -device scsi-hd,drive=drive-system-disk,id=system-disk,channel=0,scsi-id=0,lun=0,ver=mike,serial=ababab,bootindex=1 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,addr=0x4 -vnc :1 -monitor stdio -serial unix:/tmp/console,server,nowait

3. Reboot from inside the L2 guest:

# reboot
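The L2 command line above exposes the guest serial console as a UNIX socket (`-serial unix:/tmp/console,server,nowait`). For repeated reproduction runs, step 3 could be driven from L1 with a small sketch like this; the helper name is hypothetical, and the socket path and `reboot` command are assumptions taken from the command line and step above:

```python
import socket

def send_console_command(sock_path: str, command: str) -> None:
    """Send one shell command line to a guest serial console that qemu
    exposes as a UNIX stream socket (e.g. -serial unix:/tmp/console,...).
    Assumes a shell is already logged in on that console."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall((command + "\n").encode())

# Example (assumed path from the L2 command line):
# send_console_command("/tmp/console", "reboot")
```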

Actual results:
(process:2581): Spice-WARNING **: 23:34:06.510: display-channel.c:2432:display_channel_validate_surface: failed on 0

(process:2581): Spice-CRITICAL **: 23:34:06.510: display-channel.c:2035:display_channel_update: condition `display_channel_validate_surface(display, surface_id)' failed
Thread 10 (Thread 0x7fb543f9a700 (LWP 2582)):
#0  0x00007fb54b3281c9 in syscall () from /lib64/libc.so.6
#1  0x000055a5c04f65c0 in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at /usr/src/debug/qemu-2.12.0/include/qemu/futex.h:29
#2  qemu_event_wait (ev=ev@entry=0x55a5c1176c08 <rcu_call_ready_event>) at util/qemu-thread-posix.c:445
#3  0x000055a5c0506aee in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:261
#4  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 9 (Thread 0x7fb542f98700 (LWP 2585)):
#0  0x00007fb54b3232cf in ppoll () from /lib64/libc.so.6
#1  0x000055a5c04f262b in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=-1) at util/qemu-timer.c:322
#3  0x000055a5c04f4375 in aio_poll (ctx=0x55a5c20efa40, blocking=blocking@entry=true) at util/aio-posix.c:629
#4  0x000055a5c02ca89e in iothread_run (opaque=0x55a5c20bfea0) at iothread.c:64
#5  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#6  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 8 (Thread 0x7fb542797700 (LWP 2587)):
#0  0x00007fb54b3248d7 in ioctl () from /lib64/libc.so.6
#1  0x000055a5c0203db5 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55a5c2424000, type=type@entry=44672) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:2105
#2  0x000055a5c0203e83 in kvm_cpu_exec (cpu=cpu@entry=0x55a5c2424000) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:1942
#3  0x000055a5c01e1b56 in qemu_kvm_cpu_thread_fn (arg=0x55a5c2424000) at /usr/src/debug/qemu-2.12.0/cpus.c:1215
#4  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 7 (Thread 0x7fb541f96700 (LWP 2588)):
#0  0x00007fb54b3248d7 in ioctl () from /lib64/libc.so.6
#1  0x000055a5c0203db5 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55a5c2484000, type=type@entry=44672) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:2105
#2  0x000055a5c0203e83 in kvm_cpu_exec (cpu=cpu@entry=0x55a5c2484000) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:1942
#3  0x000055a5c01e1b56 in qemu_kvm_cpu_thread_fn (arg=0x55a5c2484000) at /usr/src/debug/qemu-2.12.0/cpus.c:1215
#4  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 6 (Thread 0x7fb541795700 (LWP 2589)):
#0  0x00007fb54b3248d7 in ioctl () from /lib64/libc.so.6
#1  0x000055a5c0203db5 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55a5c24a6000, type=type@entry=44672) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:2105
#2  0x000055a5c0203e83 in kvm_cpu_exec (cpu=cpu@entry=0x55a5c24a6000) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:1942
#3  0x000055a5c01e1b56 in qemu_kvm_cpu_thread_fn (arg=0x55a5c24a6000) at /usr/src/debug/qemu-2.12.0/cpus.c:1215
#4  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x7fb540f94700 (LWP 2590)):
#0  0x00007fb54b3248d7 in ioctl () from /lib64/libc.so.6
#1  0x000055a5c0203db5 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55a5c24c4000, type=type@entry=44672) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:2105
#2  0x000055a5c0203e83 in kvm_cpu_exec (cpu=cpu@entry=0x55a5c24c4000) at /usr/src/debug/qemu-2.12.0/accel/kvm/kvm-all.c:1942
#3  0x000055a5c01e1b56 in qemu_kvm_cpu_thread_fn (arg=0x55a5c24c4000) at /usr/src/debug/qemu-2.12.0/cpus.c:1215
#4  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7fb436604700 (LWP 2592)):
#0  0x00007fb54b60b6ed in read () from /lib64/libpthread.so.0
#1  0x00007fb54caa33a1 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
#2  0x00007fb54caaad17 in spice_log () from /lib64/libspice-server.so.1
#3  0x00007fb54ca5f6e8 in display_channel_update () from /lib64/libspice-server.so.1
#4  0x00007fb54ca8e25b in handle_dev_update_async () from /lib64/libspice-server.so.1
#5  0x00007fb54ca5967d in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
#6  0x00007fb54ca5fe8b in watch_func () from /lib64/libspice-server.so.1
#7  0x00007fb5639ce049 in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#8  0x00007fb5639ce3a8 in g_main_context_iterate.isra.19 () from /lib64/libglib-2.0.so.0
#9  0x00007fb5639ce67a in g_main_loop_run () from /lib64/libglib-2.0.so.0
#10 0x00007fb54ca8e5fa in red_worker_main () from /lib64/libspice-server.so.1
#11 0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#12 0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7fb435bff700 (LWP 2593)):
#0  0x00007fb54b608965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x000055a5c04f6199 in qemu_cond_wait_impl (cond=cond@entry=0x55a5c396e090, mutex=mutex@entry=0x55a5c396e0c8, file=file@entry=0x55a5c065fa47 "ui/vnc-jobs.c", line=line@entry=212) at util/qemu-thread-posix.c:164
#2  0x000055a5c0416b8f in vnc_worker_thread_loop (queue=queue@entry=0x55a5c396e090) at ui/vnc-jobs.c:212
#3  0x000055a5c0417158 in vnc_worker_thread (arg=0x55a5c396e090) at ui/vnc-jobs.c:319
#4  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7fb543799700 (LWP 2667)):
#0  0x00007fb54b60acbf in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fb54b60ad97 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x00007fb54b60ae35 in sem_timedwait () from /lib64/libpthread.so.0
#3  0x000055a5c04f6357 in qemu_sem_timedwait (sem=sem@entry=0x55a5c20bfe38, ms=ms@entry=10000) at util/qemu-thread-posix.c:292
#4  0x000055a5c04f1c18 in worker_thread (opaque=0x55a5c20bfdc0) at util/thread-pool.c:92
#5  0x00007fb54b604dd5 in start_thread () from /lib64/libpthread.so.0
#6  0x00007fb54b32dead in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7fb5642c5dc0 (LWP 2581)):
#0  0x00007fb54b3232cf in ppoll () from /lib64/libc.so.6
#1  0x000055a5c04f2609 in ppoll (__ss=0x0, __timeout=0x7fffc4b7a4a0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=26967847) at util/qemu-timer.c:334
#3  0x000055a5c04f34ee in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:233
#4  main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:497
#5  0x000055a5c019e0f7 in main_loop () at vl.c:1963
#6  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4768
Aborted (core dumped)


Expected results:
The L2 guest reboots successfully and the L1 qemu process keeps running.

Additional info:

host info:

hostname:dell-per230-03.khw.lab.eng.bos.redhat.com

cpu info:

Vendor 	GenuineIntel
Model Name 	Intel(R) Xeon(R) CPU E3-1240 v5 @ 3.50GHz
Family 	6
Model 	94
Stepping 	3
Speed 	3504.01
Processors 	8
Cores 	4
Sockets 	1
Hyper 	True
Flags 	lm fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch ida arat epb xsaveopt pln pts dtherm hwp hwp_noitfy hwp_act_window hwp_epp tpr_shadow vnmi flexpriority ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx

Comment 2 Bandan Das 2018-06-29 16:49:34 UTC
Is the backtrace from the qemu process inside L1? None of the threads in the backtrace show signs of a fault! Also, can you please post dmesg from the host when this happens?

Comment 3 FuXiangChun 2018-06-30 06:02:24 UTC
(In reply to Bandan Das from comment #2)
> Is the backtrace from the qemu process inside L1 ? Because none of the
> threads in the backtrace show signs  of a fault! Also, can you please post
> dmesg from host when this happens ?

The qemu process is inside L1. Sometimes the system_power command can also trigger this bug. I will upload the L1 host's dmesg and /var/log/messages logs as attachments.

Comment 4 FuXiangChun 2018-06-30 06:10:28 UTC
Created attachment 1455602 [details]
L1 dmesg log

Comment 5 FuXiangChun 2018-06-30 06:12:02 UTC
Created attachment 1455603 [details]
part of message log

Comment 6 Bandan Das 2018-07-02 21:28:28 UTC
Thank you for the logs. Sorry I wasn't clear: can you please post the dmesg from L0 when the qemu in L1 aborts? If the syscall in the first thread leads to the abort, I am assuming whatever's happening in the kernel should print a message as well.

Comment 7 Bandan Das 2018-07-02 21:29:10 UTC
Oops, forgot to set needinfo for comment 6

Comment 8 FuXiangChun 2018-07-03 06:04:32 UTC
I reproduced it again. The L0 dmesg log is below; the L0 kernel didn't print any useful message when the qemu process in L1 core dumped.

# dmesg
[ 1333.832860] switch: port 2(tap0) entered disabled state
[ 1333.839437] device tap0 left promiscuous mode
[ 1333.844315] switch: port 2(tap0) entered disabled state
[ 1369.796789] switch: port 2(tap0) entered blocking state
[ 1369.802637] switch: port 2(tap0) entered disabled state
[ 1369.808503] device tap0 entered promiscuous mode
[ 1369.813697] switch: port 2(tap0) entered blocking state
[ 1369.819531] switch: port 2(tap0) entered forwarding state
[10667.222600] switch: port 2(tap0) entered disabled state
[10667.229289] device tap0 left promiscuous mode
[10667.234166] switch: port 2(tap0) entered disabled state
[10877.594898] switch: port 2(tap0) entered blocking state
[10877.600735] switch: port 2(tap0) entered disabled state
[10877.606625] device tap0 entered promiscuous mode
[10877.611824] switch: port 2(tap0) entered blocking state
[10877.617672] switch: port 2(tap0) entered forwarding state

Comment 9 FuXiangChun 2018-07-05 09:14:40 UTC
I tested another two scenarios on a RHEL8.0 host.

Action: reboot the L2 guest from inside the guest.

S1) L0, L1, and L2 are RHEL8 -> works
S2) L0 and L1 are RHEL8, but L2 is RHEL7.6 -> L1 qemu process aborted (core dumped)

In scenario S2, the L0 and L1 dmesg logs don't print any useful message.

(process:7776): Spice-CRITICAL **: 05:03:52.062: display-channel.c:2035:display_channel_update: condition `display_channel_validate_surface(display, surface_id)' failed
Thread 23 (Thread 0x7fdf07fff700 (LWP 7851)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 22 (Thread 0x7fdf208f2700 (LWP 7850)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 21 (Thread 0x7fdf211f4700 (LWP 7849)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 20 (Thread 0x7fdf219f5700 (LWP 7848)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 19 (Thread 0x7fdf221f6700 (LWP 7847)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 18 (Thread 0x7fdf229f7700 (LWP 7846)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 17 (Thread 0x7fdf231f8700 (LWP 7845)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 16 (Thread 0x7fdf239f9700 (LWP 7844)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 15 (Thread 0x7fdf388e7700 (LWP 7843)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 14 (Thread 0x7fdf390e8700 (LWP 7842)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 13 (Thread 0x7fdf398e9700 (LWP 7841)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 12 (Thread 0x7fdf3a0ea700 (LWP 7840)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 11 (Thread 0x7fdf3a9ec700 (LWP 7839)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 10 (Thread 0x7fdf437ff700 (LWP 7833)):
#0  0x00007fe081f1658c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x000055fb635ce18d in qemu_cond_wait_impl ()
#2  0x000055fb634eab9b in vnc_worker_thread_loop ()
#3  0x000055fb634eb490 in vnc_worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 9 (Thread 0x7fe068bff700 (LWP 7818)):
#0  0x00007fe081f19ad4 in read () from /lib64/libpthread.so.0
#1  0x00007fe0830dcfb9 in ?? () from /lib64/libspice-server.so.1
#2  0x00007fe0830e4580 in ?? () from /lib64/libspice-server.so.1
#3  0x00007fe083098bf8 in ?? () from /lib64/libspice-server.so.1
#4  0x00007fe0830c5e7e in ?? () from /lib64/libspice-server.so.1
#5  0x00007fe083092aa8 in ?? () from /lib64/libspice-server.so.1
#6  0x00007fe0830994ef in ?? () from /lib64/libspice-server.so.1
#7  0x00007fe0879058ad in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#8  0x00007fe087905c78 in ?? () from /lib64/libglib-2.0.so.0
#9  0x00007fe087905fa2 in g_main_loop_run () from /lib64/libglib-2.0.so.0
#10 0x00007fe0830c623e in ?? () from /lib64/libspice-server.so.1
#11 0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#12 0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 8 (Thread 0x7fe069926700 (LWP 7794)):
#0  0x00007fe081c3ae47 in ioctl () from /lib64/libc.so.6
#1  0x000055fb632d8d09 in kvm_vcpu_ioctl ()
#2  0x000055fb632d8dc2 in kvm_cpu_exec ()
#3  0x000055fb632b610e in qemu_kvm_cpu_thread_fn ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 7 (Thread 0x7fe06a127700 (LWP 7793)):
#0  0x00007fe081c3ae47 in ioctl () from /lib64/libc.so.6
#1  0x000055fb632d8d09 in kvm_vcpu_ioctl ()
#2  0x000055fb632d8dc2 in kvm_cpu_exec ()
#3  0x000055fb632b610e in qemu_kvm_cpu_thread_fn ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 6 (Thread 0x7fe06a928700 (LWP 7792)):
#0  0x00007fe081c3ae47 in ioctl () from /lib64/libc.so.6
#1  0x000055fb632d8d09 in kvm_vcpu_ioctl ()
#2  0x000055fb632d8dc2 in kvm_cpu_exec ()
#3  0x000055fb632b610e in qemu_kvm_cpu_thread_fn ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x7fe06b129700 (LWP 7790)):
#0  0x00007fe081c3ae47 in ioctl () from /lib64/libc.so.6
#1  0x000055fb632d8d09 in kvm_vcpu_ioctl ()
#2  0x000055fb632d8dc2 in kvm_cpu_exec ()
#3  0x000055fb632b610e in qemu_kvm_cpu_thread_fn ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7fe06ba1b700 (LWP 7789)):
#0  0x00007fe081c396d6 in ppoll () from /lib64/libc.so.6
#1  0x000055fb635ca459 in qemu_poll_ns ()
#2  0x000055fb635cc5cc in aio_poll ()
#3  0x000055fb6339dc4e in iothread_run ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7fe06c31e700 (LWP 7788)):
#0  0x00007fe081f19032 in do_futex_wait () from /lib64/libpthread.so.0
#1  0x00007fe081f19143 in __new_sem_wait_slow () from /lib64/libpthread.so.0
#2  0x000055fb635ce39f in qemu_sem_timedwait ()
#3  0x000055fb635c9a84 in worker_thread ()
#4  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7fe06da02700 (LWP 7777)):
#0  0x00007fe081c3ea69 in syscall () from /lib64/libc.so.6
#1  0x000055fb635ce5ff in qemu_event_wait ()
#2  0x000055fb635df342 in call_rcu_thread ()
#3  0x00007fe081f105f4 in start_thread () from /lib64/libpthread.so.0
#4  0x00007fe081c4405f in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7fe088206200 (LWP 7776)):
#0  0x00007fe081c396d6 in ppoll () from /lib64/libc.so.6
#1  0x000055fb635ca415 in qemu_poll_ns ()
#2  0x000055fb635cb348 in main_loop_wait ()
#3  0x000055fb63273c75 in main ()
Aborted (core dumped)

Comment 10 Bandan Das 2018-07-05 17:43:22 UTC
Thank you for trying out the other scenario. Can you please give me access to the system, or maybe just copy the guest image to a place from where I can download it? I am running into install issues when using the nightly iso or even the qcow2 image.

Comment 11 Bandan Das 2018-07-05 19:34:50 UTC
I finally got my system set up. I am using a slightly different Skylake host (E3-1270 v5) and the same command line as yours. I scripted the reboot and let it run for 10 minutes and still didn't hit the crash. I think it would be much easier if I can jump onto your system. Please let me know.
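A reboot-stress harness like the one described here could be sketched as below. This is a minimal, hypothetical sketch: `trigger` and `crashed` are callables supplied by the caller (e.g. sending `reboot` over the L2 console socket, and checking whether the L1 qemu process is still alive), not anything from the actual script used:

```python
import time

def stress_loop(trigger, crashed, duration_s: float,
                interval_s: float = 1.0) -> bool:
    """Repeatedly fire trigger() (e.g. a guest reboot) until either
    crashed() reports a failure or duration_s elapses.
    Returns True if a crash was observed within the window."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        trigger()
        if crashed():
            return True
        time.sleep(interval_s)
    return False
```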

Comment 13 Bandan Das 2018-07-09 21:33:03 UTC
Thank you, I was able to reproduce it on your system. I believe this should be fixed by a more recent qemu and is probably related to the vga-qxl on your command line. Can you please try qemu-kvm-rhev-2.12.0-7.el7 both on L0 and L1 and check if you can still reproduce the problem ?

Comment 14 FuXiangChun 2018-07-10 14:22:12 UTC
(In reply to Bandan Das from comment #13)
> Thank you, I was able to reproduce it on your system. I believe this should
> be fixed by a more recent qemu and is probably related to the vga-qxl on
> your command line. Can you please try qemu-kvm-rhev-2.12.0-7.el7 both on L0
> and L1 and check if you can still reproduce the problem ?

I tested the latest qemu-kvm-rhev-2.12.0-7.el7 and cannot reproduce the problem. The problem is gone.

Comment 15 Bandan Das 2018-07-10 17:22:34 UTC
Thank you for confirming, I am marking this a duplicate of bug 1567733.

*** This bug has been marked as a duplicate of bug 1567733 ***

