Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you are a Red Hat customer, please continue to file support cases via the Red Hat customer portal. Otherwise, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 920069

Summary: qemu core dump on source host after migration finished (spice client opened)
Product: Red Hat Enterprise Linux 7
Reporter: Qunfang Zhang <qzhang>
Component: spice
Assignee: Juan Quintela <quintela>
Status: CLOSED CURRENTRELEASE
QA Contact: Desktop QE <desktop-qa-list>
Severity: high
Docs Contact:
Priority: high
Version: 7.0
CC: acathrow, hhuang, juli, juzhang, kraxel, marcandre.lureau, mazhang, michen, owasserm, quintela, qzhang, virt-maint, xwei, yhalperi
Target Milestone: rc
Keywords: Regression
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-12-18 14:46:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Qunfang Zhang 2013-03-11 10:02:28 UTC
Description of problem:
Start a guest with spice, keep the guest desktop open in a spice client, and then migrate the guest. After the migration finishes, qemu core dumps on the source host, even though the migration completed successfully and the guest is already running on the destination host.

The bug does not occur when the spice client is closed.

The bug does not occur with qemu-kvm-1.3, so this should be a regression.

I filed this bug against qemu-kvm because, with the same spice-server packages, qemu-1.3 has no problem. Please feel free to change the component if it is incorrect.

Version-Release number of selected component (if applicable):
[root@localhost home]# uname -r
3.8.0-0.38.el7.x86_64
[root@localhost home]# rpm -q qemu-kvm
qemu-kvm-1.4.0-1.el7.x86_64
[root@localhost home]# rpm -q spice-server
spice-server-0.12.2-1.el7.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Boot guest with spice
(gdb)  r  -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -name rhel6.4-64 -uuid 9a0e67ec-f286-d8e7-0548-0c1c9ec93009 -nodefconfig -nodefaults -monitor stdio -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive file=/root/RHEL-Server-6.4-64-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:d5:51:8a,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0 -spice port=5901,addr=0.0.0.0,disable-ticketing,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 

2. Open the guest with remote-viewer (could be a rhel6 remote-viewer)

3. On the destination host, boot the guest in listening mode with "-incoming tcp:0:5800"

4. Migrate guest
(qemu) migrate -d tcp:0:5800
  
Actual results:
qemu core dumps on the src host, but the migration finishes.

Expected results:
No core dump happens.

Additional info:
The backtraces are not identical across several attempts; two examples follow.

(gdb) bt
#0  0x00007ffff2afaba5 in raise () from /lib64/libc.so.6
#1  0x00007ffff2afc358 in abort () from /lib64/libc.so.6
#2  0x00007ffff2b3a59b in __libc_message () from /lib64/libc.so.6
#3  0x00007ffff2b41a8e in _int_free () from /lib64/libc.so.6
#4  0x00007ffff76fd79f in g_free () from /lib64/libglib-2.0.so.0
#5  0x00005555557674fb in migration_end () at /usr/src/debug/qemu-1.4.0/arch_init.c:541
#6  0x0000555555768637 in ram_save_complete (f=0x555556625e00, opaque=<optimized out>)
    at /usr/src/debug/qemu-1.4.0/arch_init.c:677
#7  0x00005555557d3811 in qemu_savevm_state_complete (f=0x555556625e00) at /usr/src/debug/qemu-1.4.0/savevm.c:1707
#8  0x00005555556fd645 in buffered_file_thread (opaque=0x555555c4f400 <current_migration.19506>) at migration.c:711
#9  0x00007ffff6487d15 in start_thread () from /lib64/libpthread.so.0
#10 0x00007ffff2bb746d in clone () from /lib64/libc.so.6
(gdb) 


(gdb) bt
#0  0x00007ffff2afaba5 in raise () from /lib64/libc.so.6
#1  0x00007ffff2afc358 in abort () from /lib64/libc.so.6
#2  0x00007ffff38839f5 in spice_logv (log_domain=0x7ffff38fa0c6 "Spice", log_level=SPICE_LOG_LEVEL_ERROR, 
    strloc=0x7ffff38fce70 "red_channel.c:1711", function=0x7ffff38fd520 <__FUNCTION__.22504> "red_client_destroy", 
    format=0x7ffff38fa09e "assertion `%s' failed", args=args@entry=0x7ffeeb7fd9a8) at log.c:109
#3  0x00007ffff3883b38 in spice_log (log_domain=log_domain@entry=0x7ffff38fa0c6 "Spice", 
    log_level=log_level@entry=SPICE_LOG_LEVEL_ERROR, strloc=strloc@entry=0x7ffff38fce70 "red_channel.c:1711", 
    function=function@entry=0x7ffff38fd520 <__FUNCTION__.22504> "red_client_destroy", 
    format=format@entry=0x7ffff38fa09e "assertion `%s' failed") at log.c:123
#4  0x00007ffff3842570 in red_client_destroy (client=0x55555691ae60) at red_channel.c:1711
#5  0x00007ffff3867352 in reds_client_disconnect (client=0x55555691ae60) at reds.c:561
#6  reds_client_disconnect (client=0x55555691ae60) at reds.c:518
#7  0x00007ffff38678c1 in reds_disconnect () at reds.c:589
#8  0x00007ffff386c207 in spice_server_migrate_end (s=<optimized out>, completed=1) at reds.c:4400
#9  0x000055555587e404 in notifier_list_notify (list=list@entry=0x555556068268 <migration_state_notifiers>, 
    data=data@entry=0x555555c4f400 <current_migration.19506>) at util/notify.c:39
#10 0x00005555556fd684 in migrate_fd_completed (s=0x555555c4f400 <current_migration.19506>) at migration.c:294
#11 buffered_file_thread (opaque=0x555555c4f400 <current_migration.19506>) at migration.c:716
#12 0x00007ffff6487d15 in start_thread () from /lib64/libpthread.so.0
#13 0x00007ffff2bb746d in clone () from /lib64/libc.so.6

Comment 1 Qunfang Zhang 2013-03-11 10:07:21 UTC
Also caught some logs without using gdb.

(qemu) info migrate
(/usr/libexec/qemu-kvm:13673): Spice-Warning **: reds.c:4399:spice_server_migrate_end: spice_server_migrate_info was not called, disconnecting clients
red_client_destroy: destroy client with #channels 6
(/usr/libexec/qemu-kvm:13673): Spice-ERROR **: red_channel.c:1711:red_client_destroy: assertion `pthread_equal(pthread_self(), client->thread_id)' failed
Thread 6 (Thread 0x7f0c8dc60700 (LWP 13683)):
#0  0x00007f0c993575e5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f0c9b1ea159 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x7f0c9ba00820 <qemu_global_mutex>) at util/qemu-thread-posix.c:116
#2  0x00007f0c9b0e183b in qemu_kvm_wait_io_event (env=0x7f0c9cf8b0c0) at /usr/src/debug/qemu-1.4.0/cpus.c:727
#3  qemu_kvm_cpu_thread_fn (arg=0x7f0c9cf8b0c0) at /usr/src/debug/qemu-1.4.0/cpus.c:764
#4  0x00007f0c99353d15 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f0c95a8346d in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x7f0c8d45f700 (LWP 13684)):
#0  0x00007f0c993575e5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f0c9b1ea159 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x7f0c9ba00820 <qemu_global_mutex>) at util/qemu-thread-posix.c:116
#2  0x00007f0c9b0e183b in qemu_kvm_wait_io_event (env=0x7f0c9cfb6a00) at /usr/src/debug/qemu-1.4.0/cpus.c:727
#3  qemu_kvm_cpu_thread_fn (arg=0x7f0c9cfb6a00) at /usr/src/debug/qemu-1.4.0/cpus.c:764
#4  0x00007f0c99353d15 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f0c95a8346d in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7f0c8ca23700 (LWP 13685)):
#0  0x00007f0c95a7a98d in poll () from /lib64/libc.so.6
#1  0x00007f0c9672fbd9 in red_worker_main () from /lib64/libspice-server.so.1
#2  0x00007f0c99353d15 in start_thread () from /lib64/libpthread.so.0
#3  0x00007f0c95a8346d in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7f0bb67fc700 (LWP 13825)):
#0  0x00007f0c9935a12d in read () from /lib64/libpthread.so.0
#1  0x00007f0c96747e93 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
#2  0x00007f0c9674f9ef in spice_logv () from /lib64/libspice-server.so.1
#3  0x00007f0c9674fb38 in spice_log () from /lib64/libspice-server.so.1
#4  0x00007f0c9670e570 in red_client_destroy () from /lib64/libspice-server.so.1
#5  0x00007f0c96733352 in reds_client_disconnect () from /lib64/libspice-server.so.1
#6  0x00007f0c967338c1 in reds_disconnect () from /lib64/libspice-server.so.1
#7  0x00007f0c96738207 in spice_server_migrate_end () from /lib64/libspice-server.so.1
#8  0x00007f0c9b1f5404 in notifier_list_notify (list=list@entry=0x7f0c9b9df268 <migration_state_notifiers>, data=data@entry=0x7f0c9b5c6400 <current_migration.19506>) at util/notify.c:39
#9  0x00007f0c9b074684 in migrate_fd_completed (s=0x7f0c9b5c6400 <current_migration.19506>) at migration.c:294
#10 buffered_file_thread (opaque=0x7f0c9b5c6400 <current_migration.19506>) at migration.c:716
#11 0x00007f0c99353d15 in start_thread () from /lib64/libpthread.so.0
#12 0x00007f0c95a8346d in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7f0bf69fb700 (LWP 13831)):
#0  0x00007f0c99359780 in sem_timedwait () from /lib64/libpthread.so.0
#1  0x00007f0c9b1ea31b in qemu_sem_timedwait (sem=sem@entry=0x7f0c9b9e0940 <sem>, ms=ms@entry=10000) at util/qemu-thread-posix.c:237
#2  0x00007f0c9b0b54fe in worker_thread (unused=<optimized out>) at thread-pool.c:88
#3  0x00007f0c99353d15 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f0c95a8346d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f0c9ae99a00 (LWP 13673)):
#0  0x00007f0c99359e4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f0c99355ca6 in _L_lock_836 () from /lib64/libpthread.so.0
#2  0x00007f0c99355ba8 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007f0c9b1e9f39 in qemu_mutex_lock (mutex=mutex@entry=0x7f0c9ba00820 <qemu_global_mutex>) at util/qemu-thread-posix.c:57
#4  0x00007f0c9b0e2dc0 in qemu_mutex_lock_iothread () at /usr/src/debug/qemu-1.4.0/cpus.c:909
#5  0x00007f0c9b072aaa in os_host_main_loop_wait (timeout=1) at main-loop.c:233
#6  main_loop_wait (nonblocking=<optimized out>) at main-loop.c:416
#7  0x00007f0c9af4c385 in main_loop () at vl.c:2001
#8  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4326
Aborted (core dumped)
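
For reference, the Spice-Warning above notes that spice_server_migrate_info was not called before spice_server_migrate_end reached the disconnect path. In a seamless Spice migration the client_migrate_info monitor command is normally issued before starting the migration, as in the session reproduced in comment 5 (addresses and ports are the ones from that session):

```
(qemu) client_migrate_info spice 127.0.0.1 5903
(qemu) migrate -d tcp:localhost:4444
```

Note that comment 5 hits the same pthread_equal assertion even with client_migrate_info issued, so the warning is a symptom here rather than the cause.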

Comment 4 Xiaoqing Wei 2013-03-18 05:30:01 UTC
*** Bug 922466 has been marked as a duplicate of this bug. ***

Comment 5 Marc-Andre Lureau 2013-06-21 19:03:22 UTC
I just hit it from the command line too; moving the component to spice-server.

elmarco@makai:~$ qemu-kvm -smp 4 -m 1024 -vga qxl -spice port=5901,disable-ticketing,jpeg-wan-compression=never,zlib-glz-wan-compression=never,seamless-migration=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0  -monitor stdio  -snapshot ~/320g/win7.img 
QEMU 1.4.1 monitor - type 'help' for more information
(qemu) main_channel_link: add main channel client
main_channel_handle_parsed: net test: invalid values, latency 0 roundtrip 13722. assuming highbandwidth
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer: 
main_channel_handle_parsed: agent start

(qemu) 
(qemu) 
(qemu) client_migrate_info spice 127.0.0.1 5903
main_channel_client_handle_migrate_connected: client 0x7f5e2429a540 connected: 1 seamless 1
(qemu) migrate -d tcp:localhost:4444
(qemu) red_client_migrate: migrate client with #channels 4
(/usr/bin/qemu-kvm:10464): Spice-ERROR **: red_channel.c:1696:red_client_migrate: assertion `pthread_equal(pthread_self(), client->thread_id)' failed
Thread 8 (Thread 0x7f5e123ab700 (LWP 10465)):
#0  sem_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:101
#1  0x00007f5e214825f7 in qemu_sem_timedwait (sem=sem@entry=0x7f5e21c85480 <sem>, ms=ms@entry=10000) at util/qemu-thread-posix.c:237
#2  0x00007f5e2133a33e in worker_thread (unused=<optimized out>) at thread-pool.c:88
#3  0x00007f5e1f390c53 in start_thread (arg=0x7f5e123ab700) at pthread_create.c:308
#4  0x00007f5e1acd1ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 7 (Thread 0x7f5e11baa700 (LWP 10466)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f5e21482429 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x7f5e21ca5340 <qemu_global_mutex>) at util/qemu-thread-posix.c:116
#2  0x00007f5e2136646b in qemu_kvm_wait_io_event (env=0x7f5e241b0850) at /usr/src/debug/qemu-1.4.1/cpus.c:727
#3  qemu_kvm_cpu_thread_fn (arg=0x7f5e241b0850) at /usr/src/debug/qemu-1.4.1/cpus.c:764
#4  0x00007f5e1f390c53 in start_thread (arg=0x7f5e11baa700) at pthread_create.c:308
#5  0x00007f5e1acd1ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 6 (Thread 0x7f5e113a9700 (LWP 10467)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f5e21482429 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x7f5e21ca5340 <qemu_global_mutex>) at util/qemu-thread-posix.c:116
#2  0x00007f5e2136646b in qemu_kvm_wait_io_event (env=0x7f5e241dc190) at /usr/src/debug/qemu-1.4.1/cpus.c:727
#3  qemu_kvm_cpu_thread_fn (arg=0x7f5e241dc190) at /usr/src/debug/qemu-1.4.1/cpus.c:764
#4  0x00007f5e1f390c53 in start_thread (arg=0x7f5e113a9700) at pthread_create.c:308
#5  0x00007f5e1acd1ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 5 (Thread 0x7f5e10ba8700 (LWP 10470)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f5e21482429 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x7f5e21ca5340 <qemu_global_mutex>) at util/qemu-thread-posix.c:116
#2  0x00007f5e2136646b in qemu_kvm_wait_io_event (env=0x7f5e241ec910) at /usr/src/debug/qemu-1.4.1/cpus.c:727
#3  qemu_kvm_cpu_thread_fn (arg=0x7f5e241ec910) at /usr/src/debug/qemu-1.4.1/cpus.c:764
#4  0x00007f5e1f390c53 in start_thread (arg=0x7f5e10ba8700) at pthread_create.c:308
#5  0x00007f5e1acd1ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 4 (Thread 0x7f5e03fff700 (LWP 10472)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f5e21482429 in qemu_cond_wait (cond=<optimized out>, mutex=mutex@entry=0x7f5e21ca5340 <qemu_global_mutex>) at util/qemu-thread-posix.c:116
#2  0x00007f5e2136646b in qemu_kvm_wait_io_event (env=0x7f5e241fd090) at /usr/src/debug/qemu-1.4.1/cpus.c:727
#3  qemu_kvm_cpu_thread_fn (arg=0x7f5e241fd090) at /usr/src/debug/qemu-1.4.1/cpus.c:764
#4  0x00007f5e1f390c53 in start_thread (arg=0x7f5e03fff700) at pthread_create.c:308
#5  0x00007f5e1acd1ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 3 (Thread 0x7f5e027fd700 (LWP 10473)):
#0  0x00007f5e1acc78fd in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007f5e1b99269f in red_worker_main () from /lib64/libspice-server.so.1
#2  0x00007f5e1f390c53 in start_thread (arg=0x7f5e027fd700) at pthread_create.c:308
#3  0x00007f5e1acd1ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 2 (Thread 0x7f5e012ed700 (LWP 10558)):
#0  0x00007f5e1f3970cd in read () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007f5e1b9a9df1 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
#2  0x00007f5e1b9b1597 in spice_logv () from /lib64/libspice-server.so.1
#3  0x00007f5e1b9b16e8 in spice_log () from /lib64/libspice-server.so.1
#4  0x00007f5e1b970b9b in red_client_migrate () from /lib64/libspice-server.so.1
#5  0x00007f5e1b99a88d in spice_server_migrate_end () from /lib64/libspice-server.so.1
#6  0x00007f5e2148d1b7 in notifier_list_notify (list=list@entry=0x7f5e21c83dd0 <migration_state_notifiers>, data=data@entry=0x7f5e2186a8a0 <current_migration.19549>) at util/notify.c:39
#7  0x00007f5e212fb11d in migrate_fd_completed (s=0x7f5e2186a8a0 <current_migration.19549>) at migration.c:294
#8  buffered_file_thread (opaque=0x7f5e2186a8a0 <current_migration.19549>) at migration.c:716
#9  0x00007f5e1f390c53 in start_thread (arg=0x7f5e012ed700) at pthread_create.c:308
#10 0x00007f5e1acd1ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 1 (Thread 0x7f5e2110ba40 (LWP 10464)):
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f5e1f392ba1 in _L_lock_790 () from /lib64/libpthread.so.0
#2  0x00007f5e1f392aa7 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x7f5e21ca5340 <qemu_global_mutex>) at pthread_mutex_lock.c:64
#3  0x00007f5e21482209 in qemu_mutex_lock (mutex=mutex@entry=0x7f5e21ca5340 <qemu_global_mutex>) at util/qemu-thread-posix.c:57
#4  0x00007f5e213676b0 in qemu_mutex_lock_iothread () at /usr/src/debug/qemu-1.4.1/cpus.c:909
#5  0x00007f5e212f9586 in os_host_main_loop_wait (timeout=1000) at main-loop.c:233
#6  main_loop_wait (nonblocking=<optimized out>) at main-loop.c:416
#7  0x00007f5e211d5e25 in main_loop () at vl.c:2001
#8  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4326
Aborted (core dumped)

Comment 6 Yonit Halperin 2013-10-08 14:51:02 UTC
Hi Qunfang,
This looks like bug 962954 which was fixed in qemu-kvm-1.4.0-3.el7.
I couldn't reproduce the bug with the latest qemu-kvm. Can you please check if it works for you now?

Comment 7 Jun Li 2013-10-10 08:51:41 UTC
(In reply to Yonit Halperin from comment #6)
> Hi Qunfang,
> This looks like bug 962954 which was fixed in qemu-kvm-1.4.0-3.el7.
> I couldn't reproduce the bug with the latest qemu-kvm. Can you please check
> if it works for you now?

I couldn't reproduce this bug either, but I hit bug 1009297.

Version-Release number of selected component (if applicable):
# uname -r && rpm -qa|grep qemu-kvm-rhev
3.10.0-22.el7.x86_64
qemu-kvm-rhev-debuginfo-1.5.3-6.el7.x86_64
qemu-kvm-rhev-1.5.3-6.el7.x86_64


Steps to Reproduce:
1. Boot guest with spice
src host cli:
# /usr/libexec/qemu-kvm -S -M pc-i440fx-rhel7.0.0 -cpu Nehalem -enable-kvm -m 4G -smp 4,sockets=2,cores=2,threads=1 -name juli -uuid 355a2475-4e03-4cdd-bf7b-5d6a59edaa61 -rtc base=localtime,clock=host,driftfix=slew -drive file=/dev/sdb,if=none,cache=none,aio=native,format=qcow2,rerror=stop,werror=stop,id=drive0 -device virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive0,id=sys-disk,scsi=off,bootindex=0  -device virtio-balloon-pci,id=ballooning -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -netdev tap,id=hostnet0,vhost=off,queues=4,script=/etc/qemu-ifup -device virtio-net-pci,mq=on,vectors=17,netdev=hostnet0,id=virtio-net-pci0,mac=24:be:05:14:0d:82,addr=0x17,bootindex=2 -k en-us -boot menu=on,reboot-timeout=-1,strict=on -qmp tcp:0:4445,server,nowait -serial unix:/tmp/ttyS0,server,nowait -vnc :3 -spice port=5932,disable-ticketing -vga qxl -monitor stdio -monitor tcp:0:7445,server,nowait -monitor unix:/tmp/monitor1,server,nowait -device intel-hda,id=sound0,bus=pci.0 -device hda-duplex

dst host cli:
# /usr/libexec/qemu-kvm -S -M pc-i440fx-rhel7.0.0 -cpu Nehalem -enable-kvm -m 4G -smp 4,sockets=2,cores=2,threads=1 -name juli -uuid 355a2475-4e03-4cdd-bf7b-5d6a59edaa61 -rtc base=localtime,clock=host,driftfix=slew -drive file=/dev/sdb,if=none,cache=none,aio=native,format=qcow2,rerror=stop,werror=stop,id=drive0 -device virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive0,id=sys-disk,scsi=off,bootindex=0  -device virtio-balloon-pci,id=ballooning -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -netdev tap,id=hostnet0,vhost=off,queues=4,script=/etc/qemu-ifup -device virtio-net-pci,mq=on,vectors=17,netdev=hostnet0,id=virtio-net-pci0,mac=24:be:05:14:0d:82,addr=0x17,bootindex=2 -k en-us -boot menu=on,reboot-timeout=-1,strict=on -qmp tcp:0:4445,server,nowait -serial unix:/tmp/ttyS0,server,nowait -vnc :3 -spice port=5932,disable-ticketing -vga qxl -monitor stdio -monitor tcp:0:7445,server,nowait -monitor unix:/tmp/monitor1,server,nowait -device intel-hda,id=sound0,bus=pci.0 -device hda-duplex -incoming tcp:0:5800

2. Open the guest with remote-viewer 

3. Boot the guest with listening mode "-incoming tcp:0:5800"

4. Migrate guest
(qemu) migrate -d tcp:0:5800
  
Actual results:
qemu works well on the src host, migration finished.
(qemu) info status 
VM status: paused (postmigrate)
qemu works well on the dst host, but hit bug 1009297.
(qemu) info status 
VM status: running