Bug 867816 - Guest aborted on the dst host after migration during guest doing S3
Summary: Guest aborted on the dst host after migration during guest doing S3
Keywords:
Status: CLOSED DUPLICATE of bug 868256
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Orit Wasserman
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-10-18 10:58 UTC by Qunfang Zhang
Modified: 2014-03-04 00:24 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-23 13:43:55 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Qunfang Zhang 2012-10-18 10:58:02 UTC
Description of problem:
Boot a guest and migrate it to the destination host while the guest is entering S3; the guest aborts on the dst host after migration.

Version-Release number of selected component (if applicable):
kernel-2.6.32-331.el6.x86_64
qemu-kvm-0.12.1.2-2.327.el6.x86_64
seabios-0.6.1.2-25.el6.x86_64
spice-server-0.12.0-1.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Boot a guest.
(gdb) r -M rhel6.4.0 -cpu Conroe -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -enable-kvm -name rhel6.4-64 -uuid feebc8fd-f8b0-4e75-abc3-e63fcdb67170 -smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 -k en-us -rtc base=localtime,clock=host,driftfix=slew -no-kvm-pit-reinjection -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=input0 -drive file=/mnt/rhel5.9-64-virtio.qcow2,if=none,id=disk0,format=qcow2,werror=stop,rerror=stop,aio=native -device ide-drive,drive=disk0,id=disk0,bus=ide.0,unit=1,bootindex=1 -drive file=/mnt/boot.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,id=hostnet0,fd=6 6<>/dev/tap6 -device e1000,netdev=hostnet0,id=net0,mac=00:10:1A:4A:25:28,bus=pci.0,addr=0x4  -monitor stdio -qmp tcp:0:6667,server,nowait -boot c -chardev socket,path=/tmp/isa-serial2,server,nowait,id=isa1 -device isa-serial,chardev=isa1,id=isa-serial1 -drive if=none,id=drive-fdc0-0-0,readonly=on,format=raw -global isa-fdc.driveA=drive-fdc0-0-0 -spice seamless-migration=on,port=5930,password=redhat -global qxl-vga.vram_size=33554432 -k en-us -vga qxl  -device usb-host,hostbus=2,hostaddr=4,id=hostdev  -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0

2. Boot the same guest on the dst host in listening mode.
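For reference, a minimal sketch of the listening-mode invocation, assuming the same command line as in step 1 ("<options from step 1>" is a placeholder for that full option list); the incoming port must match the migrate URI used in step 4:
# /usr/libexec/qemu-kvm <options from step 1> -incoming tcp:0:5800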

3. Connect to the guest desktop:
#remote-viewer spice://$src_host_ip:5930 &

4. On src host:
(qemu) __com.redhat_spice_migrate_info 10.66.7.170 5930 
main_channel_client_handle_migrate_connected: client 0x7ffff9020ac0 connected: 1 seamless 1
(qemu) mig
migrate               migrate_cancel        migrate_set_speed     
migrate_set_downtime  
(qemu) migrate -d tcp:t4:5800

5. Trigger S3 inside the guest during migration:
#pm-suspend
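
If the overlap between migration and S3 needs to be scripted, here is a rough sketch using the QMP socket opened on port 6667 in step 1. Assumptions (not from this bug): $guest_ip is a hypothetical placeholder, ssh access to the guest works, and the __com.redhat_spice_migrate_info command from step 4 has already been issued on the monitor:

printf '%s\n' \
    '{"execute":"qmp_capabilities"}' \
    '{"execute":"migrate","arguments":{"uri":"tcp:t4:5800"}}' \
    | nc $src_host_ip 6667         # start the migration asynchronously via QMP
ssh root@$guest_ip pm-suspend      # suspend the guest while migration is running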

  
Actual results:
Guest aborted on the dst host.

Expected results:
Guest should resume on the dst host and work normally.

Additional info:

(qemu) 
(qemu) main_channel_link: add main channel client
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer: 
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
id 0, group 1, virt start 7f02efc00000, virt end 7f02f3c00000, generation 0, delta 7f02efc00000
(/usr/libexec/qemu-kvm:15893): Spice-CRITICAL **: red_memslots.c:94:validate_virt: virtual address out of range
    virt=0x7f03e0c00c08+0x96 slot_id=0 group_id=1
    slot=0x7f02efc00000-0x7f02f3c00000 delta=0x7f02efc00000
Thread 5 (Thread 0x7f038195d700 (LWP 15903)):
#0  0x00007f038ab7f054 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f038ab7a388 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x00007f038ab7a257 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007f038b23f6f0 in post_kvm_run (kvm=<value optimized out>, env=0x7f038d420d30) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:922
#4  0x00007f038b240cbb in kvm_run (env=0x7f038d420d30) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1024
#5  0x00007f038b241119 in kvm_cpu_exec (env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1743
#6  0x00007f038b241ffd in kvm_main_loop_cpu (_env=0x7f038d420d30) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2004
#7  ap_main_loop (_env=0x7f038d420d30) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2060
#8  0x00007f038ab78851 in start_thread () from /lib64/libpthread.so.0
#9  0x00007f0388c3967d in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7f0380f5c700 (LWP 15904)):
#0  0x00007f038ab7f54d in read () from /lib64/libpthread.so.0
#1  0x00007f038939b953 in read (fd=20, buf=0x7f0380f5b9fc "\027", size=4, block=<value optimized out>) at /usr/include/bits/unistd.h:45
#2  read_safe (fd=20, buf=0x7f0380f5b9fc "\027", size=4, block=<value optimized out>) at dispatcher.c:76
#3  0x00007f038939bb86 in dispatcher_send_message (dispatcher=0x7f038d465318, message_type=19, payload=0x7f0380f5ba30) at dispatcher.c:188
#4  0x00007f038939c068 in red_dispatcher_destroy_surfaces (qxl_worker=<value optimized out>) at red_dispatcher.c:432
#5  qxl_worker_destroy_surfaces (qxl_worker=<value optimized out>) at red_dispatcher.c:439
#6  0x00007f038b3ab9d8 in qxl_spice_destroy_surfaces (qxl=0x7f038de4f840, async=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qxl.c:260
#7  0x00007f038b3ad025 in qxl_reset_surfaces (d=0x7f038de4f840, loadvm=0) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qxl.c:1225
#8  qxl_hard_reset (d=0x7f038de4f840, loadvm=0) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qxl.c:1091
#9  0x00007f038b3af2b3 in ioport_write (opaque=0x7f038de4f840, addr=<value optimized out>, val=0) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qxl.c:1512
#10 0x00007f038b240ee7 in kvm_handle_io (env=0x7f038d43a010) at /usr/src/debug/qemu-kvm-0.12.1.2/kvm-all.c:144
#11 kvm_run (env=0x7f038d43a010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1048
#12 0x00007f038b241119 in kvm_cpu_exec (env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1743
#13 0x00007f038b241ffd in kvm_main_loop_cpu (_env=0x7f038d43a010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2004
#14 ap_main_loop (_env=0x7f038d43a010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2060
#15 0x00007f038ab78851 in start_thread () from /lib64/libpthread.so.0
#16 0x00007f0388c3967d in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7f0378dfd700 (LWP 15905)):
#0  0x00007f038ab7f54d in read () from /lib64/libpthread.so.0
#1  0x00007f03893d5b00 in read () at /usr/include/bits/unistd.h:45
#2  spice_backtrace_gstack () at backtrace.c:100
#3  0x00007f03893ddc30 in spice_logv (log_domain=0x7f0389454c4e "Spice", log_level=SPICE_LOG_LEVEL_CRITICAL, strloc=0x7f03894584ba "red_memslots.c:94", function=0x7f038945859f "validate_virt", format=0x7f03894582c8 "virtual address out of range\n    virt=0x%lx+0x%x slot_id=%d group_id=%d\n    slot=0x%lx-0x%lx delta=0x%lx", args=0x7f0378dfc810) at log.c:108
#4  0x00007f03893ddd6a in spice_log (log_domain=<value optimized out>, log_level=<value optimized out>, strloc=<value optimized out>, function=<value optimized out>, format=<value optimized out>) at log.c:123
#5  0x00007f038939e403 in validate_virt (info=<value optimized out>, virt=139654632311816, slot_id=0, add_size=150, group_id=1) at red_memslots.c:90
#6  0x00007f038939e553 in get_virt (info=<value optimized out>, addr=<value optimized out>, add_size=<value optimized out>, group_id=1, error=0x7f0378dfc9d8) at red_memslots.c:142
#7  0x00007f038939ece0 in red_get_cursor_cmd (slots=0x7f02e81d3f20, group_id=1, red=0x7f02e82c28a0, addr=<value optimized out>) at red_parse_qxl.c:1303
#8  0x00007f03893a5181 in red_process_cursor (worker=0x7f02e80008c0, ring_is_empty=0x7f0378dfcaac, max_pipe_size=50) at red_worker.c:4837
#9  0x00007f03893bcf23 in flush_cursor_commands (worker=0x7f02e80008c0) at red_worker.c:9382
#10 flush_all_qxl_commands (worker=0x7f02e80008c0) at red_worker.c:9422
#11 0x00007f03893bdbc0 in dev_destroy_surfaces (opaque=<value optimized out>, payload=<value optimized out>) at red_worker.c:10813
#12 handle_dev_destroy_surfaces (opaque=<value optimized out>, payload=<value optimized out>) at red_worker.c:10842
#13 0x00007f038939bcc7 in dispatcher_handle_single_read (dispatcher=0x7f038d465318) at dispatcher.c:139
#14 dispatcher_handle_recv_read (dispatcher=0x7f038d465318) at dispatcher.c:162
#15 0x00007f03893bc88e in red_worker_main (arg=<value optimized out>) at red_worker.c:11782
#16 0x00007f038ab78851 in start_thread () from /lib64/libpthread.so.0
#17 0x00007f0388c3967d in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7f03834fb700 (LWP 15919)):
#0  0x00007f038ab7c7bb in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f038b25c2f7 in cond_timedwait (unused=<value optimized out>) at posix-aio-compat.c:102
#2  aio_thread (unused=<value optimized out>) at posix-aio-compat.c:329
#3  0x00007f038ab78851 in start_thread () from /lib64/libpthread.so.0
#4  0x00007f0388c3967d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f038b193940 (LWP 15893)):
#0  0x00007f038ab7f054 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f038ab7a388 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x00007f038ab7a257 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007f038b23e920 in kvm_mutex_lock () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2669
#4  0x00007f038b21d388 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3988
#5  0x00007f038b23f1ba in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#6  0x00007f038b21ff65 in main_loop (argc=20, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4206
#7  main (argc=20, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6443
Aborted (core dumped)

Comment 2 langfang 2012-10-19 08:12:48 UTC
Tested on a win2008r2 guest.

version:
host
# uname -r
2.6.32-327.el6.x86_64
# rpm -q qemu-kvm
qemu-kvm-0.12.1.2-2.327.el6.x86_64

guest:
win2008-r2


The steps are the same as in the reproduction above.


Results:
Sometimes (1):
After the src qemu finishes migration, the qemu on the dst machine quits.

...........-incoming tcp:0:5999
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) Guest moved used index from 2 to 1[root@hp-dl385g7-01 ~]# 

Sometimes (2):
After the src qemu finishes migration, the guest on the dst machine hangs and the screen goes dark.


Additional info: if migration is not done during S3, the guest can resume successfully.

Comment 3 Ademar Reis 2012-10-19 21:38:27 UTC
Probably related: bug 867787

Comment 4 Orit Wasserman 2012-10-21 09:17:14 UTC
Can you reproduce it with VNC (instead of spice)?
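
For illustration, one way to test the VNC case is to swap the spice/qxl display options from step 1 for VNC ones; the exact substitution below is an assumption, not taken from this bug:
remove:  -spice seamless-migration=on,port=5930,password=redhat -global qxl-vga.vram_size=33554432 -vga qxl
use:     -vnc :1 -vga cirrus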

Comment 5 Qunfang Zhang 2012-10-22 02:18:58 UTC
(In reply to comment #4)
> Can you reproduce it with VNC (instead of spice)?

Hi, Orit
I should have mentioned this earlier in the bug description: I actually cannot reproduce it with VNC.

Comment 6 Qunfang Zhang 2012-10-22 03:04:13 UTC
Hi, Orit
Maybe we don't need to dig into this issue too much, as I hit the problem when using a rhel5.9 guest, and Amit mentioned that we don't support S3/S4 for rhel5.9 guests on rhel6.4; refer to bug 868198.
I then re-tested with a rhel6.4-64 guest using the same steps and cannot reproduce the issue.

Thanks,
Qunfang

Comment 7 Orit Wasserman 2012-10-22 07:18:04 UTC
I think you hit two issues:
the first is S3, which we don't support for 5.9 guests;
the second is the crash, which looks similar to https://bugzilla.redhat.com/show_bug.cgi?id=868256.

Comment 8 Orit Wasserman 2012-10-23 13:43:55 UTC

*** This bug has been marked as a duplicate of bug 868256 ***

