Bug 871306 - destination qemu core dumps when doing S3 while migrating a win7 sp1 guest
Summary: destination qemu core dumps when doing S3 while migrating a win7 sp1 guest
Keywords:
Status: CLOSED DUPLICATE of bug 874574
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-10-30 08:05 UTC by Sibiao Luo
Modified: 2012-12-03 01:49 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-30 08:46:02 UTC
Target Upstream Version:
Embargoed:



Description Sibiao Luo 2012-10-30 08:05:21 UTC
Description of problem:
Do S3 in a Windows 7 SP1 64-bit guest while migrating the guest from src to dst; qemu core dumps on the destination.
BTW, if S3 is not done in the guest and the migration is run from src to dst with the same qemu-kvm command line, it finishes successfully.

Version-Release number of selected component (if applicable):
host info:
kernel-2.6.32-335.el6.x86_64
qemu-kvm-0.12.1.2-2.331.el6.x86_64
spice-gtk-0.14-5.el6.x86_64
spice-gtk-tools-0.14-5.el6.x86_64
spice-server-0.12.0-1.el6.x86_64
guest info:
win7 sp1 64bit
virtio-win-prewhql-0.1-41

How reproducible:
always

Steps to Reproduce:
1. Sync the host time with an NTP server:
# ntpdate $ntp_server
2. Boot the guest with '-global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -spice port=5931,disable-ticketing,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864' on src, and append '-incoming tcp:0:5888,server,nowait' on dst.
3. Sync the guest system clock to the host.
4. Suspend the guest to memory inside the guest:
click Start ---> Sleep
5. Do migration from src to dst.
(qemu) __com.redhat_spice_migrate_info $dst_ip $port
main_channel_client_handle_migrate_connected: client 0x7ffff8b75450 connected: 1 seamless 1
(qemu) migrate -d tcp:$ip_addr:$port

Actual results:
After step 5, qemu core dumps on the destination, while the migration completes on the source.
- src qemu:
(qemu) info migrate 
Migration status: completed
- dst qemu:
(qemu) info status 
VM status: paused (incoming-migration)
(qemu) 
main_channel_link: add main channel client
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer: 
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
(/usr/bin/gdb:2762): Spice-CRITICAL **: red_memslots.c:94:validate_virt: virtual address out of range
    virt=0x0+0x300000 slot_id=1 group_id=1
    slot=0x0-0x0 delta=0x0
Detaching after fork from child process 2790.

Program received signal SIGABRT, Aborted.
0x00007ffff57468a5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00007ffff57468a5 in raise () from /lib64/libc.so.6
#1  0x00007ffff5748085 in abort () from /lib64/libc.so.6
#2  0x00007ffff5fa0c35 in spice_logv (log_domain=0x7ffff6017c4e "Spice", log_level=SPICE_LOG_LEVEL_CRITICAL, strloc=0x7ffff601b4ba "red_memslots.c:94", function=
    0x7ffff601b59f "validate_virt", format=0x7ffff601b2c8 "virtual address out of range\n    virt=0x%lx+0x%x slot_id=%d group_id=%d\n    slot=0x%lx-0x%lx delta=0x%lx", 
    args=0x7fffe65fc890) at log.c:109
#3  0x00007ffff5fa0d6a in spice_log (log_domain=<value optimized out>, log_level=<value optimized out>, strloc=<value optimized out>, function=<value optimized out>, 
    format=<value optimized out>) at log.c:123
#4  0x00007ffff5f61403 in validate_virt (info=<value optimized out>, virt=0, slot_id=1, add_size=3145728, group_id=1) at red_memslots.c:90
#5  0x00007ffff5f61553 in get_virt (info=<value optimized out>, addr=<value optimized out>, add_size=<value optimized out>, group_id=1, error=0x7fffe65fca7c)
    at red_memslots.c:142
#6  0x00007ffff5f76717 in dev_create_primary_surface (worker=0x7fff440008c0, surface_id=<value optimized out>, surface=...) at red_worker.c:10976
#7  0x00007ffff5f76cf3 in handle_dev_create_primary_surface_async (opaque=<value optimized out>, payload=<value optimized out>) at red_worker.c:11187
#8  0x00007ffff5f5ecc7 in dispatcher_handle_single_read (dispatcher=0x7ffff8a25ed8) at dispatcher.c:139
#9  dispatcher_handle_recv_read (dispatcher=0x7ffff8a25ed8) at dispatcher.c:162
#10 0x00007ffff5f7f88e in red_worker_main (arg=<value optimized out>) at red_worker.c:11782
#11 0x00007ffff773d851 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ffff57fc90d in clone () from /lib64/libc.so.6
(gdb)
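
For context on the backtrace: the abort comes from a memslot range check in spice-server. dev_create_primary_surface() resolves the guest framebuffer address through get_virt(), which rejects any address that does not fall inside a registered QXL memory slot. slot=0x0-0x0 in the log means the slot boundaries are still zero on the destination, so even virt=0x0+0x300000 fails the check. Below is a minimal standalone sketch of that kind of range check, using the values from the log; the type and function names (MemSlotSketch, validate_virt_sketch) are simplified stand-ins, not the actual red_memslots.c code.

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for one QXL memory slot's virtual address range. */
typedef struct {
    unsigned long virt_start;   /* slot=0x0-... in the log */
    unsigned long virt_end;     /* ...-0x0 in the log */
} MemSlotSketch;

/* Returns 1 if [virt, virt+add_size) lies inside the slot, else 0. */
static int validate_virt_sketch(const MemSlotSketch *slot,
                                unsigned long virt, uint32_t add_size)
{
    if (virt + add_size < virt)
        return 0;               /* wrap-around */
    if (virt < slot->virt_start || virt + add_size > slot->virt_end)
        return 0;               /* out of range: spice logs and aborts here */
    return 1;
}

int main(void)
{
    MemSlotSketch slot = { 0x0, 0x0 };  /* destination slot never registered */

    /* virt=0x0 add_size=0x300000: the primary-surface framebuffer lookup */
    if (!validate_virt_sketch(&slot, 0x0, 0x300000))
        fprintf(stderr, "virtual address out of range -> abort\n");
    return 0;
}

Running the sketch prints the out-of-range message, mirroring how a zeroed (unregistered) slot on the destination makes any framebuffer lookup fail during incoming migration of an S3-suspended guest.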

Expected results:
Migration completes successfully after doing S3 in the source guest, and the guest can then be resumed successfully on the destination by clicking the PS/2 mouse/keyboard or sending the "system_wakeup" qemu monitor command.

Additional info:
qemu-kvm-command-lines:
eg: # /usr/libexec/qemu-kvm -M rhel6.4.0 -cpu SandyBridge -enable-kvm -m 2048 -smp 4,sockets=2,cores=2,threads=1 -usb -device usb-tablet,id=input0 -name sluo_acpi -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 -rtc base=localtime,clock=host,driftfix=slew -drive file=/mnt/windows_7_ultimate_sp1_x64.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio_drive,bus=pci.0,addr=0x3,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=08:2E:5F:0A:0D:B1,bus=pci.0,addr=0x4 -device usb-ehci,id=ehci,addr=0x5 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -spice port=5931,disable-ticketing,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864 -device intel-hda,id=sound0,bus=pci.0,addr=0x7 -drive file=/mnt/my-data-disk.qcow2,if=none,id=drive-ide0-0-0,format=qcow2,cache=none,werror=stop,rerror=stop -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -boot menu=on -monitor stdio -incoming tcp:0:5888,server,nowait

Comment 1 Sibiao Luo 2012-10-30 08:08:20 UTC
(In reply to comment #0)
> Description of problem:
> Do S3 in a Windows 7 SP1 64-bit guest while migrating the guest from src to
> dst; qemu core dumps on the destination.
> BTW, if S3 is not done in the guest and the migration is run from src to
> dst with the same qemu-kvm command line, it finishes successfully.
> 
This issue was discovered while working on bug 870716.

My two hosts are identical; both are SandyBridge hosts.
CPU info is as follows:
processor	: 7
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
stepping	: 7
cpu MHz		: 1600.000
cache size	: 8192 KB
physical id	: 0
siblings	: 8
core id		: 3
cpu cores	: 4
apicid		: 7
initial apicid	: 7
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips	: 6783.75
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

Comment 2 Qunfang Zhang 2012-10-30 08:18:42 UTC
Is this a duplicate of bug 868256 (or bug 867816)?

Comment 4 Sibiao Luo 2012-10-30 08:46:02 UTC
Based on comment 2, closing this as a duplicate.

*** This bug has been marked as a duplicate of bug 868256 ***

Comment 5 Sibiao Luo 2012-12-03 01:48:37 UTC

*** This bug has been marked as a duplicate of bug 874574 ***

Comment 6 Sibiao Luo 2012-12-03 01:49:58 UTC
(In reply to comment #5)
> 
> *** This bug has been marked as a duplicate of bug 874574 ***
Bug #871306 actually matches bug #874574, so I changed it to be a duplicate of that bug instead. Thanks.

