Bug 618205

Summary: SPICE - race in KVM/Spice would cause migration to fail (slots are not registered properly?)
Product: Red Hat Enterprise Linux 5
Reporter: RHEL Product and Program Management <pm-rhel>
Component: kvm
Assignee: Virtualization Maintenance <virt-maint>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: urgent
Version: 5.5
CC: cmeadors, iheim, kraxel, llim, michen, mkenneth, oschreib, plyons, pm-eus, Rhev-m-bugs, tburke, virt-maint, ykaul
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: kvm-83-164.el5_5.18
Doc Type: Bug Fix
Doc Text:
Previously, migration of a virtual machine would fail because the virtual machine could not map all of its memory. This was caused by a conflict triggered when a virtual machine was started and then migrated right away. With this update, the conflict no longer occurs and migration no longer fails.
Story Points: ---
Clone Of: 567046
Environment:
Last Closed: 2010-08-19 21:32:22 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Bug Depends On: 567046    
Bug Blocks:    

Description RHEL Product and Program Management 2010-07-26 12:05:06 UTC
This bug has been copied from bug #567046 and has been proposed
to be backported to 5.5 z-stream (EUS).

Comment 6 Miya Chen 2010-08-04 05:16:52 UTC
Reproduced this problem with kvm-83-164.el5_5.12: doing migration immediately after "system_reset" produced 3 different failure results:
1. (qemu) Error while reading ram block header
Error block header
2. (qemu) load of migration failed
3. (qemu) ram_save_block: update dirty pages log failed -2
ram_save_block: update dirty pages log failed -2
ram_save_block: update dirty pages log failed -2
ram_save_block: update dirty pages log failed -2

steps:
1. start source guest with spice:
# /usr/libexec/qemu-kvm -m 1G -smp 2 -cpu qemu64,+sse2  -monitor stdio -drive file=/root/rhel5.5-64-virtio.qcow2,if=virtio,boot=on,cache=none  -net nic,macaddr=20:20:20:12:23:16,model=virtio,vlan=0 -net tap,script=/etc/qemu-ifup,vlan=0 -usbdevice tablet -spice host=0,ic=on,port=5930,disable-ticketing -qxl 1
2. start destination:
<qemu-kvm commandline> -incoming tcp:0:5888
3. in src qemu monitor, do migration immediately after system_reset:
(qemu) system_reset 
(qemu) qxl_display_resize
qxl_reset
handle_dev_input: detach
handle_dev_input: attach
create_cairo_context: using cairo canvas
vdi_port_io_map: base 0xc040 size 0x10
vdi_port_ram_map: addr 0xc1000000 size 0x10000
ram_map: addr 0xc4000000 size 0x4000000
vram_map: addr 0xc8000000 size 0x1000
rom_map: addr 0xc8002000 size 0x2000
ioport_map: base 0xc050 size 0x8

(qemu) migrate -d tcp:ip:5888
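
The back-to-back monitor commands in step 3 can also be sent mechanically, which makes the race easier to hit repeatedly. A minimal sketch, assuming the source guest's human monitor is exposed on a TCP socket (e.g. started with "-monitor tcp:127.0.0.1:4444,server,nowait" instead of "-monitor stdio"); the host, port, and destination address are illustrative, not from this report:

```python
# Hypothetical automation of the reproduction steps above: issue
# "system_reset" immediately followed by "migrate -d ..." with no
# delay in between, mimicking the manual monitor input.
import socket


def monitor_commands(dest_host: str, dest_port: int) -> list:
    """Build the back-to-back command sequence that exposes the race."""
    return ["system_reset", "migrate -d tcp:%s:%d" % (dest_host, dest_port)]


def send_commands(mon_host: str, mon_port: int, commands: list) -> None:
    """Write each command to the QEMU human monitor, one per line."""
    with socket.create_connection((mon_host, mon_port)) as s:
        for cmd in commands:
            s.sendall((cmd + "\n").encode())


if __name__ == "__main__":
    # Assumed monitor and migration endpoints; adjust for your setup.
    send_commands("127.0.0.1", 4444, monitor_commands("192.168.0.2", 5888))
```

Running this in a loop against a freshly started source/destination pair is one way to script the ping-pong test described below.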


Tested with kvm-83-164.el5_5.20 using the above 3 steps and performed ping-pong migration 10 times; all migrations completed successfully, so this bug has been fixed.

Comment 9 Martin Prpič 2010-08-17 11:21:14 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Previously, migration of a virtual machine would fail because the virtual machine could not map all of its memory. This was caused by a conflict triggered when a virtual machine was started and then migrated right away. With this update, the conflict no longer occurs and migration no longer fails.

Comment 10 errata-xmlrpc 2010-08-19 21:32:22 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2010-0627.html