Bug 732949 - Guest screen becomes abnormal after migration with spice
Summary: Guest screen becomes abnormal after migration with spice
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Yonit Halperin
QA Contact: Virtualization Bugs
Depends On:
Blocks: 743047
Reported: 2011-08-24 09:22 UTC by Qunfang Zhang
Modified: 2014-01-21 00:00 UTC
7 users

Fixed In Version: qemu-kvm-
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2011-12-06 15:56:30 UTC
Target Upstream Version:

Attachments (Terms of Use)
proposed fix (1.80 KB, patch)
2011-09-07 08:18 UTC, Yonit Halperin
no flags

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2011:1531 0 normal SHIPPED_LIVE Moderate: qemu-kvm security, bug fix, and enhancement update 2011-12-06 01:23:30 UTC

Description Qunfang Zhang 2011-08-24 09:22:39 UTC
Description of problem:
Found this problem while testing Bug 729869; see Bug 729869 comment 12 and comment 13.
Run 2Dtom inside a Win7 guest, then migrate the guest. After migration finishes, the 2Dtom screen updates very slowly.

Version-Release number of selected component (if applicable):
qxl: qxl-win-0.1-9

How reproducible:

Steps to Reproduce:
1. Boot a Win7 guest and install the virtio-serial driver, the qxl driver, and vdagent-win inside the guest.

/usr/libexec/qemu-kvm -M rhel6.2.0 -cpu cpu64-rhel6,+x2apic  -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -name RHEL6 -uuid 7d955163-2ddd-4711-9347-ce6180998070 -monitor stdio -rtc base=localtime -boot c -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive file=/opt/win7-32-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:10:20:54,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/tmp/foo,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -usb -spice port=5930,disable-ticketing -k en-us -vga qxl -global qxl-vga.vram_size=67108864 

2. Run 2Dtom inside the guest. I downloaded 2Dtom from:

3. Migrate the guest to another host.

Actual results:
The guest screen displays abnormally; 2Dtom updates the display very slowly.

Expected results:
2Dtom should work and display normally, the same as before migration.

Additional info:

Comment 2 Qunfang Zhang 2011-08-29 06:04:03 UTC
Hit another symptom with the same steps as in the bug description, but on different hardware.
Migrating the guest while it is running 2Dtom, after migration finishes the 2Dtom screen is frozen instead of animating, and the guest keyboard and mouse become unavailable. Sometimes migrating while the guest is not running any application also reproduces this: the guest is unresponsive after migration.

I borrowed the hosts from a colleague. If someone needs to log in to the hosts to take a look, I can ask whether they are still available.

The host hardware for this specific phenomenon:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
stepping	: 7
cpu MHz		: 3093.206
cache size	: 6144 KB
physical id	: 0
siblings	: 4
core id		: 3
cpu cores	: 4
apicid		: 6
initial apicid	: 6
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 x2apic popcnt aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips	: 6185.15
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual
power management:

Comment 3 Yonit Halperin 2011-08-29 06:21:06 UTC
For the slowness scenario:
After some user operations in the guest (e.g., pressing Escape or Alt-Tab), the guest returns to its usual speed. While the guest is slow, qemu's CPU usage is about 30% of its usage once the guest becomes responsive again.

Comment 4 Yonit Halperin 2011-08-29 12:46:38 UTC
Found the problem (at least for the slowness scenario):

src qemu
1) the qxl driver waits for an interrupt (specifically QXL_INTERRUPT_DISPLAY)
2) the red worker thread calls qxl_send_events with QXL_INTERRUPT_DISPLAY
3) the interrupt request is pushed to a pipe
4) migration occurs before the io thread reads the pipe and calls qxl_set_irq

target qemu
1) the qxl driver still waits for the interrupt (which was never delivered)
2) when the red worker thread calls qxl_send_events with QXL_INTERRUPT_DISPLAY,
   the call is ignored, because qxl->ram->int_pending already has
   QXL_INTERRUPT_DISPLAY set (by the qxl_send_events call in the src qemu)
==> the driver is starved.

Comment 5 Yonit Halperin 2011-09-07 08:18:01 UTC
Created attachment 521813 [details]
proposed fix

Comment 9 Qunfang Zhang 2011-09-15 07:32:55 UTC
Reproduced with qemu-kvm- using the same command line and steps as in the bug description.
Reproducible on the first migration: the guest screen freezes instead of dynamically running 2Dtom.

Verified on qemu-kvm-: cannot reproduce this issue after 20 migrations.

So, I will set the status to VERIFIED.

Comment 11 errata-xmlrpc 2011-12-06 15:56:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

