Bug 822683

Summary: Two display channels with the same id get connected to the server after migration in switch-host mode
Product: Red Hat Enterprise Linux 6
Reporter: Marc-Andre Lureau <marcandre.lureau>
Component: virt-viewer
Assignee: Daniel Berrangé <berrange>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.3
CC: cfergeau, dblechte, dyasny, hjiang, mzhan, rwu, syeghiay, yhalperi, yupzhang, zpeng
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: virt-viewer-0.5.2-9.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-06-20 12:12:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
  from 'localhost' to 'remote' (flags: none)
  from 'remote' to 'localhost' (flags: none)

Description Marc-Andre Lureau 2012-05-17 18:52:37 UTC
Description of problem:

Two display channels with the same id get connected to the server after migration in switch-host mode.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. connect virt-viewer to a VM
2. do a switch-host migration (see the sketch below)
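
A minimal way to trigger this between two hosts, assuming the guest is defined in libvirt on both hosts with its image on shared storage ($guest and $target_host are placeholders):

  # virt-viewer $guest
  # virsh migrate --live $guest qemu+ssh://$target_host/system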
  
Actual results:

2 "same id" display channels connected

Expected results:

Only one display channel is connected.

Comment 2 Marc-Andre Lureau 2012-05-17 19:51:08 UTC
the fix is already in virt-viewer upstream:

http://git.fedorahosted.org/git/?p=virt-viewer.git;a=commit;h=0341125ca4dd77b58de0d0a580a0bc4515a59332

Comment 5 Huming Jiang 2012-05-22 08:54:54 UTC
This bug could be reproduced in virt-viewer-0.5.2-8.el6.x86_64.

Tested with virt-viewer-0.5.2-9.el6.x86_64.

Steps:
1. # virsh dumpxml $guest
    <graphics type='spice' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>

2. # virt-viewer $guest (and do a switch-host migration)

  (virt-viewer:10616): GSpice-WARNING **: error or unhandled channel event during migration: 20
   ...

  (The connection to the server is closed.)

3. # virsh dumpxml $guest
    <graphics type='spice' port='5900' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
   
4. # virt-viewer $guest (and do a switch-host migration)

   (virt-viewer:10962): GSpice-CRITICAL **: Received frame with invalid 0 timestamp! perhaps wrong graphic driver?

   (virt-viewer:10962): GSpice-WARNING **: set_sink_input_volume() failed: Invalid argument

   (A connection to the server still exists.)

5. # virt-viewer -c qemu+ssh://root@IP/system $guest (and do a switch-host migration)

   (A connection to the server still exists.)

If I redo the test with 'vnc' graphics, all the connections in the steps above are closed. So I am moving the status of this bug to ASSIGNED.

Comment 6 Marc-Andre Lureau 2012-05-22 15:09:04 UTC
Huming, what is the host? How do you perform the switch-host migration?

Comment 7 Huming Jiang 2012-05-23 03:15:51 UTC
I ran the test on the RHEL 6.3 snapshot 4 tree (RHEL6.3-20120516.0). Both hosts have the same configuration.
# uname -a
Linux test2 2.6.32-272.el6.x86_64 #1 SMP Tue May 15 16:25:49 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa python-virtinst virt-viewer spice-client virt-manager libvirt kernel
virt-viewer-0.5.2-9.el6.x86_64
kernel-2.6.32-272.el6.x86_64
python-virtinst-0.600.0-8.el6.noarch
virt-manager-0.9.0-14.el6.x86_64
libvirt-0.9.10-20.el6.x86_64
spice-client-0.8.2-15.el6.x86_64

I used the following steps to do the switch-host migration.
There are two hosts with IPs '10.66.3.131' and '10.66.3.157', and an NFS server ('10.66.90.121').
1. On both '10.66.3.131' and '10.66.3.157', do the following:
# iptables -F
# setsebool -P virt_use_nfs 1
# mount -t nfs 10.66.90.121:/vol/S3/libvirtauto /var/lib/libvirt/migrate/ -o vers=3
# add the hostnames of both hosts to /etc/hosts

2. On '10.66.3.131':
# virsh dumpxml $guest
   <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/migrate/windows7.img'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
...
    <graphics type='spice' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
...
3. Create a remote connection to '10.66.3.157' using virt-manager.
4. Start the guest.
5. # virt-viewer r
6. Do the migration in virt-manager (right-click on the guest -> 'Migrate' -> 'Migrate').

I then found that the connection virt-viewer had created is still there, both when migrating from 'localhost' to 'remote' and from 'remote' to 'localhost'.
See the screenshots in the attachments.

Comment 8 Huming Jiang 2012-05-23 03:17:26 UTC
Created attachment 586233 [details]
from 'localhost' to 'remote'

Comment 9 Huming Jiang 2012-05-23 03:18:18 UTC
Created attachment 586234 [details]
from 'remote' to 'localhost'

Comment 10 Marc-Andre Lureau 2012-05-23 11:41:35 UTC
That doesn't say much about which Spice migration method is being used. We have basically three methods; "switch-host" is one of them, and I think you need an older host to really test it.

(The way I tested it was by disabling the semi-seamless migration capability, so the server falls back to the "switch-host" method.)

Yonit, which method or which RHEL version can be used to test the switch-host method for QA?

Comment 11 Marc-Andre Lureau 2012-05-23 11:44:20 UTC
(In reply to comment #5)
> If i use a 'vnc' graphics to redo the test, all the connections in the steps
> above will be closed. So move the status of this bug to 'assigned'.

vnc is not covered by this patch

Comment 12 Huming Jiang 2012-05-24 05:51:36 UTC
(In reply to comment #10)
> That doesn't say much about which Spice migration method is being used. We
> have basically 3 ways, "switch-host" is one of them, and I think you need an
> older host to really test it.
> 
> (the way I tested it is by disabling the semi-seamless migration capability,
> so the server falls-back on "switch-host" method)

Hi Marc-Andre,
What I did is just a normal migration, the way we usually do it. Does a 'switch-host' migration need any other configuration?
Since we don't know how to do a switch-host migration, could you please tell us more about it?

> Yonit, which way or which rhel version can be used to test switch-host
> method for qa?

Comment 13 Yonit Halperin 2012-05-24 05:57:31 UTC
(In reply to comment #12)
> (In reply to comment #10)
> > That doesn't say much about which Spice migration method is being used. We
> > have basically 3 ways, "switch-host" is one of them, and I think you need an
> > older host to really test it.
> > 
> > (the way I tested it is by disabling the semi-seamless migration capability,
> > so the server falls-back on "switch-host" method)
> 
> Hi Marc-Andre,
> What i did is just a usual migration as we did it usually, does 'switch
> host' migration need any other configurations?
> since we don't know how to do the switch-host migration, could you please
> tell us more about it?
> 
> > Yonit, which way or which rhel version can be used to test switch-host
> > method for qa?

The following should work:
- call client_migrate_info before the target qemu is up.
  - the client fails to establish the initial connection to the target
- start the target qemu
- start the migration
  - spice should fall back to "switch-host"
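
Concretely, a minimal sequence following these steps on two hosts (hostnames, IPs and ports are placeholders; the full qemu-kvm command lines appear in comments 19 and 21):

On the source host:
  # /usr/libexec/qemu-kvm ... -spice port=5830,disable-ticketing -vga qxl -monitor stdio
  (qemu) client_migrate_info spice $target_hostname 5830
  (the target qemu is not up yet; the server prints "no client connected" here)
  # remote-viewer spice://$source_ip:5830

On the target host:
  # /usr/libexec/qemu-kvm ... -spice port=5830,disable-ticketing -vga qxl -monitor stdio -incoming tcp:0:5999

Back on the source host's monitor:
  (qemu) migrate -d tcp:$target_ip:5999
  (spice falls back to the "switch-host" method)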

Comment 14 Huming Jiang 2012-05-24 12:58:01 UTC
Hi Marc-Andre,
I retested this bug with a qemu command line and called client_migrate_info. But this way, I can access the guest only with remote-viewer and spicec, not with virt-viewer. So I could not reproduce this bug; would you please give me some suggestions on how to verify it?
Thanks in advance.

Comment 16 Marc-Andre Lureau 2012-05-24 13:07:34 UTC
(In reply to comment #14)
> Hi Marc-Andre,
> I retested this bug with qemu command and called the client_migrate_info.
> But in this way, i can access the guest only by remote-viewer and spicec, no
> by virt-viewer. So i could not reproduce this bug, would you please give me
> some suggestions about how to verify this bug?
> Thanks in advance.

Testing with remote-viewer is enough; the virt-viewer/libvirt layer doesn't influence Spice migration.

Comment 18 Marc-Andre Lureau 2012-05-24 14:01:41 UTC
Please re-test with remote-viewer, with Spice only, following comment #13.

Comment 19 Huming Jiang 2012-05-24 14:21:39 UTC
(In reply to comment #18)
> please re-test with remote-viewer, only with spice and following comment #13.

Tested with virt-viewer-0.5.2-8.el6.x86_64.rpm. Result:

Steps:
1. # /usr/libexec/qemu-kvm -enable-kvm -m 1G -smp 2 -uuid b2e2b41c-d562-4f7f-82bb-1d45c1919220 -M rhel6.2.0 -name aaa -drive file=/var/lib/libvirt/migrate/windows7.img,if=none,id=virtio,format=raw,cache=none,werror=stop,rerror=stop -device ide-drive,drive=virtio,id=drive-virtio0-0-0 -spice port=5830,disable-ticketing -vga qxl -monitor stdio

...

2. (qemu) client_migrate_info spice test2 5830
spice_server_migrate_connect: 
spice_server_migrate_connect: no client connected

3. # remote-viewer spice://10.66.3.131:5830

4. (qemu) reds_handle_auth_mechanism: Auth method: 1
reds_handle_main_link: 
reds_disconnect: 
reds_show_new_channel: channel 1:0, connected successfully, over Non Secure link
main_channel_link: add main channel client
reds_handle_main_link: NEW Client 0x7fe2ab6be0b0 mcc 0x7fe2ab6c3010 connect-id 1804289383
main_channel_handle_parsed: net test: latency 0.217000 ms, bitrate 12880503144 bps (12283.805031 Mbps)
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer: 
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
handle_new_display_channel: New display (client 0x7fe2ab6be0b0) dcc 0x7fe29406a880 stream 0x7fe2ab6be5c0
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 3:0, connected successfully, over Non Secure link
inputs_connect: inputs channel client create
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 4:0, connected successfully, over Non Secure link
red_dispatcher_set_cursor_peer: 
display_channel_client_wait_for_init: creating encoder with id == 0
handle_dev_cursor_connect: cursor connect
red_connect_cursor: add cursor channel client
listen_to_new_client_channel: NEW ID = 0
...

5. (qemu) migrate -d tcp:10.66.3.157:5999
spice_server_migrate_start: 
(qemu) handle_dev_stop: stop
spice_server_migrate_end: 
reds_mig_finished: 
main_channel_migrate_complete: 
main_channel_migrate_complete: client 0x7fe2ab6be0b0 SWITCH_HOST
main_channel_marshall_migrate_switch: 
red_channel_client_disconnect: 0x7fe29406a880 (channel 0x7fe2940458d0 type 2 id 0)
display_channel_client_on_disconnect: 
red_channel_client_disconnect: 0x7fe2ab6f5010 (channel 0x7fe2ab3357a0 type 3 id 0)
red_channel_client_disconnect: 0x7fe2ab6c3010 (channel 0x7fe2ab32a2c0 type 1 id 0)
red_channel_client_disconnect: 0x7fe294123460 (channel 0x7fe294045e90 type 4 id 0)
main_channel_client_on_disconnect: rcc=0x7fe2ab6c3010
reds_client_disconnect: 
red_client_destroy: destroy client with #channels 4
red_dispatcher_disconnect_cursor_peer: 
handle_dev_cursor_disconnect: disconnect cursor client
red_channel_client_disconnect: 0x7fe294123460 (channel 0x7fe294045e90 type 4 id 0)
red_channel_client_disconnect: 0x7fe2ab6f5010 (channel 0x7fe2ab3357a0 type 3 id 0)
red_dispatcher_disconnect_display_peer: 
handle_dev_display_disconnect: disconnect display client
red_channel_client_disconnect: 0x7fe29406a880 (channel 0x7fe2940458d0 type 2 id 0)
red_channel_client_disconnect: 0x7fe2ab6c3010 (channel 0x7fe2ab32a2c0 type 1 id 0)
reds_handle_auth_mechanism: Auth method: 1
reds_handle_main_link: 
reds_disconnect: 
reds_show_new_channel: channel 1:0, connected successfully, over Non Secure link
main_channel_link: add main channel client
reds_handle_main_link: NEW Client 0x7fe2ab709270 mcc 0x7fe2ab7092e0 connect-id 846930886
main_channel_handle_parsed: net test: latency 0.170000 ms, bitrate 40960000000 bps (39062.500000 Mbps)
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 3:0, connected successfully, over Non Secure link
inputs_connect: inputs channel client create
reds_handle_auth_mechanism: Auth method: 1
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer: 
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
reds_handle_auth_mechanism: Auth method: 1
handle_new_display_channel: New display (client 0x7fe2ab709270) dcc 0x7fe29406a540 stream 0x7fe2ab6be180
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
reds_show_new_channel: channel 4:0, connected successfully, over Non Secure link
red_dispatcher_set_cursor_peer: 
display_channel_client_wait_for_init: creating encoder with id == 0
handle_dev_cursor_connect: cursor connect
red_connect_cursor: add cursor channel client
listen_to_new_client_channel: NEW ID = 0

Checking the viewer from step 3, the connection is still there.

Tested with virt-viewer-0.5.2-9.el6.x86_64.rpm. Result:

Steps:
6. # /usr/libexec/qemu-kvm -enable-kvm -m 1G -smp 2 -uuid b2e2b41c-d562-4f7f-82bb-1d45c1919220 -M rhel6.2.0 -name aaa -drive file=/var/lib/libvirt/migrate/windows7.img,if=none,id=virtio,format=raw,cache=none,werror=stop,rerror=stop -device ide-drive,drive=virtio,id=drive-virtio0-0-0 -spice port=5830,disable-ticketing -vga qxl -monitor stdio

...

7. (qemu) client_migrate_info spice test2 5830
spice_server_migrate_connect: 
spice_server_migrate_connect: no client connected

8. # remote-viewer spice://10.66.3.131:5830

9. (qemu) reds_handle_auth_mechanism: Auth method: 1
reds_handle_main_link: 
reds_disconnect: 
reds_show_new_channel: channel 1:0, connected successfully, over Non Secure link
main_channel_link: add main channel client
reds_handle_main_link: NEW Client 0x7f5fbd1a43a0 mcc 0x7f5fbd19a740 connect-id 1804289383
main_channel_handle_parsed: net test: latency 0.174000 ms, bitrate 9142857142 bps (8719.308035 Mbps)
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 3:0, connected successfully, over Non Secure link
inputs_connect: inputs channel client create
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 4:0, connected successfully, over Non Secure link
red_dispatcher_set_cursor_peer: 
handle_dev_cursor_connect: cursor connect
red_connect_cursor: add cursor channel client
reds_handle_auth_mechanism: Auth method: 1
listen_to_new_client_channel: NEW ID = 0
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer: 
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
handle_new_display_channel: New display (client 0x7f5fbd1a43a0) dcc 0x7f5f5406da80 stream 0x7f5fbd19a3a0
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
display_channel_client_wait_for_init: creating encoder with id == 0
...

10. (qemu) migrate -d tcp:10.66.3.157:5999
spice_server_migrate_start: 
(qemu) handle_dev_stop: stop
spice_server_migrate_end: 
reds_mig_finished: 
main_channel_migrate_complete: 
main_channel_migrate_complete: client 0x7f5fbd1a43a0 SWITCH_HOST
main_channel_marshall_migrate_switch: 
red_channel_client_disconnect: 0x7f5fbd19a740 (channel 0x7f5fbcf4b2c0 type 1 id 0)
main_channel_client_on_disconnect: rcc=0x7f5fbd19a740
reds_client_disconnect: 
red_client_destroy: destroy client with #channels 4
red_dispatcher_disconnect_display_peer: 
handle_dev_display_disconnect: disconnect display client
red_channel_client_disconnect: 0x7f5f5406da80 (channel 0x7f5f540458d0 type 2 id 0)
display_channel_client_on_disconnect: 
red_dispatcher_disconnect_cursor_peer: 
handle_dev_cursor_disconnect: disconnect cursor client
red_channel_client_disconnect: 0x7f5f5406a820 (channel 0x7f5f54045e90 type 4 id 0)
red_channel_client_disconnect: 0x7f5fbd1db1d0 (channel 0x7f5fbcf567a0 type 3 id 0)
red_channel_client_disconnect: 0x7f5fbd19a740 (channel 0x7f5fbcf4b2c0 type 1 id 0)
reds_handle_auth_mechanism: Auth method: 1
reds_handle_main_link: 
reds_disconnect: 
reds_show_new_channel: channel 1:0, connected successfully, over Non Secure link
main_channel_link: add main channel client
reds_handle_main_link: NEW Client 0x7f5fbd1db3e0 mcc 0x7f5fbd19a740 connect-id 846930886
main_channel_handle_parsed: net test: latency 0.202000 ms, bitrate 49951219512 bps (47637.195122 Mbps)
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 4:0, connected successfully, over Non Secure link
red_dispatcher_set_cursor_peer: 
handle_dev_cursor_connect: cursor connect
red_connect_cursor: add cursor channel client
listen_to_new_client_channel: NEW ID = 0
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer: 
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
handle_new_display_channel: New display (client 0x7f5fbd1db3e0) dcc 0x7f5f5406b4b0 stream 0x7f5fbd1db010
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
display_channel_client_wait_for_init: creating encoder with id == 0
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 3:0, connected successfully, over Non Secure link
inputs_connect: inputs channel client create

Checking the viewer from step 8, the connection is still there.

Are these test results expected? I have not found much difference between them.

Comment 20 Marc-Andre Lureau 2012-05-24 15:04:37 UTC
(In reply to comment #19)
> Are the test results expected? I have not found much difference between them.

With the previous remote-viewer version, you should have seen two "reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link" lines in the server log after the switch-host migration.

The switch-host method is implemented by disconnecting the source session, destroying its channels, and connecting to the destination. Since the channel lifetime in the session is tightly tied to the object lifetime, the client needs to cooperate and not keep a reference around (this is something we need to address eventually). There is no strong reason for the client to keep a reference on the channel, and that is what the patch fixes.

If a channel from the original session is not destroyed correctly, it ends up in the list of "destination channels" and we connect it again, eventually resulting in two identical connections (the original left-over and the new one listed by the server).
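
As a quick check, assuming the target server's output is captured in a file (server.log is a placeholder name), counting those lines distinguishes the two cases:

  # grep -c "channel 2:0, connected successfully" server.log

This prints 2 with the broken client (the original left-over plus the new connection) and 1 with the fixed one.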

Comment 21 Huming Jiang 2012-05-25 04:04:39 UTC

(In reply to comment #20)
> (In reply to comment #19)
> > Are the test results expected? I have not found much difference between them.
> 
> With the previous remote-viewer version, you should have had 2
> "reds_show_new_channel: channel 2:0, connected successfully, over Non Secure
> link" on the server log after switch-host migration.


Reproduced it with virt-viewer-0.5.2-8.el6.x86_64.rpm:
# /usr/libexec/qemu-kvm -enable-kvm -m 1G -smp 2 -M rhel6.2.0 -name aaa -drive file=/var/lib/libvirt/migrate/windows7.img,if=none,id=virtio,format=raw,cache=none,werror=stop,rerror=stop -device ide-drive,drive=virtio,id=drive-virtio0-0-0 -spice port=5830,disable-ticketing -vga qxl -monitor stdio -incoming tcp:0:5999
...
(After switch_host migration)
reds_handle_auth_mechanism: Auth method: 1
reds_handle_main_link: 
reds_disconnect: 
reds_show_new_channel: channel 1:0, connected successfully, over Non Secure link
main_channel_link: add main channel client
reds_handle_main_link: NEW Client 0x7fb5fcbe8db0 mcc 0x7fb5fc726620 connect-id 1804289383
main_channel_handle_parsed: net test: latency 0.455000 ms, bitrate 752941176 bps (718.060661 Mbps)
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 4:0, connected successfully, over Non Secure link
red_dispatcher_set_cursor_peer: 
handle_dev_cursor_connect: cursor connect
red_connect_cursor: add cursor channel client
listen_to_new_client_channel: NEW ID = 0
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer: 
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
reds_handle_auth_mechanism: Auth method: 1
handle_new_display_channel: New display (client 0x7fb5fcbe8db0) dcc 0x7fb5fb4cf010 stream 0x7fb5fcbe8d00
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer: 
display_channel_client_wait_for_init: creating encoder with id == 0
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
handle_new_display_channel: New display (client 0x7fb5fcbe8db0) dcc 0x7fb594087c00 stream 0x7fb5fc69eba0
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
display_channel_client_wait_for_init: creating encoder with id == 0
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 3:0, connected successfully, over Non Secure link
inputs_connect: inputs channel client create

Verified it with virt-viewer-0.5.2-9.el6.x86_64.rpm:

Steps:
1. On source host:
# /usr/libexec/qemu-kvm -enable-kvm -m 1G -smp 2 -uuid b2e2b41c-d562-4f7f-82bb-1d45c1919220 -M rhel6.2.0 -name aaa -drive file=/var/lib/libvirt/migrate/windows7.img,if=none,id=virtio,format=raw,cache=none,werror=stop,rerror=stop -device ide-drive,drive=virtio,id=drive-virtio0-0-0 -spice port=5830,disable-ticketing -vga qxl -monitor stdio
...
(qemu) client_migrate_info spice $target_hostname 5830

# remote-viewer spice://SOURCEIP:5830

2. On target host:
/usr/libexec/qemu-kvm -enable-kvm -m 1G -smp 2 -M rhel6.2.0 -name aaa -drive file=/var/lib/libvirt/migrate/windows7.img,if=none,id=virtio,format=raw,cache=none,werror=stop,rerror=stop -device ide-drive,drive=virtio,id=drive-virtio0-0-0 -spice port=5830,disable-ticketing -vga qxl -monitor stdio -incoming tcp:0:5999
...

3. On source host, do migration:
...
(qemu) migrate -d tcp:target_ip:5999
...

4. On the target host, the log is shown below:
(After switch_host migration)
(qemu) reds_handle_auth_mechanism: Auth method: 1
reds_handle_main_link: 
reds_disconnect: 
reds_show_new_channel: channel 1:0, connected successfully, over Non Secure link
main_channel_link: add main channel client
reds_handle_main_link: NEW Client 0x7f3fa3b1adb0 mcc 0x7f3fa36585a0 connect-id 1804289383
main_channel_handle_parsed: net test: latency 0.726000 ms, bitrate 735896514 bps (701.805605 Mbps)
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer: 
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
handle_new_display_channel: New display (client 0x7f3fa3b1adb0) dcc 0x7f3fa0bf5010 stream 0x7f3fa35d0af0
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 4:0, connected successfully, over Non Secure link
red_dispatcher_set_cursor_peer: 
display_channel_client_wait_for_init: creating encoder with id == 0
handle_dev_cursor_connect: cursor connect
red_connect_cursor: add cursor channel client
listen_to_new_client_channel: NEW ID = 0
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 3:0, connected successfully, over Non Secure link
inputs_connect: inputs channel client create
...

According to comment 20, I think this bug is verified. Is that right, Marc-Andre?

Comment 22 Marc-Andre Lureau 2012-05-25 11:47:19 UTC
(In reply to comment #21)
> According to comment 20, i think this bug is verified. Is it right,
> Marc-Andre?

Correct.

2x "channel 2:0, connected successfully" before the fix vs 1x after.

Comment 23 yuping zhang 2012-05-28 02:10:13 UTC
Per comments 21 & 22.Change the bug status to VERIFIED.

Comment 25 errata-xmlrpc 2012-06-20 12:12:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0772.html