Bug 964962 - Destination server crash after migration with different parameters
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: spice-server
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assigned To: Default Assignee for SPICE Bugs
QA Contact: Desktop QE
Type: Bug
Doc Type: Bug Fix
Reported: 2013-05-20 04:21 EDT by Tomas Jamrisko
Modified: 2014-06-09 13:20 EDT
Last Closed: 2014-06-09 13:20:11 EDT
CC: 5 users
Attachments: None
Description Tomas Jamrisko 2013-05-20 04:21:14 EDT
Description of problem:
The client hangs after the guest is migrated to a destination VM started with different compression settings.


Version-Release number of selected component (if applicable):
spice-gtk-0.14-7.el6.x86_64
spice-server-0.12.0-12.el6_4.1.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Two VMs with different spice options (the corresponding -spice option sets are shown after these steps):
   a) jpeg-wan-compression=never,zlib-glz-wan-compression=never
   b) jpeg-wan-compression=always,zlib-glz-wan-compression=always

2. Connect to one of them
3. Migrate the guest between them
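
For reference, the two -spice option sets for step 1, condensed from the full qemu-kvm command lines in comment 3:

a) -spice port=5902,disable-ticketing,jpeg-wan-compression=never,zlib-glz-wan-compression=never,seamless-migration=on
b) -spice port=5903,disable-ticketing,jpeg-wan-compression=always,zlib-glz-wan-compression=always,seamless-migration=on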

Actual results:
Client hangs until restarted
Comment 2 Marc-Andre Lureau 2013-06-21 18:28:20 EDT
Apparently, there is no hang in f19, seamless or not. I will give it a try with rhel6 soon.
Comment 3 Marc-Andre Lureau 2013-06-25 16:08:30 EDT
I have tested the following, all on rhel6:

/usr/libexec/qemu-kvm -smp 4 -m 1024 -vga qxl -spice port=5902,disable-ticketing,jpeg-wan-compression=never,zlib-glz-wan-compression=never,seamless-migration=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0  -snapshot ~/VirtualMachines/win7-x64.img -monitor stdio

remote-viewer spice://localhost:5902

/usr/libexec/qemu-kvm -smp 4 -m 1024 -vga qxl -spice port=5903,disable-ticketing,jpeg-wan-compression=always,zlib-glz-wan-compression=always,seamless-migration=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0  -snapshot ~/VirtualMachines/win7-x64.img -monitor stdio  -incoming tcp:0:4444

client_migrate_info spice 192.168.1.39 5903
migrate -d tcp:localhost:4444

no hangs reproducible so far
Comment 4 Marc-Andre Lureau 2013-06-25 16:14:58 EDT
However, I managed to crash the server 2 times out of 3 (but not on Fedora).
Comment 5 Marc-Andre Lureau 2013-06-26 06:36:57 EDT
server crash:

Starting program: /usr/libexec/qemu-kvm -smp 4 -m 1024 -vga qxl -spice port=5903,disable-ticketing,jpeg-wan-compression=always,zlib-glz-wan-compression=always,seamless-migration=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -snapshot /home/elmarco/VirtualMachines/win7-x64.img -monitor stdio -incoming tcp:0:4444
[Thread debugging using libthread_db enabled]
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) 
(qemu) main_channel_link: add main channel client
red_dispatcher_set_cursor_peer: 
inputs_connect: inputs channel client create
low band 0
jpeg 1

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff8b9fc700 (LWP 4656)]
0x00007ffff312f887 in pixman_region32_init () from /usr/lib64/libpixman-1.so.0
(gdb) bt
#0  0x00007ffff312f887 in pixman_region32_init () from /usr/lib64/libpixman-1.so.0
#1  0x00007ffff5f63208 in display_channel_client_restore_surfaces_lossy (rcc=
    0x7fffd8268320, size=<value optimized out>, message=<value optimized out>)
    at red_worker.c:9815
#2  display_channel_handle_migrate_data (rcc=0x7fffd8268320, 
    size=<value optimized out>, message=<value optimized out>) at red_worker.c:9892
#3  0x00007ffff5f568df in red_channel_handle_migrate_data (rcc=0x7fffd8268320, size=
    148, type=<value optimized out>, message=0x7fffd82b65d0) at red_channel.c:1157
#4  red_channel_client_handle_message (rcc=0x7fffd8268320, size=148, 
    type=<value optimized out>, message=0x7fffd82b65d0) at red_channel.c:1189
#5  0x00007ffff5f561eb in red_peer_handle_incoming (rcc=0x7fffd8268320)
    at red_channel.c:272
#6  red_channel_client_receive (rcc=0x7fffd8268320) at red_channel.c:294
#7  0x00007ffff5f56a7c in red_channel_client_event (fd=<value optimized out>, 
    event=<value optimized out>, data=0x7fffd8268320) at red_channel.c:1204
#8  0x00007ffff5f7b766 in red_worker_main (arg=<value optimized out>)
    at red_worker.c:11854
#9  0x00007ffff7739851 in start_thread () from /lib64/libpthread.so.0
#10 0x00007ffff57f890d in clone () from /lib64/libc.so.6
(gdb)
Comment 6 Marc-Andre Lureau 2013-06-26 07:36:19 EDT
I am afraid using only the migrate_data low_bandwidth_setting to decide enable_jpeg and enable_zlib_glz_wrap isn't enough. Or should we ignore the == SPICE_WAN_COMPRESSION_AUTO condition and set the value anyway?
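
A simplified illustration of that decision (hypothetical names and structure, not the actual red_worker.c code): the destination derives enable_jpeg from its own jpeg-wan-compression setting, consulting the migrate data's low_bandwidth_setting only in the AUTO case, so two sides started with different -spice options can end up interpreting the same migration data differently.

/* Simplified sketch of the pattern described above -- hypothetical names,
 * not the actual red_worker.c code. */
#include <stdbool.h>
#include <stdio.h>

enum wan_compression { WAN_NEVER, WAN_AUTO, WAN_ALWAYS };

/* Only the AUTO setting consults the low_bandwidth flag carried in the
 * migration data; NEVER/ALWAYS come straight from the local -spice options. */
static bool decide_enable_jpeg(enum wan_compression jpeg_state, bool low_bandwidth)
{
    if (jpeg_state == WAN_AUTO)
        return low_bandwidth;
    return jpeg_state == WAN_ALWAYS;
}

int main(void)
{
    bool low_bandwidth = false;                                       /* "low band 0" in the log */
    bool source_jpeg = decide_enable_jpeg(WAN_NEVER, low_bandwidth);  /* VM a) jpeg-wan-compression=never  */
    bool dest_jpeg   = decide_enable_jpeg(WAN_ALWAYS, low_bandwidth); /* VM b) jpeg-wan-compression=always */

    /* If the two sides disagree, the destination parses the surfaces-at-client
     * part of the migration data with a different (lossy vs. lossless) layout
     * than the one the source wrote -- the kind of mismatch that can end in
     * the pixman crash shown in comment 5. */
    if (source_jpeg != dest_jpeg)
        printf("mismatch: data written as %s, parsed as %s\n",
               source_jpeg ? "lossy" : "lossless",
               dest_jpeg ? "lossy" : "lossless");
    return 0;
}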

moving to spice-server
Comment 7 Yonit Halperin 2013-06-26 09:23:43 EDT
This is not really a bug. Migration is not supported with different parameters. The basic assumption is that the qemu command lines are identical (except for -incoming).
If it is a requirement to change this assumption, it is an RFE.
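
For example (a sketch, not taken from this bug's reproducer), a setup covered by that assumption would start both sides with the same SPICE compression options and differ only in -incoming:

source:      /usr/libexec/qemu-kvm ... -spice port=5902,disable-ticketing,jpeg-wan-compression=never,zlib-glz-wan-compression=never,seamless-migration=on
destination: /usr/libexec/qemu-kvm ... -spice port=5902,disable-ticketing,jpeg-wan-compression=never,zlib-glz-wan-compression=never,seamless-migration=on -incoming tcp:0:4444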
Comment 8 RHEL Product and Program Management 2013-10-13 23:32:41 EDT
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.
Comment 10 David Blechter 2014-06-09 13:20:11 EDT
Closing according to https://bugzilla.redhat.com/show_bug.cgi?id=964962#c7.
Please follow the recommendations in that comment.
