Bug 1018258
Summary:          Migrating a guest, virt-viewer freezes then closes while the VM within RHEV-M continues migrating.
Product:          Red Hat Enterprise Virtualization Manager
Component:        mingw-virt-viewer
Reporter:         Bill Sanford <bsanford>
Assignee:         Default Assignee for SPICE Bugs <rh-spice-bugs>
QA Contact:       SPICE QE bug list <spice-qe-bugs>
Status:           CLOSED WORKSFORME
Severity:         medium
Priority:         low
Version:          3.3.0
Target Milestone: ovirt-3.6.1
Target Release:   3.6.1
Keywords:         Reopened
Hardware:         Unspecified
OS:               Unspecified
Whiteboard:       spice
Doc Type:         Bug Fix
Type:             Bug
oVirt Team:       Spice
CC:               bsanford, cfergeau, dblechte, djasa, ecohen, marcandre.lureau, mkenneth, mkrcmari, pvine, rbalakri, Rhev-m-bugs, rh-spice-bugs, vipatel, yeylon, ylavi
Last Closed:      2015-11-19 23:15:30 UTC
Description (Bill Sanford, 2013-10-11 14:45:09 UTC)
Created attachment 811188 [details]
Logfile for the SRC and DST host (Same machine)
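A sketch of how such src/dst log files can be captured with full SPICE server verbosity. The log file names are hypothetical; the qemu invocations (commented out so the sketch is self-contained) are abbreviated from the command lines given later in this report.

```shell
#!/usr/bin/env bash
# Sketch: capture verbose SPICE server logs on each host during a migration.
# SPICE_DEBUG_LEVEL=5 is the highest verbosity requested in this report.
export SPICE_DEBUG_LEVEL=5

# Hypothetical log file names, one per host:
SRC_LOG=spice-src.log
DST_LOG=spice-dst.log

# On the source host (SPICE server logs go to stderr):
#   /usr/libexec/qemu-kvm -m 4096 -spice port=3000,seamless-migration=on \
#       -monitor stdio 2> "$SRC_LOG"
# On the destination host, the same command plus -incoming:
#   /usr/libexec/qemu-kvm -m 4096 -spice port=3001,seamless-migration=on \
#       -monitor stdio -incoming tcp:127.0.0.1:4444 2> "$DST_LOG"

echo "SPICE_DEBUG_LEVEL=$SPICE_DEBUG_LEVEL (logs: $SRC_LOG, $DST_LOG)"
```

Redirecting stderr per host keeps the two logs separate, which is what the triage below asks for.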
---
Hi, I can't find info in the log about a crash or about the migration (there is no -incoming on the qemu command line, and no client_migrate_info or migrate commands appear in the logs). Can you please attach the logs again, one file for the src and one for the dst, with a high debug level (export SPICE_DEBUG_LEVEL=5)? Do you know if it happens with the Linux remote-viewer as well? Thanks, Yonit.

---
Comment 3 (Bill Sanford): I saw this from the first terminal window:

(qemu) snd_receive: Connection reset by peer
red_channel_client_disconnect_dummy: rcc=0x7f73be594e80 (channel=0x7f73bd0f22b0 type=6 id=0)
red_peer_receive: Connection reset by peer
red_channel_client_disconnect: rcc=0x7f728c2455f0 (channel=0x7f728c21f350 type=2 id=1)
red_peer_receive: Connection reset by peer
red_channel_client_disconnect: rcc=0x7f728c2a2450 (channel=0x7f728c21f910 type=4 id=1)
red_peer_receive: Connection reset by peer
red_channel_client_disconnect: rcc=0x7f729c248810 (channel=0x7f729c21f350 type=2 id=0)
snd_channel_put: SndChannel=0x7f73be5838b0 freed
red_peer_receive: Connection reset by peer
red_channel_client_disconnect: rcc=0x7f73be5990a0 (channel=0x7f73bd090cc0 type=3 id=0)
snd_receive: Connection reset by peer
red_channel_client_disconnect_dummy: rcc=0x7f73be57b150 (channel=0x7f73bd0f2010 type=5 id=0)
snd_channel_put: SndChannel=0x7f73be572400 freed
red_peer_receive: Connection reset by peer
red_channel_client_disconnect: rcc=0x7f73be569c50 (channel=0x7f73bd085830 type=1 id=0)
red_peer_receive: Connection reset by peer
red_channel_client_disconnect: rcc=0x7f729c295010 (channel=0x7f729c21f910 type=4 id=0)
main_channel_client_on_disconnect: rcc=0x7f73be569c50
red_client_destroy: destroy client 0x7f73bd340830 with #channels=8
red_dispatcher_disconnect_cursor_peer:
red_dispatcher_disconnect_display_peer:
red_dispatcher_disconnect_cursor_peer:
red_dispatcher_disconnect_display_peer:

---
What did you mean by "crashes the guest"? I thought that the VM aborted; the output you pasted only logs disconnection of the client.
If the VM aborted, on which side, the src or the dst? The output you attached to the bug doesn't contain info about migration, so I'm not sure these are the right logs. For example, none of the qemu execution lines in the file contain "-incoming", and there is no evidence of a "migrate" command.

---
(In reply to Bill Sanford from comment #3)
> (qemu) snd_receive: Connection reset by peer
> (remainder of the log from comment #3; snip)

Low priority and medium severity; not a regression and not a test blocker. Justification:
1. A Win 8 client was never supported before (not a regression).
2. It is unclear how a streaming-video failure during migration is a Test Blocker; usage of that keyword should be justified.
3. Win 8 as a client is a low priority for RHEV-M. Note: this failure did not happen when the RHEL SPICE client was tested on RHEL 6.5, and it was not reported for Win 7 or Win XP during RHEV-M 3.3 testing.
4. A migration failure when migration is initiated via the qemu command line has lower priority than one seen in a RHEV-M environment.

---
Comment 6 (Bill Sanford): I have tested this again with RHEL 6.5 as guest, client, and host. Just a simple migration causes remote-viewer to close without migrating the guest.

This is how I executed the migration of RHEL 6.5 snap3.

SRC:
/usr/libexec/qemu-kvm -m 4096 -spice port=3000,disable-ticketing,addr=127.0.0.1,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-serial -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -chardev spicevmc,name=usbredir,id=usbredirchardev1 /home/test/images/RHEL65.img -monitor stdio

DST:
/usr/libexec/qemu-kvm -m 4096 -spice port=3001,disable-ticketing,addr=127.0.0.1,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x8 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 /home/test/images/RHEL65.img -monitor stdio -incoming tcp:127.0.0.1:4444

Monitor commands on the src:
client_migrate_info spice 127.0.0.1 3001
migrate -d tcp:127.0.0.1:4444
info migrate

Below is the stderr output on each screen.

SRCIPADDR:
(qemu)
(qemu) client_migrate_info spice 127.0.0.1 3001
main_channel_client_handle_migrate_connected: client 0x7faab23d1f40 connected: 1 seamless 1
(qemu) migrate -d tcp:127.0.0.1:4444
(qemu) info migrate
Migration status: active
total time: 7 milliseconds
transferred ram: 224 kbytes
remaining ram: 4325308 kbytes
total ram: 4325768 kbytes
(qemu)
red_client_migrate: migrate client with #channels 4
red_dispatcher_cursor_migrate: channel type 4 id 0
red_dispatcher_display_migrate: channel type 2 id 0
red_channel_client_disconnect: rcc=0x7faab37e0300 (channel=0x7faab2323530 type=1 id=0)
main_channel_client_on_disconnect: rcc=0x7faab37e0300
red_client_destroy: destroy client 0x7faab23d1f40 with #channels=4
red_dispatcher_disconnect_cursor_peer:
red_channel_client_disconnect: rcc=0x7fa990270f70 (channel=0x7fa99021f910 type=4 id=0)
red_channel_client_disconnect: rcc=0x7faab37e6770 (channel=0x7faab232ea50 type=3 id=0)
red_dispatcher_disconnect_display_peer:
red_channel_client_disconnect: rcc=0x7faaa5139010 (channel=0x7fa99021f350 type=2 id=0)

DESTIPADDR:
(qemu)
(qemu) main_channel_link: add main channel client
red_dispatcher_set_cursor_peer:
inputs_connect: inputs channel client create
red_dispatcher_loadvm_commands:
Unknown savevm section or instance '0000:00:04.0/virtio-console' 0
load of migration failed
[root@salusa ~]#

---
(In reply to Bill Sanford from comment #6)
> I have tested this again with RHEL 6.5 as Guest, Client and Host. Just a
> simple migration will cause the remote-viewer to close and not migrate the
> guest.
>
> This is how I executed the migration of RHEL 6.5 snap3.
> (full src and dst command lines quoted from comment #6; snip)

You are not using the same command lines: -device virtio-serial vs. virtio-serial-pci? (snip)

> (qemu) main_channel_link: add main channel client
> red_dispatcher_set_cursor_peer:
> inputs_connect: inputs channel client create
> red_dispatcher_loadvm_commands:
> Unknown savevm section or instance '0000:00:04.0/virtio-console' 0
> load of migration failed

This pointed you to it.

---
Comment 8 (Bill Sanford): I have changed the -device virtio-serial vs virtio-serial-pci so that both command lines are the same, with or without the -pci; I tested with both switches the same. The migration still fails. The SPICE guest is closed, but when it is reopened, the guest has frozen and is unresponsive.

---
(In reply to Bill Sanford from comment #8)
> I have changed the -device virtio-serial vs virtio-serial-pci to remain the
> same with or without the -pci. I tested with both switches being the same.
> The migration still fails.

First make sure that the command lines on the src host and dst host are the same, besides the spice ports and the -incoming option.
If migration still fails, attach the *complete* output on the src and the dst, including the command lines.

---
Comment 10 (Bill Sanford): Yonit, what part of the information you want did I miss in comment #6?

---
(In reply to Bill Sanford from comment #10)
> Yonit, what part of comment#6 did I miss in what you want for information?

You said you re-executed the migration, so we would like the new command lines and the full output of the execution. But first make sure your src/dst qemu execution parameters are identical. BTW, I don't have a problem executing migration; it works for me.

---
Comment 12 (Bill Sanford): Comment 6 has the information. In comment 8 I took the lines from comment 6 and edited the -device virtio-serial vs virtio-serial-pci to be the same in each line.

---
(In reply to Bill Sanford from comment #12)
> Comment 6 has the information. Comment 8, is where I took the lines of
> comment 6 and edited the -device virtio-serial vs virtio-serial-pci to be
> the same in each line.

So if you just s/virtio-serial-pci/virtio-serial/ the command lines are still not the same. You need the command lines to be the same, e.g.:

SRC:
/usr/libexec/qemu-kvm -m 4096 -spice port=3000,disable-ticketing,addr=127.0.0.1,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-serial -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -chardev spicevmc,name=usbredir,id=usbredirchardev1 /home/test/images/RHEL65.img -monitor stdio

DST:
/usr/libexec/qemu-kvm -m 4096 -spice port=3001,disable-ticketing,addr=127.0.0.1,seamless-migration=on -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-serial -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -chardev spicevmc,name=usbredir,id=usbredirchardev1 /home/test/images/RHEL65.img -monitor stdio -incoming tcp:127.0.0.1:4444

---
I had used the values that were slightly off, and this setup used to work and migrate fine.
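The "keep the src and dst command lines identical apart from the spice port and -incoming" rule above can be checked mechanically. A sketch, not from the bug itself (the command lines below are shortened stand-ins): normalize both argument lists, masking the spice port and dropping -incoming plus its argument, so a mismatch like virtio-serial vs. virtio-serial-pci stands out.

```shell
#!/usr/bin/env bash
# Sketch: diff src/dst qemu argument lists, ignoring the differences that are
# expected between the two hosts (-incoming and the spice port).
normalize() {
    printf '%s\n' $1 |                        # one token per line
        sed 's/port=[0-9]*/port=PORT/' |      # mask the spice port number
        awk '$0=="-incoming"{skip=1;next} skip{skip=0;next} {print}'
}

# Shortened stand-in command lines with the mismatch from this bug:
src="-m 4096 -spice port=3000,seamless-migration=on -device virtio-serial"
dst="-m 4096 -spice port=3001,seamless-migration=on -device virtio-serial-pci -incoming tcp:127.0.0.1:4444"

if diff <(normalize "$src") <(normalize "$dst"); then
    echo "command lines match"
else
    echo "MISMATCH: fix the differing options before migrating"
fi
```

Run against the stand-ins above, the diff flags the virtio-serial vs. virtio-serial-pci line and prints the MISMATCH warning.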
Using command lines with the same values works.

---
Comment 15 (Bill Sanford): Migrating a guest, virt-viewer freezes then closes while the VM within RHEV-M continues migration. If you open the closed VM that was migrated from the User Portal, the video is still playing as if a normal migration occurred.

RHEV-M: RHEV-M 3.3 (is 24.2), RHEL6.5-20131111.0-Server-x86_64-DVD1.iso
Hosts:  RHEL6.5-20131111.0-Server-x86_64-DVD1.iso
Guest:  Windows 7 x32, rhev-guest-tools-iso-3.3-6.noarch.rpm, vdagent: https://brewweb.devel.redhat.com/buildinfo?buildID=310319
Client: Windows 7 x64 using IE11, mingw64-usbclerk-0.0.1.1-3.el6_4.noarch.rpm, mingw64-virt-viewer-0.5.6-8.el6_4.noarch.rpm

This does not happen if you just bring up virt-viewer without doing anything in the VM. If you play a video, stream music, or even just bring up IE, this bug will occur.

---
(In reply to Bill Sanford from comment #15)
> Migrating a guest,
Can you give the exact steps you use to trigger the migration?
> virt-viewer freezes then closes
Can you grab the client log?

---
The steps to start migration: from the Admin Portal, select the VM, click "Migrate" in the menu, then click OK in the dialog that asks to migrate to the destination. I don't see a client log file.

---
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, postpone it to a later release. All bugs not postponed on the GA release will be automatically re-targeted to:
- 3.6.1 if severity >= high
- 4.0 if severity < high

---
I've seen something similar: when the destination qemu doesn't launch, the client exits and the VM keeps running in the source qemu, but that proved very hard to reproduce. Let's close this and reopen, or report a new bug, if we see it again.
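For future reruns, the matched src/dst pair that eventually migrated cleanly can be kept as one script, so the two command lines cannot drift apart. A sketch (the image path, ports, and options are the ones used in this report; the script prints the commands rather than executing them, so it runs anywhere):

```shell
#!/usr/bin/env bash
# Sketch: build the src and dst qemu command lines from one shared option
# list, so they can only differ in the spice port and -incoming.
common=(-m 4096 -vga qxl -global qxl-vga.vram_size=67108864
        -device virtio-serial
        -chardev spicevmc,id=vdagent,name=vdagent
        -device virtserialport,chardev=vdagent,name=com.redhat.spice.0
        -chardev spicevmc,name=usbredir,id=usbredirchardev1
        /home/test/images/RHEL65.img -monitor stdio)

src=(/usr/libexec/qemu-kvm
     -spice port=3000,disable-ticketing,addr=127.0.0.1,seamless-migration=on
     "${common[@]}")
dst=(/usr/libexec/qemu-kvm
     -spice port=3001,disable-ticketing,addr=127.0.0.1,seamless-migration=on
     "${common[@]}" -incoming tcp:127.0.0.1:4444)

# Print rather than exec, so the sketch is self-contained:
printf '%s ' "${src[@]}"; echo
printf '%s ' "${dst[@]}"; echo
```

Replacing the final printf lines with `"${src[@]}"` and `"${dst[@]}"` on the respective hosts would launch the pair; the shared array makes a virtio-serial vs. virtio-serial-pci mismatch impossible by construction.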