+++ This bug was initially created as a clone of Bug #1009886 +++

From source inspection, this is still a problem in 7.0.

Description of problem:
Source libvirtd crashes when performing a migration from a 6.5 host to a 6.5 remote host with an established SPICE session (SPICE client connected). The host is managed by RHEVM 3.3. The crash does not happen when migrating a VM without an established SPICE session. I am not able to reproduce this in a different setup, but my original setup is more complex (especially in networking): it uses a separate display network for SPICE traffic and a separate network for the VMs' network interfaces, all on bonded NICs. However, I could reproduce it even without the display network. I am attaching a snippet from the source libvirtd log where the crash is caught, as well as a core dump of the libvirtd process.

Version-Release number of selected component (if applicable):
rpm -qa | egrep "libvirt|qemu-kvm|vdsm"
libvirt-client-0.10.2-24.el6.x86_64
vdsm-xmlrpc-4.12.0-138.gitab256be.el6ev.noarch
qemu-kvm-rhev-0.12.1.2-2.404.el6.x86_64
libvirt-lock-sanlock-0.10.2-24.el6.x86_64
vdsm-cli-4.12.0-138.gitab256be.el6ev.noarch
qemu-kvm-rhev-tools-0.12.1.2-2.404.el6.x86_64
libvirt-python-0.10.2-24.el6.x86_64
vdsm-python-4.12.0-138.gitab256be.el6ev.x86_64
vdsm-4.12.0-138.gitab256be.el6ev.x86_64
vdsm-python-cpopen-4.12.0-138.gitab256be.el6ev.x86_64
libvirt-0.10.2-24.el6.x86_64

How reproducible:
Always on my setup.

Steps to Reproduce:
1. In a RHEV 3.3 environment, migrate a VM with an established SPICE session.

Actual results:
Source libvirtd crashes.

Expected results:
No crash on the source.

Additional info:
I can keep the setup for a short time.

--- Additional comment from Martin Kletzander on 2013-09-20 11:19:33 EDT ---

Patch proposed upstream:
http://www.redhat.com/archives/libvir-list/2013-September/msg01208.html

--- Additional comment from Shanzhi Yu on 2013-09-22 06:18:25 EDT ---

Hi Marian,
I can't reproduce this error with the packages below in a RHEVM 3.2 environment.

vdsm-python-4.10.2-25.0.el6ev.x86_64
qemu-kvm-rhev-debuginfo-0.12.1.2-2.404.el6.x86_64
libvirt-python-0.10.2-26.el6.x86_64
libvirt-0.10.2-26.el6.x86_64
libvirt-debuginfo-0.10.2-26.el6.x86_64
vdsm-cli-4.10.2-25.0.el6ev.noarch
qemu-kvm-rhev-0.12.1.2-2.404.el6.x86_64
libvirt-lock-sanlock-0.10.2-26.el6.x86_64
vdsm-4.10.2-25.0.el6ev.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.404.el6.x86_64
libvirt-devel-0.10.2-26.el6.x86_64
libvirt-client-0.10.2-26.el6.x86_64
vdsm-xmlrpc-4.10.2-25.0.el6ev.noarch

Do I have to set up RHEVM 3.3 to reproduce this bug? AFAIK, RHEVM 3.3 is not released yet; how can I get it?

--- Additional comment from Marian Krcmarik on 2013-09-22 12:00:53 EDT ---

(In reply to Shanzhi Yu from comment #8)
> Do I have to set up RHEVM 3.3 to reproduce this bug? AFAIK, RHEVM 3.3 is
> not released yet; how can I get it?
Try to slow down the SPICE migration: limit the bandwidth between your client machine and the hosts, install the SPICE components (the SPICE guest agent) in the VM, open more monitors, and redirect some USB devices through native USB redirection. I am not sure, but maybe the 3.3 vdsm has some effect on that (http://bob.eng.lab.tlv.redhat.com/builds/is15/). I still have the setup, and I assume a new build with the fix will be available soon, so in the worst case I can verify it myself.
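As a rough sketch of the throttling suggested above (the interface name eth0, the 64 kbit rate, and the use of a token bucket filter are assumptions for illustration, not taken from the original report), egress on the host's display NIC can be capped with tc:

Cap egress on eth0 (assumed to carry the SPICE traffic) to 64 kbit/s:
# tc qdisc add dev eth0 root tbf rate 64kbit burst 16kb latency 400ms
Remove the cap once testing is finished:
# tc qdisc del dev eth0 root

The verification steps below take the complementary approach of policing ingress traffic instead; either way, the point is to keep the SPICE session migrating slowly enough to hit the crash window.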
Version-Release number of selected component (if applicable):
qemu-kvm-rhev-1.5.3-19.el7.x86_64
libvirt-1.1.1-7.el7.x86_64

Preparations:
1. Mount an NFS server on both the source and the target server. Each server has two NICs (eth0 and eth1); eth0 carries the SPICE migration traffic and is limited to a low speed. Define the libvirt network net1 on both servers on top of eth0.
2. Define a guest with usbredir:
# virsh dumpxml rhel6
<redirdev bus='usb' type='spicevmc'>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
</redirdev>
<graphics type='spice' port='5900' autoport='no'>
  <listen type='network' network='net1'/>
</graphics>
3. Install vdagent in the guest.
4. Limit network throughput on the source server's eth0 to 64 kbit/s:
# tc qdisc add dev eth0 ingress
# tc filter add dev eth0 parent ffff: protocol ip u32 match ip src 0.0.0.0/0 police rate 64kbit burst 64kbit mtu 64kb drop flowid :1
(The limit was tested with "dd" and "nc", and the two commands above work fine; a sketch of such a check follows after the steps.)

Steps:
1. Start the guest:
# virsh start rhel6
2. Open a display with remote-viewer from the client machine over net1, plug two USB disks into the source server, and redirect them to the guest.
3. Migrate from the source server to the target server over eth1's IP:
# time virsh migrate --live rhel6 qemu+ssh://targetserver/system
root@targetserver's password:

Results:
The guest migrated successfully, so setting the bug to VERIFIED.
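A minimal sketch of the dd/nc check mentioned in preparation step 4 might look like this (the port number is arbitrary, "otherhost" is a placeholder, and traditional netcat wants "nc -l -p 12345" instead of "nc -l 12345"):

On the receiving end of the throttled link, listen on an arbitrary TCP port and discard the data:
# nc -l 12345 > /dev/null
On the sending end, push 1 MiB across the link:
# dd if=/dev/zero bs=1K count=1024 | nc otherhost 12345
dd reports the achieved rate when it finishes; at ~64 kbit/s (8 KB/s), 1 MiB should take roughly two minutes, so a much faster run would mean the policing filter is not taking effect.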
This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative in case you have further questions about the request.