Bug 704473 - Tunnelled migration failed with error "Cannot extract Qemu version from '/usr/libexec/qemu-kvm'"
Summary: Tunnelled migration failed with error "Cannot extract Qemu version from '/usr/libexec/qemu-kvm'"
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-05-13 09:27 UTC by weizhang
Modified: 2011-12-22 14:41 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-22 14:41:54 UTC
Target Upstream Version:



Description weizhang 2011-05-13 09:27:00 UTC
Description of problem:
I ran tunnelled migration in a loop of 1024 rounds; after roughly 500 rounds it starts reporting an error:
# virsh migrate --live --p2p --tunnelled kvm-rhel6-i386 qemu+ssh://10.66.85.231/system
error: internal error Cannot extract Qemu version from '/usr/libexec/qemu-kvm'
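This error message comes from libvirt's probe of the emulator: it runs the binary with -version and parses the printed banner for a version number. A minimal sketch of that parsing step, where the `extract_qemu_version` helper and its sed pattern are illustrations only, not libvirt's actual code:

```shell
#!/bin/sh
# Sketch of the version-extraction step behind this error: run the emulator
# with -version and pull "x.y.z" out of the banner. The sed pattern below is
# an illustration, not libvirt's actual code.
extract_qemu_version() {
    # $1 is a banner like: QEMU emulator version 0.12.1 (qemu-kvm-...), ...
    ver=$(printf '%s\n' "$1" | \
          sed -n 's/.*version \([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')
    if [ -n "$ver" ]; then
        echo "$ver"
    else
        echo "Cannot extract Qemu version" >&2
        return 1
    fi
}

# Manual probe of the binary named in the error:
# extract_qemu_version "$(/usr/libexec/qemu-kvm -version 2>&1)"
```

If the manual probe succeeds every time it is run by hand, the banner itself is fine, and the failure under load is more likely an intermittent problem around spawning or reading from the child process.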

Version-Release number of selected component (if applicable):
# rpm -qa libvirt qemu-kvm kernel
kernel-2.6.32-131.0.5.el6.x86_64
qemu-kvm-0.12.1.2-2.161.el6.x86_64
libvirt-0.8.7-18.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. Prepare 2 hosts with an NFS share mounted on both, set the virt_use_nfs boolean on both sides
# setsebool -P virt_use_nfs 1
and flush the iptables rules on both sides
# iptables -F

2. start a domain on source host using nfs
3. sh migrate.sh <guestname> <source_host_ip> <target_host_ip>
cat migrate.sh
#!/bin/bash

GUEST=$1
HOST1=$2
HOST2=$3
OPTIONS="--live --p2p --tunnelled"
TRANSPORT="ssh"

date
for i in `seq 1 1024`;
do
    echo "Loop ${i}: Migrating ${GUEST} from ${HOST1} to ${HOST2}"
    echo "COMMAND: virsh migrate ${OPTIONS} ${GUEST} qemu+${TRANSPORT}://${HOST2}/system"
    time virsh migrate ${OPTIONS} ${GUEST} qemu+${TRANSPORT}://${HOST2}/system >> /tmp/mig_result 2>&1

    echo "Loop ${i}: Migrating ${GUEST} back from ${HOST2} to ${HOST1}"
    echo "COMMAND: virsh -c qemu+${TRANSPORT}://${HOST2}/system migrate ${OPTIONS} ${GUEST} qemu+${TRANSPORT}://${HOST1}/system"
    time virsh -c qemu+${TRANSPORT}://${HOST2}/system migrate ${OPTIONS} ${GUEST} qemu+${TRANSPORT}://${HOST1}/system >> /tmp/mig_result 2>&1
done
date
  
4. Check the screen output and the /tmp/mig_result file

Actual results:
Toward the end of the run, migrations from the target back to the source fail with:
error: internal error Cannot extract Qemu version from '/usr/libexec/qemu-kvm'

Expected results:
All 1024 rounds of migration succeed with no errors.

Additional info:

Comment 1 Daniel Veillard 2011-05-17 08:44:51 UTC
If you restart the test, do you get the same error? And does it fail
after more or less the same number of migrations?

If yes that smells like running out of file descriptors because I don't
see what else could be failing in that code ...

Daniel
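
One quick way to test the file-descriptor theory while the loop runs (a sketch: the /proc-based counting is Linux-specific, and it assumes the daemon process is named libvirtd):

```shell
#!/bin/sh
# Count open file descriptors for a given PID via /proc (Linux-specific).
fd_count() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Sample libvirtd from a second terminal during the migration loop; a
# count that keeps climbing round after round points at an fd leak.
if PID=$(pidof libvirtd); then
    echo "libvirtd (${PID}) open fds: $(fd_count "${PID}")"
fi
```

Comparing the sampled count against the daemon's limit (`ulimit -n` in its environment) would show whether the failing rounds coincide with fd exhaustion.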

Comment 2 weizhang 2011-05-17 11:02:48 UTC
(In reply to comment #1)
> If you restart the test, do you get the same error? And does it fail
> after more or less the same number of migrations?
> 
> If yes that smells like running out of file descriptors because I don't
> see what else could be failing in that code ...
> 
> Daniel

I ran the test more than twice and still got the same error.

Comment 5 Dave Allan 2011-12-21 20:08:29 UTC
Is this still reproducible on the latest 0.9.8 builds?

Comment 6 weizhang 2011-12-22 12:45:49 UTC
(In reply to comment #5)
> Is this still reproducible on the latest 0.9.8 builds?

I cannot reproduce it on
qemu-kvm-0.12.1.2-2.211.el6.x86_64
kernel-2.6.32-223.el6.x86_64
libvirt-0.9.8-1.el6.x86_64

Comment 7 Dave Allan 2011-12-22 14:41:54 UTC
(In reply to comment #6)
> (In reply to comment #5)
> > Is this still reproducible on the latest 0.9.8 builds?
> 
> I can not reproduce on 
> qemu-kvm-0.12.1.2-2.211.el6.x86_64
> kernel-2.6.32-223.el6.x86_64
> libvirt-0.9.8-1.el6.x86_64

Ok, I'll close, but of course don't hesitate to reopen if the problem reappears.

