Bug 807910 - Tunnelled migration speed is much slower than without it
Summary: Tunnelled migration speed is much slower than without it
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-03-29 06:21 UTC by weizhang
Modified: 2020-11-03 16:34 UTC
9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-03 16:34:16 UTC



Description weizhang 2012-03-29 06:21:44 UTC
Description of problem:
I tested migrating 514 running guests (500 rhel6u3 guests plus 14 other
rhel5/rhel4/rhel3/Windows guests) one by one with
virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.93/system --verbose --p2p --tunnelled

I found that the migration is much slower than without the --p2p --tunnelled flags:
# time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose --p2p --tunnelled
Migration: [100 %]

real	2m30.295s
user	0m0.114s
sys	0m3.281s

# time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose
Migration: [100 %]

real	0m17.528s
user	0m0.041s
sys	0m0.593s
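For reference, one way to see where the time goes in the tunnelled case is to poll the migration job statistics while the migration is running. This is only a sketch, reusing the guest name and target URI from the commands above as placeholders:

#!/bin/bash
# Sketch: run the tunnelled migration in the background and poll
# "virsh domjobinfo" once per second to watch how much data has been
# transferred. GUEST and TARGET are placeholders from the commands above.
GUEST=rhel5u8-i386
TARGET=qemu+tcp://10.66.104.94/system

virsh migrate --live "$GUEST" "$TARGET" --verbose --p2p --tunnelled &
MIGPID=$!

while kill -0 "$MIGPID" 2>/dev/null; do
    date +%T
    virsh domjobinfo "$GUEST"
    sleep 1
done
wait "$MIGPID"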


Version-Release number of selected component (if applicable):
libvirt-0.9.10-9.el6.x86_64
qemu-kvm-0.12.1.2-2.262.el6.x86_64
kernel-2.6.32-250.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start 514 guests
2. Do migration
# time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose --p2p --tunnelled
3. Do migration
# time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose 
to compare the speed (a scripted version of these steps is sketched below)
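A minimal sketch of these steps as a script, assuming the guest name and host URIs from the commands above (placeholders that would need adapting); the guest is migrated back between the two measurements so both start from the same host:

#!/bin/bash
# Sketch: time the same live migration with and without --p2p --tunnelled
# so the two results can be compared directly. GUEST, SOURCE and TARGET
# are placeholders taken from the commands above.
GUEST=rhel5u8-i386
SOURCE=qemu+tcp://10.66.104.93/system
TARGET=qemu+tcp://10.66.104.94/system

echo "== tunnelled p2p migration =="
time virsh migrate --live "$GUEST" "$TARGET" --verbose --p2p --tunnelled

# migrate back (run against the target libvirtd) so the second
# measurement starts from the same source host
virsh -c "$TARGET" migrate --live "$GUEST" "$SOURCE" --verbose --p2p --tunnelled

echo "== plain migration =="
time virsh migrate --live "$GUEST" "$TARGET" --verbose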
  
Actual results:
With --p2p --tunnelled, the migration takes 2m30.295s; without them, it takes 0m17.528s.

Expected results:
The two migration times above should be roughly the same.

Additional info:

Comment 6 chhu 2017-07-07 02:40:41 UTC
Tested with packages:
libvirt-3.2.0-14.el7.x86_64
qemu-kvm-rhev-2.9.0-14.el7.x86_64
kernel-3.10.0-681.el7.x86_64

Test steps:
1. Migrate 512 guests from source to target with --p2p --tunnelled. The real time for most guests is around 0m4.0s, but for the last 16 guests it is around 0m26.0s (a sketch for logging the per-guest times follows the output below).

# for i in {1..512}; do time virsh migrate --live r7-4-$i qemu+tcp://ipaddress/system --verbose --p2p --tunnelled; done
Migration: [100 %]

real	0m3.663s
user	0m0.021s
sys	0m0.050s
Migration: [100 %]

real	0m3.715s
user	0m0.018s
sys	0m0.038s
......
real	0m4.413s
user	0m0.019s
sys	0m0.023s
Migration: [100 %]
......
real	0m26.312s
user	0m0.021s
sys	0m0.038s
Migration: [100 %]

real	0m26.116s
user	0m0.020s
sys	0m0.041s
Migration: [100 %]
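A minimal sketch for logging the wall-clock time of each migration to a file, so the point where the time jumps from ~4s to ~26s is easy to locate (guest names and the target URI are the same placeholders as in the loop above):

#!/bin/bash
# Sketch: migrate the guests one by one and log the wall-clock time per
# guest to a CSV file, so the jump from ~4s to ~26s after roughly 500
# guests is easy to locate. TARGET is the same placeholder as above.
TARGET=qemu+tcp://ipaddress/system
OUT=migration-times.csv

echo "guest,seconds" > "$OUT"
for i in $(seq 1 512); do
    guest="r7-4-$i"
    start=$(date +%s)
    virsh migrate --live "$guest" "$TARGET" --verbose --p2p --tunnelled
    end=$(date +%s)
    echo "$guest,$((end - start))" >> "$OUT"
done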

2. Migrate a guest from source to target with --p2p --tunnelled, then migrate it back; the migration back takes much more time.

 # time virsh migrate --live r7-4-500 qemu+tcp://target/system --verbose --p2p --tunnelled
Migration: [100 %]

real    0m4.286s
user    0m0.024s
sys    0m0.058s

# time virsh migrate --live r7-4-500 qemu+tcp://source/system --verbose --p2p --tunnelled
Migration: [100 %]

real    0m27.604s
user    0m0.017s
sys    0m0.021s

3. Migrate from source to target without --p2p --tunnelled, then migrate back. Both real times are around 0m4.0s.

# time virsh migrate --live r7-4-500 qemu+tcp://target/system --verbose
Migration: [100 %]

real	0m3.978s
user	0m0.025s
sys	0m0.069s
# time virsh migrate --live r7-4-500 qemu+tcp://source/system --verbose
Migration: [100 %]

real    0m4.521s
user    0m0.017s
sys    0m0.018s

Comment 7 chhu 2017-07-07 05:48:11 UTC
(In reply to chhu from comment #6)
> Tested with packages:
> libvirt-3.2.0-14.el7.x86_64
> qemu-kvm-rhev-2.9.0-14.el7.x86_64
> kernel-3.10.0-681.el7.x86_64
> 
> Test steps:
> 1. Migrate 512 guests from source to target with --p2p --tunnelled. The real
> time is around 0m4.0s, but the real time of last 16 guests are around:
> 0m26.0s. 
> 

Migrating more than 500 guests takes more time per guest (0m4.0s -> 0m26.0s).

> # for i in {1..512}; do time virsh migrate --live r7-4-$i
> qemu+tcp://ipaddress/system --verbose --p2p --tunnelled; done
> Migration: [100 %]
> 
> real	0m3.663s
> user	0m0.021s
> sys	0m0.050s
> Migration: [100 %]
> 
> real	0m3.715s
> user	0m0.018s
> sys	0m0.038s
> ......
> real	0m4.413s
> user	0m0.019s
> sys	0m0.023s
> Migration: [100 %]
> ......
> real	0m26.312s
> user	0m0.021s
> sys	0m0.038s
> Migration: [100 %]
> 
> real	0m26.116s
> user	0m0.020s
> sys	0m0.041s
> Migration: [100 %]
> 
> 2. Migrate from source to target with --p2p --tunnelled, then migrate back
> spent  much more time.

After more debugging, this turns out to be related to the machines themselves: migrating from machine A to machine B takes much more time than migrating from machine B to machine A. This is not a migration issue.
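One way to cross-check that the asymmetry comes from the machines/network rather than from libvirt is to measure raw TCP throughput in both directions between the two hosts. This assumes iperf3 is installed on both machines; machineA and machineB are placeholders:

# on machine B (the host that is slow to migrate to), start a server
iperf3 -s

# on machine A, measure both directions
iperf3 -c machineB        # A -> B
iperf3 -c machineB -R     # B -> A (reverse), for comparison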
> 
>  # time virsh migrate --live r7-4-500 qemu+tcp://target/system --verbose
> --p2p --tunnelled
> Migration: [100 %]
> 
> real    0m4.286s
> user    0m0.024s
> sys    0m0.058s
> 
> # time virsh migrate --live r7-4-500 qemu+tcp://source/system --verbose
> --p2p --tunnelled
> Migration: [100 %]
> 
> real    0m27.604s
> user    0m0.017s
> sys    0m0.021s
> 
> 3. Migrate from source to target then migrate back. The real time both are
> around Om4.0s.
> 
> # time virsh migrate --live r7-4-500 qemu+tcp://target/system --verbose
> Migration: [100 %]
> 
> real	0m3.978s
> user	0m0.025s
> sys	0m0.069s
> # time virsh migrate --live r7-4-500 qemu+tcp://source/system --verbose
> Migration: [100 %]
> 
> real    0m4.521s
> user    0m0.017s
> sys    0m0.018s

Comment 8 Daniel Berrangé 2020-11-03 16:34:16 UTC
Thank you for reporting this issue to the libvirt project. Unfortunately we have been unable to resolve this issue due to insufficient maintainer capacity and it will now be closed. This is not a reflection on the possible validity of the issue, merely the lack of resources to investigate and address it, for which we apologise.

If you nonetheless feel the issue is still important, you may choose to report it again at the new project issue tracker: https://gitlab.com/libvirt/libvirt/-/issues

The project also welcomes contributions from anyone who believes they can provide a solution.

