Description of problem:
I tested migrating 514 running guests (500 rhel6u3 guests plus 14 other kinds of rhel5/rhel4/rhel3/windows guests) one by one with:

# virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.93/system --verbose --p2p --tunnelled

I found that migration is much slower than without the --p2p --tunnelled flags:

# time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose --p2p --tunnelled
Migration: [100 %]

real    2m30.295s
user    0m0.114s
sys     0m3.281s

# time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose
Migration: [100 %]

real    0m17.528s
user    0m0.041s
sys     0m0.593s

Version-Release number of selected component (if applicable):
libvirt-0.9.10-9.el6.x86_64
qemu-kvm-0.12.1.2-2.262.el6.x86_64
kernel-2.6.32-250.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start 514 guests
2. Do a tunnelled migration:
   # time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose --p2p --tunnelled
3. Do a migration without those flags and compare the times:
   # time virsh migrate --live rhel5u8-i386 qemu+tcp://10.66.104.94/system --verbose

Actual results:
With --p2p --tunnelled the migration takes 2m30.295s; without those flags it takes 0m17.528s.

Expected results:
The two migration times should be close.

Additional info:
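For anyone wanting to reproduce the comparison, a minimal sketch follows. The guest name and URIs are placeholders taken from the report, and the guest is migrated back between runs so both measurements start from the same host:

#!/bin/bash
# Compare tunnelled p2p migration against direct migration for one guest.
# GUEST, DEST and SRC are placeholders; adjust to your environment.
GUEST=rhel5u8-i386
DEST=qemu+tcp://10.66.104.94/system
SRC=qemu+tcp://10.66.104.93/system

echo "== tunnelled p2p migration =="
time virsh migrate --live "$GUEST" "$DEST" --verbose --p2p --tunnelled

# Move the guest back so the second run starts from the same source host.
virsh -c "$DEST" migrate --live "$GUEST" "$SRC" --verbose

echo "== direct (non-tunnelled) migration =="
time virsh migrate --live "$GUEST" "$DEST" --verbose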
Tested with packages:
libvirt-3.2.0-14.el7.x86_64
qemu-kvm-rhev-2.9.0-14.el7.x86_64
kernel-3.10.0-681.el7.x86_64

Test steps:
1. Migrate 512 guests from source to target with --p2p --tunnelled. The real time is around 0m4.0s for most guests, but around 0m26.0s for the last 16 guests.

# for i in {1..512}; do time virsh migrate --live r7-4-$i qemu+tcp://ipaddress/system --verbose --p2p --tunnelled; done
Migration: [100 %]

real    0m3.663s
user    0m0.021s
sys     0m0.050s
Migration: [100 %]

real    0m3.715s
user    0m0.018s
sys     0m0.038s
......
real    0m4.413s
user    0m0.019s
sys     0m0.023s
Migration: [100 %]
......
real    0m26.312s
user    0m0.021s
sys     0m0.038s
Migration: [100 %]

real    0m26.116s
user    0m0.020s
sys     0m0.041s
Migration: [100 %]

2. Migrating from source to target with --p2p --tunnelled, then migrating back, takes much more time on the way back.

# time virsh migrate --live r7-4-500 qemu+tcp://target/system --verbose --p2p --tunnelled
Migration: [100 %]

real    0m4.286s
user    0m0.024s
sys     0m0.058s

# time virsh migrate --live r7-4-500 qemu+tcp://source/system --verbose --p2p --tunnelled
Migration: [100 %]

real    0m27.604s
user    0m0.017s
sys     0m0.021s

3. Migrate from source to target, then migrate back, without the tunnelling flags. Both real times are around 0m4.0s.

# time virsh migrate --live r7-4-500 qemu+tcp://target/system --verbose
Migration: [100 %]

real    0m3.978s
user    0m0.025s
sys     0m0.069s

# time virsh migrate --live r7-4-500 qemu+tcp://source/system --verbose
Migration: [100 %]

real    0m4.521s
user    0m0.017s
sys     0m0.018s
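To make the slowdown after ~500 guests easier to spot, a variant of the loop in step 1 can log each guest's wall-clock time to a CSV. This is a sketch under the same assumptions as above (guest names r7-4-1..r7-4-512 and a placeholder target URI):

#!/bin/bash
# Log per-guest tunnelled migration times; names and URI are placeholders.
DEST=qemu+tcp://ipaddress/system
echo "guest,seconds" > migrate-times.csv
for i in $(seq 1 512); do
    start=$(date +%s.%N)
    virsh migrate --live "r7-4-$i" "$DEST" --verbose --p2p --tunnelled
    end=$(date +%s.%N)
    echo "r7-4-$i,$(echo "$end - $start" | bc)" >> migrate-times.csv
done
# Sort by duration to check whether only the last ~16 guests are slow:
sort -t, -k2 -n migrate-times.csv | tail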
(In reply to chhu from comment #6)
> Test steps:
> 1. Migrate 512 guests from source to target with --p2p --tunnelled. The
> real time is around 0m4.0s for most guests, but around 0m26.0s for the
> last 16 guests.

Migrating more than 500 guests costs more time (0m4.0s -> 0m26.0s).

> 2. Migrating from source to target with --p2p --tunnelled, then migrating
> back, takes much more time on the way back.

After debugging further, this is related to the machines: migrating from machine A to machine B takes much more time than migrating from machine B to machine A. This is not a migration issue.
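Since the asymmetry pointed at the machines rather than at the migration code, one way to see where the time goes is to poll virsh domjobinfo on the source host while a tunnelled migration runs in the background. This is only a diagnostic sketch, with the guest name and URI as placeholders:

#!/bin/bash
# Sample migration job statistics on the source host while a tunnelled
# migration is in flight; GUEST and DEST are placeholders.
GUEST=r7-4-500
DEST=qemu+tcp://target/system

virsh migrate --live "$GUEST" "$DEST" --verbose --p2p --tunnelled &
MIGPID=$!

# While the migration is still running, sample elapsed time and data moved.
while kill -0 "$MIGPID" 2>/dev/null; do
    virsh domjobinfo "$GUEST" 2>/dev/null | grep -E 'Time elapsed|Data processed|Data remaining'
    sleep 1
done
wait "$MIGPID"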
Thank you for reporting this issue to the libvirt project. Unfortunately we have been unable to resolve this issue due to insufficient maintainer capacity and it will now be closed. This is not a reflection on the possible validity of the issue, merely the lack of resources to investigate and address it, for which we apologise.

If you nonetheless feel the issue is still important, you may choose to report it again at the new project issue tracker: https://gitlab.com/libvirt/libvirt/-/issues

The project also welcomes contributions from anyone who believes they can provide a solution.