
Bug 696155

Summary: Increasing migrate_set_speed does not raise the migration network transfer rate to the real network bandwidth
Product: Red Hat Enterprise Linux 5
Reporter: RHEL Program Management <pm-rhel>
Component: kvm
Assignee: Virtualization Maintenance <virt-maint>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: urgent
Priority: urgent
Version: 5.6
CC: bcao, cww, dyasny, ehabkost, gcosta, iheim, juzhang, knoel, llim, lyarwood, mkalinin, mkenneth, mshao, pm-eus, quintela, tburke, virt-maint, vromanov
Target Milestone: rc
Keywords: Regression, ZStream
Hardware: x86_64
OS: Linux
Fixed In Version: kvm-83-224.el5_6.1
Doc Type: Bug Fix
Doc Text:
Due to a regression introduced in Red Hat Enterprise Linux 5.6, duplicate pages may have been transferred during a live migration of a KVM virtual machine. Consequently, when a system was under heavy load, such a migration may have failed to complete in some scenarios. This update applies a patch that reverts this regression. As a result, live migration is now more efficient and no longer fails to complete under heavy load.
Last Closed: 2011-05-10 07:44:28 UTC
Bug Depends On: 690521, 713392

Description RHEL Program Management 2011-04-13 12:55:13 UTC
This bug has been copied from bug #690521 and has been proposed
to be backported to 5.6 z-stream (EUS).

Comment 8 Mike Cao 2011-04-27 08:30:23 UTC
Reproduced on kvm-83-224.el5.
VERIFIED on kvm-83-224.el5_6.1.

To rule out any network bottleneck, I migrated to localhost.

1. Boot the src guest:
# /usr/libexec/qemu-kvm -rtc-td-hack -usbdevice tablet -no-hpet -drive file=/home/rhel5u6_64.qcow2.2,if=virtio,boot=on,werror=stop,cache=none,format=qcow2,media=disk -cpu qemu64,+sse2 -smp 2 -m 4G -net nic,macaddr=00:12:24:5f:a6:2f,model=virtio,vlan=0 -net tap,script=/etc/qemu-ifup,vlan=0 -uuid `uuidgen` -vnc :2 -boot dc -balloon virtio -monitor unix:/tmp/tt1,server,nowait -notify all -M rhel5.6.0

2. Boot the dst guest with the same command line plus -incoming (using a separate monitor socket so both guests can run on localhost):
# /usr/libexec/qemu-kvm -rtc-td-hack -usbdevice tablet -no-hpet -drive file=/home/rhel5u6_64.qcow2.2,if=virtio,boot=on,werror=stop,cache=none,format=qcow2,media=disk -cpu qemu64,+sse2 -smp 2 -m 4G -net nic,macaddr=00:12:24:5f:a6:2f,model=virtio,vlan=0 -net tap,script=/etc/qemu-ifup,vlan=0 -uuid `uuidgen` -vnc :3 -boot dc -balloon virtio -monitor unix:/tmp/tt2,server,nowait -notify all -M rhel5.6.0 -incoming tcp:0:5888

3. On the src side (a scripted version of the monitor steps is sketched below):
3.1 Run the comment 0 program in the guest.
3.2 In the qemu monitor: migrate_set_speed 1G
3.3 Start the migration.
3.4 Check "info migrate" and watch "transferred ram".
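For reference, a scripted version of the monitor steps above. This is a minimal sketch, assuming socat is available on the host; /tmp/tt1 is the src monitor socket from step 1 and 5888 the incoming port from step 2. The "migrate -d" form detaches, so the monitor stays usable for "info migrate":

# set the bandwidth limit on the src monitor
echo "migrate_set_speed 1G" | socat - unix-connect:/tmp/tt1
# start the migration to the dst guest, detached
echo "migrate -d tcp:localhost:5888" | socat - unix-connect:/tmp/tt1
# sample "transferred ram" once per second while the migration runs
while :; do
    echo "info migrate" | socat - unix-connect:/tmp/tt1 | grep "transferred ram"
    sleep 1
done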

Results:
On kvm-83-224.el5, the average speed is 60 MB/s and the max speed can reach ~137 MB/s.
On kvm-83-224.el5_6.1, the average speed is 150 MB/s and the max speed can reach 400 MB/s.

Based on the above, the speed increased a lot, so this issue has been fixed; moving status to VERIFIED.

Comment 9 Dan Yasny 2011-05-06 11:24:53 UTC
The comment 0 program should be run at least as many times as the VM has cores; it does not really create load, it just constantly dirties some RAM.
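The comment 0 program itself is not shown in this bug. As a hypothetical stand-in (an assumption, not the original program), a per-vCPU loop that keeps rewriting the same chunk of RAM inside the guest has the same effect:

# hypothetical stand-in for the comment 0 load; run inside the guest.
# Each instance keeps rewriting a 512 MB file in tmpfs (size is an assumption),
# so the same pages are dirtied constantly; one instance per core.
for i in $(seq 1 $(grep -c ^processor /proc/cpuinfo)); do
    ( while :; do
          dd if=/dev/zero of=/dev/shm/dirty$i bs=1M count=512 conv=notrunc 2>/dev/null
      done ) &
done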

Comment 10 Jaromir Hradilek 2011-05-09 16:34:39 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Due to a regression introduced in Red Hat Enterprise Linux 5.6, duplicate pages may have been transferred during a live migration of a KVM virtual machine. Consequently, when a system was under heavy load, such a migration may have failed to complete in some scenarios. This update applies a patch that reverts this regression. As a result, live migration is now more efficient and no longer fails to complete under heavy load.

Comment 11 errata-xmlrpc 2011-05-10 07:44:28 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0499.html