Bug 693187 - Test case live migration speed/performance impact.
Summary: Test case live migration speed/performance impact.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: TestPlan-KVM
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Mike Cao
QA Contact: Keqin Hong
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-04-03 12:39 UTC by Dor Laor
Modified: 2013-01-09 21:20 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-04-20 12:11:46 UTC
Target Upstream Version:
Embargoed:



Description Dor Laor 2011-04-03 12:39:46 UTC
As can be seen in https://bugzilla.redhat.com/show_bug.cgi?id=690521, we need to test the network bandwidth implications of doing live migration. In addition, I would like to see us also test the performance overhead for the guest while it goes through live migration.

Scenario: Run a guest with some app in it. The app can/should be CPU-, network-, or block-I/O-intensive. We need to measure its performance before/during/post live migration, where performance == throughput and latency. We also need to measure the length of the downtime at the last stage of the live migration process.
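
A minimal sketch of that measurement flow, assuming netperf for the net-intensive case and ping for latency sampling (host names, ports, and file names below are placeholders, not from this bug):

    # On the guest: generate net-intensive load and record throughput
    netperf -H <netserver-host> -l 300 -t TCP_STREAM

    # On a third host: sample guest latency across the whole run
    ping -i 0.2 <guest-ip> > ping.log

    # On the source host's qemu monitor: kick off the live migration
    (qemu) migrate -d tcp:<dest-host>:4444

    # Compare the throughput and latency samples taken before, during,
    # and after the migration window.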

Lastly, we need to see that the live migration stage converges assuming a reasonable dirty-pages/sec rate. This means that if the bandwidth available for live migration is high enough (let's assume 1 Gb/s), migration finishes with a downtime of 0.1 s - 0.5 s.
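
For example, from the qemu monitor (a sketch; the speed cap and downtime values mirror the assumptions above):

    (qemu) migrate_set_speed 1g          # cap migration bandwidth (bytes/sec; k/m/g suffixes)
    (qemu) migrate_set_downtime 0.5      # allowed blackout at the final stage, in seconds
    (qemu) migrate -d tcp:<dest-host>:4444
    (qemu) info migrate                  # poll until the status reaches "completed";
                                         # with a reasonable dirty-page rate it should
                                         # converge rather than copy RAM indefinitely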

Very nice and complete data can be retrieved from the live migration bible document: http://www.cl.cam.ac.uk/research/srg/netos/papers/2005-migration-nsdi-pre.pdf

Comment 2 RHEL Program Management 2011-04-03 12:57:30 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.

Comment 3 Michael Doyle 2011-04-03 23:23:31 UTC
Mike, please let us know what changes are required to the KVM IEEE Test Plan to capture this testing.

Comment 4 Mike Cao 2011-04-07 11:46:56 UTC
16. Measure migration speed
-       Migration succeeds. After migration, guest works fine.
+       Run a guest with some app in it. The app can/should be CPU-, network-,
+       or block-intensive. Measure its performance before/during/post live
+       migration (performance == throughput and latency); migration succeeds.
+       Increase the migration speed during migration; check that the transfer
+       speed is more or less the same as the speed set in the qemu monitor,
+       and the migration should succeed.
+       After migration, guest works fine.
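
A sketch of how that speed check could be driven from the qemu monitor (destination host and port are placeholders, not from this test plan):

    (qemu) migrate -d tcp:<dest-host>:4444   # start the migration detached
    (qemu) migrate_set_speed 100m            # raise the bandwidth cap mid-migration
    (qemu) info migrate                      # sample "transferred ram" twice and
    (qemu) info migrate                      # divide the delta by the sampling
                                             # interval; the rate should roughly
                                             # match the configured cap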

Comment 5 Mike Cao 2011-04-07 11:51:41 UTC
> Lastly, we need to see that the live migration stage converges assuming a
> reasonable dirty-pages/sec rate. This means that if the bandwidth available
> for live migration is high enough (let's assume 1 Gb/s), migration finishes
> with a downtime of 0.1 s - 0.5 s.
Hi, Dor 

The default migration max downtime is 0.3 s. In testing, we want to use the ping command to measure it, but ping's measurement offset is much larger than the migration max downtime.
Could you advise how to measure migration_max_downtime?

Thanks,
Mike 
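
For reference, one common way to bound the actual blackout more tightly than ping allows (a sketch, assuming the guest can run a tight timestamp loop; the log path is illustrative):

    # In the guest, before starting the migration:
    while true; do date +%s.%N; done > /tmp/stamps.log

    # After the migration completes, the largest gap between consecutive
    # timestamps approximates the maximum downtime:
    awk 'NR > 1 && $1 - prev > max { max = $1 - prev } { prev = $1 }
         END { print max }' /tmp/stamps.log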

Comment 6 Mike Cao 2011-04-20 12:11:46 UTC
QE tried this case in the RC migration functional testing and did not find the issue described in comment #0.

I will close this issue.

Thanks,
Mike

