Bug 736499 - kvm guests kill host IO performance
Summary: kvm guests kill host IO performance
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Red Hat Kernel Manager
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-07 20:44 UTC by Orion Poplawski
Modified: 2012-06-05 14:15 UTC
CC: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-06-05 14:15:29 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)

Description Orion Poplawski 2011-09-07 20:44:07 UTC
Description of problem:

I've had to disable raid-check because the check/resync process absolutely kills performance.  The server is used for backups and hosts multiple kvm guests, so it sees a fair amount of IO.

Version-Release number of selected component (if applicable):
2.6.32-131.12.1.el6.x86_64
mdadm-3.2.2-1.el6.2.x86_64
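
For reference, a running check/resync can usually be throttled instead of disabling raid-check outright.  A minimal sketch, assuming the stock md speed-limit sysctls and sysfs knobs (the numeric values are illustrative, not taken from this system; md1 matches the raid10 array shown in comment 1):

  # cap the per-device resync rate, in KB/s, so a check cannot saturate the disks
  sysctl -w dev.raid.speed_limit_max=10000
  sysctl -w dev.raid.speed_limit_min=1000
  # the same cap can be applied to a single array
  echo 10000 > /sys/block/md1/md/sync_speed_max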

Comment 1 Orion Poplawski 2011-09-07 20:44:39 UTC
# cat /proc/mdstat
Personalities : [raid1] [raid10] 
md1 : active raid10 sdb2[0] sde1[7] sdf1[6] sdg1[5] sdh1[4] sdc1[3] sdd1[2] sda2[1]
      3906203648 blocks 256K chunks 2 near-copies [8/8] [UUUUUUUU]
      
md0 : active raid1 sdb1[0] sda1[1]
      200704 blocks [2/2] [UU]

Comment 3 RHEL Program Management 2011-10-07 15:47:43 UTC
Since the RHEL 6.2 External Beta has begun and this bug remains
unresolved, it has been rejected, as it is not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 4 Orion Poplawski 2011-10-20 19:10:15 UTC
It seems to be the combination of kvm guests and the raid10 resync that kills IO performance for some reason.  Stopping all kvm guests during the resync (or stopping the resync if it is just a check) gets me back to >100MB/s read/write performance.
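
A periodic "check" (as opposed to a real rebuild) can also be interrupted through sysfs; a minimal sketch against the md1 array from comment 1:

  # shows "check" while a scrub is running, "idle" otherwise
  cat /sys/block/md1/md/sync_action
  # aborts the check; a genuine resync/rebuild would resume on its own
  echo idle > /sys/block/md1/md/sync_action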

The kvm guests are using raw lvm volumes on a volume group on the raid10 array and using virtio.  One kvm guest in particular that triggers it is a 2 cpu Fedora 14 instance running zabbix-server - so a fairly steady cpu and io load.

2.6.32-131.17.1.el6.x86_64
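
Not verified on this system, but for raw LVM-backed virtio disks the guest cache mode is one obvious point of interaction with host IO: with cache='none' the guest does direct IO to the logical volume instead of going through the host page cache.  A hedged sketch of the <disk> element (the volume group and LV names are hypothetical), edited via "virsh edit <domain>":

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/vg_data/zabbix_lv'/>
    <target dev='vda' bus='virtio'/>
  </disk>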

Comment 5 Orion Poplawski 2011-11-18 18:20:27 UTC
I just noticed that a copy between two disks on the host was crawling (5-10MB/s).  I started shutting down kvm guests (which were mainly idle) and now I'm up to 50-70MB/s.  Something about kvm guests is really killing the host IO performance.

Any chance that having hyperthreading enabled would have any effect on this?  I may disable it just to see.
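
The isolation described above can be reproduced roughly as follows (the file path and guest name are hypothetical; iostat comes from the sysstat package):

  # per-disk utilization and await times while guests are running
  iostat -x 5
  # rough host write-throughput check, bypassing the page cache
  dd if=/dev/zero of=/srv/testfile bs=1M count=1024 oflag=direct
  # stop guests one at a time and re-measure
  virsh list
  virsh shutdown zabbix-guest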

Comment 6 Orion Poplawski 2011-11-18 18:26:55 UTC
This is the controller I'm using:

07:00.0 SCSI storage controller: LSI Logic / Symbios Logic MegaRAID SAS 8208ELP/8208ELP (rev 08)

with the mptsas driver.
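
To confirm which driver is actually bound to the controller (the PCI address is taken from the lspci line above):

  # "Kernel driver in use:" shows the driver bound to that device
  lspci -nnk -s 07:00.0
  lsmod | grep mptsas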

Comment 7 Orion Poplawski 2012-02-17 06:05:04 UTC
This appears to be fixed in recent kernels.  Feel free to close.

Comment 8 Nerijus Baliūnas 2012-04-05 01:21:45 UTC
Still happens on the latest F16 kernel, BTW.

Comment 9 Orion Poplawski 2012-04-05 03:45:54 UTC
Is this a regular F16 system?  If so, I would open a new bug against F16.  I'm not seeing any trouble now with a current 6.2 system.

