Bug 707558

Summary: KVMs running on more than one numa cell are highly suboptimal
Product: Red Hat Enterprise Linux 6
Reporter: Kai Mosebach <redhat-bugzilla>
Component: qemu-kvm
Assignee: Andrea Arcangeli <aarcange>
Status: CLOSED DEFERRED
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Priority: medium
Docs Contact:
Version: 6.1
CC: ehabkost, gcosta, juzhang, k.georgiou, mkenneth, redhat-bugzilla, tburke, virt-maint
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-12-13 10:29:32 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 580951

Description Kai Mosebach 2011-05-25 11:57:32 UTC
Description of problem:

When running a virtual machine with QEMU-KVM that has more cores than a single NUMA cell, or whose CPUs and memory are spread over several NUMA cells, the performance of the machine drops by ~50%.

This leaves us with a maximum VM size of 6 cores and 32 GB of memory on an AMD Magny-Cours system that has a total of 48 cores and 256 GB of memory. Enterprise support should not be limited by such low boundaries.

Some sort of NUMA intelligence within the VM (or the QEMU process) would be required to solve this.
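
One way to confirm from the host that an unpinned guest's memory is spread across nodes is to inspect the qemu-kvm process; this is a minimal sketch, and <qemu-pid> is a placeholder for the actual process ID, not a value from this report:

    # show the host NUMA topology (nodes, their CPUs and memory sizes)
    numactl --hardware

    # per-node page counts (N0=..., N1=..., ...) for each mapping of the guest process
    cat /proc/<qemu-pid>/numa_maps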

Version-Release number of selected component (if applicable):

Red Hat Enterprise Linux 6.0
Red Hat Enterprise Linux 6.1
qemu-0.12.1.2-2.113
(qemu-0.14.1 also tested)

How reproducible:

a.) Run a >6-core instance on an AMD Magny-Cours system and run any kind of benchmark within the VM
b.) Run a >4-core instance on an Intel Nehalem system and run any kind of benchmark within the VM
c.) Run an unpinned VM on any NUMA-enabled system and run any kind of benchmark within the VM (see the sketch after this list)
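
For case c.), a rough sketch of an unpinned guest launch on RHEL 6 might look like the following; the vCPU count, memory size, and disk image path are placeholders rather than values taken from this report:

    # start an unpinned 12-vCPU / 64 GB guest without any CPU or memory pinning
    /usr/libexec/qemu-kvm -smp 12 -m 65536 \
        -drive file=/path/to/guest.img,if=virtio \
        -net nic,model=virtio -net user -vnc :1

    # inside the guest, run a memory-bandwidth benchmark such as STREAM
    ./stream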

Actual results:

Runtime performance of benchmarks (such as STREAM) or of heavy computations that use the whole memory of the unpinned VM drops by >50%.

Expected results:

a.) The VM is NUMA-aware and can therefore handle NUMA affinities itself (see the -numa sketch below)
b.) The QEMU-KVM process handles the NUMA logic (presumably less efficient?)
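
Regarding a.), the guest can only manage its own affinities if it is told about a topology. A minimal sketch of exposing two guest NUMA nodes on the qemu-kvm command line is shown below; the sizes and CPU ranges are illustrative, and the exact -numa size syntax accepted by qemu-kvm 0.12 may differ from later releases:

    # 12-vCPU / 64 GB guest presented to the guest kernel as two NUMA nodes
    /usr/libexec/qemu-kvm -smp 12 -m 65536 \
        -numa node,mem=32768M,cpus=0-5 \
        -numa node,mem=32768M,cpus=6-11 \
        -drive file=/path/to/guest.img,if=virtio

Note that -numa only describes a topology to the guest; it does not by itself bind guest nodes to host nodes, which is what pinning addresses.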

Additional info:

Various internal computations were done; parts of the results can be disclosed on request.

Comment 5 Dor Laor 2011-12-13 10:29:32 UTC
We support NUMA pinning in KVM, so one can use libvirt to pin the vCPUs and memory to the right nodes. The NUMA topology can also be exposed to the guest using the -numa command-line option.
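
For example, pinning a 6-vCPU guest and its memory to host node 0 can be expressed in the libvirt domain XML roughly as follows; this is a sketch assuming a libvirt version with <cputune> and <numatune> support, and the CPU and node numbers are illustrative:

    <vcpu cpuset='0-5'>6</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      <!-- repeat <vcpupin> for vCPUs 2-5 -->
    </cputune>
    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>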

Automatic NUMA allocation and migration of memory without pinning is planned for future releases. We have folks working on a userspace daemon that load-balances memory and matches it against the physical NUMA topology, and for RHEL 7 we're working on a similar solution at the kernel level. That's why I'm inclined to close this issue, since it will be solved by the options above.