Bug 1294475 - libvirt starts VM under incorrect NUMA memory mode
Summary: libvirt starts VM under incorrect NUMA memory mode
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-28 11:41 UTC by Artyom
Modified: 2016-01-05 08:00 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-04 13:39:43 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Artyom 2015-12-28 11:41:43 UTC
Description of problem:
I start a VM with NUMA memory mode set to 'strict', but when I check the VM process's numa_maps I can see that the process runs under 'prefer' mode.

Version-Release number of selected component (if applicable):
# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.2 (Maipo)
# uname -r
3.10.0-327.el7.x86_64
# rpm -qa | grep libvirt
libvirt-client-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.2.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.2.x86_64
libvirt-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.2.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.2.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Start a VM with NUMA strict memory mode configured:
<numatune>
    <memory mode='strict' nodeset='0,2'/>
</numatune>
2. Check numa_maps of the VM process:
# cat /proc/14328/numa_maps | head -n 1
7fb2677fa000 prefer:2
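
(Note: to confirm the policy on all mappings rather than only the first line, the policy column of numa_maps can be tallied; the PID 14328 below is the qemu process from step 2 and is machine-specific:)

# awk '{print $2}' /proc/14328/numa_maps | sort | uniq -c

With mode='strict' one would expect the policy column to read "bind:0,2"; instead it reads "prefer:2", as shown above.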

Actual results:
The VM process runs under 'prefer' mode.

Expected results:
The VM process runs under 'bind' mode.

Additional info:

Comment 2 Michal Privoznik 2016-01-04 13:39:43 UTC
This is deliberate. The problem is that if we ran it strictly under the configured nodes, there would be no way to change it afterwards on a running guest. Therefore, libvirt - just before spawning the qemu process - calls the NUMA APIs to set the affinity only in preferred mode and uses cgroups to enforce the strictness. See bug 1198645 or upstream commit ea576ee543d6fb955 for more info.
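
For reference, a sketch of how one can verify the two halves of this design (the cgroup path below is illustrative; the exact scope name depends on the machine slice and domain name). The strictness is enforced via the cpuset cgroup rather than the mempolicy visible in numa_maps, so cpuset.mems should match the configured nodeset:

# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu*.scope/cpuset.mems
0,2

And the runtime flexibility this preserves is exactly what virsh exposes, e.g. changing the nodeset of a running guest:

# virsh numatune <domain> --nodeset 0 --live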

Comment 3 Artyom 2016-01-05 08:00:29 UTC
Thanks for the explanation, I will check bug 1198645.

