Bug 1360584
Summary: | Bind policy doesn't work well when numad is running | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | Yumei Huang <yuhuang>
Component: | numad | Assignee: | Lukáš Nykrýn <lnykryn>
Status: | CLOSED WONTFIX | QA Contact: | qe-baseos-daemons
Severity: | medium | Docs Contact: | Yehuda Zimmerman <yzimmerm>
Priority: | unspecified | |
Version: | 7.3 | CC: | bgray, chayang, drjones, juzhang, knoel, qzhang, virt-maint, yzimmerm
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Known Issue
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2020-12-15 07:43:30 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |

Doc Text:

numad changes QEMU memory bindings

Currently, the *numad* daemon cannot distinguish between memory bindings that *numad* sets and memory bindings set explicitly by the memory mappings of a process. As a consequence, *numad* changes QEMU memory bindings even when the NUMA memory policy is specified on the QEMU command line. To work around this problem, disable *numad* if manual NUMA bindings are specified in the guest. This ensures that manual bindings configured in virtual machines are not changed by *numad*.
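The workaround in the Doc Text amounts to stopping the daemon so it cannot rebind guest memory. A minimal sketch on RHEL 7 (the `numad` unit name is the one shipped with the *numad* package; the commands are an illustration of the workaround, not text from the report):

```
# Stop numad for the current boot and keep it from starting again,
# so it cannot override manual NUMA bindings in running guests.
systemctl stop numad
systemctl disable numad
```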
Description
Yumei Huang
2016-07-27 06:11:06 UTC
Hit the same issue when specifying RAM for both nodes with prealloc=yes, and the issue is gone when prealloc=false is set, so the summary was changed.

QE retested; the issue seems to have nothing to do with prealloc. When numad is inactive, the bind policy works correctly: both memory objects are bound to the right host nodes. When numad is running, one of the two memory objects is bound to the wrong host node. Moving to the numad component.

Doc Text updated for Release Notes.

*** Bug 1361058 has been marked as a duplicate of this bug. ***

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.
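For context, a hypothetical reproduction along the lines QE describes: start a guest with two memory objects, each explicitly bound to a different host node, then compare the effective bindings with numad inactive and with numad running. The sizes, node numbers, and `qemu-kvm` invocation below are illustrative assumptions, not details taken from the report:

```
# Guest with two memory objects, each bound to one host NUMA node
# (prealloc shown because the report initially suspected it).
qemu-kvm -m 2048 -smp 2 \
    -object memory-backend-ram,id=mem0,size=1024M,policy=bind,host-nodes=0,prealloc=yes \
    -object memory-backend-ram,id=mem1,size=1024M,policy=bind,host-nodes=1,prealloc=yes \
    -numa node,nodeid=0,memdev=mem0 \
    -numa node,nodeid=1,memdev=mem1 &

QEMU_PID=$!

# Explicit bind policies as the kernel sees them; with numad inactive,
# the backends should report bind:0 and bind:1 respectively.
grep bind /proc/$QEMU_PID/numa_maps

# Per-node memory usage of the guest; rerun with numad active to see
# whether one of the memory objects has moved to the wrong host node.
numastat -p $QEMU_PID
```

With numad stopped as sketched earlier, the bindings reported by `numa_maps` should stay on the nodes given via `host-nodes` for the lifetime of the guest.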