Bug 2223464
| Summary: | Wrong cpuset.mems cgroup is set when setting numa tuning with "restrictive" mode | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | liang cong <lcong> |
| Component: | libvirt | Assignee: | Michal Privoznik <mprivozn> |
| Status: | CLOSED ERRATA | QA Contact: | liang cong <lcong> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.9 | CC: | fqi, hshuai, jdenemar, lmen, mprivozn, virt-maint, xuzhang |
| Target Milestone: | rc | Keywords: | AutomationTriaged, Triaged |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-8.0.0-22.el8 | Doc Type: | Bug Fix |
| Doc Text: | Cause: For 'restrictive' numatune, libvirt was not setting cpuset.mems on domain startup, which defeats the purpose of the configuration knob. | | |
| | Consequence: The domain was not restricted to run on the configured NUMA nodes. | | |
| | Fix: The cpuset controller is now configured for 'restrictive' mode in every code path that already handled 'strict' mode. | | |
| | Result: Libvirt now sets up the cpuset CGroup controller for 'restrictive' numatune mode on domain startup, so the domain runs on the desired NUMA nodes from the beginning. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-11-14 15:33:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
I think this is the same as RHEL-9 bug 2070380. Upstream it was fixed by the following commit:

```
commit 629282d8845407c1aff9a26f5dc026e15121f8cd
Author:     Michal Prívozník <mprivozn>
AuthorDate: Fri Apr 1 14:30:05 2022 +0200
Commit:     Michal Prívozník <mprivozn>
CommitDate: Thu Apr 7 12:12:11 2022 +0200

    lib: Set up cpuset controller for restrictive numatune

    The aim of 'restrictive' numatune mode is to rely solely on CGroups
    to have QEMU running on configured NUMA nodes. However, we were never
    setting the cpuset controller when a domain was starting up. We are
    doing so only when virDomainSetNumaParameters() is called (aka live
    pinning). This is obviously wrong. Fortunately, fix is simple as
    'restrictive' is similar to 'strict' - every location where
    VIR_DOMAIN_NUMATUNE_MEM_STRICT occurs can be audited and
    VIR_DOMAIN_NUMATUNE_MEM_RESTRICTIVE case can be added.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2070380
    Signed-off-by: Michal Privoznik <mprivozn>
    Reviewed-by: Ján Tomko <jtomko>
```

v8.3.0-rc1~138

To POST: https://gitlab.com/redhat/rhel/src/libvirt/-/merge_requests/117

Scratch build: https://kojihub.stream.rdu2.redhat.com/koji/taskinfo?taskID=2522724

Pre-verified on scratch build:

```
# rpm -q libvirt
libvirt-8.0.0-22.el8_rc.3ae0219671.x86_64
```

Test steps:

1. Define a guest with numatune configured in the XML:

   ```
   <numatune>
     <memory mode="restrictive" nodeset="1"/>
   </numatune>
   ```

2. Start the guest:

   ```
   # virsh start vm1
   Domain 'vm1' started
   ```

3. Check the cpuset.mems cgroup setting:

   ```
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d4\\x2dvm1.scope/libvirt/emulator/cpuset.mems
   1
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d4\\x2dvm1.scope/libvirt/vcpu0/cpuset.mems
   1
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d4\\x2dvm1.scope/libvirt/vcpu1/cpuset.mems
   1
   ```

Also tested with "strict", "interleave", and "preferred" modes, and with guest NUMA nodes.

Verified on build:

```
# rpm -q libvirt
libvirt-8.0.0-22.module+el8.9.0+19544+b3045133.x86_64
```

Test steps:

1. Define a guest with numatune configured in the XML:

   ```
   <numatune>
     <memory mode="restrictive" nodeset="0"/>
   </numatune>
   ```

2. Start the guest:

   ```
   # virsh start vm1
   Domain 'vm1' started
   ```

3. Check the cpuset.mems cgroup setting:

   ```
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d4\\x2dvm1.scope/libvirt/emulator/cpuset.mems
   0
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d4\\x2dvm1.scope/libvirt/vcpu0/cpuset.mems
   0
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d4\\x2dvm1.scope/libvirt/vcpu1/cpuset.mems
   0
   ```

Also tested with "strict", "interleave", and "preferred" modes, and with guest NUMA nodes.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6980
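The per-path checks in the verification steps above can be collapsed into one loop. This is only a sketch, not part of the original test plan: `check_cpuset_mems` is a hypothetical helper, and the directory layout assumes cgroup v1 as used on RHEL 8 (`<scope>/libvirt/{emulator,vcpuN}/cpuset.mems`).

```shell
# Hypothetical helper: report every cpuset.mems under a domain's libvirt
# cgroup directory that does not match the configured nodeset.
# Assumes cgroup v1 layout: <scope>/libvirt/{emulator,vcpuN}/cpuset.mems
check_cpuset_mems() {
    scope_dir=$1
    expected=$2
    rc=0
    for f in "$scope_dir"/libvirt/emulator/cpuset.mems \
             "$scope_dir"/libvirt/vcpu*/cpuset.mems; do
        [ -r "$f" ] || continue          # skip unexpanded globs
        actual=$(cat "$f")
        if [ "$actual" != "$expected" ]; then
            echo "MISMATCH: $f is '$actual', expected '$expected'"
            rc=1
        fi
    done
    return $rc
}
```

For the verified build above this would be invoked as `check_cpuset_mems "/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\x2d4\x2dvm1.scope" 0` and should print nothing.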
Description of problem:
Wrong cpuset.mems cgroup is set when setting numa tuning with "restrictive" mode.

Version-Release number of selected component (if applicable):

```
# rpm -q libvirt qemu-kvm
libvirt-8.0.0-21.module+el8.9.0+19166+e262ca96.x86_64
qemu-kvm-6.2.0-35.module+el8.9.0+19166+e262ca96.x86_64
```

How reproducible: 100%

Steps to Reproduce:

1. Start a guest with the below setting:

   ```
   <numatune>
     <memory mode="restrictive" nodeset="0"/>
   </numatune>
   ```

2. Check the cpuset.mems cgroup setting:

   ```
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/emulator/cpuset.mems
   0-1
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/vcpu0/cpuset.mems
   0-1
   # cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2d1\\x2dvm1.scope/libvirt/vcpu1/cpuset.mems
   0-1
   ```

Actual results:
The cpuset.mems cgroup setting of the guest differs from the numa tuning setting.

Expected results:
With restrictive mode, the cpuset.mems cgroup setting should follow the numa tuning setting.
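A side note on the paths used in the reproducer: the `\x2d` sequences are systemd's unit-name escaping of `-` inside `machine-qemu-<id>-<name>.scope`, so the cgroup directory can be derived from the domain id and name. A minimal sketch (the helper name `scope_name` is hypothetical):

```shell
# Hypothetical helper: build the systemd machine scope name libvirt uses
# for a QEMU domain; each "-" in "qemu-<id>-<name>" is escaped as \x2d.
scope_name() {
    printf 'machine-qemu\\x2d%s\\x2d%s.scope' "$1" "$2"
}
```

With this, `scope_name 1 vm1` prints `machine-qemu\x2d1\x2dvm1.scope`, so the emulator path in step 2 is `/sys/fs/cgroup/cpuset/machine.slice/$(scope_name 1 vm1)/libvirt/emulator/cpuset.mems`.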